(comments)

Original link: https://news.ycombinator.com/item?id=38852360

Regarding the choice of license, I am as disappointed as you are. However, it may help to remember that open-source software often starts under a permissive license and later transitions to a copyleft license through popular demand or corporate backing. Moreover, choosing a permissive license at the outset is not necessarily a sign of weakness: thanks to its simplicity and lack of bureaucracy, it can sometimes reach new audiences better and improve adoption. Note also that licenses are not mutually exclusive, so future releases could add requirements more aligned with the project's values. Ultimately, the choice of license depends on several factors, including technical considerations, business goals, and personal preferences. Both types of license have their pros and cons, and in the end it is up to the developers to decide which one best serves their needs and goals.


Original
Maestro: A Linux-compatible kernel in Rust (lenot.re)
671 points by Uriopass 23 hours ago | 287 comments

So many thanks to all of you for your support! This project has represented a lot of effort for me and it means a lot!

Right now the website seems to be pretty slow/down. There is a lot of traffic, which was not expected. I also suspect there might be a DoS attack going on.

I will try to make it work better when I get home! (I am currently at work so I cannot give much attention to it right now)

Sorry for the inconvenience, but glad you appreciate the project!



DoS from HN's very own Slashdot effect...


For those who are wondering WTF a Slashdot is, and how it affects websites:

https://tech.slashdot.org/story/24/01/03/0017242/25-years-si...

https://en.wikipedia.org/wiki/Slashdot_effect



Weird to think there are now people so young in these types of discussions that they wouldn't know about Slashdot.


I get that all the time, as most of my colleagues don't get jokes related to TV series that predate them.

I have learned to avoid such remarks, unless surrounded by 40y+ folks.



I was only vaguely aware of Slashdot (and never had heard this effect referred to by that name), and at 30, I'm probably more than a decade older than at least someone in this thread. If you want "weird", think of the fact that a baby named after Hurricane Katrina would be old enough to vote in this year's US presidential election.




Slashdot was the GOTO site in the late '90s/early '00s for tech news...


This is a cool project; getting bash to run on it seems like a real milestone. I dabbled with my own unix-like OS right out of college in C (no rust in 2010) and it was great fun.

I'm jealous you were able to make time to get this far!



Please test your website on mobile.

The navbar takes like 33% of the screen estate and can't be removed.

I never understand why people want to make them sticky and steal valuable reading screen space. You can, if you want, always scroll to the top in like 300 ms.



75% zoom alleviated most of the ux issues I had


I agree, I have the Kill Sticky extension installed on my mobile browser and it works great for these kinds of situations.


> The navbar takes like 33% of the screen estate and can't be removed.

very svelte compared to most cookie notices.



What a cool little project. It's astonishing how far this can boot with less than a third of the syscalls of Linux implemented.

However, my guess is that the ones that are missing are the more complicated ones. The TTY layer, for example, looks rather basic at the moment. Getting this right will probably be a lot of work.

So don't hold your breath for Maestro running your Linux applications in the next 3 years or so (even without taking into account all the thousands of drivers that Linux has).



>It's astonishing how far this can boot with less than a third of the syscalls of Linux implemented.

It's a great project, but I don't find this ratio surprising at all. Any mature platform builds up logic to enable scenarios such that most things don't need most of the system. As the saying goes, no one uses more than 10% of Excel, but it's a different 10% for everyone.

You could implement 30% of Excel functions and probably have an engine which opens 99% of spreadsheets out there.....though if you wanted full doc compatibility you would still have a long journey ahead of you.



> You could implement 30% of Excel functions and probably have an engine which opens 99% of spreadsheets out there.....though if you wanted full doc compatibility you would still have a long journey ahead of you.

Isn't this effectively what Google Docs did? For a ton of use cases Google Sheets is enough. I've heard of companies that were extra stringent about Excel licenses (as a cost-cutting measure, no doubt), instead heavily pushing users toward Google Sheets.



No. Excel/Docs is user-facing. Most of the way is enough, people will figure out the edge cases or just give up on them. To be viable as an OS, you have to support applications. An application running on your OS won't go "well, I guess this syscall doesn't work. I'll try to go around it somehow"...


> What a cool little project. It's astonishing how far this can boot with less than a third of the syscalls of Linux implemented.

My hobby OS is more or less a FreeBSD compatible kernel for one specific language VM[1]; it looks like I support 61 syscalls out of 424, and it's been a while since I ran across one I missed (sometimes syscalls are only called in some code paths, or when I target a newer kernel, there may be newer syscalls)

There are a lot of syscalls, and some of them are pretty esoteric; eventually a fully open-ended replacement will get to most of them, but a third is a good start.

[1] I wanted VM on metal and/or boot to VM, and it became apparent that this is the least effort way to get there, other than probably just having init=/path/to/the/vm; but that doesn't get me what I really want (hardware drivers and tcp stack in the VM language).



Is there maybe a subset of Linux applications that it could run soon? A proxy, nfs, some database server, http server, firewall?

I think it doesn't need to run Steam, libreoffice and Firefox to be useful. Many parts in a common server or microservices architecture are relatively simple in what they do and would probably benefit a lot from a safe, simple kernel.



> Is there maybe a subset of Linux applications that it could run soon? A proxy, nfs, some database server, http server, firewall?

You first need to port drivers for your -specific- network and io chipset. And if you want adoption and performance you also need the manufacturer on board. My guess is not quite soon.



A good first target is a VM.


That's actually a pretty huge market.


And virtio will get you really far in hypervisor compatibility as well.


Indeed, as I stated in the blog post, I am not very far from being able to run a text editor such as Vim or a compiler :)


It'd be cool to see it get "security/penetration tested/fuzzed" to see if "just because it's in Rust, is a good hacker able to find anything"


Rust is not able to shield you from everything. On top of that, there is a lot of unsafe code. I am 100% sure that fuzz-testing or pentests would reveal a lot of issues. This is not something I have had time to work on yet.


There's also Kerla [1] (a monolithic kernel in Rust, aiming for Linux ABI compatibility), but that seems to have gone dormant for a few years.

[1] https://news.ycombinator.com/item?id=28986229



Or Redox OS, which is still around: https://www.redox-os.org/. It has a microkernel design, and it is probably a bit more mature. It is also MIT licensed, so there is probably some opportunity for code sharing.


Last time I had a look at it Redox didn't (even want to) implement Linux ABI compatibility.


Seems that the project is dead. The repository has not received any commits in two years.




The redox-os post mentions cosmic desktop, and future wayland support, which may now already be almost implemented?

The System76 blog appears to have updates regarding COSMIC DE: https://blog.system76.com/post/the-spirit-of-cosmic-december...

Components of Cosmic Desktop Rust-based Desktop Environment: https://github.com/pop-os/cosmic-epoch#components-of-cosmic-...

cosmic-comp/src/wayland/handlers https://github.com/pop-os/cosmic-comp/tree/master_jammy/src/...



They were talking about Kerla but replied to the wrong comment.


I think this had already been attempted by the now-discontinued project [Kerla](https://github.com/nuta/kerla).


I didn't know about this project. I will check it out!


Sounds like a fun project. Curious though: most of the drawbacks to using C and difficulties with developing an OS are around debugging.

I assume that the switch to Rust eliminated a certain class of memory error but is debugging still a pain? Or is there less of it than before the switch making debugging more tolerable?



A lot of memory and concurrency issues have been eliminated. It is still a pain to debug, but a lot less than it was before though.

As an example, there is little chance you forget to use a mutex, since the compiler would remind you with an error.

This is not a silver bullet though; things such as deadlocks are still present, especially with interrupts.

To give an example: if you lock a mutex and then an interrupt fires, the code that holds the lock stops running until the interrupt handler is done. If the handler itself tries to lock the same mutex, you have a deadlock, and the type system cannot help you with this kind of problem.

The solution is to disable interrupt handling while the mutex is locked, but the compiler cannot enforce it.



If you’re willing to implement your own mutex, it actually is possible to enforce! You could make disabling interrupts emit a token and then require the mutex to accept that token as a parameter to its locking behavior.
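
For the curious, a rough sketch of that idea in Rust (all names invented here; this is not Maestro's actual API). The token can only exist while interrupts are off, the mutex only locks against the token, and tying the guard's lifetime to the token keeps the lock from outliving the interrupts-off section:

    use core::cell::UnsafeCell;
    use core::marker::PhantomData;
    use core::sync::atomic::{AtomicBool, Ordering};

    /// Proof that interrupts are disabled on this CPU. The raw-pointer
    /// PhantomData makes it !Send + !Sync, so it cannot leak into
    /// another context.
    pub struct IntsOff(PhantomData<*const ()>);

    // Stand-ins for the arch-specific cli/sti (or equivalent).
    unsafe fn disable_interrupts() {}
    unsafe fn enable_interrupts() {}

    pub fn without_interrupts<R>(f: impl FnOnce(&IntsOff) -> R) -> R {
        unsafe { disable_interrupts() };
        let token = IntsOff(PhantomData);
        let r = f(&token);
        unsafe { enable_interrupts() };
        r
    }

    pub struct IntSafeMutex<T> {
        locked: AtomicBool,
        data: UnsafeCell<T>,
    }

    unsafe impl<T: Send> Sync for IntSafeMutex<T> {}

    impl<T> IntSafeMutex<T> {
        pub const fn new(value: T) -> Self {
            Self { locked: AtomicBool::new(false), data: UnsafeCell::new(value) }
        }

        /// Demanding the token makes "lock while interrupts may fire"
        /// a compile error rather than a runtime deadlock.
        pub fn lock<'a>(&'a self, _proof: &'a IntsOff) -> Guard<'a, T> {
            while self.locked.swap(true, Ordering::Acquire) {}
            Guard { mutex: self }
        }
    }

    pub struct Guard<'a, T> {
        mutex: &'a IntSafeMutex<T>,
    }

    impl<T> core::ops::Deref for Guard<'_, T> {
        type Target = T;
        fn deref(&self) -> &T {
            unsafe { &*self.mutex.data.get() }
        }
    }

    impl<T> Drop for Guard<'_, T> {
        fn drop(&mut self) {
            self.mutex.locked.store(false, Ordering::Release);
        }
    }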


But what forces you to declare the correct interrupt policy, given that it may not be the same for all uses of even the same mutex?


How would you enforce use of (only) the correct kind of mutex in interrupt context?


In Haskell this should be "easy" to do using monads. You would have a process monad and an atomic monad (using Linux terminology). The operation to lock a sleeping mutex would live in the process monad. It would be possible to lift an atomic computation into process context (disabling interrupts) but not the other way around.

I don't think Rust has such a concept of different computation contexts (does it?)

Of course Haskell may not be the most suitable kernel implementation language for other reasons...



You would probably need to pass around a context type that encodes information about the current context and which interrupts are possible. You would then acquire the lock via that context, which would handle disabling those interrupts.


Still need to set the context correctly on entry to your interrupt handlers, I guess, and passing it around is kind of ugly. In C kernels you often stash contexts in CPU-local variables; maybe you could do something like that instead of passing it around.


I suspect those sorts of liveness properties (and likely some safety properties in unsafe code) cannot be encoded in Rust's type system, and you'd have to use a model checker at some point.

Still, it's cool to see such a system used and providing immediate benefits. Happy hacking!



"Since the OS started as a school project, I had to cut corners in order to finish it on time. But now is the time to pay back the technical debt I accumulated."

Please try to keep this mindset.



Tangent, but I love this Gource thing that the author made the contribution video with. I'd never seen it before but had an idea to try making something like it a couple of years back - no original ideas it seems!


On my side, I discovered it a while ago with this video: https://www.youtube.com/watch?v=zRjTyRly5WA


I really like the idea of building a kernel, especially for learning purposes. Curious about the resources you used to understand the whole kernel/OS thing.




Awesome, thank you!


Syscalls are easy. Drivers will be tough.


A great number of Linux hosts run in virtual machines, which reduces the number of different device drivers needed for that purpose.

For running on bare iron... I suppose there's no short-term solution for that.



Drivers are difficult if you need to support lots of them. If you pick just one or a few pieces of hardware then it should be fairly straightforward. Target VMs only for example and you probably cut away 99% of the driver complexity.


Drivers are the tough part and the lack of a stable interface in Linux makes them hard to reuse.


There have been attempts to create kernel-agnostic interfaces for drivers such as: https://en.wikipedia.org/wiki/Uniform_Driver_Interface

For my case, I am planning to re-implement them. I like doing this.

I sure am not going to be able to re-implement everything myself, though. I will concentrate on what I need, and I will consider implementing others if anyone besides me is willing to use the OS (which would be incredible if it happened).



Or try to adopt the Fuchsia/Android model of userspace drivers; maybe there is something reusable from them.


You can implement a virtual machine monitor (e.g. KVM) and then launch a Linux virtual machine to run drivers you lack.


Or NetBSD drivers via rump:

https://github.com/rumpkernel/wiki



Doesn't FreeBSD borrow graphics drivers from Linux? If I'm remembering that right, it can't be quite that bad.


People who want stable interfaces should not touch anything Linux with a ten-foot pole.


Care to elaborate on this?

I clearly understand nothing of this, but I've always felt confused about it. Why won't Linux aim for ABI stability? Wouldn't that be a win for everyone involved?





The Linux Kernel Driver Interface

(all of your questions answered and then some)

https://github.com/torvalds/linux/blob/master/Documentation/...



Cyclic logic that says you're wrong for wanting a stable kernel interface, because the kernel keeps changing so the solution is to just get your code merged into mainline. As a tautology, it's true, but it's also a cover for "because we don't want to".

See Windows or android GKI for existence proof that it can be done if so motivated.



From what I understand, I think the big difference here is the human factor: Windows and Android are maintained by employees, who have no choice but to work on things even if they don't like doing it. Linux, on the other hand, is a collective effort of people doing what they want to do in their free time.


That's a myth. Most Linux contributions come from paid employees of various companies, not unpaid volunteers.


Great to see that Greg Kroah-Hartman dedicated a whole article to answering my questions. Thanks!


TL;DR: maintaining a stable driver ABI is more work because you have to deal with backwards compatibility, and it mainly benefits vendors that don't make their drivers open source.

So the Linux devs are really against it both from a lack of resources point of view, and from an ideological "we hate closed source" point of view.

Unfortunately, most vendors with closed source drivers don't give a shit about ideology and simply provide binaries for very specific Linux releases. That means users end up getting screwed because they are stuck on those old Linux versions and can't upgrade.

The Linux devs have this strange idea that vendors will see this situation as bad and decide that the best option is to open source their code, but that never happens. Even Google couldn't get them to do that. This is one of the main reasons that Android OS updates are not easy and universal.



That's only in terms of the driver interface, right? My understanding is that the userspace interface is extremely stable.


Depends on systemd's considerations these days.


Unless you consider systemd's security features (filtering available syscalls), there's literally no interposition between apps and the kernel.
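
For a concrete picture of that filtering: systemd units can declare a syscall allowlist per service. A minimal illustration (not a recommended production policy):

    [Service]
    # Allow only the curated @system-service syscall set;
    # anything outside it fails with EPERM instead of reaching the kernel.
    SystemCallFilter=@system-service
    SystemCallErrorNumber=EPERM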


Aren't most drivers kernel modules? In theory, the goal to aim for is that Maestro is able to compatibly load C Linux kernel modules. Then, whether or not the driver module is written in C or Rust is orthogonal to which kernel is used.

(Just bs'ing here, haven't written drivers in over a decade. What other complexity am I missing?)



Your sibling comments and responses to them about ABI stability talk about why this is not possible. For example,

https://news.ycombinator.com/item?id=38853047



> Computers are amongst the most complex tools that humanity has ever built.

This does not bode well for computers.



This is obviously impressive. Did you think from the beginning that monolithic/module-based like Linux was the way to go, or did you consider making it a hybrid/microkernel?


The monolithic/module thing was imposed by the subjects at my school (since it started as a school project).

However, a part of me feels like it could make sense to do a big refactor to turn all of this into a microkernel. I am not willing to do that until I have a plan to make it right, though.

By the way, the 32-bit thing was imposed by the school too. I am now wondering whether it is still relevant to support it, or whether to just go 64-bit only...



Great. Hope he keeps doing this until he finds enough supporters


Compatible means "syscall compatible" (I get that from the article). I wonder if it also means kernel-module compatible (I don't think so, as the API touch-point surface is much larger), but if it strives to be, that'd be great (use all hardware that works on Linux).


Not even Linux itself is kernel module compatible from version to version, so it would be exceedingly difficult to try to be compatible with it.


Sure, but the benefits of being even partly compatible (same structs with same names, etc. -- or maybe some compatibility layer) are great, as Linux device drivers can then be ported more easily.

On one hand, device drivers in Rust are now possible; on the other, the Maestro kernel. I wonder if there will come a day in my life when I run a non-C kernel in prod / on my dev laptop.



I hope that there would be architectural improvements possible that would not be realistic for current Linux, and implementing those changes would also make the internals—and thus the kernel module interfaces—look quite a bit different.


Dunno what I hope for more: better internal architecture or more HW compatibility. I think the latter drives adoption more than the former.


A memory safe linux kernel would be a fairly incredible thing. If you could snap your fingers and have it, the wins would be huge.

Consider that right now a docker container can't be relied upon to contain arbitrary malware, exactly because the Linux kernel has so many security issues and they're exposed to containers. The reason why a VM like Firecracker is so much safer is that it removes the kernel as the primary security boundary.

Imagine if containers were actually VM-level safe? The performance and operational simplicity of a container with the security of a VM.

I'm not saying this is practical; at this point the C version of Linux is here to stay for quite a while, and I think, if anything, Fuchsia is the most likely successor (and is unlikely to give us the memory safety that a Rust kernel would). But damn, if Linux had been built with safety in mind, security would be a lot simpler. Being able to trust the kernel would be so nice.

edit: OK OK. Yeesh. I meant this to be a hypothetical, I got annoyed at so many of the replies, and this has spiraled. I'm signing off.

I apologize if I was rude! Not a fun start to the morning.



Memory safety isn’t why containers are considered insufficient as a security boundary. It’s exposing essentially the entire Linux feature surface, and the ability to easily interact with the host/other containers that makes them unsafe by themselves. What you’re saying about VMs vs containers makes no sense to me. VMs are used to sandbox containers. You still need to sandbox containers if your kernel is written in rust

Even just considering Linux security itself: there are so, so many ways OS security can break besides a slight (you’re going to have to use unsafe a whole lot) increase in memory safety



The culture around memory-safe languages is a positive improvement for the programmer zeitgeist. Man, though, the overreach all the way to "always safe forever" needs to be checked.


JS and Rust are memory safe languages with a culture of pulling in hundreds if not thousands of dependencies. So unfortunately, in terms of culture, at least those languages are not Pareto improvements.


Also a lot of "in Rust" rewrites include a substantial amount of unsafe FFI calls to the original C libraries...


Can you expound in this some? I am not fully grasping your point. Are you saying "building safe by default" is a bad thing or assuming "safe forever" is a bad thing. Or are you saying something entirely different?


I expect it's likely more of "memory safety in a language doesn't make it _safe_, it makes it less vulnerable". It removes _some_ issues, in the same way that a language with static types removes some ways a program can be wrong; it doesn't make it correct.


The problem is the word "safe," which is inherently ambiguous. Safe from what? A better term would be "correct," because at least that implies there is some spec to which the developer expects the program to conform (assuming the spec itself is "correct" and devoid of design flaws).


While I agree with your point in general, I don't think the choice of wording is "the problem" here. But I do agree your wording is more correct.

It is almost impossible to prove a program is "correct". But things like memory safe languages, static typing, and automated tests allow us to prove the ways in which it is not "incorrect". And the more places it is not incorrect, the more likely it is to be correct (even if we can't prove it).



I said "the idea of using memory safe languages is great!" And "using memory safe languages does not eliminate attack surface". (It's pre coffee here so I appreciate your probe)

I meant that it's over-reach to say it's completely trustworthy just bc it's written in a GC/borrow checked language.



The premise of my post was "imagine a memory safe kernel". I repeatedly use the word "imagine".


The disagreement is that you wrote "imagine a memory safe kernel" but appear to have meant "imagine a kernel with zero vulnerabilities of any kind", and those things are not equivalent.


serious question: how much additional safety do you get over best practices and tooling in modern c++?


It's not possible for me to say.

Clearly you can only do worse in Rust than you'd have with perfect C. But what's that?

The question is: what is the expected loss (time, bugs, exploits that lead to crashes or injury or death or financial catastrophe) with Rust vs other languages.

Unfortunately that's not the conversation we have. We instead have absolutism re managed memory, which does account for about half of known security flaws that have been discovered and patched. Removing half of bugs in one fell swoop sounds amazing.

It's not that we can't remove those bugs other ways. Maybe modern c++ can cut bugs in half at compile time too. But Rust seems nicer to play with and doesn't require much more than what it has out of the box. Also it's shiny and new.

Given that Rust is making its way into platforms that Cpp struggled to get into, it's potentially moot. I sincerely doubt Linux will accept Cpp, but we're on the cusp of Rust in Linux.



Rust has better diagnostics when things do not go according to plan. Just like the automobile industry greatly improved diagnostics for repairing cars over the last decade, the same should happen in software.

Rust and Cpp have the same characteristics when it comes to speed (both are very fast) and executable size (both minimal), and both are reasonably high-level compared to, say, C.

When it comes to program correctness and diagnostics, one is better than the other, and it is the one created in the last 10 years. The Linux kernel should definitely move past C, to Rust, Zig or something like that.



> Clearly you can only do worse in Rust than you'd have with perfect C.

Is this clear? Why would the best Rust be worse than the best C?



I assume the parent probably meant that the worst you can do in Rust (e.g. use of unsafe, etc.) would just put you back at C's level of memory safety.


Just the other day they were going full kumbaya over a holiday-time-released feature that's likely to greatly increase the likelihood of introducing race conditions and deadlocks.

https://news.ycombinator.com/item?id=38721039



I know it's the curmudgeon NIMBY in me, and I love Rust and use it daily, but it's starting to feel like the worst of the JS crowd scribbled in grandpa's ANSI C books just to make him mad.

I am super happy with most features, but demand is demand and people demand runtimes. As a chimera it perfectly represents the modern world.



To be clear you're talking about async fn and impl trait in return position in traits? If so, how does that impact the likelihood one way or the other of race conditions or deadlocks?


This is such a bizarre take on the stabilization of async that I can't even tell if you are being serious or just hate Rust.


How do you figure?

Is it just because it makes async possible?



https://rust-lang.github.io/async-book/03_async_await/01_cha...

Is this all information that should be communicated to a programmer, or is it something that should be automatically assessed by a compiler, with clear responses to all possible situations specified, black and white, in the code?

If so, if it is the responsibility of the compiler, why release a version of the language, stabilize a spec, where that's not the case?



[flagged]



> Memory safety is the primary issue with containers as a security boundary.

I don't know where you're getting your information, but this is NOT the consensus on the LMKL, among most Linux kernel people or at any serious large scale tech company.

If you wish to learn about this stuff, lwn.net has a good series of articles on the problems people are actually working on. Most of the problems are related to namespace confusion, privilege escalation through, e.g. block-level access to the filesystem, etc.

> Sometimes the issues are not memory safety ones! But many, most are.

Huge citation needed. In 15 years of security, 8 in Linux kernel security, I have seen maybe one practical exploit related to containers that boiled down to a C-level memory issue.

> That's on you, but I'd be happy to explain more to you if you have questions.

No, you're very confidently stating things that are at the very least debatable. This thread has people doing kernel security as a day job.



Respectfully, do you know what a container actually is? (I’m guessing you think it’s docker, which is a common misconception)

The kernel itself does very little to prevent containers from interacting with the host (yes, via syscalls) in a way that affects other containers or the host itself. Containers are not insecure/composed with VMs to protect against memory safety issues so much as to implement sandboxing preventing these syscalls from doing bad shit.



> Respectfully, do you know what a container actually is?

I am extremely familiar with containers, the linux kernel, and virtual machines. In particular from a security perspective.

> The kernel itself does very little to prevent containers from interacting with the host (yes, via syscalls) in a way that affects other containers or the host itself.

Namespaces, such as process namespaces, file namespaces, user namespaces, etc, will prevent a container from interacting with another container without even getting into the fact that you can leverage DAC to do so further.



>> Memory safety isn’t why containers are considered insufficient as a security boundary.

> Memory safety is the primary issue with containers as a security boundary.

"No it isn't" "Yes it is" "No it isn't" "Yes it is"



This is getting a bit defensive. I think people are interpreting your post as saying all safety is guaranteed by using memory safety, but you rightly walk it back in comments to mean it addresses "primary" security problems.

That's it.



It's just silly. I wrote an "imagine if we could trust the kernel as a boundary" and I get 100 posts about the same misconceptions. If people read into my post that I think a Rust kernel would solve all problems, perhaps I was overly simplistic with my language.


People here are overly nitpicky, your language was fine.


I don’t think people are nitpicking. The issue of container vs VM security is unrelated to memory safety, so bringing them into the discussion here, even if just in a hypothetical, is odd. You could say the language might be misleading, but Occam’s Razor would suggest they’re just conflating the two topics.

Frankly, their comments since (especially those around namespacing) don’t do much to convince me that they understand the distinction either. So it’s not just the opening post.

The best thing they could do in this situation is to ask those who reply detailed questions about why people disagree, rather than stating that they do understand and everyone else misunderstands them while continuing to post the same misconceptions phrased differently. At least if they asked questions, there’s a chance they might learn something.



If you find this amazing, perhaps you should take a look at seL4, which has formal proofs of correctness, going all the way down to the generated assembly code still satisfying the requirements.

It also has a much better overall architecture, the best currently available: A third generation microkernel multiserver system.

It provides a protected (with proof of isolation) RTOS with hard realtime, proof of worst case timing as well as mixed criticality support. No other system can currently make such claims.



Ok, but can I run a desktop on it? Not knocking seL4, it's damn amazing, but it's not exactly a Linux killer.


I think it's possible to run Genode[1] as a desktop on top of seL4 (Genode supports different kernels). However, I'm struggling to find a tutorial to get that up and running.

[1] https://en.wikipedia.org/wiki/Genode



Genode runs on sel4 and has a desktop gui.


Eh, seL4 has a suite of tools that turn their pile of C and ASM into an obscure intermediate language that has some formally verifiable properties. IMO this is just shifting the compiler problem somewhere else, into a dark corner where no one is looking.

I highly doubt that it will ever have a practical use beyond teaching kids in the classroom that formal verification is fun, and maybe nerd-sniping some defense weirdos to win some obscene DOD contracts.

Some day I would love to read a report where some criminal got somewhere they shouldn't, and the fact that they landed on an seL4 system stopped them in their tracks. If something like that exists, let me know, but until then I'm putting my chips on technologies that are well known to be battle tested in the field. Maestro seems a lot more promising in that regard.



See here[0] one of Gernot Heiser's comments (part of the seL4 foundation) talking about how "there are seL4-based devices in regular use in several defence forces. And it's being built in to various products, including civilian, eg critical infrastructure protection".

There is also an interesting case study[1][2] where seL4 was shown to prevent malicious access to a drone. Using seL4 doesn't necessarily make an entire system safe but for high security applications you have to build from the ground up and having a formally proven kernel is the first step in doing that.

I have been fortunate enough to play a small role in developing some stuff to be used with seL4 and it's obvious that the team are passionate about what they've got and I wish them the best of luck

0 - https://news.ycombinator.com/item?id=25552222

1 - https://www.youtube.com/watch?v=TH0tDGk19_c

2 - http://loonwerks.com/publications/pdf/Steal-This-Drone-READM...



seL4 actually has an end-to-end proof, which proves that the final compiled binary matches up with the formal specification. There are not many places that bugs can be shifted---probably the largest one at this point is the CPU itself.


> I highly doubt that it will ever have a practical use beyond teaching kids in the classroom that formal verification is fun, and maybe nerd-sniping some defense weirdos to win some obscene DOD contracts.

Uh, perhaps take a look at the seL4 foundation's members[0], who are using it in the wild in very serious scenarios.

You can learn more about them as well as ongoing development work in seL4 Summit[1].

0. https://sel4.systems/Foundation/Membership/home.pml

1. https://sel4.systems/Foundation/Summit/home.pml



I wish L4 had taken off for general purpose computing. That and Plan9 are things I'd really like to try out but I don't have space to fit operating systems in amongst the other projects. They both strike me as having the Unix nature, either "everything is messages in userspace processes" or "everything is a file."


I don't think I've ever seen an argument for why "everything is a file" is a desirable thing. It seems like a kitchen where "everything is a bowl". Fine when you want to eat cereal, mix a cake, do some handwashing up, store some fruit; tolerable when you want to bake a cake in the oven. Intolerable when you've got a carrot in a bowl-shaped chopping board and you've got to peel and cut it using a bowl.

Why, in principle, should everything we do on computers behave and be shaped like a file?



the file is inconsequential. it could be any other universal abstraction, e.g. HTTP POST.

it's just something that every program running on a computer knows how to do, so why bother with special APIs you have to link against if you can just write to a file? (note you can still develop those layers if you wish, but you can also write a device driver in sh if you wish, because why not?)



>the file is inconsequential. it could be any other universal abstraction

Bad abstractions are notoriously problematic, and no abstraction is fit for every purpose.



So everything-is-a-file is a bit like REST - restrict the verbs, and move the complexity that causes into the addressing scheme and the clients.


I don't think the metaphor applies because kitchen utensils' forms are dictated by their purpose, while software interfaces are abstractions.

A fairer analogy would be if everything in the kitchen was bowl-shaped, but you could do bowl-like actions and get non-bowl behavior. Drop the carrot in the peelbowl and it is peeled, drop the carrot in the knifebowl and it is diced, drop the cubes in the stovebowl and they are cooked. Every manipulation is placing things in bowls. Every bowl is the same shape which means you can store them however is intuitive to you (instead of by shape).



The counter would be that different tasks are different and that's why everything isn't bowl shaped. Analogously, programming languages which are strongly opinionated (purely functional, array-based, strictly static) are far less generally popular than programming languages which are multi-paradigm and flexibly typed.

People have access to user-mode filesystems and FUSE, people could have made everything have a file-like interface by now if that was desirable - apparently it isn't. And because different tasks are different, it makes sense that they would fit different abstractions and interfaces, wouldn't it?



> I don't think I've ever seen an argument for why "everything is a file" is a desirable thing.

A file system is a tree of named objects. These objects are seamlessly part of the OS and served by a program or kernel driver called a file server which can then be shared over a network. Security is then handled by file permissions so authentication is native through the system and not bolted on. It fits together very well and removes so much pointless code and mechanisms.

A great example is an old uni demo where a system was built to control X10 outlets and switches (early home automation gear). Each device was a file in a directory tree that represents a building, with subdirectories for floors and rooms - e.g. 'cat /mnt/admin-building/2fl/rm201/lights' would return 'on' or 'off' (maybe it's a dimmer and it's 0-255, or an r,g,b value or w/e, sky's the limit, just put the logic in the fs). To change the state of the lights you just echo off >/mnt/admin-building/2fl/rm201/lights.

Now you can make a script that shuts all the lights off in the building by walking directories, looking for "lights", then writing off to those files. Maybe it's a stage, all your lights are on DMX, and you like the current settings, so you 'tar -c /mnt/auditorium/stage/lighting|gzip >student_orientation_lighting_preset.tar.gz' and do the reverse, over-writing all the archived settings back to their respective files. You could even serve those files over SMB to a Windows machine and turn lights on and off using Notepad or whatever. And the file data doesn't have to be text; it could be binary too. It's just that some things, like the state of lights or temperature, can easily be stored and retrieved as human-readable text.
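
To make the walking concrete, a rough sketch in Rust (the mount point and file names are the hypothetical ones from above):

    use std::ffi::OsStr;
    use std::fs;
    use std::io;
    use std::path::Path;

    /// Walk the 9p-mounted building tree and write "off" to every
    /// file named "lights" -- the moral equivalent of running
    /// `echo off > .../lights` in a loop.
    fn lights_off(dir: &Path) -> io::Result<()> {
        for entry in fs::read_dir(dir)? {
            let path = entry?.path();
            if path.is_dir() {
                lights_off(&path)?;
            } else if path.file_name() == Some(OsStr::new("lights")) {
                fs::write(&path, "off")?;
            }
        }
        Ok(())
    }

    fn main() -> io::Result<()> {
        lights_off(Path::new("/mnt/admin-building"))
    }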

That is the beauty and power of 9p - it removes protocol barriers and hands you named objects, seamlessly integrated into your OS, which you can read/write using regular everyday tools. It's a shame so many people can't grasp it.



I like the example and the detailed explanation but "it works in some scenarios" is only enough to say that some things work well as files, it's not enough to convince that everything should be a file (or that everything should be the same, whatever the same is).

> "sky's the limit, just put the logic in the fs"

You can, but why is it better to do that? "just" put the logic in a Python script seems much simpler than putting it in a low level pretend filesystem driver, on the other side of an arbitrary interface you had to come up with to satisfy an ideology that everything should have to go through this interface, right?

Over Christmas I set some LED displays running, controlled by an RS-232 serial port. It would be possible to echo a line of text to a pretend filename and have it appear on the sign, but there is a command for whether the text is static or scrolling and one for whether the text appears on the top line or bottom line. That information has to be somewhere, either as different paths in the filesystem e.g. /mnt/sign/static/row/top vs /mnt/sign/scrolling/row/top or as formatted content inside the file, or both.

There's a command to change the colour of the subsequent text, which can be used anywhere in the message, many times over - so that can't easily go in the filesystem path so now there needs to be some intermediate structure which both the shell script and the filesystem driver speak, to be able to pass this command through the file interface. With a COM port in Windows, if one script opens it and another script tries to open it then there's a clear error that the port is in use. If a usermode filesystem driver held the com port open, what would happen if two scripts tried to write to these files? Does one get a mysterious "write failed" filesystem error to catch and debug instead of a proper relevant error message? Do they have to proxy error messages through /mnt/sign/meta/error ?

This is one of the most trivial things a computer can do, sending a short text, which is possible with echoing to a com port /dev/ttyS0, and the abstraction idea of making it a file is already creaking under the weight of the task, adding the burden of arbitrary path decisions and data formatting and task-specific escape codes and control sequences, and bodgy-error handling, while assisting with basically nothing - while you can find /mnt/* you get no help with that, no link to any documentation, no further discovery, no interactivity like you get from a command line binary program interface. Much much easier to put the sign's commands directly in a short Python script, isn't it? And other more complex tasks just get more and more intense, as the "plain text" files in /etc/ are an ad-hoc mix of formats and data types and parsing rules and symbolic links to /opt/ and other random places.

As I say in another comment, usermode filesystems never became the default or popular way to do things, why not?



Container vulnerabilities are rarely related to memory bugs. Most vulnerabilities in container deployments are due to logical bugs, misconfiguration, etc. C-level memory stuff is absolutely NOT the reason why virtualization is safer, and not something Rust would greatly improve. On the opposite end of the spectrum, you have hardware vulnerabilities that Rust also wouldn't help you with.

Rust is a good language and I like using it, but there's a lot of magical thinking around the word "safe". Rust's definition of what "safe" means is fairly narrow, and while the things it fixes are big wins, the majority of CVEs I've seen in my career are not things that Rust would have prevented.



> Container vulnerabilities are rarely related to memory bugs.

The easiest way to escape a container is through exploitation of the Linux kernel via a memory safety issue.

> C-level memory stuff is absolutely NOT the reason why virtualization is safer

Yes it is. The point of a VM is that you can remove the kernel as a trust boundary because the kernel is not capable of enforcing that boundary because of memory safety issues.

> but there's a lot of magical thinking around the word "safe"

There's no magical thinking on my part. I'm quite familiar with exploitation of the Linux kernel, container security, and VM security.

> the majority of CVEs I've seen in my career are not things that Rust would have prevented.

I don't know what your point is here. Do you spend a lot of time in your career thinking about hardening your containers against kernel CVEs?



This discussion has been had a thousand times over back when people said "chroot is not security boundary". Now people say "containers are not a security boundary", but they mean essentially the same thing.

The thing is, chroots are pretty secure, if you know what you're doing. As long as you run each process as a dedicated uid, with readonly filesystems, without access to /proc or /dev, bar any kernel exploit you should be safe.

The know what you're doing part was where the problems arose. And that's why chroot was considered insecure in practice. People generally put whole Linux installations in chroots, complete with bind mounts or suid binaries. Either way could be a way to get open file handles outside your filesystem, which would make any namespaces a useless spectacle.

Containers are like that. I've seen people doing all sorts of crazy bind mounts, leaving the docker socket accessible, sharing filesystems, or running processes as root.

The kernel exploits are something else, they exist too, and something you at least in theory would patch after they get known. But the sidechannels are a hundred times more prevalent, in any containerized workload that I've seen.

Most kernel exploits are also related to device drivers or file systems, and are often written by third parties. Microkernels were said to contain those by running most of them as processes. That's a good idea, at least in theory. In practice it's tricky because you are dealing with buggy hardware that has DMA access. Any mismatch between a driver and a hardware state risks a system hang, data loss, or security exploit.



> I don't know what your point is here. Do you spend a lot of time in your career thinking about hardening your containers against kernel CVEs?

Yes, I literally led a team of people at a FAANG doing this.

You're saying the easiest way to escape a container is a vulnerability normally priced over 1 million USD. I'm saying the easiest way is through one of the million side channels.



OK, I apologize if I was coming off as glib or condescending. I will take your input into consideration.

I'm not looking to argue, I was just annoyed that I was getting so many of the same comments. It's too early for all of this negativity.

If you want to discuss this via an avenue that is not HN I would be open to it, I'm not looking to make enemies here, I'd rather have an earnest conversation with a colleague rather than jumping down their throats because they caught me in the middle of an annoying conversation.



Same, re-reading my replies I realize I phrased things in a stand-offish way. Sorry about that.

Thanks for being willing to take a step back. I think possibly we are talking about two different things. IME most instances of exploitation are due to much more rudimentary vulnerabilities.

My bias is that, while I did work on mitigations for stuff like Meltdown and Rowhammer, most "code level" memory vulnerabilities were easier to just patch, than to involve my team, so I probably under-estimate their number.

Regardless, if I were building computation-as-a-service, 4 types of vulnerability would make me worry about letting multiple containers share a machine:

1. Configuration bugs. It's really easy to give them access to a capability, a mount or some other resource they can use to escape.

2. Kernel bugs in the filesystems, scheduling, virtual memory management (which is different from the C memory model). It's a big surface. As you said, better use a VM.

3. The kernel has straight up vulnerabilities, often related to memory management (use after free, copy too much memory, etc.)

4. The underlying platform has bugs. Some cloud providers don't properly erase physical RAM. x86 doesn't always zero registers. Etc.

Most of my experience is in 1, a bit of 2 and mitigation work on 4.

The reason I think we're talking past each other a bit is that you're generating CVEs, while I mostly worked on mitigating and detecting/investigating attacks. In my mind, the attacks that are dirt cheap and I see every week are the biggest problem, but if we fix all of those, and the underlying platform gets better, I see that it'll boil down to trusting the kernel doesn't have vulnerabilities.



You two seem to have figured this out, but as far as I can tell, the disconnect here is that the vast majority of security issues related to the separation difference between VMs and containers isn't due to container "escapes" at all. It's due to the defaults of the application you're running assuming it's the only software on the system and that it can run with any and all privileges. Lazy developers don't give you containers that work without running as privileged, the demand from users to use that application after migrating from a primarily VM-based IT infrastructure to a primarily container-based one is too great to simply tell them no, and if it's free software, you have no ability to tell the developers to do anything differently.

Discussions on Hacker News understandably lean toward the concerns of application developers and especially greenfield projects run by startups who can take complete control of the full stack if they want to. But running applications using resources partially shared by other applications encompasses a hell of a lot of other scenarios. Think some bank or military department that has to self-host ADP, Rocket Chat, a Git server, M365, and whatever other hundreds of company-wide collaboration tooling the employees need. Do you do it on VMs or containers? If the application in question inherently assumes it is running on its own server as root, the answer to that question doesn't really depend on kernel CVEs potentially allowing for container escapes.

If we're just reasoning from first principles, applications in containers on the same host OS share more of a common attack surface than applications in VMs on the same physical host, and those share more than applications running on separate servers in the same rack, which in turn share more than servers in separate racks, which in turn share more than servers in separate data centers. The potential layers of separation can be nearly endless, but there is a natural hierarchy on which containers will always sit below VMs, regardless of the kernel providing the containers.

Even putting that aside, if we're going to frame a choice here, these are not exactly kernels on equal footing. A kernel written in C that has existed for more than three decades and is used on probably trillions of devices by everything from hobbyists to militaries to Fortune 50 companies to hospitals to physics labs is very likely to be safer on any realistic scale compared to a kernel written in Rust by one college student in his spare time that is tested on Qemu. The developer himself tells you not to use this in production.

I think the annoyance here is it often feels when reading Hacker News that a lot of users treat static typing and borrow checking like it's magic and automatically guarantees a security advantage. Imagine we lived in the Marvel Multiverse and vibranium was real. It might provide a substrate with which it is possible to create stronger structures than real metals, but does that mean you'd rather fly in an aircraft constructed by Riri Williams when she is 17 that she built in her parents' garage or would you rather trust Boeing and whatever plain-ass alloy with all its physical flaws they put into a 747? Maybe it's a bad analogy because vibranium pretty much is magic but there is no magic in the real world.



I largely agree, but this seems quite unfair to Linux.

> But damn, if Linux had been built with safety in mind security would be a lot simpler. Being able to trust the kernel would be so nice.

For its time, it was built with safety in mind; we can't hold it to a standard that wasn't prevalent until ~20 years later.



*30 years...

Yes, we're that old.



I don't think it's that unfair, but I don't want to get into a whole thing about it; people get really upset about criticisms of the Linux kernel, in my experience, and I'm not looking to start my morning off with that conversation.

We can agree that C was definitely the language to be doing these things in and I don't blame Linus for choosing it.

My point wasn't to shit on Linux for its decisions, it was to think about a hypothetical world where safety was built in from the start.



Why not Ada? Sure, Rust didn't exist when Linux was first being built, but Ada did, and it had a number of memory safety features (not the same as Rust's, but still better than C).


Simple: when Linux was created, gcc was Free; Ada compilers were expensive...


> where safety built in from the start

Don't worry, in 30 years people will write the same thing about using Rust, assuming that Rust will still be in use 30 years from now.



Yeah, how naive we were building operating systems without linear and dependent types. Savages.


Apollo Computer is notable for having had a UNIX-compatible OS written in a Pascal dialect, as was Mac OS (migrating to C++ later on).


Clarification: the Apollo/Domain workstation series by Apollo Computer is meant here, not the Apollo space program, just to be sure.

The Pascal-based operating system is Aegis (later Domain/OS), which - together with UNIX - is a joint precursor of HP-UX: https://en.wikipedia.org/wiki/Domain/OS#AEGIS .



More like memory safer. A kernel necessarily has a lot of unsafe parts. See: https://github.com/search?q=repo%3Allenotre%2Fmaestro+unsafe...

Rust is not a magic bullet; it just reduces the attack surface by isolating the unsafe parts. Another way to reduce the attack surface would be to use a microkernel architecture, though that has a cost.
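
To make "isolating the unsafe parts" concrete, a minimal sketch (illustrative only, not code from Maestro, and x86-specific): the raw port read is unsafe and kept private, and the rest of the kernel only ever sees a safe wrapper whose soundness argument sits right next to the one unsafe block.

    use core::arch::asm;

    // Raw x86 port read. Unsafe: reading an arbitrary port can have
    // hardware side effects, so this stays private to the module.
    unsafe fn inb(port: u16) -> u8 {
        let value: u8;
        asm!("in al, dx", out("al") value, in("dx") port,
             options(nomem, nostack, preserves_flags));
        value
    }

    // Safe wrapper: 0x64 is the PS/2 controller status port, which is
    // harmless to read, so exposing this call is sound.
    pub fn ps2_status() -> u8 {
        unsafe { inb(0x64) }
    }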



You're not really illustrating your point well with the link. If you look through the examples, they're mostly trivial and there's no clear way to eliminate them. Some reads/writes will interact with hardware and the software concepts of memory safety will never reach there because hardware does not operate at that level.

Check a few of the results. They range from single assembler line (interrupts or special registers), array buffer reads from hardware or special areas, and rare sections that have comments about the purpose of using unsafe in that place.

Those results really aren't "look how much unsafe code there is", but rather "look how few, well isolated sections there are that actually need to be marked unsafe". It's really not "a lot" - 86 cases across memory mapping, allocator, task switching, IO, filesystem and object loader is surprisingly few. (Actually even 86 is overestimated because for example inb is unsafe and blocks using it are unsafe so they're double-counted)



Practically speaking, even with `unsafe`, exploiting Rust programs is extremely difficult. With modern mitigation techniques, you need to chain multiple vulnerabilities and primitives together in order to actually reliably exploit software.

Bug density from `unsafe` is so low in Rust programs that it's just radically more difficult.

My company (not me; Chompie did the work, all credit to her) took a known bug with super high potential (write arbitrary data to the host's memory) and found it extremely difficult to exploit (we were unable to): https://chompie.rip/Blog+Posts/Attacking+Firecracker+-+AWS'+...

Ultimately there were guard pages where we wanted to write and it would have taken other vulnerabilities to actually get a working POC.

Exploitation of Rust programs is just flat out really, really hard.



While I agree, do note that a significant portion of a kernel is internal logic that can be made much safer.


> Imagine if containers were actually vm-level safe? The performance and operational simplicity of a container with the security of a VM.

As far as I know, container security flaws from memory safety are of the same order of magnitude as security flaws coming from namespace logic issues, and you have to top that with hardware issues. I'm sorry, but Rust or not, there will never be a world where you can 100% trust running malware.

> Fuschia [...] is unlikely to give us the memory safety that a Rust kernel would

Well, being a microkernel makes it easier to migrate bit by bit, and not care about ABI.



> the order of magnitudes of container security flaws from memory safety is the same as security flaws coming from namespace logic issues,

Memory safety issues are very common in the kernel, namespace logic issues are not.



Isn't gVisor kind of this as well?

"gVisor is an application kernel for containers. It limits the host kernel surface accessible to the application while still giving the application access to all the features it expects. Unlike most kernels, gVisor does not assume or require a fixed set of physical resources; instead, it leverages existing host kernel functionality and runs as a normal process. In other words, gVisor implements Linux by way of Linux."

https://github.com/google/gvisor



Containers became popular because it doesn't make much sense to run full-blown virtual machines just to host simple single-process services.

You can lock down the allowed kernel syscalls with seccomp and go further with confining the processes with apparmor. Docker has good enough defaults for these 2 security approaches.
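
To sketch what that locking-down looks like, here is a deliberately tiny seccomp allowlist (illustrative only; Docker's default profile allows a few hundred syscalls), which you would load with docker run --security-opt seccomp=./profile.json:

    {
      "defaultAction": "SCMP_ACT_ERRNO",
      "architectures": ["SCMP_ARCH_X86_64"],
      "syscalls": [
        {
          "names": ["read", "write", "exit_group", "futex", "brk"],
          "action": "SCMP_ACT_ALLOW"
        }
      ]
    }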

Full-fat VMs are not immune to malware infection (the impact still applies to the permitted attack surface). You might not be able to easily escape to the host, but the risk is still there.



> Consider that right now a docker container can't be relied upon to contain arbitrary malware, exactly because the Linux kernel has so many security issues and they're exposed to containers.

No, Docker containers were never meant for that. Never use containers with an untrustable binary. There are Vagrant and other tools for that.



> if Linux had been built with safety in mind security would be a lot simpler

I'm replying simply because you're getting defensive with your edits, but you're missing a few important points, IMO.

First of all, the comment I quoted falls straight into the category of if only we knew back then what we know now.

What does it even mean "built with safety in mind" for a project like Linux?

No one could predict that Linux (which was born as a kernel) would run on billions of devices that people keep in their pockets and constantly use for everything, from booking a table at the restaurant to checking the weather, from chatting with other people to accessing their bank accounts. And that said banks would use it too.

Literally no one.

Computers were barely connected back then, internet wasn't even a thing outside of research centers and universities.

So, what kind of safety should he have planned for?

And to safeguard what from what and who from who?

Secondly, Linux was born as a collaborative effort to write something already old: a monolithic Unix-like kernel. Nothing fancy, nothing new, nothing experimental, just plain old established stuff for Linus to learn how that kernel thing worked.

The most important thing about it was to be a collaborative effort so he used a language that he and many others already knew.

Had Linus used something more suited to stronger safety guarantees, such as Ada (someone else already mentioned it), Linux wouldn't be the huge success it is now and we would not be having this conversation.

Lastly, the strongest Linux safety guarantee is IMO the GPL license, which all these Rust rewrites are conveniently swapping for more permissive licenses. That steers away from what Linux was, and still largely is: a community effort based on the work of thousands of volunteers.



> Lastly, the strongest Linux safety guarantee is IMO the GPL license, which all these Rust rewrites are conveniently swapping for more permissive licenses. That steers away from what Linux was, and still largely is: a community effort based on the work of thousands of volunteers.

There is nothing about permissive licenses which prevents the project from being such a community effort. In fact, most of the Rust ecosystem is a community effort just like you describe, while most projects have permissive licenses. There's no issue here.



> There's no issue here

Tell that to Microsoft.

The GPL protected Linux when it started to become a threat to the most powerful tech company on Earth at the time, famous for its "embrace and extend" tactics.

Rust had no such problem; it was started by a company that takes hundreds of millions every year (81% of its revenue in 2022) from one of the most powerful tech companies on Earth.



I genuinely have no idea what your point is.


Haven't Kata Containers solved this problem: https://github.com/kata-containers/kata-containers ?


Kata is an attempt at solving this problem. There are problems:

1. If using firecracker then you can't do nested virtualization

2. You still have the "os in an os" problem, which can make it operationally more complex

But Kata is a great project.



I didn't know Firecracker existed, that's really awesome. Looks to be in Rust as well. I'll have to look at how this differs from the approach that Docker uses, my understanding is that Docker uses cgroups and some other built-in Linux features.


I'm interested in reading more. Where can I find the blog posts?


https://web.archive.org/web/20221130205026/graplsecurity.com...

The company no longer exists so you can find at least some of them mirrored here:

https://chompie.rip/Blog+Posts/

The Firecracker, io_uring, and ebpf exploitation posts.

Chompie was my employee and was the one who did the exploitation, though I'd like to think I was at least a helpful rubber duck, and I did also decide on which kernel features we would be exploiting, if I may pat myself on the back ever so gently.



> Consider that right now a docker container can't be relied upon to contain arbitrary malware, exactly because the Linux kernel has so many security issues and they're exposed to containers

If you don't run Docker as root, it's fairly OK for normal software. Kernel memory safety is not the main issue with container escapes. Even with memory safety, you can have logic bugs that result in privilege-escalation scenarios. Is Docker itself in Rust?

Memory safety is not a magic bullet, and the Linux kernel isn't exactly trivial to exploit either these days, although still not as hardened as Windows (if you don't consider stuff like win32k.sys font parsing kernel space, since NT is hybrid after all), in my humble opinion.

> Linux had been built with safety in mind security would be a lot simpler

I think it was, given the resources available in 1993. But if Torvalds had caved in and allowed a microkernel or NT-like hybrid design instead of a hard-core monolithic Unix, it would have been a game changer. In 1995, Ada was well accepted and mainstream; it was memory safe, and even the Rust devs learned a lot from it. It just wasn't fun for developers to use (on purpose, so devs were forced to do tedious stuff to prevent even non-memory bugs). But since Linux is developed by volunteers, they used what attracts the most volunteers.

The main benefit of Rust is not its safety but its popularity. Ada has been running on missiles, missile defense, subways, aircraft, etc. for a long time, and it even has a formally verified subset (SPARK).

In my opinion, even today Ada is technically a better fit for a kernel than Rust, because it is time-tested and version-stable, and it would open up the possibility of easily formally verifying parts of the kernel.

Given how widely used Linux is, it would require a massive backing fund to pay devs to write something not so fun like Ada though.



> Kernel memory safety is not the main issue with container escapes.

I disagree, I think it is the primary issue. Logical bugs are far less common.

> the Linux kernel isn't exactly trivial to exploit either these days

It's not that hard, though of course exploitation hasn't been trivial since the 90s. We did it at least a few times at my company: https://web.archive.org/web/20221130205026/graplsecurity.com...

Chompie certainly worked hard (and is one of if not the most talented exploit devs I've met), but we're talking about a single exploit developer developing highly reliable exploits in a matter of weeks.



A single talented developer taking weeks sounds about right; that's what I meant by difficult. But you also have vulns that never get a CVE issued or an exploit developed because of kernel-specific hardening.

As for container escapes, there are tools like deepce:

https://github.com/stealthcopter/deepce

I can't honestly say I've heard of real life container escapes by attackers or pentesters using kernel exploits. Although I am sure it happens and there are people who won't update the host's kernel to patch it.



If I remember correctly, Ada was much slower compared to C. Stuff like bounds checks on arrays has a cost.


Runtime checks can be disabled in Ada. They’re useful for debug builds though!


But that eliminates the purpose of Ada. Rust has a better type system to deal with this.


I thought both Ada and Rust have good compile-time checks for memory safety that eliminate the need for runtime checks?
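For Rust at least, not quite: the borrow checker's guarantees are compile-time, but ordinary slice indexing is still a runtime check, which the optimizer may or may not eliminate. A small sketch of the options:

    fn demo(xs: &[u32], i: usize) -> u32 {
        // Plain indexing: bounds-checked at runtime, panics if i >= xs.len().
        let a = xs[i];
        // Iterating over the whole slice compiles without per-element checks.
        let total: u32 = xs.iter().sum();
        // Explicit opt-out, roughly analogous to disabling checks in Ada;
        // the caller must uphold the bound themselves.
        let b = if i < xs.len() { unsafe { *xs.get_unchecked(i) } } else { 0 };
        a + b + total
    }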


I like Rust and work in it full-time, and I like its memory-safety aspects, but I think it's a bit of a stretch to claim memory-safety guarantees of any kind when we're talking about low-level code like a kernel.

Because in reality, the kernel will have to do all sorts of "unsafe" things even just to provide for basic memory management services for itself and applications, or for interacting with hardware.

You can confine these bits to verified and well-tested parts of the code, but they're still there. And because we're human beings, they will inevitably have bugs that get exploited.

TL;DR: being written in Rust is an improvement but no guarantee against memory-safety issues. It's all in how you hold the tool.
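Concretely, "holding the tool" usually means keeping the raw hardware poke inside one small, audited unsafe block and exposing a safe API around it. A sketch, with a made-up MMIO register address:

    // Hypothetical memory-mapped UART data register; the address and
    // layout are invented for illustration.
    const UART_DATA: *mut u8 = 0x1000_0000 as *mut u8;

    /// Safe wrapper: callers never touch the raw pointer themselves.
    pub fn uart_write_byte(byte: u8) {
        // SAFETY: sound only if UART_DATA is a mapped device register.
        // write_volatile stops the compiler from eliding or reordering
        // the store, as MMIO requires.
        unsafe { core::ptr::write_volatile(UART_DATA, byte) };
    }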



Yep. And tooling to secure C has improved a lot in recent years; AddressSanitizer is a big improvement. I'm looking forward to C++ improving as a language itself, because it has already improved (smart pointers, RAII, a lot of edge cases regarding sequencing) and the committee seems willing to modify the actual language. This opens a path for projects to migrate from C to C++. A language inherits a lot from its introduction (strengths/weaknesses) but also changes a lot.

Every interaction with hardware (disk, USB, TCP/IP, graphics…) needs to execute unsafe code. And we have firmware. Firmware has probably been an underestimated issue for a long time :(

Aside from errors caused by undetected undefined behavior, all kinds of errors remain possible, especially logic errors, which are probably the biggest surface?

Example:

https://neilmadden.blog/2022/04/19/psychic-signatures-in-jav...

Honestly, I struggle to see the point in rewriting C++ code in Java just for the sake of doing it. Improving test coverage for the C++ implementation would probably have been less work and wouldn't have created the security issue in the first place.

That being said, I want to see #unsafe and #safe in C++. I want some hard check that the code only executes defined behavior, as modern compilers can do for Rust. The same applies to machine-dependent/implementation-defined code, which isn't undefined but can also be dangerous.



One of the inspirations for Rust, as I recall, was Cyclone: https://cyclone.thelanguage.org/

Which was/is a "safe" dialect of C; basically C extended with a bunch of the stuff that made it into Rust (algebraic datatypes, pattern matching, etc.) Though its model of safety is not the borrow checker model that Rust has.

Always felt to me like something like Cyclone would be the natural direction for OS development to head in, as it fits better with existing codebases and skillsets.

In any case, I'm happy to see this stuff happening in Rust.



I've responded to the central point of "there will still be 'unsafe'" here: https://news.ycombinator.com/item?id=38853040


Have there ever been any examples of malware/viruses jumping between levels like this?

I'm honestly interested to know, because it sounds like a huge deal here, and to my layman's ears very cool and sci-fi!



> a docker container can't be relied upon to contain arbitrary malware

"to not contain"?

Edit to contain (ahem!) the downvotes: I was genuinely confused by the ambiguous use of "contain", but comments below cleared that up.



They're using 'contain' to mean 'keep isolated'. If you put some malware in a docker container, you can't rely on docker to keep the rest of your system safe.


Got it, thanks.


Does the fact that Docker runs as root have something to do with it?


Yes, but even rootless containers rely on user namespaces, which are a recurring source of privilege escalation vulnerabilities in Linux.


The issue of root vs. rootless is unrelated to escaping the container. User namespaces lead to privescs because attackers who can enter a namespace and become root within that namespace get access to kernel functionality that is far less hardened (because upstream has never considered root -> kernel to be a privesc and, of course, most people focus on unprivileged user -> kernel privescs). The daemon running as root doesn't change anything there.
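To make the "become root within that namespace" part concrete, here's a minimal sketch (Rust with the libc crate) of what an unprivileged process can do on distros where unprivileged user namespaces are enabled:

    use std::fs;

    fn main() -> std::io::Result<()> {
        let euid = unsafe { libc::geteuid() };
        // Create a fresh user namespace; no privileges required here.
        if unsafe { libc::unshare(libc::CLONE_NEWUSER) } != 0 {
            return Err(std::io::Error::last_os_error());
        }
        // Map our real uid to uid 0 inside the namespace.
        fs::write("/proc/self/setgroups", "deny")?;
        fs::write("/proc/self/uid_map", format!("0 {euid} 1"))?;
        // geteuid() now reports 0: "root", but only for kernel interfaces
        // scoped to this namespace, which is the less-hardened surface
        // described above.
        println!("euid inside namespace: {}", unsafe { libc::geteuid() });
        Ok(())
    }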


No, it's because the malware would have direct access to the (privileged) Linux Kernel via system calls.


a docker image can’t be relied on to not contain malware and a docker container can’t be relied on to contain malware.


My two cents: make the license exactly whatever Linux uses, which is GPLv2-only IIRC.

If you do this then you'll never waste another moment discussing licenses for the rest of your life. It's just "because it's what Linux uses" to the end of time.

And even if there's some future question about license enforcement or whatever wrt gplv2, it will get decided within Linux/Linux Foundation/etc. and you just surf in on whatever happens without a care in the world.

Same with what-ifs about, say, code potentially going back and forth between your project and whatever part of Linux ends up written in Rust. With MIT you'll get GPL zealots and/or MIT trolls chatting your head off about legal things they don't understand. With GPLv2-to-GPLv2, it all gets optimized out. :)

In any case, MIT is a fine license, so use it if you have your reasons. But trust me, optimizing out low-effort discussions of software licenses is worth it if you can do it. :)



I love it and hope it will catch on.

It reminds me of what Linus Torvalds once said when asked about fearing competition, though.

From memory, his answer was something like: "I really like writing device drivers. Few people like that, and until someone young and hungry comes along who likes it, I'm not afraid of competition."



But GPU drivers require a fairly big team. We would need a GPU hardware programming "standard" first. And I would favor a RISC-V kernel instead, to avoid depending on a compiler for a super-complex syntax (Rust) and making the gcc-dependency mistake all over again. For this reason, biting the bullet and moving to a modern worldwide standard ISA would actually be the real move forward. We already have Linux and others tied to gcc extensions and very recent ISO tantrums (porting to non-inline assembly and back to C89/99 is carefully made unreasonable, not to mention assembly code paths tied to specific stack-alignment features of the compiler). Namely, there is a serious imbalance between the compiler's complexity and what it actually brings to the table.


What do you mean by a RISC-V kernel? One written in RISC-V assembly? Because that would be terrible.

RISC-V is an instruction set architecture; Rust is a programming language. You can port languages to target ISAs. Linux can already run on RISC-V. The ISA of the hardware and the language the software is written in are completely different issues.



Well, I think you are wrong, and that would actually be the real way forward: an assembly-written kernel using a worldwide standard ISA, aka RISC-V.

Of course, it would have to not abuse any preprocessor, because trading the complexity of a compiler dependency for the complexity of a preprocessor dependency would nullify everything.

Doing that in Rust is just making the Linux mistake all over again, actually even worse, since Rust's syntax is much more complex than C with gcc extensions.



What you are proposing is so terrible in practice that people have invented hundreds of programming languages to escape it. Millions of person-hours have been spent on getting away from what you say is desirable.


You are right in a legacy context where the mess of ISAs required an abstraction of the assembly language.

But where you are wrong: moving forward in a world with a modern worldwide standard ISA (RISC-V) actually means writing a kernel in assembly (without abusing any preprocessing).



People have had the ability to do that for decades, and they have almost always chosen not to when they had the option, because it's terrible. RISC-V is nothing revolutionary on that front, and doesn't magically make it better. Without preprocessing you won't have variable names or jump label names or strings, which is even more terrible.

Please give us some actual evidence, instead of just saying "you're wrong", because the position you're taking seems extreme and, frankly, bonkers.



I said without abusing the preprocessing, not without preprocessing at all.

My opinion is that this is wrong; I voiced my disagreement and gave my own view on the matter. If there is something extreme here, it is the karma slash for displeasing the pro-Rust people and AI bots on HN.



> "If there is something extreme here is the karma slash upon displeasing the pro rust people and AI bots on HN."

It's the refusal to back up your position with anything more than saying "you're wrong", "you're wrong", "you're wrong". You've repeatedly been asked to explain why RISC-V would be better, why RISC-V changes anything, and what you propose for how it would work (e.g. emulators), and you haven't. That's annoying, time-wasting, and downvote-worthy.



Again, this is wrong, actually a lie with regard to what I have been saying in this thread. I have been giving my opinion and explaining my views all along, in all parts of this thread.

This so bluntly disregards everything that I think you may very probably be an AI bot with a very small context window.



RISC-V assembly is not substantially easier to write than any other assembly. RISC assemblies in general (e.g. MIPS, Arm, RV64) require more instructions to accomplish the same tasks. I would argue they are designed with compilers in mind more than human authors. Older assemblies expected the programmer to write assembly directly, which is why they let you express your intent more directly than RISC. Humans of course can and do write assembly in all of these, but RISC-V has not somehow made assembly programming any safer or more portable than it ever was.


RISC-V is a modern load-store ISA. It is very clean, much more so than the mess of x86_64, for instance. It is actually nicer to write RISC-V than x86_64, even if the latter is CISC.


Even if this is true, which I agree it is, it's still 100x more painful than writing in a programming language and compiling to RISC-V. That also gives you the benefit of compiling to multiple archs.


This is where you are wrong: it will take more time, be a bit more painful, and require different training... but your code will be shielded against compiler and language complexity and planned obsolescence, which has stellar value in the long run.

Short-termists won't understand, as this is aimed at the long run, and that requires perspective on what has happened in software over the last decades.



> it will take more time, be a bit more painful, and require different training... but your code will be shielded against compiler and language complexity and planned obsolescence, which has stellar value in the long run.

This is a bit of a stretch. Not one ISA has lived longer than C without making any changes whatsoever, but I admit that this could at least be possible for some kinds of devices. What is utterly ludicrous is to suggest that people write code only in RISC-V because all hardware ever built in the future will somehow agree to use only RISC-V. If any hardware ever uses any other ISA, then we're back where we started, writing in higher-level languages that can compile to multiple ISAs. What do you believe will fundamentally change this?

Your argument about churn is flawed from a simpler angle anyway. When C, C++, or Rust evolves in the future, existing code continues to work. That's the very opposite of "planned obsolescence"; it ensures that past investment continues to pay future rewards. The ISO working groups and the Rust team go to enormous lengths to ensure code remains compatible while making new advancements. I don't know how much of these processes you've observed in the past, but if you're calling them "planned obsolescence" I would urge you to learn more about them before producing critiques.

An ISA can also continue to evolve in strictly backwards-compatible ways, but history has shown there are many reasons that multiple ISAs can coexist and that newer ones may supersede older ones without backwards compatibility. RISC-V may avoid some of those factors, but as it's barely been adopted in the real world at all yet, it's a bit premature to somehow presume it's the last word on ISAs and no new ISAs should ever be created ever again.



I agree that RV64 is easier to write than x86_64. My point was more that earlier ISAs were designed with human authors as the target audience, and that results in differences that are arguably (subjectively) easier to hand-write. Modern load-store architectures are cleaner, but also quite verbose. The 6502 is probably more representative of my point than 32-bit x86, and most things in the 64-bit era are more compiler-focused.


What makes RISC-V special here? You speak as if RISC-V has some fundamental differences that invalidates old limitations that keep us trapped. Why is this the case? I'm very curious, and don't understand.

Also, why does preprocessing matter so much?



RISC-V is a real, free, worldwide standard for a modern and good-enough ISA, and that changes everything; namely, "moving forward" is not going to be the same as in a legacy context with those locked, non-free ISAs.

Preprocessing does matter, because it would be pointless to get rid of compiler complexity only to take on preprocessing complexity instead.

For instance, the x86_64 assembler fasmg has the most powerful preprocessor out there... because the assembler itself is actually written in this preprocessor language! So it is very easy to "slip" and end up using this preprocessor so heavily that what you write is no longer really assembly code.



Are you proposing a kernel that would only run on risc-v hardware, or expecting that people would run some kind of emulator?

....or do you think that because RISC-V is "standard", assembly for RISC-V would run on any hardware?



The "right way" would be CPU vendors to support that standard. But I have thought about running a 64bits RISC-V interpreter on x86_64 (Mr Bellard, ffmpeg, tinycc, etc, wrote a risc-v emulator which could be as a based for that), and that in the kernel. Basically, you would have RISC-V assembly for x86_64 arch: at least, RISC-V here would be stellar more robust and stable that all the features creeps we have in the linux kernel because of the never ending gcc extensions addition and latest ISO C tantrums...


So the "right way" is to replace all hardware with new hardware, and the second-best solution is for CISC systems to emulate a specific RISC architecture? And you think this will be more maintainable, performant, etc? Do you have even a shred of evidence that this makes any sense at all, beyond "RISC is a good standard"?


You can get your assembly-written kernel right now for every OS out there: just compile Linux using a RISC-V backend of your favourite compiler.


This is severely wrong: you cannot compare hand-written and properly commented assembly with compiler-generated assembly.


Compare in what way? Hand-written and commented assembly will be much easier for a human to maintain. However, modern optimizers are much more likely to apply the correct micro-optimization tricks to get the best performance, and they can output different assembly for different variations of the same instruction set with ease, something that would make the hand-written assembly much more complex.


"Handwritten and commented assembly will be much easier for a human to maintain"

It seems some people here have issue acknowledging that.

But where you are wrong: for modern micro-archs, everything mostly happens at runtime. Specific micro-archs optimizations are not done anymore, the linux kernel do not bother anymore and is compiled for "generic" x86_64 for instance, it is not worth it (and may cause more harm in the end). Usually, you only just care of basic static optimizations, like cache line, fetch code window, alignment, which are more writing "correct" assembly code than anything else.

And even with that, in the worst case scenarios, one could write some specific micro-arch code paths, not an issue while thinking long term of many software components life cycle, which would be "installed"/"branched to" at runtime. At least that knowledge would not be hidden deep in the absurd complexity of an optimizing compiler...



> the Linux kernel does not bother anymore and is compiled for "generic" x86_64 for instance; it is not worth it (and may cause more harm in the end)

This is not the reason why. Indeed x86_64 has a much broader instruction & register baseline than i386 did, so the impact of per-CPU tuning is less than it used to be. But even a generic compiled Linux selects CPU instruction sets at runtime for things like hardware-accelerated cryptography, because those instruction sets actually matter. If you have evidence that modern microcode magically recognizes hand-written AES and replaces it with the equivalent AES-NI instructions, please be sure to send Linus your patches with benchmarks.
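The runtime-dispatch pattern looks roughly like this in Rust (the function names are placeholders, not a real AES implementation):

    fn encrypt(data: &mut [u8]) {
        #[cfg(target_arch = "x86_64")]
        {
            // Checked once at runtime: a "generic" build still takes the
            // AES-NI path when the CPU it lands on supports it.
            if is_x86_feature_detected!("aes") {
                return encrypt_aesni(data);
            }
        }
        encrypt_portable(data);
    }

    fn encrypt_aesni(_data: &mut [u8]) { /* would use core::arch AES-NI intrinsics */ }
    fn encrypt_portable(_data: &mut [u8]) { /* plain software fallback */ }

    fn main() {
        let mut block = [0u8; 16];
        encrypt(&mut block);
    }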



Unless you are building your own custom kernel (e.g. on Gentoo), CPU-specific optimizations are not worth it, as they are worse for any other CPU the code still runs on. Most software never had those micro-optimizations applied, because most software wants one build that runs on many different CPUs; but if you want to make an exception, it is still possible with compiled code.

While you can write those micro-optimizations for each CPU by hand, they are not worth the human cost except in very rare situations. In most cases you can't even measure the difference, as only a couple of CPU cycles are saved.



This is what I just said. Then we agree on that matter.


What do you mean? The most complicated parts of a GPU driver are in userland, handled by Mesa or equivalent. Kernel drivers expose a standard interface (DRM) which userspace drivers use to upload compiled GPU programs and manage GPU memory. Also, Linux is not tied to gcc extensions; it has been compilable with LLVM (clang) for a long time now.


What you said is mostly wrong.

The kernel part of the GPU driver is massive. This is well known; maybe you were misguided. For instance, the AMD GPU drivers are gigantic compared to the actual kernel.

clang (LLVM) is playing cat and mouse with the gcc extensions and recent ISO C tantrums that creep into the kernel: Linus T. does not resist those; he only resists breakers of the Linux userland ABI (syscalls).



It's also well known the driver is massive because there are hundreds of thousands of lines of auto-generated register access code in there. Not because it's inherently very complex.


And again you are wrong. This was debunked not long ago, and I think it was here on HN: the AMD drivers, kernel side, are still gigantic even excluding the generated register descriptions. Is this what we call an AI lie?

But here is where you may be right: it seems Nvidia's hardware programming interface is much cleaner than AMD's and may require much, much less code.



> We must have a GPU hardware programming "standard" first.

Isn't that what WebGPU has become?


The WebGPU API is very far removed from the hardware. It's the common subset of Vulkan, D3D12 and Metal, each of those APIs also being fairly high level abstractions over different GPU architectures.


I don't know about WebGPU, but it seems you missed the word "hardware" in "hardware programming interface", like NVMe is for non-volatile memory devices.


I was replying to the "Isn't that what WebGPU has become?".

The WebGPU programming model is already too high level for a "hardware programming interface". WebGPU is designed to sit on top of other 3D APIs, which in turn sit on GPU vendor drivers, and most of the complexity and 'hidden magic' is in those drivers.



oops! wrong reply button, my bad.


Small feedback: on mobile, the back button (and nav bar) blocks 1/6th of the page; it could probably use a bit less padding.


Thank you for the feedback. I will fix that!


MIT license? If by chance this evolves into something big, it will be eaten alive by commercial interests. Look at the conflict between Linux devs and Nvidia, for example. Look at the IBM/Red Hat stuff trying to circumvent the spirit of the GPL, if maybe not the text of it.

If it becomes a thing, the most active developers will be paid by corporations and they will not be sharing code with you when it suits them - which can be at the drop of a hat.

I'd recommend changing to GPLv3 while your number of contributors is low enough to do it. Otherwise you're just doing free work for your future masters.



It seems to me that your examples rather show the futility of trying to use a license to force good behaviour rather than a reason to change licenses.


The only reason there is one Linux kernel everyone uses is because of the license. If it wasn't GPL2 there would be "Microsoft Linux", "Google Linux", "Oracle Linux" all with different features and potential incompatibilities. At least with the GPL2 license those flavors have to contribute changes back upstream so everyone gets the benefits.


They don't have to contribute them upstream: they just have to give their users the permission to do so.


>> It seems to me that your examples rather show the futility of trying to use a license to force good behaviour rather than a reason to change licenses.

If not for the license there would be NO good behavior. Notice that nVidia is relatively Linux friendly with some exceptions and RedHat seems to be under pressure to make more money but is otherwise very Linux friendly. Without the license, all sorts of others would be blatantly ripping it off.

I contend the difference in popularity and success between the BSDs and Linux is most likely due to the GPL license.



The amount of hate for GPL on HN is disturbing.


>> The amount of hate for GPL on HN is disturbing.

I agree. I'm really baffled by the Rust community pretty much standardizing on the MIT license. People laugh at "Rewrite it in Rust", which I think is a good thing, but completely ignore the "strip users of their freedom" part that comes with it one day.



This FUD always comes up from people who support the GPL and it's so annoying. Let's say that Maestro takes off and gains popularity, and then someone decides to turn it into a commercial OS. What, exactly, is supposed to be so bad about that? Nobody loses a damn thing - if you want to use the MIT licensed original project, it hasn't gone anywhere. And for those who find that there's value in the commercial version, they have that available to them too. Everyone wins in that scenario.

There's no loss of freedom with permissive licenses.



> What, exactly, is supposed to be so bad about that?

The eventual users of the hypothetical commercial OS lose their freedom. (Or don't get their freedom, depending on your point of view.) Permissive licences grant users freedom, but don't preserve those freedoms in derivative works, so the original work can end up helping derived software disrespect users' freedom. For those who believe that users' freedom should always be respected, this is a very bad thing that needs to be avoided.



Copyleft protects against embrace-extend-extinguish techniques.

It ensures everyone benefits from improvements, not just users who are willing to pay bigco.



But... in fairness, it's not feasible to extinguish a random person's school/hobby project.

Even if it "takes off", how can a commercial entity extinguish a person's motivation to keep building something they want?



> I'd recommend changing to GPLv3

I'd recommend AGPLv3, to avoid the Windows 365 loophole. (As I understand, you'd still be able to run a web server without sharing the source code of the kernel.)



Writing alternatives to GPL software under MIT/Apache licenses is really harmful for the FOSS ecosystem.

We need to protect end users from more and more proprietarization, tracking and privacy breaching, SaaS and untrusted IoT devices.



The road paved with good intentions and all.

Sure, users are 1-bit entities in need of protection, no questions 'bout that, but given that premise they are also best served by good software that helps them get their job done. If kick-ass GPL software can do that, great. They will even pay for it. If not? They will pay for the non-OSI one that bundles the GPL code and will laugh at GPL enforcement attempts.

Licenses are intellectually cute, but unless it's well-enforced AGPL3++ it doesn't matter much. (See the recent thread about 3D printer https://news.ycombinator.com/item?id=38768997 )



Such snarky tone sounds unnecessary on HN.

> unless it's well-enforced AGPL3++

The GPL has been successfully enforced on various occasions, and it can be enforced effectively, especially when large companies need to protect their R&D investments from freeloading competitors.

A new, stronger "AGPL3++" can be written and enforced. Many companies have been experimenting with new licenses to find more sustainable options than the status quo.



This sounds more than great.

Unrelated, but at the same time related; feel absolutely free to ignore this message.

Linux needs a HIPS (host intrusion prevention system) with a firewall. I bring it up here because it needs kernel support: a way to limit the functions that allow process injection, and a way to channel all process execution through a supervised mode.

As an [insert operating system name here] user, I need (want) to know when a process/program wants to access the network or the internet, whether it wants to act as a server, on what port, and what IPs it wants to contact at that moment; I want to be able to block the operation before it happens, limit which IPs the program is allowed to serve or not, and be able to sniff the program's behavior.

At that moment/event, I need to know how the process/program was launched and what parent process launched it. To know whether the process wants to inject something into another process's resources, or wants to access system resources it has no business touching. And before it happens, I want to be able to block such attempts at folder/file/disk access, keyboard capture, screenshots, system configuration files, console commands, and so on.

If that program wants to launch another program, service, and so on, this needs to be controlled too, even whether it is allowed to launch an executable in its own folder. Absolutely supervise the program's system access.

As a user, I need to be prompted about all of this before it happens, with information, so I can grant permission or not: temporarily for that moment, for the session, or saved as a decision to apply the next time the program runs.

Being able to configure it later is essential, with a UI more or less in the spirit of uMatrix, designed for usability.

When one runs a program, the gears of the HIPS are always turning:

    - Why is this program trying to inject into the browser's memory? Of course I don't allow it; better yet, I kill the process right now. System scan now, we are in trouble. Logs, where are the logs!! Damn, the next two days are going to be miserable... I'll probably format the whole system once I find out how this got in.

    - Why is this trying to connect to the internet? What's more, this IP is from XXXXX, isn't it? Sorry, I don't allow it; run without these requests or die.

    - What, this is making DNS requests? And now it's requesting a local network IP address? Houston...

    - Ehhh, what are you doing with that keyboard-capture attempt? Unnecessary, akta gammat.

    - OK, server installed and running for the first time, but only on this specific port, and only the loopback IP is allowed to access it: this computer and no one else. That was fast.

    - OK, I allow you to access that internet IP, but only this time; keep asking each time you run, I'll decide.

    - Thanks for the warning about the port scan; I guess with IPv6 this would be even worse. Thankfully I have all services limited to IPv4 localhost, but I'll keep an eye on those bots if they push it.

    - and so on.
This does not exist on Linux. Currently it is a Windows-users thing, after installing and configuring tools, with the exception of the console command filtering and the uMatrix-style UI, which I added because they are also necessary. (On Windows, HIPS configuration interfaces are just... very rustic and hidden; they weren't built with usability in mind, more like a legacy feature that happens to be available, unfortunately.)

Whatever. On Linux, this requires custom kernel modifications; the whole HIPS-with-firewall does not exist, and ironically, separated from one another the pieces are just useless.

So, humbly but selfishly, I would ask you to consider designing the kernel with this in mind. (I don't mean designing the HIPS-with-firewall application itself.)

As I said at the start, feel absolutely and totally free to ignore this message.



It already exists: SELinux or AppArmor. They build on infrastructure that allows implementing other solutions to that effect.

However, on a typical system there is so much going on that this is unlikely to be of much use to anybody not willing to spend their time reviewing the arcane internals of their applications. The above is not how I'd want to spend my day at the computer.

Android and iOS present a middle ground. But even their permission requests get tiresome after some time, and users are pretty quickly seduced into just allowing everything.



Thanks for your comment. SELinux and AppArmor don't do exactly what I'm describing; those tools are more or less equivalent to creating users with different permissions for each application. The RSBAC kernel extension would get closer, but not close enough.

I really consider the system I described a need, particularly active supervision of process injection and internet access control, so I've been searching for it for a year or so. And I'm afraid it does not exist: kernel modification is necessary to obtain it, so the derived tools don't exist either.

I guess in the same way there is nothing like SystemInformer (ProcessHacker) or Sysinternals' Process Explorer and Procmon (I mean the advanced features, library tracking/search, etc., not just showing a process list). I mean the philosophy of "my system could be infected, let's try to look at what's going on".

>users are pretty quickly seduced to just allow everything

Certainly. In my case it requires a routine and a desire to follow it. Maybe I should have said "advanced desktop user".



SELinux precisely allows things like restricting applications to specific ports or specific files though. Pretty sure that it would also be possible to restrict internet access and things like that. What it needs is better UIs.

The potential for process injections is severely reduced by design on systems such as Android or when containerizing applications, for example with Flatpak. Modulo kernel bugs or being root, it's simply not possible to mess with the processes of another user.

Advanced desktop users have better things to do than reviewing security warnings all day long. I get paid to write code instead of investigating what my applications are doing.



This sounds like an absolute nightmare from a user perspective. The current popup galore on Windows and macOS when running a program for the first time is already bad enough.


Some words of encouragement in the sea of pessimism on HN which brought down the previous attempt at this [0]. Keep going, ignore the FUD and continue where others have left off.

We need alternative and safer kernels, and attempts like this should be encouraged. Rust is suitable for that guarantee.

Keep going.

[0] https://news.ycombinator.com/item?id=28986377



Thank you very much! Even if nobody liked the project, I would not be planning to stop it. I am doing this as a hobby first!

Having even one other user than me would be terribly difficult but if it happens that would be super cool! If it does not happen, then I just have my own system and I am happy with it anyways!



[flagged]



Good that I did a CTRL+F before posting.

It indeed seems to be a trend to try to supplant GPL software that has served well for decades with hip and trendy, probably safer software that meeeerely has a small catch... the MIT license. I wonder how these projects even attract enough people to be developed. Developers of a kernel especially should be aware of the license side of things, which has protected us all.



These days, I don't think people care about the principles that made Linux a success in the first place. From what I can tell, most people seem to take "open source" as "code I can reuse" rather than some kind of ideology about software freedom.

Luckily, most Linux alternatives seem to go absolutely nowhere, so I doubt this project will have much of an impact beyond people thinking "huh, cool project".



>rather than some kind of ideology about software freedom.

And correctly so. "Open source" was explicitly intended to strip the ideology in order to more effectively market the bazaar development model to businesses. That's the difference between Open Source and Free Software.



Even though I advocate the GPL, I am not convinced Linux was a success because of the GPL.

It's more a success because of the amazing determination and maintainer skills of Linus Torvalds and the group of enthusiasts that rallied around it in the early days. I have been using Linux (as a user, not a contributor) since pre 1.0 (0.9something on a 486, installed off floppies, fun times.) And it's just amazing where we've gotten to.

Perhaps the GPL played a role in the early days in preventing a parasitical fork by a commercial entity, and by acting as a method of funneling in contributions. Hard to say for sure why Linux won over FreeBSD, though.



I am not quite sure this discussion applies to this project: "Linux-compatible" and "Unix-like" seems to mean "a POSIX-like kernel" rather than being a direct rewrite of the Linux kernel.

The Rust rewrite of coreutils is more problematic because it advertises itself as a drop-in replacement and they are going for complete binary compatibility. I don't think Maestro is actually going for kernel compatibility with Linux, just POSIX compatibility in general.



If I understand this project correctly it's going for syscall compatibility, and can run some subset of Linux binaries already.

Which is cool, but...

In any case, I wish the author luck, and it'd be cool if it went somewhere, but I am disappointed by the choice of license. But that's up to them, in the end.



While I disagree with you about licenses (I prefer permissive licenses as they grant more freedom), I at least give you kudos for having the good grace to not shit on the author for using a license model other than what you prefer. So many people in this thread are displaying a truly remarkable level of rudeness about the license when it is (as you said) the author's right to decide what he wants to use.

