(comments)

Original link: https://news.ycombinator.com/item?id=40759277

This thread discusses the value of low-level computing knowledge, using the MINIX operating system (OS) as an example. MINIX3 faced the challenges of maintaining upstream compatibility and an aging design, which ultimately led development to stall. Early versions of MINIX, however, demonstrated remarkable capabilities, such as transparently surviving driver crashes and dynamically updating system services. Despite its limitations, MINIX3 still inspired valuable publications and research. Some commenters are skeptical of academia's focus on clustering, network transparency, and distributed systems; they argue instead for managing multiple systems independently with appropriate tools and programming models, an approach that industry led in the post-TCP/IP era.

Related articles

Original text


Along with K&R (and K and Plauger’s “Software Tools”), the Dragon book, Bentley’s Programming Pearls, and Holzmann’s Beyond Photography, AST’s books were the most formative in my life (I started coding in 1976 but was self-taught until early 80s when I got to college and read all these brilliant works). Long overdue recognition; so many people benefited from the lucidity of these minds.



I’ll never forget this. I was listening to a talk by Reed Hastings (Netflix Founder/CEO) at (I think it was) Stanford. He was explaining how he came up with the idea of Netflix. A student asked: “when did you realize you had to switch to the internet?” To which he replied: “that was the idea from the beginning. We knew networks were going to become what they are today. Look, there’s a saying in a CS textbook that says: ‘never underestimate the bandwidth of a truck full of tapes over the interstate’. We knew we had to ship the DVDs first until at some point the network would reach our desired level”.

While I was watching that I said: “DUDE! I remember that quote (and that illustration)”. Went to my textbook and there it was. In Tanenbaum’s networking textbook.
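
The arithmetic behind that quote still checks out. A quick sketch, where every figure is an illustrative assumption (not anyone's real numbers):

```python
# "Never underestimate the bandwidth of a truck full of tapes."
# Effective throughput of physically shipping data vs. a network link.
# All figures below are illustrative assumptions.

tape_capacity_bytes = 10e12     # assume 10 TB per tape cartridge
tapes_in_truck = 10_000         # assume the truck carries 10,000 tapes
trip_seconds = 24 * 3600        # assume a one-day interstate drive

truck_bps = tape_capacity_bytes * tapes_in_truck * 8 / trip_seconds
link_bps = 100e9                # a fast 100 Gbit/s network link

print(f"truck: {truck_bps / 1e9:,.0f} Gbit/s")  # prints: truck: 9,259 Gbit/s
print(f"link:  {link_bps / 1e9:,.0f} Gbit/s")   # prints: link:  100 Gbit/s
```

The latency is a day, but the throughput dwarfs the link, which is exactly the bet behind shipping DVDs first.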

Aside from the anecdote, this guy has had a huge influence on the whole industry (not even mentioning the Kernel debates).



Well-deserved, congrats Andrew. I still have his distributed systems textbooks from way back when, and still wish Minix had won and its microkernel model had become the basis of the FOSS *nix ecosystem.

Also in case anyone is not aware, Andrew runs the election science blog Electoral Vote [1], using an electoral college poll model to analyze and predict US elections. One of the better US political sites out there.

[1]: https://www.electoral-vote.com/evp2024/Info/welcome.html



This is a richly deserved award for a great educator who makes computer science both accessible and enjoyable.

Structured Computer Organization is supposed to be a textbook, but it's written so well I found myself reading it cover to cover like a thriller.

You won't find many people saying that about Knuth for example (not to say anything against Knuth who is amazing in his own way).



Not to mention MINIX is hidden away in almost every modern Intel CPU as part of its Management Engine. This little known fact makes it one of the most widely distributed operating systems.



Why did they use Minix and not e.g. L4 or seL4?

Which version of Minix did they actually use? There is Minix v3.1 (released in 2005 with the book), 3.2 (released in 2012) and 3.3 (released in 2014).



As to why, no idea. I guess some engineer was just familiar with it from their undergrad days like the rest of us.

And which version? I know it’s MINIX 3, but beyond that? No idea. They probably heavily modified it, and as Minix is not GPL, Intel never published it. Based on the timelines it’s likely 3.1, as the ME platform has been around since approximately 2007 iirc.



The original L4 was written in assembler and replaced by different other implementations long before the ME platform was developed. Pistachio was in development around that time and available under BSD.



I was failing badly in my computer architecture courses. Received a 5% score in one of the mid-terms. Switched from the recommended book to Andrew's book and did nothing apart from read it everyday for 2 hours. Received 100% in the final. Such an amazingly approachable book. :-)



Not OP, but if it was computer architecture I imagine it would be “Structured Computer Organisation”.

One of my favourite books on computers and, looking at the comments, a favourite of many people here too.

Source: this book got me top marks too



I see AST's books, in particular the hands-on Minix ones, as sitting on the same “plane” as the philosophy espoused in The Night Watch paper. Ultimately the paper is about a level of comfort with reality that is at its core rooted in familiarity with, rather than ignorance due to, abstractions, and in having learned fearlessness rather than helplessness. While it is highly unlikely you will be having a debugging session that has you executing kernel-level code alongside an oscilloscope/logic probe hooked up to the pins of a processor chip to monitor data lines (though we all know someone who does this without a second thought), having this level of knowledge and comfort with being ever so slightly closer to the silicon, the data sheet of the processor somewhere nearby, the memory segmentation modes not too alien a concept, is a great boon to a software developer. It is a leap that I think everyone should try just once, and with it, abolish any notion of mental barriers that prevent one from understanding how things really work.



I read Operating Systems: Design and Implementation in 1988 or 1989, and it was an insightful and pleasing experience. I only wished, at the time, that there was some Unix-like OS that was "free" (for some intuitive value of the word "free", rather than the formal definition, which I hadn't heard of yet at the time). This could have been Minix.



One of the best textbooks I had to read for my degree, back in the 80s. The appendix containing the Minix source code was my first exposure to a large body of well-written C code.



His book “Computer Networks” was one of my favourites in my CompSci study days. Many years later I gave lectures on Distributed Systems at a business school and based the material on the book. Still feels relevant, even today.



Although this is a thread about Tanenbaum, personally I feel that Data Communications and Networking by Forouzan helps explain things in a better way and goes into the details of each network layer.

Modern Operating Systems is great though, and when I was in college I recommended it to peers, whose feedback was along the same lines.



This was my favourite textbook of my entire undergrad CS studies. I still have it on my shelf to this day. I've never gone deep into networking but the broad knowledge has stayed with me and comes in useful again and again. I would say it sets me apart from many other engineers.



I want to appreciate Andrew Tanenbaum for making possible the world's most popular OS, Intel ME. This system got absolutely everything right except the license. Minix under the GPL would not have let megacorporations backdoor every functional x86 chip on the planet.



I have read Tanenbaum's book twice. Really great book. Very dense in information but enjoyable as well. That and the Common Lisp Reference Manual were at some point my favorite CS readings. I was reading them in printed form.



Yeah, I didn't specify; I think it is "Modern Operating Systems". I read it in Greek and I don't remember the title, maybe the title was translated a little bit differently. It was not the Minix book; I have not read that, unfortunately.

Now, however, I am sold on the idea of the Lisp Machine. Hopefully some day a Lisp OS and hardware will be a viable way to use a computer.



I came across Tanenbaum's Operating Systems book during my CS studies and it had a huge influence over me. Until then I was a huge Windows nerd, and after reading the book I felt like I was being cheated by Windows, like I was denied something which was rightfully mine.

I hated my labs as they had only Windows; I started exploring *nix systems after class and never went back.

Thank you, Mr. Andrew, and congratulations.



Err, perhaps I'm missing something, but if the ACM Software System Award is presented to an institution or individual(s) recognized for developing a software system that has had a lasting influence, how come Linus hasn't got his yet?



It's funny to see how they highlight that it inspired Linux, while Tanenbaum heavily criticized it for not being a microkernel :D



Well, it “inspired” Linux because Linus was unhappy with Minix. Linus wanted UNIX and Minix was not what he was looking for. I don’t think emulating Minix itself was ever his goal. He chose the Minix file system originally but this was just pragmatic as that is what his drive was formatted with.

The earliest versions of Linux were written on Minix though. Credit where credit is due.

Interestingly, Linus was unaware of BSD. He has said that, if he had known about it, he may never have written Linux to begin with.



IIRC the very first installations of Linux started with Minix as a base, which then was "patched" into Linux. So it was more than just the filesystem. But yes, Torvalds wasn't happy with Minix, and Tanenbaum wasn't happy with Linux.



My favorite thing about that whole thread is the unanimous consensus among everyone involved that we will all be running GNU/Hurd in 2 years so these stopgap OSes are just academic hobbies.



Another awesome fact about Tanenbaum is that he was the person behind electoral-vote.com. Prior to everyone having their own model and Nate Silver (err, should I say Poblano?) running the table in 2008, this was the place to go understand the 2004 US Presidential Election between Bush and Kerry. Hugely helpful for many people to understand polling and statistics.



> he was the person behind electoral-vote.com

electoral-vote.com is still going strong: https://www.electoral-vote.com/

I'm sure you know this, but I want to emphasize it for anyone who is not aware.

Prof Tanenbaum has a co-writer now (Prof Bates, history at UCLA/Cal Poly), and the site is published every day (used to be weekdays only and only during election cycles).



I was myself taught Computer Architecture in 1991 using the Tanenbaum SCO book, and many years later taught the Computer Architecture course for four years using the SCO book (a later edition, but still!). A true classic, and if anything, it is a wonder that Tanenbaum had not already received the award.



Every student should read Tanenbaum's "Structured Computer Organization". It was the first book which showed me the logical layering involved in a "Computer System" which is absolutely essential to understanding this field.



Is MINIX abandonware now? Many years ago I tried to install the release that works (or comes bundled) with a light window manager, but it was not trivial, and it looked pretty abandoned even back then.



MINIX3's development stalled years ago.

Basically, around MINIX 3.2.0 (just before I started contributing), the OS ditched its homegrown userland and adopted the NetBSD source tree + pkgsrc. While that boosted the software compatibility of MINIX3 in the short term, the maintenance burden of keeping up with upstream with such a large diff proved unsustainable in the long term, especially after the grant money dried up.

In hindsight, my opinion is that MINIX3 should've gone with NetBSD binary compatibility. The NetBSD syscall table would've been a far slower moving target to keep up with than the entire NetBSD source tree.

The OS also had a significant amount of tech debt, especially in the microkernel, which was uniprocessor and 32-bit only, as well as outdated hardware support, which meant nobody was daily-driving it anymore. It was also an aging design: while the system was divided up into user-mode servers with message-based communication, you couldn't containerize a process by spawning a parallel userland ecosystem, for example, because it wasn't capability-based or namespaceable.

It's too bad really, because the base system had really impressive capabilities. It could transparently survive crashes of stateless drivers, even when stress-testing it by injecting faults into the drivers at runtime. You could live-update the various system services at runtime without losing state or impacting clients. Some really good papers came out of MINIX3 [1].
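
The crash-survival behavior described above is handled in MINIX3 by its reincarnation server, and the idea can be caricatured in a few lines. A hedged sketch only: this supervises an ordinary OS process, whereas MINIX3 actually supervises message-passing system servers, and the always-crashing worker stands in for runtime fault injection.

```python
import subprocess
import sys

# Toy sketch of a "reincarnation server": a supervisor watches a
# stateless worker and transparently restarts ("reincarnates") it
# when it crashes, up to a restart budget.

WORKER = [sys.executable, "-c", "import sys; sys.exit(1)"]  # always crashes

def supervise(max_restarts=3):
    restarts = 0
    while True:
        proc = subprocess.Popen(WORKER)
        proc.wait()
        if proc.returncode == 0:
            return restarts          # clean exit: nothing to do
        if restarts == max_restarts:
            return restarts          # crash budget exhausted: give up
        restarts += 1                # crash detected: reincarnate

print("restarts:", supervise())      # prints: restarts: 3
```

Because the real MINIX3 drivers were stateless, a restart like this could be invisible to clients, which is what made the stress-testing results so striking.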

I've ranted more in detail before, both on HN [2] as well on Google Groups [3]. I do not fault the maintainers for the current state of affairs because keeping up the MINIX3 userland against modern software standards was a major maintenance burden, so adopting NetBSD's one way or another was inevitable. At any rate, there are other micro-kernel based operating systems [4], some under active development, so MINIX's spirit lives on.

[1] https://wiki.minix3.org/doku.php?id=publications

[2] https://news.ycombinator.com/item?id=34916261

[3] https://groups.google.com/g/minix3/c/qUdPZ0ansVw/m/7LuOv0YOA...

[4] http://www.microkernel.info/



Still waiting for that Minix 3.3.0 release, after so many RCs.

I followed the project somewhat, and I understand the main issue has been lack of having someone at the helm, pursuing this 3.3.0 release.

The situation is such that the release blocking bugs were fixed, and yet the release hasn't happened, because nobody is willing to put the time and effort to make it happen.

Should the release happen, and somebody be willing to review and merge changes and organize a regular schedule of releases (even with a long period, such as yearly), the system would no doubt get some life back.



Hell yeah! Well deserved. I had a blast with some of his books. Especially _Operating Systems: Design and Implementation_ and _Computer Networks_. Legend



Still have a CD-ROM with a copy of MINIX 3, from when he gave a talk at my university. His books on OS and Networking are very approachable and a fun read!



It's kind of sad that systems research pretty much stopped at this point. I really was hoping that by 2024 I'd be running a distributed operating system where processes could be freely migrated between my phone, desktop, laptop and NAS without too much of a hitch.



The research has not stalled at all; the incentive to make consumer devices has, because of the capture by tech giants. If someone wanted to implement an interesting idea like this, they would be harshly judged for not being able to compete with Pixels and iPhones for the rest of the stack. See Rabbit, for example.

What you are describing is the bread and butter of modern systems research and all the large cloud providers internally implement this kind of thing.



There is still a lot of research and innovation, but it doesn't always come in the way of completely new software projects. The cost of trying to build a new OS is simply massive. You need to be compatible with existing useful software if you want to do anything other than an appliance. Anything that provides a new paradigm shift that requires changing existing software has a huge slog ahead of it to make it successful. That said, there is tons of incremental progress in the operating systems.

I think a lot of folks have thought about the idea of a truly distributed operating system. I'm pretty sure existing operating systems will eventually evolve in that direction. You already see bits and pieces of it popping up.



Sounds like you, like myself, have taken Rob Pike's take from long ago to heart: http://www.herpolhode.com/rob/utah2000.pdf

While plenty of things have happened since that paper, I have this sinking feeling that he was right and we just stopped trying to really expand and explore what’s possible. But it may be more a matter of the state of academia than about the domain itself. It’s not like people were getting a bunch of conference invitations for GPGPU applications with ML until hype freight trains hit. This sobering reality of academic hegemony and grant chasing kept me from ever getting terribly interested in systems research unfortunately.



> running a distributed operating system where processes could be freely migrated between my phone, desktop, laptop and NAS without too much of a hitch.

This is what I always wanted ever since I read Tanenbaum's "Modern Operating Systems" and in particular his "Amoeba" distributed OS - https://en.wikipedia.org/wiki/Amoeba_(operating_system) Also see Plan 9/Inferno from Bell Labs. But instead what we got (due to industry shenanigans) is this garbage/duct-tape of distribution gone crazy in the "Cloud".



This type of thing always seemed to be in the very cool but pointless basket, to me.

A lot of systems researchers were absolutely obsessed with clustering, network transparency, distributed systems, and viewed them as the pinnacle of the operating system. I never understood why. I completely understand the coolness factor mind you, I just never could see why it was so important that your server-laptop-phone-network system behaved as a single system.

I think a lot of effort was wasted chasing that dragon. Wasted is probably the wrong word because research into cool things is good and probably created useful things along the way. I don't feel there was ever enough justification put into it and it could possibly have been better spent though.

The alternative of having multi-system tools and programming models that allow you to manage multiple systems without having them appear as a single image at the lowest level didn't get much love from academia after TCP/IP, and was largely developed by industry.



Very strongly disagree with you here.

We should have had a distributed OS like Amoeba/Plan9/Inferno/etc. allowing us to manage all our chosen set of devices using a single uniform interface i.e. "A Namespace" (in Plan9/Inferno speak). Such namespaces can themselves be connected into "Hierarchical Namespaces" and so on. This is a natural and easy way to let users keep their control over their devices while still being connected to the "Greater Internet".
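
The namespace idea can be sketched as a mount table with longest-prefix matching. A toy illustration only: real Plan 9 namespaces bind 9P file servers per process, and the `9p://` addresses below are made up.

```python
# Toy sketch of a Plan 9-style namespace: resources from several
# machines are "bound" into one tree of paths and looked up with
# longest-prefix matching, like nested mount points.

class Namespace:
    def __init__(self):
        self.mounts = {}                 # path prefix -> backing resource

    def bind(self, prefix, resource):
        self.mounts[prefix] = resource

    def resolve(self, path):
        # Longest matching prefix wins, as with nested mounts.
        best = max((p for p in self.mounts if path.startswith(p)),
                   key=len, default=None)
        if best is None:
            raise FileNotFoundError(path)
        return self.mounts[best], path[len(best):]

ns = Namespace()
ns.bind("/n/phone", "9p://phone.local")  # hypothetical remote servers
ns.bind("/n/nas", "9p://nas.local")

print(ns.resolve("/n/nas/music/a.flac"))
# prints: ('9p://nas.local', '/music/a.flac')
```

Hierarchy then comes for free: a whole Namespace can itself be bound at a prefix inside another one.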

But the Industry manipulated us into the Cloud model so that they could retain control and make money off of us. It was all Business to the detriment of a better User Experience via Technology.



> We should have had a distributed OS like Amoeba/Plan9/Inferno/etc. allowing us to manage all our chosen set of devices using a single uniform interface i.e. "A Namespace" (in Plan9/Inferno speak). Such namespaces can themselves be connected into "Hierarchical Namespaces" and so on. This is a natural and easy way to let users keep their control over their devices while still being connected to the "Greater Internet".

Why should we have?

> But the Industry manipulated us into the Cloud model so that they could retain control and make money off of us. It was all Business to the detriment of a better User Experience via Technology.

The choice was not cloud or distributed single system machine, they were and are orthogonal.



> Why should we have?

Because that is what an OS is supposed to do, viz. provide a uniform interface and transparent access to various Compute, Storage and Network resources wherever they might be. A Distributed OS (https://en.wikipedia.org/wiki/Distributed_operating_system) is a natural extension of a Single Node OS. Note that we have in a sense realized our distributed OS in the IaaS and PaaS layers of a Cloud network. However, they are done in such a manner as to take control away from us, unless of course you use some open-source offerings, which are far more complex to set up than a distributed OS should be.



You should be looking at infrastructure related roles for this stuff; live migrations of VMs and containers are regularly done when you drain a VM or Kubernetes minion, for example.

For consumers, they mostly care about the user experience. Having software that syncs their contents to a server thus making it available to all devices has much lower overhead than trying to live migrate a process over unreliable networks.



Hey! Don't let out the secrets :-) This is something I have long been thinking about (but have done nothing practical so far). The problem is how to bridge/shim between Erlang and those platforms which do not support it, e.g. Android (though some people seem to have done it - https://github.com/JeromeDeBretagne/erlanglauncher). Joe Armstrong actually called the Erlang/OTP system an AOS (Application Operating System) in his paper, i.e. it contains much of the functionality of a traditional OS, but people seem to ignore that and insist on calling it "just another language".