(comments)

Original link: https://news.ycombinator.com/item?id=40613126

This commenter shares their experience studying the classic VMS operating system. They discuss the challenges of setting up an emulator and digging into its documentation, and highlight its historical significance, with features such as built-in version control for every file before 1990. The commenter was surprised to find C as the underlying language, having initially expected something more exotic. They advocate studying VMS to understand computing history and encourage sharing discoveries. The thread also turns to the monolithic-versus-microkernel debate in operating system design; while acknowledging that both designs have merits, one participant suggests teaching monolithic kernels first because of their prevalence. They describe a fascination with VMS despite the challenges, viewing it as a hidden gem full of valuable insights from its past. If you encounter interesting aspects while exploring, please share.


Original thread


Took this class at MIT. 20/10 would recommend, especially for people who don't come from a systems background. The textbook is quite easy to understand and the labs are a very good check of your comprehension. Just maybe skip the network driver on the first go around :P



I also took the class that uses this OS at MIT. Absolutely fantastic. I was just browsing the class website today actually, and you can totally kinda take the class yourself. The site has all the lecture notes, the labs, and even a version of xv6 in a repo with branches for all the labs, along with instructions for getting it all working yourself. It's kinda amazing how open it is.

The first lab: https://pdos.csail.mit.edu/6.1810/2023/labs/util.html

I do plan to work through the whole thing myself again at some point soon.



Many of the projects in my OS class in college involved building on top of the xv6 operating system. I remember one of the projects was building a scheduler for xv6. It was definitely a great way to learn about the different pieces of an operating system!
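For reference, the stock policy such projects replace is xv6's round-robin sweep over the process table. A condensed version of the scheduler loop from the public x86 xv6 proc.c (simplified; see the real source for the full details):

    // Condensed from xv6-public proc.c: each CPU runs this loop forever,
    // picking the next RUNNABLE process and context-switching into it.
    void
    scheduler(void)
    {
      struct proc *p;
      struct cpu *c = mycpu();
      c->proc = 0;

      for(;;){
        sti();                    // enable interrupts on this CPU
        acquire(&ptable.lock);    // the process table is shared by all CPUs
        for(p = ptable.proc; p < &ptable.proc[NPROC]; p++){
          if(p->state != RUNNABLE)
            continue;
          c->proc = p;
          switchuvm(p);           // load the process's page table
          p->state = RUNNING;
          swtch(&(c->scheduler), p->context);  // run it until it yields
          switchkvm();            // back to the kernel page table
          c->proc = 0;
        }
        release(&ptable.lock);
      }
    }

A typical class project swaps that pick-the-next-RUNNABLE policy for priorities, lottery tickets, or an MLFQ.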



I wish, but most IT manuals everywhere are in English. This is IT/CS, though, so knowing English at least well enough to read technical articles is mandatory. Nothing too difficult: my non-tech SO (a native Spanish speaker) has read novels written in British English, and to me that prosody and style is hell compared to a Stephen King or a Preston & Child.



I have always wanted to understand the Windows NT kernel -- maybe the earlier versions, as they are simpler.

I heard the first version is close to VMS. Is it true? Is there any material on VMS? OpenVMS is open sourced but the version is too high.



For a good look at the development of the NT kernel, read the book Show Stopper! from 1996. It is correct that Dave Cutler led a number of projects inside DEC, including projects related to VMS, but the only inheritance the Microsoft New Technology kernel received from VMS was spiritual, at the design and architecture levels. Dave was already in the middle of a major overhaul of, or successor to, VMS when his team was shown the door at DEC, and many of those visions made their way into Windows at the cost of the robustness of the established minicomputing design patterns. Cutler saw the MS-DOS team as rookie programmers writing themselves into spaghetti solutions for the wrong problems, but Papa Gates had it locked up tight with the OEM contracting agreements, so Bill paid him in blank checks to deal with the kids in the room and ship a real kernel to compete with what Steve was cooking at NeXT.

So no, NT is nothing at all like VMS from a feature-set or likeness perspective, and nothing at all from an implementation perspective (VMS had a beautiful, clean C standard library and compiler suite); it was very much the race to the bottom by the mini crowd to play games in the micro world.

Check Dave Plummer's interview with Cutler on YouTube for more color.



I am reading the book now that you mention it. I've only reached the part where Dave was hired by Microsoft, though. Actually, I'm super interested in his earlier work at DuPont and DEC, as it paved the way.

I take him as an inspiration. His "What I really wanted to do was work on computers, not apply them to problems" rings so true with me, and his being a natural leader also makes me look up to him.



Difficult realm for me to investigate myself. I have found that better than any book is to pull old VMS images out, get them up on a SIMH/VAX emulator, and poke around! It's quite a laborious process and documentation of it is sparse; last time I ended up using the Computer History Museum's remote access program to interact with ancient VMS.

To actually learn how to use the thing, I ended up diving deep into the textfiles.com archives for references to VMS and got the hang of the basics enough to start learning via the online (built-in) documentation.

I would also point you towards Usenet archives if you want to learn more about the development history of VMS. From what I have been able to piece together, it was really rock solid and well loved, and it's quite a shame that "open"VMS is mostly just a legacy compatibility licensing agreement while the technology sits in a bit of a glacial deep freeze.

The fact that every file on the entire OS had version control - by default - before 1990... that blows me away, and I feel like there are so many beautiful secrets to learn from this codebase. I always imagined it would be written in some abstract arcane language; when I learned it was in C I got a bit scared, knowing that what Microsoft calls C bears very little resemblance to how C is typically used on Unix-likes, and knowing that many VAX admins dumped VMS for BSD, I was quite afraid I might be out of my element.

I was surprised to find the C library is essentially just POSIX; I felt pretty much completely at home on VMS compared to any version of Windows I have had to write C for. A pleasant experience. It's a shame what happened to it; I wish more of its story were told in an authoritative context like a book.
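To make that concrete, here is the kind of plain Unix-flavored C meant above (an illustration, not from the thread; exactly how much of POSIX the VMS C runtime covers varies by version):

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        /* Ordinary POSIX-style file I/O; on VMS the C runtime maps these
           calls onto RMS and its versioned files behind the scenes. */
        int fd = open("login.com", O_RDONLY);
        char buf[512];
        ssize_t n;

        if (fd < 0) {
            perror("open");
            return 1;
        }
        while ((n = read(fd, buf, sizeof buf)) > 0)
            fwrite(buf, 1, (size_t)n, stdout);
        close(fd);
        return 0;
    }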

If you stumble across anything nifty, please feel free to share; I may come back and read this thread again someday.

Cheers



I think what he means is systems programming, not application programming. But TBH I'm so sick of my DE job that I really don't care at all about business problems.

Sure, we all need them for jobs, but that doesn't mean I have to love them. Would that impede my career progression? Sure, but as long as I can get into systems programming without being too close to the business side, I'm happy for life. Other people can climb the greasy pole; good luck to them.



Thanks. I have a book about Win32 programming, published 25 years ago but seemingly still relevant. I'll read about the internals after I get some ideas about user-space programming first.



WTF?

> I heard the first version is close to VMS.

No.

> Is it true?

No.

> Is there any material on VMS?

Tons. It came with metres of docs. It is still on sale. You can get a freebie hobbyist VM image.

> OpenVMS is open sourced

No, it isn't.

> but the version is too high.

Not true.

Always, always check your assumptions before asking a question. If you want to ask "because (a), can I (b)?" then check that (a) is true before asking. ALWAYS.



Why Unix v6? Why teach with a 50-year-old design? I feel that to teach the fundamentals of an operating system, i.e. scheduling, IPC, and address space management, a microkernel design would be better.



Studying precursor technologies to ones popular today is a great way to learn what led us to this point, what tradeoffs were made and why, and perhaps what we might've lost as well. Students can get a deeper appreciation for new technologies when they're eventually exposed to them. This can only broaden their horizons in their future careers.

As someone who missed this part of history, I would love to have learned about it in college.



You gotta walk before you can run. Xv6 is basic, but it’s a great intro to operating system fundamentals that can fit in a semester for people who’ve never seen these concepts before.



I'm guessing many professors don't choose Xv6 for their operating systems class because it's a great design. Some probably pick it because it's good enough, simple and easier to teach in a single semester where students are also taking other classes. Are you saying the microkernel design is not only better but also easier to teach?



Perhaps build simplicity is also an attractive point?

Basically a rather simple Makefile, gcc, qemu, xorriso...

I see lots of open-source hobbyist OS projects that require you to build your own GCC, which is not fun. Xv6 works with the standard gcc (provided by the distro package manager). On macOS: just install homebrew-i386-elf-toolchain and tweak some lines in the Makefile. Done. You are ready to build Xv6, which should take about a minute or less.



The Von Neumann architecture is 80 years old. Why does it matter how old a design is if it's still the most relevant and widely used one? The basic abstractions of Unix v6 still hold to this day.

The main difference between microkernels and monolithic kernels is how the address space gets shared between userland and the kernel. I don't see how a microkernel design would be "better". Why teach a design that isn't widely used?
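To make that address-space distinction concrete, here is an illustrative sketch (all names are made up, not a real kernel API): a monolithic read(2) traps straight into kernel code mapped into every process, whereas a microkernel forwards the same request as a message to a file server running in its own address space:

    #include <stddef.h>
    #include <string.h>
    #include <sys/types.h>

    /* Hypothetical IPC primitives and server; illustrative only. */
    enum { OP_READ = 1 };
    struct msg { int op; int fd; size_t len; char payload[4096]; };
    void ipc_call(int port, struct msg *m);   /* send request, block for reply */
    void ipc_recv(int port, struct msg *m);   /* block for next request */
    void ipc_reply(int port, struct msg *m);  /* answer the last request */
    size_t fs_do_read(int fd, char *buf, size_t len);

    /* Client side: what libc's read() would do on this made-up microkernel. */
    ssize_t uk_read(int fs_port, int fd, void *buf, size_t len)
    {
        struct msg m = { .op = OP_READ, .fd = fd, .len = len };
        ipc_call(fs_port, &m);             /* crosses address spaces */
        memcpy(buf, m.payload, m.len);     /* copy data out of the message */
        return (ssize_t)m.len;
    }

    /* Server side: the filesystem is just a user process owning its state. */
    void fs_server_loop(int port)
    {
        struct msg m;
        for (;;) {
            ipc_recv(port, &m);
            if (m.op == OP_READ)
                m.len = fs_do_read(m.fd, m.payload, m.len);
            ipc_reply(port, &m);
        }
    }

The extra copies and context switches in that round trip are the overhead the monolithic camp points to; the isolation of fs_server_loop is what the microkernel camp points to.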



The Von Neumann architecture does not expose the same structural flaws monolithic kernels do: namely, their tendency in modern computing toward massive size, and their failure scenarios when something goes wrong.

Old is not always a problem, but monolithic kernels face the same problems they did in the 80s. It's not surprising that Apple is moving away from kernel drivers and into userspace functionality.



My comment was in the context of which is better for learning purposes. You spun my comment and turned it into mono vs micro kernel.

System design is about trade-offs. You conveniently say monolithic kernels "expose structural flaws" without mentioning anything about microkernels' own. System architecture discussions are not about who "wins", but about trade-offs.

> but monolithic kernels face the same problems they did in the 80s

Putting your lack of balance aside, what problems are you talking about specifically? Are you aware of how many new instructions have been added since the 80s that were designed specifically to address the shortcomings of monolithic kernels? Microkernels, even when resources were much scarcer back in the 1980s, still did not become popular.

Both designs have their use cases. Micro and mono kernels have pros and cons, but for learning purposes it makes more sense to teach monolithic kernels, since all popular operating systems follow this design.



apple started with mach, which was a microkernel, and linked basically all of the freebsd kernel into it, morphing it from a microkernel operating system into a monolithic operating system

i agree that monolithic kernels are unpleasant and brittle, but unfortunately they don't actually seem to be obsolete



NeXT, you mean.

Additionally, Apple has a long term roadmap to fix that design decision, hence why they are killing one kernel subsystem at a time, moving them into userspace, with one year given for the developers to adopt each of them, after being made available as the new way.

Finally, it is kind of ironic that the systems that have doubled down on monolithic kernels are now used to run an endless pile of containers and Kubernetes clusters.



> hence why they are killing one kernel subsystem at a time, moving them into userspace, with one year given for the developers to adopt each of them, after being made available as the new way.

Notably, they aren't exposing most of the functionality; it wouldn't be possible to e.g. implement NFS or 9p with the userspace filesystem thing they've been pushing. They're basically just shutting users out of their own computer...



Mach as deployed at NeXT was also never a microkernel: it combined in the BSD 4.2 kernel. I believe this is also effectively what OSF/1 did with the OSF variant of Mach.

Mach has basically always supported that kind of use, even back to early CMU research versions.



> apple started with mach, which was a microkernel, and linked basically all of the freebsd kernel into it, morphing it from a microkernel operating system into a monolithic operating system

The microkernel aspect of it had completely vanished by the time it hit the general public.

> i agree that monolithic kernels are unpleasant and brittle, but unfortunately they don't actually seem to be obsolete

Of course they aren't obsolete! The security in your phone depends directly on seL4, at least if you use an Apple device. They just aren't relevant to most of the compute and most of the software you interact with today, or in all of history.



While microkernels play a vital role in our software ecosystem, most of the software anyone interacts with is not coordinated by a microkernel. Furthermore, most of the software on the internet is not run by a microkernel, nor is most of the software available on the internet. I suspect such a course would not prepare one well either to work with kernels or to reason about the kernels you work with as a professional.



And? What's wrong with k8s and containerized setups?

Linux evolved numerous features over the years in response to server-room challenges. Some of them look monolithic, others are decomposable; in any case, Linux became the default dev target platform for everything.

Minix might be nice, but Linux has won, and it was NEVER about OS architecture.



Because it is a travesty of what is effectively a microkernel architecture.

A free-beer UNIX clone won; that is quite different from winning on technical advantages.

Even Android does the same with Binder IPC, since Project Treble.



Android, SteamOS, WebOS and all the numerous Linux-based projects mostly show that the world needs a stable target platform everybody can make a meaningful contribution to (and then make sure nobody steals the work later).

Linux literally ate the world with its POSIX-compatible open-source proposition. I don't have a single device without Linux at home, and this includes a NAS, 5 notebooks, 1 PC, a handheld gaming console, my TV, a bunch of mobile phones, and a washing machine.

The world just couldn't care less whether it is a microkernel, a hybrid, or a monolithic kernel. Like you said, it's not about some boring technical advantages, and it never was.



Just wait until the Linux founding fathers are no longer around.

Try to write Linux POSIX code for Android, WebOS, and ChromeOS apps, and see how many normies will buy your wonderful app.

Free beer ate the world; everyone likes free beer, even if it is warm.



Yes, free as in beer and free as in freedom. No complicated licensing, code open for change, any scale, any use-case.

Hard to compete against with all these "license per working space" or "tcp stack not included" or "no code for you" of the usual competitors.

> Just wait until the Linux founding fathers are no longer around.

Yes, things change all the time. People come and go. Just as companies do.



I think he counts as one of the founding fathers. He was the second-in-command when I met him in about 1997 or 1998.

And note what he turned to when he left: an 8-bit OS.

https://www.fuzix.org/

Where do you think he got that antipathy for large and complex systems?



a lot of people agree with you, which is why minix exists (though their source control system just fell offline this year), but none of windows nt, linux, os/360, and even really macos/ios are microkernel designs. sel4 and the other l4 variants are, and so is qnx, and linuxcnc runs linux under a microkernel, and xen is kind of a microkernel if you look at it funny, so we're definitely seeing significant mainstream use of microkernels, but it's not clear that that 50-year-old design is actually obsolete the way the pdp-11, the cray-1, and the system/370 are



Actually, z/OS (descendant of System/370) is more microkernel than Linux is. But the problem with microkernels is similar to the problem with microservices - you have to make the executive decisions somewhere, and that unfortunately ends up being the bulk of "business logic" that the OS does.

In theory, I like the concept of functional core, imperative shell - the imperative shell provides various functions as a kind of API, and the functional core handles all the business logic that involves the connections between those APIs. (It's also sometimes called hexagonal architecture.)

However, it is questionable whether it actually reduces complexity; I daresay it doesn't. Every interaction of different shell APIs (or even every interaction that serves a certain purpose) needs a controller in the core that makes decisions and mediates this interaction.

So when you split it up, you end up with more bureaucracy (something needs to call these APIs in between all the services), which brings additional overhead, but it's not clear whether the system as a whole has actually become easier to understand. There might also be some benefit in terms of testability, but it's unclear whether that helps much, because most of the bugs will then move to the functional core making wrong decisions.
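A toy sketch of the pattern under discussion (illustrative only, not from the thread): the functional core is a pure decision function, and the imperative shell gathers state and performs the side effects:

    #include <stdio.h>

    /* --- functional core: pure decision logic, no I/O, trivially testable --- */
    enum action { DO_NOTHING, EVICT_PAGE };

    struct mem_state { unsigned free_pages; unsigned low_watermark; };

    enum action decide(struct mem_state s)
    {
        return (s.free_pages < s.low_watermark) ? EVICT_PAGE : DO_NOTHING;
    }

    /* --- imperative shell: observes the world, executes the decision --- */
    int main(void)
    {
        /* stand-in for probing real memory state */
        struct mem_state s = { .free_pages = 3, .low_watermark = 16 };

        switch (decide(s)) {
        case EVICT_PAGE:
            printf("evicting a page\n");   /* stand-in for the real side effect */
            break;
        case DO_NOTHING:
            printf("memory ok\n");
            break;
        }
        return 0;
    }

The "bureaucracy" point above is visible even here: something (the shell) still has to decide when to probe, when to call decide(), and how to act on the answer.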



i admit to not being very familiar with the current version of os/360; can you elaborate?

btw, when you say 'z/OS (descendant of System/370)', i think you are confusing hardware and software; system/370 was the hardware (obsolete), os/360 the software (sadly, not obsolete; later renamed os/370, mvs, and z/os in a series of increasingly desperate attempts to escape its reputation)

generally the functional/imperative contrast centers on mutability: imperative style uses mutability, and functional style doesn't. is that what you mean? i'm not sure a functional core in the sense of 'mutation-free core' is a reasonable way to build a computer operating system, because limiting resource consumption and handling failures reliably are two central concerns for operating systems, and typically immutability makes them much more difficult. immutability does have a lot of uses in modern operating systems, but at least on current hardware, it makes more sense to me to build it as a functional shell around a mutable core than the other way around

(the other aspect of the functional/imperative axis has to do with constructing new functions at runtime, passing them as arguments to subroutines, and returning them from subroutines: you do these things in functional programming, but not in imperative programming. i am at a loss how this could relate to what you're talking about at all.)

it's not clear to me what https://web.archive.org/web/20070403130947/http://alistair.c... has to do with functional-core/imperative-shell or for that matter with operating system kernels. can you elaborate?

for the most part operating systems design is an exercise in delegating as much as possible of those 'executive decisions' to userspace. 'mechanism, not policy' is the mantra for kernels and for system software in general, including things like device drivers and window servers. that way, you can use different policies in different parts of the system and change them over time without destabilizing the system. i feel like microkernels are generally better at this than monolithic kernels, and sel4 in particular takes this to the extreme



Ah, sorry for the inaccuracies. I mean MVS as a predecessor of z/OS, of course.

What I mean by functional core/imperative shell is similar to what you mean (the "kernel" is the "imperative shell" and the "userspace" is the "functional core"):

"for the most part operating systems design is an exercise in delegating as much as possible of those 'executive decisions' to userspace. 'mechanism, not policy' is the mantra for kernels and for system software in general, including things like device drivers and window servers"

And z/OS does that a lot, much more than Linux. On a typical z/OS system, many of the functions that would normally be running inside the Linux kernel run in separate address spaces, with limited authority.

But the intractable problem, IMHO, is that to decide the policy you still need the authority to do so (you need to be able to invoke the commands to the kernel), so you can still wreak havoc in the system.



For example, on z/OS, the whole disk storage subsystem (SMS, but there is more) is separate from MVS (the kernel). Security is also externalized in the RACF server (in fact there are alternative products from non-IBM vendors). You can run multiple TCP/IP stacks, which also run in their own address spaces. Sysplex serialization has its own address space.

All the address spaces involved in the operating system are coordinated through SVC or PC routines, which are like system calls, and the scheduling of SRBs, which are kind of like kernel threads. I am not sure (though I may not be aware of the latest developments) whether in Linux one can define a custom system call, like you can on z/OS, or whether you can schedule your own kernel thread from user space.
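For comparison, on Linux (an editorial aside): userspace can invoke any existing system call by number via syscall(2), but adding a new one means patching the syscall table and rebuilding the kernel; there is no runtime registration comparable to installing an SVC or PC routine:

    #define _GNU_SOURCE
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <stdio.h>

    int main(void)
    {
        /* Invoke an existing syscall by number; unprivileged userspace
           cannot add new entries to the syscall table at runtime. */
        long tid = syscall(SYS_gettid);
        printf("thread id: %ld\n", tid);
        return 0;
    }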

You seem to know about MVS, yet we probably disagree on whether it should be called a microkernel. I am not an OS expert, and I have never done kernel-level programming for Linux or z/OS. But I did read Lister's Fundamentals of Operating Systems a long time ago, and that book is somewhat based on what MVS (the actual kernel of z/OS) does. It was written before the whole microkernel debate, which AFAICT might be just an artifact of the enormous variety and complexity of x86 hardware.

So I would like to hear, in your opinion, what should have been different in MVS (or z/OS) for you to consider it a microkernel?



this started failing, i think, last week:
    : ~; git clone git://git.minix3.org/minix.git 
    Cloning into 'minix'...
    fatal: Could not read from remote repository.

    Please make sure you have the correct access rights
    and the repository exists.


The irony here is that both SerenityOS and Redox are UNIX-like. Of course, in their design they're not purely like most other UNIXen, but they don't stray too far either.



They offer a POSIX-like API on top, which isn't the same thing, as the key APIs and the overall system architecture are something else.

Also, mostly because, as happens in most hobby projects, people keep wanting to replicate GNU for the sake of the existing software, thus keeping the UNIX cycle alive.



I would LOVE to build a modern-day operating system using a high-level programming language, even if it were just a pedagogical toy. I love Unix, but it’s not (and shouldn’t be) the final word on OS design.

In the meantime, Project Oberon from the late Niklaus Wirth (http://www.projectoberon.net/) is a good example of a pedagogical operating system (and programming language) that isn't Unix. Project Oberon was heavily influenced by the Cedar and Mesa projects at Xerox PARC.



Interesting systems include the Xerox PARC workstations (Mesa, Cedar, Smalltalk, Interlisp-D), ETHZ Oberon, Inferno, Apollo/Domain, Tru64, and QNX.



QNX is another Unix in the end, and the Photon GUI is nothing odd to any KDE/Windows 2000 user.

Smalltalk has issues with exporting your software to run in a standalone way.

As for Interlisp, there's Mezzano, a Common Lisp OS, but it needs some tweaks and optimizations.

UI-wise, Oberon is the same as Acme under p9/9front/p9port. As for Inferno, 9front and Go superseded it in some of its ideas.



The way it was handled only proves the point that UNIX would never have taken off outside Bell Labs if AT&T had been allowed to sell UNIX the moment it stopped being a toy project for playing games.



The book was barred from publication by AT&T the moment AT&T was released from the ban on selling its research, in parallel with the BSD lawsuit.

It kept being shared via piracy across universities until AT&T and the other commercial UNIX vendors eventually allowed it to be published again.

https://en.wikipedia.org/wiki/A_Commentary_on_the_UNIX_Opera...

Had the book never seen the light of day, in the alternative universe of a commercial UNIX, universities wouldn't have adopted UNIX as research material to feed the next generation of UNIX clone makers.



While I also find the handle quite objectionable, I wouldn't say that "racewar" and "genocide" are words which can be freely exchanged without loss of meaning.

A review of the contributors, well, contributions, suggests a 9front connection.

They have a reputation for humor in very poor taste, but insofar as there's any sincerity to be found there, it isn't that of sincere genocide enjoyers.



One has to respect some of the design decisions behind Plan 9. Per-process namespaces, binds, and treating everything as a file provide a certain level of elegance that just isn't seen in many other Unix-like OSes.
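A small sketch of what that looks like in practice (written from the Plan 9 bind(2) manual; the paths are illustrative):

    #include <u.h>
    #include <libc.h>

    /* Per-process namespaces on Plan 9: bind(2) alters only this
       process's view of the hierarchy. Children inherit it; no other
       process sees the change. */
    void
    main(void)
    {
        char *argv[] = { "date", nil };

        /* Union a private bin directory in front of /bin, for this
           process and its children only. */
        if(bind("/usr/glenda/bin", "/bin", MBEFORE) < 0)
            sysfatal("bind: %r");

        /* Anything exec'd from here resolves through the union. */
        exec("/bin/date", argv);
        sysfatal("exec: %r");
    }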



Disclaimer: I've made LEDs blink with an Arduino and I think microcontrollers are cool, but I'm an embedded systems noob for sure :)

I poked around the FAQ a bit and I'm kinda confused about this.

Do you write bare-metal Pascal programs in this? I.e., do you write a Pascal program that uses Ultibo libraries to run without a separate operating system?

How is the resulting program different from a unikernel?

I think I'm mostly confused about why the IDE is bare-metal (why not write your Pascal programs in a normal environment and then run them bare-metal on your RPi)?



> Do you write bare-metal Pascal programs in this? I.e., do you write a Pascal program that uses Ultibo libraries to run without a separate operating system?

Yes, that's exactly how it works.

> How is the resulting program different from a unikernel?

I think that's essentially what Ultibo is.

> I think I'm mostly confused about why the IDE is bare-metal (why not write your Pascal programs in a normal environment and then run them bare-metal on your RPi)?

You do run the IDE in a normal environment and then run the compiled program on your RPi. See below for a better description of how it works...

https://ultibo.org/wiki/Getting_Started
