I understood it was Unix for 'one mechanism' or 'unified', as opposed to the broad everything-but-the-kitchen-sink Multics approach. That was the joke as I understood it. Nothing about single-user.

If you haven't already, I would start with Advanced Programming in the UNIX Environment by Stevens: https://www.amazon.com/Advanced-Programming-UNIX-Environment...

It covers using all the Unix APIs from user space, including signals and processes. (I am not sure what to recommend if you want to implement signals in the kernel; maybe https://pdos.csail.mit.edu/6.828/2012/xv6.html )

---

It's honestly a breath of fresh air to simply read a book that explains clearly how Unix works, with self-contained examples, and which is comprehensive and organized. (If you don't know C, that can be a barrier, but that's also a barrier to reading blog posts.) I don't believe the equivalent information exists anywhere on the web. (I have a lot of Unix trivia on my blog, which people still read, but it's not the same.)

IMO there are some things for which it's really inefficient to rely on blog posts or Google or LLMs, and understanding Unix signals is probably one of them. (This book isn't "cheap" even used, but IMO it commands a high price precisely because the information is valuable. You get what you pay for, etc. And for a working programmer it is cheap, relatively speaking.)

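For flavor, here's a minimal C sketch (my own, not from the book) of the kind of thing APUE walks through: installing a SIGINT handler with sigaction(2) and doing only async-signal-safe work inside it.

```c
/* A minimal sketch, not from the book: install a SIGINT handler with
 * sigaction(2) and do only async-signal-safe work inside it. */
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t got_sigint = 0;

static void on_sigint(int signo)
{
    (void)signo;
    got_sigint = 1;        /* just set a flag; printf() is not safe here */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_sigint;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;       /* no SA_RESTART: blocked calls return with EINTR */

    if (sigaction(SIGINT, &sa, NULL) < 0) {
        perror("sigaction");
        return 1;
    }

    while (!got_sigint)
        pause();           /* sleep until a signal is delivered */

    printf("caught SIGINT, exiting cleanly\n");
    return 0;
}
```

The flag-and-pause pattern is the usual way to keep the handler itself trivial and push the real work back into the main loop.
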
It is a must for anyone serious about UNIX programming.

Additionally, one should get the TCP/IP and UNIX streams books from the same collection.

Except that much of this later UNIX development was done by its derivatives, and is often available only with a certain degree of incompatibility among them (or not at all).

Windows has a slightly better concept: Structured Exception Handling (https://learn.microsoft.com/en-us/windows/win32/debug/struct...). It is a universal mechanism for handling all sorts of unexpected situations: divide by zero, illegal instructions, bad memory accesses... For console actions like Ctrl+C it has a separate API, which automatically creates a thread in the process to call the handler: https://learn.microsoft.com/en-us/windows/console/handlerrou... And of course Windows GUI apps receive window close events as Win32 messages.

Normal Windows apps don't have a full POSIX subsystem running under them. The libc signal() call is a wrapper around structured exceptions and is limited to only a couple of well-known signals. MSVCRT does a bunch of work to provide an emulation for Unix-style C programs: https://learn.microsoft.com/en-us/cpp/c-runtime-library/refe... In contrast to Unix signals, structured exceptions can give you quite a bit more information about what exactly happened, such as the process state, register context, etc. You can also set the handler to be called before or after the OS stack unwinding happens.

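A rough, MSVC-specific C sketch of both mechanisms (my own example, not taken from the linked docs): a __try/__except block catching a divide-by-zero exception, plus a console Ctrl+C handler registered with SetConsoleCtrlHandler.

```c
/* A rough, MSVC-specific sketch: __try/__except around a faulting operation,
 * plus a console Ctrl+C handler registered with SetConsoleCtrlHandler. */
#include <windows.h>
#include <stdio.h>

/* The system calls this on a fresh thread in the process when Ctrl+C arrives. */
static BOOL WINAPI on_ctrl(DWORD ctrl_type)
{
    if (ctrl_type == CTRL_C_EVENT) {
        puts("Ctrl+C received");
        return TRUE;                /* handled; skip the default handler */
    }
    return FALSE;
}

int main(void)
{
    SetConsoleCtrlHandler(on_ctrl, TRUE);

    volatile int zero = 0;
    __try {
        int boom = 1 / zero;        /* raises EXCEPTION_INT_DIVIDE_BY_ZERO */
        printf("%d\n", boom);
    }
    __except (GetExceptionCode() == EXCEPTION_INT_DIVIDE_BY_ZERO
                  ? EXCEPTION_EXECUTE_HANDLER
                  : EXCEPTION_CONTINUE_SEARCH) {
        puts("caught integer divide by zero via SEH");
    }

    Sleep(5000);                    /* linger briefly so Ctrl+C can be tried */
    return 0;
}
```

The filter expression decides whether this handler takes the exception or lets the search continue up the stack, which is where the extra context (exception code, register state) comes from.
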
Impressive, super cool, and inspiring!

It's an example of how "creating something impressive in X days" requires a lot of experience and talent built up over years.

It was really cool watching the ~daily updates on this on Mastodon - seeing how someone so skilled gradually pieces together a complex piece of software.

Hare looks like an interesting language.

Though I think this limitation will hurt its adoption in the multicore age. From the FAQ, https://harelang.org/documentation/faq.html :

> Can I use multithreading in Hare?
>
> Probably not. We prefer to encourage the use of event loops (see unix::poll or hare-ev) for multiplexing I/O operations, or multiprocessing with shared memory if you need to use CPU resources in parallel.
>
> It is, strictly speaking, possible to create threads in a Hare program. You can link to libc and use pthreads, or you can use the clone(2) syscall directly. Operating systems implemented in Hare, such as Helios, often implement multi-threading. However, the upstream standard library does not make reentrancy guarantees, so you are solely responsible for not shooting your foot off.

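Not Hare, but a rough C sketch of the single-threaded event-loop style that FAQ answer points to (assuming Hare's unix::poll maps closely onto poll(2), which I haven't verified): wait in poll(2), then service the descriptor when it becomes readable.

```c
/* A rough sketch of a single-threaded event loop: block in poll(2) and
 * service the descriptor when it becomes readable. Echoes stdin to stdout. */
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    struct pollfd fds[1] = {
        { .fd = STDIN_FILENO, .events = POLLIN },
    };

    for (;;) {
        /* Block until stdin is readable (or hung up). */
        if (poll(fds, 1, -1) < 0) {
            perror("poll");
            return 1;
        }
        if (fds[0].revents & (POLLIN | POLLHUP)) {
            char buf[4096];
            ssize_t n = read(STDIN_FILENO, buf, sizeof buf);
            if (n <= 0)
                break;              /* EOF or error: leave the loop */
            write(STDOUT_FILENO, buf, (size_t)n);
        }
    }
    return 0;
}
```

With more descriptors you just grow the pollfd array; the whole program stays single-threaded, which is the trade-off the Hare developers are endorsing.
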
Blitter was great, but those were simpler times.

The best we have nowadays is using compute shaders for the same purpose, much like using a TMS34010 with its C SDK.

I was interested in Hare until I found this immensely self-defeating FAQ item: https://harelang.org/documentation/faq.html#will-hare-suppor...

As a baseline, I support developers using whatever license they would like, targeting whatever operating systems they want, and indeed writing whatever code they would like in the process. That doesn't make this specific policy a good idea. Even the FSF, generally considered the most extreme (or, if you prefer, principled) exponents of the Free Software philosophy, supports Windows and POSIX. They may grumble and call it Woe32, but Stallman has said some cogent things about how the fight for a world free of proprietary software is more readily advanced by making sure that Free Software projects run on proprietary systems.

They do at least license the library code under the MPL, so merely using Hare doesn't lock you into a license. But I wonder about the longevity of a language where the attitude toward 95+% of the desktop is "unsupported, don't ask questions on our forums, we don't want you here". Ironically, a Google search for "harelang repo" has an unofficial macOS port as the first hit, and the actual SourceHut repo doesn't show up on the first page of results.

Languages either snowball or fizzle out. I'm typing this on a Mac, but I could pick up a Linux machine right now if I were of a mind to. But why would I invest in learning a language that imposes a purity test on developers, when even the FSF doesn't? A great deal of open source and free software gets written on Macs, and in fact more than you might think on Windows as well. From where I sit, what differentiates Hare from Odin and Zig is just this attitude of purity and exclusion.

I wish you all happy hacking, of course, and success. But I'm pessimistic about the latter.

I don't think that Apple particularly cares about porting their software to Linux. Do you feel the same about Apple? That with such an attitude, they surely cannot succeed?

> so of Apple's programming languages

So the whole part of your message about "even the FSF saying that free software should run on proprietary systems" works when you want to criticize Hare, but not when looking at Apple's proprietary software, right?

A language is just another piece of software; I don't see why you should apply different rules to a programming language than, say, to a serialization system like Protobuf. And I don't think Google actively supports swift-protobuf (https://github.com/apple/swift-protobuf).

Hare upstream just says "we are not interested in supporting non-free OSes, but we won't prevent you from doing it". It's your choice not to use Hare because of this, but it's their choice not to support macOS.

Ouch, I hadn't really considered it before, but that quote deeply resonates with me. The experience of trying to debug the Windows Wi-Fi system is night and day compared to wpa_supplicant/mac80211.

Are there "waypoint" commits for major milestones? I'd really like to see those.

Like PC bootstrap, basic kernel action loops, process forking, yada yada.

Source: UNIX: A History and a Memoir, by Brian W. Kernighan (2019)