(comments)

Original link: https://news.ycombinator.com/item?id=41490290

The author argues that while the NT operating system may have had sounder initial design principles than its contemporaries, it was on the whole neither more advanced nor "better", because its complexity got in the way of practical concerns such as performance, development, and market adoption. Drawing on personal experience with the Windows codebase in the mid-2000s, the author describes NT's coding style as excessive, verbose, and rigid, making routine tasks longer and less efficient than on Unix-like systems. Although NT rested on a strong theoretical foundation, it fell short of Unix-like systems in flexibility, adaptability, and resource allocation. The author concludes that despite the high level of architectural artistry in NT's design, its cost exceeded its value, especially given that Microsoft's resources were finite, and that NT's insistence on strict architecture ultimately yielded a system that was both complex and unreliable. Moreover, the author stresses that even the most reliable kernel can be sunk by additional user-mode components and ancillary services. Finally, the author criticizes Microsoft for hiding its code from developers and ignoring opportunities to collaborate with users. Overall, the author sees NT as a case of "worse is better": in practicality and efficiency, simplicity beats complexity.

this is a lovely and well-written article, but i have to quibble with the conclusion. i agree that "it’s not clear to me that NT is truly more advanced". i also agree with the statement "It is true that NT had more solid design principles at the onset and more features than its contemporary operating systems".

but what i don't agree with is that it was ever more advanced or "better" (in some hypothetical single-dimensional metric). the problem is that all that high-minded architectural art gets in the way of practical things:

    - performance, project (m$ shipping product: maintenance, adding features, designs, agility, fixing bugs)

    - performance, execution (anyone's code running fast)

    - performance, market (users adopting it, building new unpredictable things)

it's like minix vs. linux again. sure, minix was at the time in all theoretical ways superior to the massive hack that was linux. except that, of course, in practice theory is not the same as practice.

in the mid 2000s-2010s my workplace had a source license for the entire Windows codebase (view only). when the API docs and the KB articles didn't explain something, we could dive deeper. i have to say i was blown away and very surprised by "NT" - given its abysmal reliability i was expecting MS-DOS/Win 3.x level hackery everywhere. instead i got a good picture of Dave Cutler and VMS - it was positively, uniformly solid, pedestrian, and explicit. to a highly disgusting degree: 20-30 lines of code to call a function to create something that would be 1-2 lines of code in a UNIX (sure, in UNIX we cheat and overload the return value with error codes and status and the successful object id - i mean they shouldn't overlap, right? probably? yolo!).
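
to make the contrast concrete, here is a minimal sketch of "open a file for writing" in both styles (my own illustration from memory, not code from the Windows source; error handling abbreviated):

    /* UNIX: one call; the return value is overloaded --
       a descriptor on success, -1 plus errno on failure. */
    int fd = open("log.txt", O_CREAT | O_WRONLY, 0644);
    if (fd < 0)
        perror("open");

    /* Win32: seven explicit parameters, a sentinel handle value,
       and a separate GetLastError() call for the failure reason. */
    HANDLE h = CreateFileW(L"log.txt",
                           GENERIC_WRITE,         /* desired access */
                           0,                     /* no sharing */
                           NULL,                  /* default security */
                           CREATE_ALWAYS,         /* creation disposition */
                           FILE_ATTRIBUTE_NORMAL, /* flags and attributes */
                           NULL);                 /* no template file */
    if (h == INVALID_HANDLE_VALUE)
        fprintf(stderr, "CreateFileW failed: %lu\n", GetLastError());

and CreateFileW is only the friendly wrapper - the native NtCreateFile underneath takes eleven parameters plus an OBJECT_ATTRIBUTES structure, which is where the 20-30 line counts come from.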

in NT you create a structure containing the options, maybe call a helper function to default that option structure, then call the actual function; if it fails because of limits, it reports how much you need, then you go back, re-allocate what you need, and call it again. if you need the new API, you call someReallyLongFunctionEx, making sure to remember to set the version flag in the options struct to the correct size of the new updated option version. nobody is sure what happens if getSomeMinorObjectEx() takes a getSomeMinorObjectParamEx option structure that is the same size as the original getSomeMinorObjectParam struct, but it would probably involve calling setSomeMinorObjectParamExParamVersion() or getObjectParamStructVersionManager()->SelectVersionEx(versionSelectParameterEx). every one is slightly different, but they all have the same vibe.
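
that two-call dance is real; here's roughly what it looks like against an actual API, GetTokenInformation (sketched from memory, error handling abbreviated, and the token handle assumed to come from e.g. OpenProcessToken):

    /* first call: deliberately too small, just to learn the size */
    DWORD needed = 0;
    GetTokenInformation(token, TokenUser, NULL, 0, &needed);
    if (GetLastError() == ERROR_INSUFFICIENT_BUFFER) {
        /* second call: allocate what it asked for and try again */
        TOKEN_USER *info = (TOKEN_USER *)malloc(needed);
        if (info && GetTokenInformation(token, TokenUser,
                                        info, needed, &needed)) {
            /* ... finally use info->User.Sid ... */
        }
        free(info);
    }

and the "set the struct size so the Ex call knows which version you meant" flag is real too - see OSVERSIONINFOEXW.dwOSVersionInfoSize or STARTUPINFOEX for the pattern.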

if NT were actual architecture, it would definitely be "brutalist" [1]

the core of NT is the antithesis of the New Jersey (Bell Labs/UNIX, "worse is better") [2] style.

the problem is that all companies, both micro$oft and the commercial companies trying to use it, have finite resources. the high-architect brutalist style works for VMS and NT, but only at extreme cost. the fact that it's tricky to get signals right doesn't slow most UNIX developers down, most of the time - except for when it does, and when it does, a buggy-but-80% solution is but a wrong stackoverflow answer away. the fact that creating a single object takes a page of code, and doing anything real takes an architecture committee and a half-dozen objects that each take a page of (very boring) code, does slow everyone down, all the time.

it's clear to me, just reading the code, that the MBAs running micro$oft eventually figured that out and decided, outside the really core kernel, to adopt neither the MIT/Stanford style nor the New Jersey style - instead they went with "offshore low bidder" style for whatever else has been bolted on since 1995. dave cutler probably spends the rest of his life really irritated whenever his laptop crashes because of this crap. and it's not even good crap code. it's absolutely terrible; the contrast is striking.

then there's another lesson (pay attention, systemd people): buggy, over-complicated user-mode stuff and ancillary services like the control panel, gui, update system, etc. can sink even the best, most reliable kernel.

then you get to sockets, and realize that the internet was a "BIG DEAL" in the 1990s.

ooof, microsoft. winsock.
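
for anyone who never suffered it: before the first socket() call, winsock makes you initialize the library and negotiate a version, which BSD sockets never needed (a from-memory sketch, abbreviated):

    WSADATA wsa;
    /* must negotiate a winsock version before anything else;
       WSAStartup returns the error code directly -- there is
       no WSAGetLastError() to call yet */
    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0)
        return 1;

    SOCKET s = socket(AF_INET, SOCK_STREAM, 0);
    if (s == INVALID_SOCKET)
        fprintf(stderr, "socket failed: %d\n", WSAGetLastError());

    /* ... use the socket ... */

    closesocket(s);   /* not close() */
    WSACleanup();     /* and remember to tear the library down */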

then you have the other, other really giant failure: openness. being open enough to share the actual code with your users is #1. #2 is letting them show the way and contribute. the micro$oft way was violent hatred of both ideas. oh, well. you could still be a commercial company that owns the copyright and not hide the code, good or bad, from your developers. MBAAs (MBA Assholes) strike again.

[1] https://en.wikipedia.org/wiki/Brutalist_architecture
[2] https://en.wikipedia.org/wiki/Worse_is_better