(comments)

Original link: https://news.ycombinator.com/item?id=38019231

Overall, namespaces solve several problems, including preventing duplicate packages with conflicting feature sets, reducing reliance on URL-based solutions, simplifying package discovery, improving code reusability, and providing consistent naming conventions. The main objection to namespaces, however, is that for ease of use, users may prefer shorter, flat namespace names. In addition, namespace squatting can cause confusion and potential usage errors. A suggested compromise is to put all existing packages into their own sub-namespaces. Regardless of these options, the problem of duplicate packages with conflicting feature sets remains an ongoing challenge that requires continuous maintenance work to mitigate and resolve. Finally, it is worth noting that while criticism of Go's current lack of a namespace implementation is justified, Java offers an excellent example of effective namespacing that can serve as a template for future development work. Ultimately, the best approach to namespaced package management will vary according to the specific requirements and constraints of each case.

Related articles

Original article
Was Rust Worth It? (jsoverson.medium.com)
450 points by todsacerdoti 4 days ago | 709 comments

I wrote a lot of rust, but after some years it still feels unproductive. I do a lot of zig now and I am like 10 times more productive with it. I can just concentrate on what I want to code and I never have to wonder what tool or what library to use.

I know rust gives memory safety and how important that is, but the ergonomics are really bad. Every time I write some rust I feel limited. I always have to search for libraries and how to do things. I cannot just "type the code".

Also, the type system can get out of control; it can be very hard to actually know what methods you can call on a struct.

I still think rust is a great tool, and that it solves tons of problems. But I do not think it is a good general purpose language.



> but the ergonomics are really bad. Every time I write some rust I feel limited.

> But I do not think it is a good general purpose language.

Remember that this is not a sentiment that's shared by everyone. I use Rust for tasks that need anything more complicated than a shell script. Even my window manager is controlled from a Rust program. I say this as someone who has been programming in Python for nearly two decades now. At this point, I'm about as fast in Rust as I am in Python.



I tried to get into rust for many years, I'm now in a C/CPP job (after Java/Python/Ruby and other gigs). What I've come to understand is that Rust's lifetime model is very difficult to work with whenever you have a cyclic reference. In C/CPP the same holds, but you deal with it through clever coding - or ignoring the problem and cleaning up memory later. Java, and other GC'd languages just work for these structures.

While the Rust devs believe such cyclic references are rare - I think this speaks mostly to the problem domain they are focused on. Relational models are everywhere in apps, they are often common in complex systems software like databases, and they are fairly rare in firmware/drivers/system code.

There are a few patterns for dealing with cyclic references, but they all end up requiring either unsafe or a main "owner" object which you clean up occasionally (effectively arena allocation). Having now worked in C/CPP - the idea of having unsafe blocks sprinkled around the code doesn't bother me, and many C/CPP components have some form of arena allocation built-in. I just wish Rust learning resources would be more upfront about this.



> Relational models are everywhere in apps, they are often common in complex systems software like databases, and they are fairly rare in firmware/drivers/system code.

It's not like you can't write relational models in safe Rust. The only forbidden thing is a reference pointing at arbitrary memory, which is typically worked around via indices, and often more performant that way. In my opinion, it is much rarer to find applications that need a truly arbitrary pointer that can't be hidden behind an abstraction.
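A minimal sketch of that indices-based workaround (the graph type and its fields are hypothetical, just to illustrate the pattern):

    // Hypothetical index-based graph: nodes hold no references to each
    // other, so the borrow checker never sees a cycle.
    struct Graph {
        nodes: Vec<String>,
        edges: Vec<(usize, usize)>, // (from, to) as indices into `nodes`
    }

    impl Graph {
        fn neighbors(&self, node: usize) -> impl Iterator<Item = usize> + '_ {
            self.edges
                .iter()
                .filter(move |&&(from, _)| from == node)
                .map(|&(_, to)| to)
        }
    }

    fn main() {
        let g = Graph {
            nodes: vec!["a".into(), "b".into(), "c".into()],
            edges: vec![(0, 1), (1, 2), (2, 0)], // a cycle, expressed in safe Rust
        };
        for n in g.neighbors(2) {
            println!("2 -> {} ({})", n, g.nodes[n]);
        }
    }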



> I just wish Rust learning resources would be more upfront about this.

While beginner resources don't dwell too much upon cyclic references, they don't consider unsafe blocks unusual either. All the material I've seen says that there are certain domains where Rust's compile-time safety model simply won't work. What Rust allows you to do instead is to limit the scope of unsafe blocks. However, the beginner material often won't give you too many details on how to analyze and decide on these compromises.

Anyway, compile-time safety checks (using borrow checker) and manual safety checks (using unsafe) aren't the only way to deal with safety. Cyclic references can be dealt with runtime safety checks too - like Rc and Weak.



> Cyclic references can be dealt with runtime safety checks too - like Rc and Weak.

Indeed. Starting out with code sprinkled with Rc, Weak, RefCell, etc. is perfectly fine, and performance will probably not be worse than in any other safe language. And if you do this, Rust is pretty close to those languages in ease of use for what are otherwise complex topics in Rust.

A good reference for different approaches is Learn Rust With Entirely Too Many Linked Lists https://rust-unofficial.github.io/too-many-lists/
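To make the runtime-checked approach concrete, a minimal sketch of the Rc/Weak/RefCell pattern for a toy parent/child tree (strong references point down, weak references point back up, so dropping the parent breaks the cycle):

    use std::cell::RefCell;
    use std::rc::{Rc, Weak};

    struct Node {
        name: String,
        parent: RefCell<Weak<Node>>,      // weak: does not keep the parent alive
        children: RefCell<Vec<Rc<Node>>>, // strong: a parent owns its children
    }

    fn main() {
        let root = Rc::new(Node {
            name: "root".into(),
            parent: RefCell::new(Weak::new()),
            children: RefCell::new(vec![]),
        });
        let child = Rc::new(Node {
            name: "child".into(),
            parent: RefCell::new(Rc::downgrade(&root)),
            children: RefCell::new(vec![]),
        });
        root.children.borrow_mut().push(Rc::clone(&child));

        // Walking back up means upgrading the weak reference at runtime.
        if let Some(p) = child.parent.borrow().upgrade() {
            println!("{} -> parent {}", child.name, p.name);
        }
    }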



Also, take a look at GhostCell (https://plv.mpi-sws.org/rustbelt/ghostcell/ and https://www.youtube.com/watch?v=jIbubw86p0M). If anyone's used this in a project or production environment, I'd love to hear your firsthand experiences and insights.


You still need a main owner in those patterns, and that owner must be part of a DAG of owners: you cannot have cyclic ownership.


Except in those other languages the compiler types .clone() for me.


Sometimes the compiler types clone for you even when you don't actually want it to.


It is easier to tell it "don't do it this time" than to tell it every time.

It is no accident that while Val/Hylo, Chapel and Swift have taken inspiration from Rust, they have decided not to inflict affine types directly on language users, and instead let the compiler do part of the work itself.



Oh I just put up a blog post about this on Monday :)

https://jacko.io/object_soup.html

Agreed that I wish more beginner Rust books had a section about this. The pattern is quite simple, but it's hard for beginners who get stuck to realize that they need it.



I would have needed this when I started learning Rust! All my early programs were object soups.


> While the Rust devs believe such cyclic references are rare -

They are.

I have not had to use cyclic references ever, except once doing experiments with fully connected graphs; that was very unusual.

If you're doing a lot of cyclic references, Rust is not the right choice. Horses for courses.

But are you sure you're using the best algorithm?



Maybe for you it's unusual - in my previous work all the apps contained graphs, and I just joined a company where almost all the apps also contain graphs


I don't write Rust, but I never understood why graphs meant you need circular references.

Doesn't it just come down to the question of who owns the node?

If it's a tree, and parents are never removed before children, just make the child owned by the parent and keep a weak reference to the parent.

If it's a general graph, and vertices can exist or not exist regardless of edges, keep a list of them independent of the edges, and keep weak references in the edges.

If it's a graph where one or a few roots exist, and nodes exist as long as there's a path from a root node to them, that sounds like a classic use case for Rc.

Is there a common use case I'm missing?



Things get tricky when you have a valid triangular relationship amongst equal objects. This comes up far more often than you’d expect.


Can you give an example?


What's a frequently encountered case for such cyclic loops? Without details I'm drawn to trying to break the cycle, either by promoting the shared state to a container object for the set, or by breaking it out into its own object that multiple things can point at.


I think a game is a good example, or anything that's kind of like a game in that it's modeling a world that's changing over time. Objects come and go, and they "target" each other or "require" each other or whatever sort of relationships the program wants to express. Those relationships end up forming a graph that might contain cycles.

I just put up a blog post about this actually :) https://jacko.io/object_soup.html

> promoting the shared state to a container object for the set

Yeah I think that's a good way to describe these "ECS-ish" patterns.



a parent field.

a doubly linked list



Your parent said "frequently encountered" and while it's probably true that doubly linked lists may be "frequently encountered" in some people's code they're usually a bad idea and "don't use a list here" is often the right fix, not "conjure a way to make that safe in Rust".

It's very noticeable how often people who "need" a linked list actually only wanted a queue (thus Rust's VecDeque) or even a growable array (ie Vec).
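For illustration, the queue case in safe Rust, with no linked list in sight:

    use std::collections::VecDeque;

    fn main() {
        // A ring-buffer-backed queue: O(1) push/pop at both ends,
        // no per-node allocation like a linked list would need.
        let mut queue: VecDeque<u32> = VecDeque::new();
        queue.push_back(1);
        queue.push_back(2);
        queue.push_back(3);
        while let Some(item) = queue.pop_front() {
            println!("dequeued {item}");
        }
    }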

Aria has a long list of excuses people offer for their linked lists, as well as her discussion of the time before Rust 1.0 when she sent lots of Rust's standard library collections to that farm up-state but wasn't able to send LinkedList.

https://rust-unofficial.github.io/too-many-lists/



A doubly-linked list is something you have almost no reason to ever write.

Parent field is something where you have a clear hierarchy (it's not really “cyclic”, so it's the perfect use-case for weak references).

When coming from a managed-memory language, this obviously requires some conceptual effort to understand why this is a problem at all and how to deal with it, but when compared to C or C++, the situation is much better in Rust.



Also, a parent field is something you should be able to infer, e.g. by keeping a stack of parents as you traverse down a search tree following the child pointers, instead of storing parent pointers in the tree nodes.
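A sketch of that idea for a hypothetical binary search tree with child pointers only (no parent field anywhere):

    struct Node {
        key: i32,
        left: Option<Box<Node>>,
        right: Option<Box<Node>>,
    }

    // Returns the keys visited from the root down to `target`, i.e. the
    // parent chain that stored parent pointers would otherwise give you.
    fn path_to(root: &Node, target: i32) -> Vec<i32> {
        let mut path = Vec::new();
        let mut cur = root;
        loop {
            path.push(cur.key);
            if target == cur.key {
                return path;
            }
            let next = if target < cur.key {
                cur.left.as_deref()
            } else {
                cur.right.as_deref()
            };
            match next {
                Some(child) => cur = child,
                None => return path, // not found; path ends at a leaf
            }
        }
    }

    fn main() {
        let tree = Node {
            key: 5,
            left: Some(Box::new(Node { key: 3, left: None, right: None })),
            right: None,
        };
        println!("{:?}", path_to(&tree, 3)); // prints [5, 3]
    }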


That's assuming you traverse the tree down from the root each time. Often you do, but there are cases where you don't -- e.g., if your goal is to determine the lowest common ancestor of two given nodes.


ASTs


An abstract syntax tree can't have cycles by definition.


Technically true, but sometimes you want parent pointers. You then have a more general graph in the underlying representation, but it still represents a tree structure.


The same shows up in Postgres's Query* structs for SQL. Copying memory between the parser, planner, and executor would be too expensive in large queries - so instead you have an arena-allocated representation.


ASTs are one of the "nicely behaved" data structures. They are like the archetype of the abstract data types pervasive in functional programming languages.


You usually solve this by using a traversal helper that keeps the stack and next/prev for you without storing them inside the AST explicitly.


An abstract syntax tree or a doubly linked list both qualify, but they're also a lower-level implementation detail than I'd expect to frequently interact with in a reference-safety-focused language.

I've still been meaning to write something in / learn Rust's ways of thinking; is there not an intended replacement for these data structures? Or do they expect it all to go under Unsafe?



> is there not an intended replacement for these data structures? Or do they expect it all to go under Unsafe?

For linked lists, there's one in std, and the majority of people should never have to write their own, as it's error-prone and requires unsafe.

For the graph use case you can use ECS, an arena, ref counting or unsafe, but you're probably better off using/developing a dedicated crate that optimizes it and abstracts it away behind an easy-to-use (and safe) interface.



The one in std uses unsafe. My main concern with learning rust is that you can spend ages trying to learn “the right way” of doing things in rust, when the right way really is to use unsafe.


No, the right way is to use unsafe primitives that have been tested, audited or even formally proven (like the ones in std).

Sometimes such a primitive doesn't exist and you should use unsafe yourself, but then you're the one supposed to make sure that your code is in fact sound. If you keep unsafe for small portions of the code you can reason about and extensively test so Miri gives you a good level of confidence, then it's fine to use unsafe. But it's the more expensive option, not the default.



To be a bit pedantic, I assume the language you are referring to as CPP is actually C++? cpp to me means the c (and c++) preprocessor.


Surprisingly, I am faster in Rust than any other language. Something about my prior experiences just made it click just the right way.

I don't want to program in anything else anymore. I don't want to deal with obscure C++ error messages, C footguns and lack of ergonomics, I don't want to deal with abstraction hell of Java, or the poor person's typing that python has.

I have been programming in Python for the past 6 years, I know all sorts of obscure details, and with rust, I just don't need to think about all of those issues.



> Surprisingly, I am faster in Rust than any other language.

Not really surprising, given that you have a C and C++ background. That's what I was trying to highlight. Rust isn't the confusing or unproductive language many project it to be - if you have a conceptual understanding of what happens on the hardware, especially about stack frames and RAII. If you know those, the borrow checker complaints will immediately make sense and you will know how to resolve them.

Add rust-analyzer (Rust's language server) to it and you get real-time type annotations and a way to match types correctly on the first attempt. In my experience Rust also helps structure your program correctly and saves a ton of time in debugging. All in all, Rust is a fast way to write correct programs.



> Rust isn't the confusing or unproductive language many project it to be - if you have a conceptual understanding of what happens on the hardware, especially about stack frames and RAII. If you know those, the borrow checker complaints will immediately make sense and you will know how to resolve them.

I have a reasonable understanding of "what happens on the hardware" (been writing kernel code for years), know modern C++ (with RAII and stuff), and Rust is still a confusing and unproductive language for me.



I get the feeling that learning rust can be a "bang your head against it until you get an 'aha' moment" sort of affair, much like learning git.

Some people pick up rust quickly because it clicks into their brain early, some take longer or end up bouncing off.



I had university courses on computer architecture and assembly, even before I took up Python as a hobby. I did have a little C experience before that. My entire perspective on Rust type system from day 1 (back in 2013, before Rust 1.0) was based on the hardware (primarily stack frames) and problems I had with assembly and C. There was never a point where the borrow checker didn't make sense. This is why I insist that Rust isn't hard to understand if you learn the hardware on which it runs.

Back then, people were debating the design decisions that led to the borrow checker, in public for everyone to see (on Reddit and IRC). They were trying to avoid memory safety issues in Firefox and Servo. They were even a bit surprised to discover that the borrow checker solved many concurrency bugs as well.



I took a different route than Goku (the other commenter). I used to write a lot of C and C++ in university, did everything there up until 2018-ish. Then I got a bit into Rust and things just clicked: my understanding of memory had just not been good enough, and my C skills skyrocketed as a consequence of learning proper memory management.

Then I got into Haskell, and functional programming, that made thinking about traits, immutability, and all functional aspects a breeze.

Then finally, I got into rust again, use it at work and personal projects. Somehow I managed to rewrite a project that took me 4 months in Python in about 4 days. It was faster, more robust, cleaner, and I could sleep at night.



I'd add that if you have some understanding of how memory ownership should be such that you don't end up with memory leaks, you are fine. The borrow checker just verifies that your mental model is correct, and removes some of the cognitive load from you.


> if you have some understanding of how memory ownership

What reading materials will help me level up my understanding of this?



> At this point, I'm about as fast in Rust as I am in Python.

This is factually impossible.

For anything larger than (very) small programs, Rust requires an upfront design stage, due to ownership, that is not required when developing in GC'ed languages.

This is not even considering more local complexities, like data structures with cyclical references.



> This is factually impossible.

How do you outright deny something as subjective as my personal experience? Besides, I'm not the only one in this discussion who has expressed the same opinion.

> For anything larger than (very) small programs, Rust requires an upfront design stage, due to ownership, that it's not required when developing in GC'ed languages.

While GC'ed languages allow you to skip a proper initial design stage, it's a stretch to claim that it's not required at all. In my experience using Python, while the initial stages are smooth, such design oversights come back and bite at a later stage - leading to a lot of debugging and refactoring. This is one aspect where Rust saves you time.

> This is not even considering more local complexities, like data structures with cyclical references.

I'm not going to dwell on cyclical references, since there's another thread that addresses it. They point out a way to make it as easy in Rust as it is in GC'ed languages.

Meanwhile, the upfront architecture and data structure design isn't as complicated as you project it to be. Rust is mostly transparent about those - even compared to Python. How well do you understand how Python manages lists, dictionaries or even objects in general? I often find myself thinking about it a lot when programming in Python. While you need to think upfront about these in Rust, there's actually less cognitive overhead as to what is happening behind the scenes.



This is possible if you are really slow in python


Or maybe they mean "fast to 1.0" rather than "fast to 0.1"?

They didn't specify.



That is also not shared by everyone. If you have written enough Rust to have internalized designing for the borrow checker, you don't have to spend much time in a design phase.

The only time I find I have to "fight the compiler" is when I write concurrent code, and you can sidestep a lot of issues by starting with immutable data and message passing through channels as a primitive. It's a style you have to get used to, but once you build up a mental library of patterns you can reasonably be as fast in Rust as you are in Python.
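A small sketch of that style using only the standard library (the worker logic here is made up, just to show the shape):

    use std::sync::mpsc;
    use std::thread;

    fn main() {
        let (tx, rx) = mpsc::channel();
        thread::spawn(move || {
            for i in 0..3 {
                tx.send(i).unwrap(); // each value is moved into the channel
            }
            // `tx` is dropped here, which closes the channel.
        });
        // The receiver iterates until the channel is closed.
        for msg in rx {
            println!("got {msg}");
        }
    }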



> For anything larger than (very) small programs, Rust requires an upfront design stage, due to ownership, that it's not required when developing in GC'ed languages.

It's nearly the opposite. For larger programs in Python, you need an upfront design stage because the lack of static typing will allow you to organically accrete classes whose jobs overlap but whose interfaces differ.

Meanwhile, Rust will smack you over the head until your interfaces (traits) are well-organized, before the program grows enough for this to become a problem (or until you give up and start over).

How do I know? I'm stuck with some larger Python programs that became a mess of similar-but-not-interchangeable classes. RiiR, if I ever have the time.



> For larger programs in Python, you need an upfront design stage because the lack of static typing will allow you to organically accrete classes whose jobs overlap but whose interfaces differ.

You can also install pre-commit and mypy, and have static typing.



That's the entire point we're making. Rust's type system forces you to deal with the problem early on and saves time towards the end. It's not like that's impossible with Python with addons like mypy. But Rust's type system goes beyond just data types - lifetimes are also a part of the type system. I don't know how you can tack that on to Python.


> Rust's type system forces you to deal with the problem early on and saves time towards the end. It's not like that's impossible with Python with addons like mypy.

Definitely not - mypy's pretty good these days, and lots of people use it.

> But Rust's type system goes beyond just data types - lifetimes are also a part of the type system. I don't know how you can tack that on to Python.

Well, Python's objects are generally garbage collected rather than explicitly destroyed, so I don't think it'd make sense to have lifetimes? They don't seem like a correctness thing in the same way that types are.



Lifetimes and borrowing are very much a correctness thing and aren't just for tracking when memory is freed. While you won't have use-after-free issues in a GCed language, you will still have all the other problems of concurrent modification (data races) that they prevent. This is true even in single-threaded code, with problems like iterator invalidation.
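A tiny example of the single-threaded case: the borrow checker rejects the classic iterator-invalidation bug at compile time (the rejected line is kept as a comment so the sketch still compiles):

    fn main() {
        let mut v = vec![1, 2, 3];

        // Rejected at compile time - mutating while iterating:
        // for x in &v { v.push(*x); }
        // error[E0502]: cannot borrow `v` as mutable because it is
        // also borrowed as immutable

        // The compiling version makes the two phases explicit:
        let copies: Vec<i32> = v.clone();
        v.extend(copies);
        println!("{v:?}");
    }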


Lifetime analysis matters a lot for way more than just garbage collection.

File handles, iterators, mutex guards, database transaction handles, session types, scoped threads, anything where ordering or mutual exclusivity matters.
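One concrete item from that list, sketched with std's scoped threads: the scope's lifetime proves the borrowed data outlives every spawned thread.

    use std::thread;

    fn main() {
        let data = vec![1, 2, 3];
        thread::scope(|s| {
            s.spawn(|| println!("sum = {}", data.iter().sum::<i32>()));
            s.spawn(|| println!("len = {}", data.len()));
        }); // both threads are joined here, before `data` goes away
    }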



I don't know about all of those, but Python's context managers and built in constructs handle most of those, I think?


Only in the most basic cases. If your handle has to be passed into another function or outlive the current scope of your function, the guardrails end.


RiiR: Rewrite it in Rust


> This is factually impossible.

Factually it's not. It may be true that in a very, very idealized thought experiment, where someone has perfect knowledge, never makes mistakes, doesn't have preferences, can type arbitrarily fast, etc., Python needs fewer keystrokes, fewer keywords or such, and thus is faster. Who knows. But in reality none of the assumptions above hold, and literally everything else plays a much bigger role in development speed anyway.



You know what slows me down in Python? The fact that you need to actually go down a code path to make sure you’ve spelled everything right.

But nothing that Rust does slows me down because I’m used to it.



Of course it's possible. You just need to write Python very slowly :)


Everyone has their breaking point. I start to write Python very slowly around after 10k lines or so. Can't remember where I put stuff...


> For anything larger than (very) small programs, Rust requires an upfront design stage, due to ownership, that it's not required when developing in GC'ed languages.

Every language requires this (if you want robust code), most just let you skip it upfront ... but you pay dearly for doing so later.



I disagree; I have made a similar observation. With modern editors and language servers giving immediate feedback, writing strongly typed languages doesn't differ from writing Python.


> This is factually impossible.

No, it isn't. Both languages comprise much more than their memory management handling. Even if your premise is true, the conclusion does not follow.



I mean, it's easily the same for me. I am way more productive in Rust because I know it very well, and with Python I'm debugging all kinds of gotchas.


I have to second the OP: ownership isn’t that hard. I just get used to structuring a program in certain ways. Having written a lot of C++ helps because the things Rust won’t let you do are often unsafe or a source of leaks and bugs in C++.

Having an editor with rust-analyzer running is massively helpful too since ownership issues get highlighted very quickly. I can’t imagine dealing with any language larger than C without a smart editor. It can be done but why?

I still find async annoying though.

My biggest source of friction with Rust (other than async) is figuring out how to write code that is both very high performance and modular. I end up using a lot of generics and it gets verbose.



I think this is a very valuable comment, and the replies don't do it justice.

I strongly agree, from my own and my peers' experience, with the sentiment that latency from zero to running code is just higher in Rust than in Python or Go. Obviously there are smart people around and they can compensate a lot with experience.



Honestly I found myself coding very much the same way in Rust as I did in Python and Go, which were my go-to hobby languages before. But instead of "this lock guards these fields" comments, the type system handles it. Ownership as a concept is something you need to follow in any language, otherwise you get problems like iterator invalidation, so it really shouldn't require an up-front architectural planning phase. Even for cyclic graphs, the biggest choice is whether you allow yourself to use a bit of unsafe for ergonomics or not.
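For instance, the "this lock guards these fields" comment becomes a type; a minimal sketch (the struct and names are hypothetical):

    use std::sync::Mutex;

    struct Stats {
        hits: u64,
        misses: u64,
    }

    // The invariant lives in the type: the only way to reach the fields
    // is through the lock, so "forgot to take the lock" cannot compile.
    static STATS: Mutex<Stats> = Mutex::new(Stats { hits: 0, misses: 0 });

    fn record_hit() {
        STATS.lock().unwrap().hits += 1;
    }

    fn main() {
        record_hit();
        let s = STATS.lock().unwrap();
        println!("hits = {}, misses = {}", s.hits, s.misses);
    }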

Having a robust type system actually makes refactors a lot easier, so I have less up-front planning with Rust. My personal projects tend to creep up in scope over time, especially since I'm almost always doing something new in a domain I've not worked in. Whenever I've decided to change or redo a core design decision in Python or Go, it has always been a massive pain and usually "ends" with me finding edge cases in runtime crashes days or weeks later. When I've changed my mind in Rust it has, generally, ended once I get it building, and a few times I've had simple crashes from not dropping RefCell Refs.



Couldn't one use Arc and similar boxed types to avoid thinking about memory until later?


Why not just use something like Nim at that point, and straight up ditch 90% of the complexity?


I was answering to this other user:

> This is factually impossible.

> For anything larger than (very) small programs, Rust requires an upfront design stage, due to ownership, that it's not required when developing in GC'ed languages.

It seems that is not factually impossible.

Now, answering your question: It could be useful to use boxed types and later optimize it, so you get the benefits of rust (memory safety, zero cost abstractions) later, without getting the problems upfront when prototyping.



I've finally set my mind to properly learning a new language after Python, Haskell, and typescript. I'm looking into Rust especially because of how I've heard it interoperates with Python (and also because it's maybe being used in the Linux kernel? Is that correct?).


Rust is an excellent follow up to those languages. It's got many influences from Haskell, but is designed to solve for a very different task that's not yet in your repertoire so you'll learn a ton.

And yes the Python interop is excellent.



I'm sold, thank you. Yes, it felt like a great "missing quadrant" to my generalist skillset.


The Linux kernel has support for Rust userland drivers, and Rust interops with Python via pyo3.
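For reference, the canonical pyo3 hello-world, roughly as the pyo3 docs show it around this time (the module and function names come from that docs example; the importable extension is typically built with maturin):

    use pyo3::prelude::*;

    /// Callable from Python as string_sum.sum_as_string(a, b).
    #[pyfunction]
    fn sum_as_string(a: usize, b: usize) -> PyResult<String> {
        Ok((a + b).to_string())
    }

    /// The module importable from Python after building the extension.
    #[pymodule]
    fn string_sum(_py: Python<'_>, m: &PyModule) -> PyResult<()> {
        m.add_function(wrap_pyfunction!(sum_as_string, m)?)?;
        Ok(())
    }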


Not sure what you mean by "userland" drivers here, but support for kernel modules written in rust is actively being developed. It's already being used for kernel drivers like the Asahi Linux GPU driver for M1 Macs.


I am referring to userspace / userland drivers.

https://www.kernel.org/doc/html/v4.18/driver-api/uio-howto.h...



But you can write userspace drivers in any language, as long as that language has basic file I/O and mmap() support. There's nothing special about using Rust for userspace drivers.


Isn't this false? Don't certain languages basically need, say, a libc that isn't necessarily available in kernel space?


What window manager is that, may I ask out of curiosity?


Sway. And river, sooner or later. A single Rust program is used for setting up services (using s6-rc on Gentoo), shutdown, reboot, idle inhibit, etc.


It's interesting that you bring up Python. I find Rust unpleasant to program in -- and I also find Python unpleasant to program in.

Now I'm wondering about demographics. Are people who love Python more likely to love Rust as well?



I agree. I feel far more productive in C and C++ than in Rust at that point.

Rust feels like it totally misses the sweet spot for me. It's way too pedantic about low level stuff for writing higher level applications, but way too complicated for embedded or writing an OS. In the former case I would rather take C++, Java, Haskell, OCaml or even Go, and maybe sprinkle in some C; in the latter case C in macroassembly mode is far more suitable.

I still have a feeling that the original vision of Graydon Hoare (i.e. OCaml/SML with linear types, GC, stack allocations, green threads and CPS) would have been a much better language.



The problem with C and C++ is that it's 2023 and the CVE list is still loaded with basic memory errors. These come from everywhere too: small companies and open source all the way up to Apple, Microsoft, and Google.

We as a profession have proven that we can't write unsafe code at scale and avoid these problems. You might be able to in hand-whittled code you write, but what happens when other people work on it, it gets refactored, someone pulls in a merge without looking too closely, etc., or even maybe you come back two years later to fix something and have forgotten the details.

Having the compiler detect almost all memory errors is necessary. Either that or the language has to avoid this entirely. Rust is in the former class unless you use unsafe, and the fact that it's called "unsafe" makes it trivial to search for. You can automatically flag commits with unsafe in them for extra review or even prohibit it.



I think nobody is arguing against the need for static memory safety, just that the poor Rust ergonomics aren't a good tradeoff, especially for scenarios where C is useful. We need many more Rust alternatives that explore in different directions; Rust is already too big and "established" for any radical changes in direction.


In that regard, by packaging Modula-2 into a C-friendly syntax, Zig is a relatively interesting option, I agree; however, not having a story for binary library distribution is a hindrance in many C-dominated industries.


> I think nobody is arguing the need for static memory safety, just that the poor Rust ergonomics aren't a good tradeoff

Unless the "poor ergonomics" and lack of shortcuts are explicitly what provides the static memory safety.



IMHO Rust's ergonomics problems aren't caused by the borrow checker itself, but have the same cause as similar problems in C++: mainly a "design by committee" approach to language design, and implementing features in the stdlib that should be language syntax sugar, which results in the stdlib being too entangled with the language and in "too noisy", hard-to-read code.

Apart from the static memory safety USP, Rust is repeating too many problems of C++ for my taste, and at a much faster pace.



I agree with this. The borrow checker itself isn't the problem. That's necessary to make you write correct and safe code anyway.

The problem is that there is too much syntax, too many symbols, and too many keywords. I just keep forgetting how to use impl and lifetimes and single quotes and whatnot. It makes it really tough to use as an occasional language. And if I can't do that, then how can I get confident enough to use it in my job?
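For readers who haven't hit it, the syntax in question looks roughly like this contrived example (`'a` is a lifetime parameter tying the returned &str to the input):

    struct Parser<'a> {
        input: &'a str,
    }

    impl<'a> Parser<'a> {
        fn first_word(&self) -> &'a str {
            self.input.split_whitespace().next().unwrap_or("")
        }
    }

    fn main() {
        let p = Parser { input: "hello world" };
        println!("{}", p.first_word());
    }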



> The problem is that there is too much syntax, too many symbols, and too many keywords. I just keep forgetting how to use impl and lifetimes and single quotes and whatnot.

This is exactly how I feel about Rust.

There are some good ideas in there, hiding behind a horrible language design. I am waiting for someone to provide a more developer friendly alternative with the simplicity of C or Go.



Maybe Lobster could be an option.

https://strlen.com/lobster/

https://aardappel.github.io/lobster/memory_management.html

It uses compile-time reference counting / lifetime analysis / borrow checking, which is mostly inlined to the point that there is none of it in the compiled output, and objects can even live on the stack. It basically looks like Python but is nothing like it underneath, and of course has no GIL. You can run it on a JIT or compile it to C++.

There's also Koka, with the compile-time Perceus algorithm, which looks like a much cleaner language than Rust. It also tracks the side effects of every function in a type where pure and effectful computations are distinguished.

https://github.com/koka-lang/koka



In that regard, strangely enough, I find that with constexpr code, templates, and concepts, it is easier to achieve most compile-time code stuff while staying in C++ than by dealing with Rust macros.


When the pendulum swings it often swings far before normalizing somewhere in the middle. I agree that Rust isn't the answer.

FWIW there is interest in adding bounds checking to C [1]. That discussion includes feedback from at least one member of the C standards committee.

[1] https://discourse.llvm.org/t/rfc-enforcing-bounds-safety-in-...



It is; however, those same companies aren't turning the safety knob all the way up either, hence Microsoft just recently published a set of secure coding guidelines for C and C++.

https://devblogs.microsoft.com/cppblog/build-reliable-and-se...



Does it have tooling to enforce those guidelines? If not, how is it better than someone saying "write correct code" and calling it a guideline?


Following a guideline that checks correctness for you is easier than following "write correct code".


Partially, in Visual Studio and GitHub.


Quite annoying to read in my native language, German. Do they use automatic translations? It's full of grammar errors and mistranslations.


These are two issues which are theoretically orthogonal, but in practice not so much. They are known as soundness and completeness. A good talk on the topic: [1]

Rust will reject a lot of sound programs, and that's a huge performance hit. You hit the incompleteness wall with multiple mutable borrows, and closures in structures are a huge source of pain as well. And usually the answer from the community is "use handles instead of pointers", but this gives you, surprise surprise, manual resource management like that in C, and the compiler won't help much.

Of course this is all subjective, but for me the ergonomics of Rust are far too bad. It's a good step in the right direction along with ATS, but I really hope we can do better than this.

[1] https://www.youtube.com/watch?v=iSmkqocn0oQ



Can you give us examples, please? I've been using Rust since version 1.0, and I like it a lot.


Cyclic data structures are impossible to represent in safe Rust because there is no clear "owner" in a cyclic data structure.


Cyclic structures can be flattened into a vector or an arena, or unsafe Rust can be used.


What are some examples of sound programs you want to write in Rust but are unable to write?


I maintain a very large C and C++ application and very rarely have any memory issues. Tools like Valgrind and Helgrind are excellent for finding and fixing problems. So switching to Rust is a very bad ROI.


I gave Rust a few chances, and always came out hating its complexity. I needed a systems programming language to develop a hobby OS[1], and Nim hit the sweet spot: very ergonomic, optional GC, and great interop with C. I can drop down to assembly any time I want, or write a piece of C code to do something exotic, but the rest of the system is pure Nim. It's also quite fast.

[1] https://github.com/khaledh/axiom



Opposite experience for me. Writing Rust on embedded systems greatly improved my confidence and speed. When using C, a small mistake often leads to undefined behaviour and headaches. With Rust there's none of that; it's been a game changer for me.


I was using Rust on embedded; I moved to Zig. Very happy with it, as you can pass allocators around to use fixed buffers.


When you are developing hardware on an FPGA, a lot of hardware bugs look like they have locked up the CPU and strangely enough, a lot of undefined behavior looks exactly like a hardware lockup...


I am curious what kind of code you are writing? Is it very low level or very high?

>I know rust gives memory safety and how important that is, but the ergonomic is really bad. Every time I write some rust I feel limited. I always have to search libraries and how to do things. I cannot just "type the code".

You don't have to search libraries and figure out how to do things in Zig?



It's hard to describe, but in some languages, you spend a lot less time looking at reference docs and more time just naturally writing the solution. Lisp is a great example of that, if you get through the learning curve.


I suspect Zig libraries feel easier because they're doing easier things. I bet if you try to draw a triangle with the Vulkan API in Zig, you'll find yourself looking at the reference docs a lot.


Most of the time I can use my "general computer science baggage" to write the solution. At present, I do embedded and web business logic (wasm) where the UI is rendered by preact. For those two projects zig is working very well.


I agree with this general feeling, and it is hard to articulate

Rust forces you to figure out ahead of time where each bit or byte is going to go and on which thread and using which mutation scheme. I’m happy to play the game, but it feels tedious for anything short of a parser or a microcontroller.

It messes with my process because I like to get something working before I determine the best API structure for it

I can get 90% of the performance with Swift and it flows much more easily, even though Rust’s type system is more powerful.



I've written plenty of Rust code in my life (easily more than 100kLOC), and I've really never worried about putting which bit where.

You can just clone and be on your merry way; you don’t need to worry about perf-related things if you don’t want to.



Those two sentences feel in contradiction with one another. You don't need to worry about where the bits go, you just need to know to call a method to move the bits?

Swift makes every type implicitly copyable on the stack, including object pointers (through ARC), so you don’t have to clone. You can even pass functions as variables without any hoops.

I love lots of things about Rust, though, and will continue to use it for lots of things. Cross-platform things, for one!



I'd rather use compiled managed languages like Swift, D and C# instead; they provide enough low-level coding knobs for C and C++ style coding, while being high-level productive.

Would add Go to the list, but only when I really have to.

Nim and Crystal could be alternatives, but don't seem to have big enough communities, at least for what I do.

However I do agree with the conclusion: Rust is a great language for scenarios where no form of automatic memory management is allowed (kernels, specific kinds of drivers, GPGPU programming); as a general-purpose language, there are more productive alternatives that are equally safe.



C# is underrated by the HN crowd, I find. I quite like how mid sized firms (100-1000 employees) use it.


I used to be a .NET dev and don't agree. A couple of reasons:

1) Modern Java is almost as good as C#, with some things I can't give up in Java (static imports => succinct code, Groovy Spock => succinct tests)

2) Kotlin is better than C#

3) The JVM has a much, much bigger ecosystem (almost all the Apache projects are JVM oriented), and the default web framework (Spring Boot) requires much less code to type and is much more productive

4) The JVM has a wider variety of langs

For those reasons, IMHO, if you are a small-to-mid company (or startup) it's wiser to choose the JVM.



Kind of; maybe you need to do some low level coding and don't want to wait for Valhalla, or want to make use of JNI and native libraries. GraalVM/OpenJ9 still aren't as integrated as .NET Native or Native AOT, e.g. for writing native shared libraries.

Also, Java lost the attention span of the gaming industry; besides Android casual games and Minecraft, there is hardly anyone else paying attention to it.



To be fair, C# is ok for game dev, but not great. C# libraries are lagging heavily behind Java.

Want the fastest possible library? It's in C++, and not portable to Win/Mac. So good luck wrapping + porting it.

Want a decent implementation of an algo? It usually exists for Java but not for C#. Hope you like writing it from scratch.

Want a C# implementation of an algo that doesn't allocate to the Nth degree? Again, write it yourself.

But ok, maybe Unity has a good ecosystem... And they fucked it over a barrel.



It is far better received in the AAA game developer community than Java, and that is what matters.

I also like Java, but c'mon, no decent algorithms implemented in C#? That is already approaching zealotry.



I didn't say there are no decent algorithms in C#, but for each performance-sensitive algorithm/data structure there is a C and a Java implementation at the least (in my case, Roaring Bitmaps).

In C# the solution is half-baked, or archived, or abuses allocation.

I think Unity has way more to do with C# adoption in game dev than innate C# qualities.



This is a classic case of goalpost moving. The reason why so many algorithms are written in Java, especially closer to the academic side, is that most comp-sci curriculums often straight up don't allow using anything except Java, Python or sometimes C++. Having C# as an alternative in these is a luxury. There are also more people using Java in general. However, this does not make it a better language for solving these tasks, nor is it any more suitable for writing high-performance implementations of advanced vectorized algorithms that push the hardware, which is what you actually want when you start caring about such scenarios, and which C# excels at.


I'm not moving the goalpost. I explained my examples in another reply. Want to write an engine mostly from scratch in C# and you need libraries that are low on allocation and for niche data/algorithms that games need? You're going to have a bad time(TM).

Sure, you could use a YAML parser, but it allocates everyone and their mother. Can you find a Fluent localization library in C#? Sure, but it's outdated and archived. OK, but a basic RoaringBitmap implementation? The repo is archived and not fully complete.

Why C# is used in game dev is incidental. It has more to do with Unity and XNA/FNA than any concrete quality of the language, modulo value types (but then again, most C# libraries don't focus on avoiding allocation and are just as happy as Java to construct a complicated hierarchy of classes).



I think Java is only good for long-running servers.

Java doesn't support C interop. For many desktop and embedded projects this is a showstopper; here's an example: https://github.com/Const-me/Vrmac/tree/master/VrmacVideo That C# code directly consumes V4L2 and ASIO Linux kernel APIs, and calls unmanaged user-mode DLLs like libfdk-aac.so and liba52-0.7.4.so.

Native stack and value types in C# reduce load on GC, and the number of complicated tricks required from JIT compiler. This in turn helps with startup performance. This is critical for command-line apps, and very desirable for desktop apps.

Another thing missing in Java is intrinsics support, both scalar like popcnt, bitscan, BMI, etc., and SIMD like SSE and AVX.



Projects Panama & Valhalla seem to solve all your complaints:

> Java doesn’t support C interop. For many desktop and embedded projects this is a showstopper, here’s an example https://github.com/Const-me/Vrmac/tree/master/VrmacVideo That C# code directly consumes V4L2 and ASIO Linux kernel APIs, and calls unmanaged user-mode DLLs like libfdk-aac.so and liba52-0.7.4.so.

Part of Panama: check out the "Foreign Function & Memory API" [0]. The official docs [1] say it is a preview in 21 but it got stabilized in Java 22 (isn't out yet).

> Another thing missing in Java is intrinsics support, both scalar like popcnt, bitscan, BMI, etc., and SIMD like SSE and AVX.

Also part of Panama: see the "Vector API" JEP [2].

> Native stack and value types in C# reduce load on GC, and the number of complicated tricks required from JIT compiler. This in turn helps with startup performance. This is critical for command-line apps, and very desirable for desktop apps.

This is part of Project Valhalla [3], they're adding value types and actual generics, among other things.

That said, most of these are not done / not in a stable LTS Java release yet. We'll see how much better it'll be compared to C# (if at all) once they land.

[0] https://openjdk.org/jeps/454

[1] https://docs.oracle.com/en/java/javase/21/core/foreign-funct...

[2] https://openjdk.org/jeps/460

[3] https://openjdk.org/projects/valhalla/



> Part of Panama

Most real-life C APIs use function pointers and/or complicated data structures. Here are a couple of real-life examples defined by the Linux kernel developers who made the V4L2 API: [0], [1] The first of them contains a union in the C version, i.e. different structures are at the same memory addresses. Note C# delivers a level of usability similar to C or C++: we simply define structures and access their fields. Not sure this is gonna be easy in Java even after all these proposals arrive.

For a managed runtime, unmanaged interop is a huge feature which affects all levels of the stack: type system in the language for value types, GC to be able to temporarily pin objects passed to native code (making copies is prohibitively slow for use cases like video processing), code generator to convert managed delegates to C function pointers and vice versa, error handling to automatically convert between exceptions and integer status codes at the API boundary, and more. Gonna be very hard to add into the existing language like Java.

> "Vector API" JEP

That API is not good. They don't expose hardware instructions; instead, they have invented some platform-agnostic API and implemented graceful degradation.

This means the applicability is likely to be limited to pure vertical operations processing FP32 or FP64 numbers. The rest of the SIMD instructions are too different between architectures. A simple example in C++ is [2], see [3] for the context. That example is trivial to port to modern C#, but impossible to port to Java even after the proposed changes. The key part of the implementation is psadbw instruction, which is very specific to SSE2/AVX2 and these vector APIs don’t have an equivalent. Apart from reduction, other problematic operations are shuffles, saturating integer math, and some memory access patterns (gathers in AVX2, transposed loads/stores on NEON).

> most of these are not done / not in a stable LTS Java release yet

BTW, SIMD intrinsics arrived to C# in 2019 (.NET Core 3.0 released in 2019), and unmanaged interop support is available since the very first 1.0 version.

[0] https://github.com/Const-me/Vrmac/blob/master/VrmacVideo/Lin...

[1] https://github.com/Const-me/Vrmac/blob/master/VrmacVideo/Lin...

[2] https://gist.github.com/Const-me/3ade77faad47f0fbb0538965ae7...

[3] https://news.ycombinator.com/item?id=36618344



Well maybe you should use C++ or Rust instead of Java or C# in that case?

My point is, if you are doing business (especially web) apps, use one of the JVM langs instead of C#, because the ecosystem is much bigger (and it has fresher langs as well, like Kotlin, if that's what you care about).



> use C++ or Rust instead of Java or C# in that case?

Despite having to spend extra time translating C API headers into C#, the productivity gains of the higher-level memory safe language were enormous.

Another example, I have shipped commercial embedded software running on ARM Linux, and based on .NET Core runtime. The major parts of the implementation were written in idiomatic memory-safe C#.

> doing business (especially web) apps

Well, these business web apps are precisely the long-running servers I have mentioned. Still, the software ecosystem is not limited to that class of problems, and due to different tradeoffs Java is not great for anything else.



Also, with JDK 21 you can use virtual threads. No need for async/await, which IMHO is a design mistake. Java copied Go here instead of C#.


Which makes C interop worse, just like Go


Java is surely keeping up, but I can't name a single Java feature that I miss in C# or that is implemented better in Java. I haven't used Java in a long time though; just occasionally I read about new Java features, and I've never said to myself "cool, I wish I had it in C#".

Static imports have also been available in C# for quite some time now (since C# 6, released in 2015; and in C# 10 you can even make the import global for the project).

I haven't used Kotlin; is there any killer feature compared to C#? (Except more succinct code in certain cases?)



Kotlin is younger and made better choices by default, like immutable "val" as the default option.

Also, since it's JetBrains, IDE integration is superior compared to anything C# can have (including Rider...).



Depending on how you look at it: better extension-everything support in Kotlin's case, and a way to do DUs, which keep being discussed for C#; people should just add F# to their codebase, but alas.


#1 => agree

#2 => don't know enough about Kotlin to comment

#3 => agree but quality > quantity

#4 => not terribly important to me (Clojure is cool but it's not "switch to the JVM" level cool)



I've always found that those sorts of firms with C#, in my experience, have the best-architected code. Proper domain-driven design, onion architectures, clean testable code... Some have legacy issues where they might not have the most cutting-edge CI/CD pipeline or high automated test coverage, but the code itself can be very nice. I've never really experienced that level of consistency with a different language/company size.


C# is a lovely language to work with.

The only issue I have is with the .NET ... that is, building self-contained binaries to distribute.

For comparison:

* Hello World win-x64 binary self-contained in .NET 7 is around 70 MB

* The same for Go results in 1.2 MB

Edit: Missed 'trimming' in .NET, which would result in a binary of size around 11 MB in win-x64



Usually that means you aren't using trimming, .NET speak for dead code removal during linking.

Also remember that standard .NET runtime does a little bit more than Go's runtime, so it might happen that even with trimming, for basic applications Go ends up having an upper hand on file size.

On the other hand, I have had Go static binaries grow up to 200 MB and been required to use UPX to make them manageable, e.g. trivy.



You're right. But even with trimming I get around 10x the size of the Go binary


Since I edited my comment, see my additional remarks regarding runtime capabilities, and the counterpoint of big Go binaries.

Also note that trimming only works properly if the libraries have taken the effort to be trimmable, as the linker errs on the safe side and won't trim unless certain that it is really dead code and not called via reflection.



    dotnet publish -c release -p:PublishAot=true


I was making Windows 98 apps with Delphi 4, and they were 350 KB in size.

And I was upset that they were so big. Sometimes I used UPX. Or I kicked out all Delphi GUI libraries, and created the GUI with the Win32 API calls directly. I got 50 KB Hello Worlds.



50 kB hello worlds? Uhm.. that's still big.

15k May 3 2019 quickrun.exe*

Win32 GUI Application that spawns Window and ask for alias to run. Pure Win32 API, written in C (Mingw).

I literally lol'ed at the 11 MB hello world of .NET, or the 1.2 MB of Go..



Well, it is what I remembered

I do not have Windows 98 anymore. But I still have Delphi 4 installed under Wine, so I just tried it out.

Just showing a messagebox from windows gives 16k

Using the sysutils unit though, puts it at over 40k. And with classes, it becomes 57k. Not sure what they pull in. sysutils contains number/datetime parsing and formatting, and exception handling. classes has basic containers and object oriented file handling.



Ahh, Delphi. Then I suppose it's all right for it. Still, much better compared to Go or Java :D


If only Borland's management hadn't decided to focus mostly on enterprise customers.


Let me give a real world example from my own experience.

I have built a Win32 desktop app with its core logic in Go, and then re-built it from scratch using .NET (v7). The core logic involved fairly complicated keyboard input processing based on a bunch of config files.

- Final binary of .NET ~ 14 MB

- Final binary of Go ~ 2 MB



This is an unfair comparison of apples to oranges by building the binary with the wrong flags. .NET produces smaller binaries than Go with NativeAOT (despite including more features).


Yea, I missed trimming. But still NativeAOT results in 10x the size of the Go binary in Windows (win-x64)


Two aspects:

- There is no point in chasing the smallest possible binary size if it trades off performance and features one wants to use in a production scenario, or comes with other tradeoffs that sacrifice developer productivity. I don't see anyone complaining about the size of GraalVM native images. As long as binaries are reasonably sized, it's not an issue.

- dotnet publish -c release -p:PublishAot=true definitely produces smaller binaries than Go as of .NET 8 (and no, you cannot use the argument that it's not released yet - it's in RC.2 which is intended for evaluation for adopting .NET 8 scheduled for release next month)



That's awesome, honestly. Can't wait.


Idk the last time you tried but a hello world in C# using .Net 8 is smaller than a Go hello world, for what it’s worth.


Is it? Every time I see C# being mentioned here people agree how awesome it is. Not that I'm complaining, I love C#


Agreed. I feel C# is appropriately rated on HN and other programming forums. It has performant memory options that other GC languages lack, and great builtin packages to use. Overall, it is a good language.

My biggest issue with C# though is how badly exceptions are handled given that it is a statically typed language. I wish functions explicitly defined the exceptions they can throw, since a minor package bump could add an exception without your compiler warning you that it isn't handled. I much prefer Rust, Go and Zig's error handling to C#'s, since those kinds of issues don't happen there.



> It has performant memory options that other GC languages lack, and great builtin packages to use.

As clarification for the audience, it isn't the only GC-enabled language with C and C++ like capabilities; in fact, there have been several examples since the early 1980s.

The adoption push for Java and scripting languages distorted the understanding of what was already available out there.



Well, I see it as a lack of fanboyism, which is interesting and almost unique to the Java/C# ecosystem. A lot of C# experts (and I mean REAL, low-level experts) seem to also have very high Java expertise.

And those that have Java expertise but not C# seem to demur to those that do; imagine!

But it's still niche (around here and in the startup world) and gets lumped in with Java, and together they are not "hip" or "agile" or whatever.



C# is fine, but it feels like a slightly better Java, just without the huge ecosystem of libraries.


Slightly... The difference in type erasure is pretty huge IMHO.

But what libraries are you lacking?



Type reification is planned, and so are value types.

And type erasure isn't as negative as you make it out to be.

Bunch of really obscure use cases - fluent localization, roaring bitmaps and so on.



> compiled managed languages like [...] C#

I've been out of the windows development game for a long time, so I haven't used C# since it strictly required a VM... what's pre-compiled C# development like nowadays? Are there major caveats? If you can emit plain old binaries in C# with no runtime dependencies, that would make it a truly compelling language IMO.

And as another question, what's the cross-platform (mainly Linux) support like in AOT-compiled C#? If it's just as good as in Windows and emits plain executables, I would probably consider it the best place to start for any new project. (Something tells me it's not...)



C# has supported AOT since forever; NGEN was present in .NET 1.0. Not many people used it, because it requires signing binaries and only supports dynamic linking, with a performance profile geared towards fast startup.

On Microsoft side the Singularity and Midori experiments used AOT.

They influenced the AOT toolchains for Windows 8 store apps with MDIL (Singularity/Bartok), and Windows 10 store apps with .NET Native (Midori/Project N).

Now there is Native AOT, which supports CLI apps and native libraries; .NET 8 extends that to the EF and ASP.NET frameworks. For GUI applications, maybe only fully in .NET 9.

Mono AOT has had support for ages, being used on iOS, Android, and Blazor.

Finally there is IL2CPP and Burst compiler from Unity.



In 8, NativeAOT also supports iOS (and even Android, reportedly?) for, I assume, the MAUI target, to do away with Mono. Documentation on this definitely needs work, and there are projects that made it work with WPF, Windows Forms and Avalonia back in .NET 7. Arguably, none of those were particularly user-friendly, but the generated COM interop project for 8 was done specifically to improve this on Windows as well.


This. I have exactly the same experience, I can't believe how much I was able to ship with Zig and the code mostly feels like "done".

You can always improve it, but there's no need to. With Rust, I was never happy, even after 5 years, I was still thinking about better abstractions and implementing more traits, making it more generic, etc.



Why is there no need to improve Zig code but there is for Rust code? You'd need the same abstractions in Zig as well, no?


No, usually you don't. Rust has closures, iterators, generics, different traits for operator overloading, smart pointers, etc.

Zig doesn't have any of that. It's a very interesting combination of low-level, predictable code with meta-programming, where you get some of that abstraction back.

i.e. Zig does not have generics, but a function can return a type, so a generic list is just a function which returns a newly created struct.



Could you expand on the generics point, please? That sounds interesting but I can't quite get my head around it.


Functions in Zig can be called both at runtime and at compile time. You can force an expression to be evaluated at comptime using a keyword, and sometimes comptime is implied (like when you define a top-level const).

If a function is called at comptime, it can also return types. So for example:

    // this is a function which accepts a type
    // if you accept a type, you also have to mark the arg as comptime
    // a comptime arg still does not mean that the function cannot be called at runtime,
    // but in this case it returns a type, so it is a comptime-only function
    // there are also cases where this is not true, like std.mem.eql(u8, a, b), which accepts a type
    // but can be called at runtime because it does not return a type
    fn Wrapper(comptime T: type) type {
        return struct { value: T };
    }

    const F32Wrapper = Wrapper(f32);

    // @TypeOf(x.value) == f32
    var x: F32Wrapper = .{ .value = 1.0 };


Zig has `comptime`, where you effectively generate Zig code at compile time by writing Zig code; no special generics syntax/semantics needed. It is a very nice and powerful concept that covers a lot of the ground that in Rust would belong to the land of procedural macros.


Can't speak to why Zig doesn't have this problem, but Rust is cursed by its success: it lowers the barrier for improvement enough to entice you to always keep improving.


No. Rust forces you to spend endless hours doing mental gymnastics which shouldn't be needed in the first place (linked data structures and owned arenas are effectively impossible to express directly in safe Rust).

And you just keep refactoring/improving/desperately trying different ideas, because it never feels right.
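For concreteness, the usual escape hatch is to replace references with indices into a Vec-backed arena; a minimal sketch of the idea (not the commenter's code):

    // A singly linked list whose links are indices into a Vec-backed
    // arena instead of references, so the borrow checker never sees
    // a cycle or a self-reference.
    struct Node {
        value: i32,
        next: Option<usize>, // index of the next node, if any
    }

    struct List {
        arena: Vec<Node>,
        head: Option<usize>,
    }

    impl List {
        fn push_front(&mut self, value: i32) {
            self.arena.push(Node { value, next: self.head });
            self.head = Some(self.arena.len() - 1);
        }
    }

    fn main() {
        let mut list = List { arena: Vec::new(), head: None };
        list.push_front(2);
        list.push_front(1);
        let mut cur = list.head;
        while let Some(i) = cur {
            print!("{} ", list.arena[i].value); // prints "1 2"
            cur = list.arena[i].next;
        }
    }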

It's ok if you don't agree but pls don't try to make my responses look like I like Rust, I don't and I'd happily get those years back if it was possible.



> it can be very hard to actually know what method you can call on a struct

The rust-analyzer language server can autocomplete the available methods for a value.



Depending on the autocompleter feels like asking ChatGPT to code for me.


I disagree, there's a big difference: rust-analyzer is deterministic and 100% accurate while ChatGPT is non-deterministic and hallucinates.


Yep. I can't remember method names for the life of me, which is why my best experiences have been with Go and Java: The IDE (always Jetbrains) knows, via the type system, what methods I can call.


Then you underestimate the power of ChatGPT by a factor of a million.


Furthermore, you can use `cargo doc` to generate a documentation website that has everything you can do, or you can use docs.rs for this. Whoever wrote this didn't embrace the tooling and just gave up.
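For anyone who hasn't tried it: ordinary doc comments are all `cargo doc --open` needs to build and open a browsable API reference for your crate and its dependencies. A minimal sketch (the function is made up purely for illustration):

    /// Returns the larger of an optional reading and a floor value,
    /// treating `None` as smaller than anything.
    /// (Hypothetical function, purely for illustration.)
    pub fn max_reading(reading: Option<f64>, floor: f64) -> f64 {
        match reading {
            Some(x) if x > floor => x,
            _ => floor,
        }
    }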


Wait, I am a bit confused. Does Zig have more/better libraries than Rust? I thought it's a pretty new language. The most limiting thing for me with Rust was the lack of libraries (vs. say Python or Node/JavaScript).


It doesn't. The ecosystem is very immature and even the official tooling is very unstable. It has a bunch of interesting design ideas but at this point it's more of an experimental language than a production ready one by most metrics. (And unless it finds some kind of corporate backing, this is unlikely to ever change).


We shipped a few web apps backed by zig. It is absolutely in production.


Just because you put it in production does not mean it's production ready.


It interops seamlessly with C libraries.


Depending on what seamlessly means, Rust can also interop with C libraries. I wrapped a bunch of them.


Truly seamless, because the Zig compiler is also a C compiler, so type information and calling conventions work across languages at a level above any other I've encountered.


It's also an unfinished language. I agree Zig is promising, but it's not confidence inspiring when the creator is still making videos debugging the compiler.


True, but I think that the person you're responding to argued that Rust/C integration is also seamless. (In the general discussion I'd say they're right, as C-to-Rust integration isn't much of a problem and you can use C libraries relatively easily in Rust as well; but at the same time, when talking about Zig, I don't think it's fair to put them on the same ground.)


It is impossible to do C interop without a C compiler, by the way.


Seamlessly as in @cInclude("raylib.h")


Given that Zig is memory-unsafe, it isn't a good general-purpose language either.

IMO a good general-purpose language is memory safe (no C, C++, Zig), is easy to use (no Rust), has strong static typing (no Perl, Python, Ruby) and is "stable" (no Scala). Lots of choices remain: Java, Kotlin, Ada, D, OCaml...



From these, my favourite is D


c#


It somehow seems that Zig has most of the qualities that people like about C: clear, crisp, no huge stdlib, good old straightforward imperative semantics, and reasonably fast. But without lots of the cruft.


Unfortunately, it lacks support for binary libraries, and there's not yet a story for use-after-free (UAF) other than the tooling we are already using in C-derived languages to track them down.


You should try diving into num for like a month and see how you like it. It's different enough that you need to go past a certain kind of ledge to start liking it. Or at least that was my experience.

For me, it shares the most important benefits of Rust but with quite a lot more ergonomic coding model.



Nim on paper is great; it has many advantages over Rust in the general purpose "niche". Tragically, it's kind of stillborn. It's older than Rust, has orders of magnitude less mindshare, and has no companies with serious technical reputations backing it.


Yeah, you're not wrong about the mindshare problem. But it somehow at least in my mind differs from other "stillborn" older languages in that it keeps improving. The end result is that it still feels modern in the year 2023.


So any language that isn't sponsored by Google/MS/Amazon/Mozilla from day 1 should just die?


Woopsie. I meant s/num/nim/ of course.


Since you have previously said that you are using Zig to do embedded programming for medical devices, I assume that it is your main pain point. I largely agree that the current Rust embedded story is not exactly for existing embedded programmers (and I guess you are one of them). Rather it looks more like a solution for existing application programmers. I don't think it's an intrinsic limitation of Rust the programming language, rather a specific community at this point happens to prefer this way. Still it is a weakness in some sense and Zig will be a good alternative.


Yes, I think rust is very good for higher level programmers wanting to code embedded like a regular OS. There are many great projects around using rust on embedded.

But me, I prefer to manipulate registers directly, especially with "exotic" MCUs where you have to execute writes/reads in a specific number of CPU cycles. Rust makes that very hard.



By "wrote" you are meaning just coding or coding+debugging? Because other languages are easier to code, but hard to make error free, while Rust is hard to write but much easier to make bug free.


Nim could be another option; it defaults to a C backend and has Python-like syntax.


I use Rust a lot, and have been really keen on getting into Zig.

Not sure if much has changed (it was a while back), but my biggest problem was with finding and using 3rd party libraries (mostly for the boring stuff like DB connectivity, JSON/YAML parsing, logging, etc.).

E.g. even now, if I search for "zig mysql library", the top hits are all people discussing it on Reddit rather than any actual library.



You cImport the C library most of the time.


Give Copilot a try; it completely shifts the coding experience in Rust. Especially this:

> I always have to search libraries and how to do things.

Once you pass the initial curve with a crutch like Copilot, you can be almost as productive (if not more so, considering refactoring and testing) as with your native first coding language.



How much of your negative Rust experience is due to async?

Having used Rust for a long time it’s definitely the biggest source of confusion and headaches.



It is not that complicated but it is very "time consuming" for reasons I cannot really explain.


Perhaps my biggest critique is that crates.io has no namespacing. Anyone can just claim a global and generic package name and we mostly have to deal with it (unless you avoid using the crates.io repository, but then you'll probably have more problems...). Some of these globally-claimed generic packages are not really the best package to use.

Maybe it was a reaction against the Java-style reverse-DNS notation, which is verbose and annoying, but a more GitHub-style user/group namespace prefixing package names would have been a nice middle ground.



I did some analysis on crates.io to find the top name squatters. Then I did some calculations and found that the top name squatter created their crates at a rate of about one every 30 seconds for a week straight.

I sent the analysis to the crates.io team and pointed out that they have a no-automation policy.

They told me that it was not sufficient proof that someone was squatting those names. That's my problem with crates.io: they have a clear policy and they don't enforce it, so all the short, easy-to-remember names for crates are already taken and there is nothing you can do to get one.



There's a HUGE gap between

> Using an automated tool to claim ownership of a large number of package names is not permitted.

And

- Hey, I found that someone created crates at a rate of about one every 30 seconds for a period of a week straight.

- That's not sufficient proof of squatting.

Whoever answered that was either supporting the squatter or explicitly in favor of the practice. I cannot conceive that someone would get that evidence in their hands and, in their right mind, think that the claim is bogus. Hell, I'd even be willing to suppress the squatter with evidence of one new crate created every 30 seconds for just one hour!

The only reasonable conclusion to make is that they didn't really care. But then don't try to save face by claiming that you do. That's hypocrisy.



In July 2023 the crates.io team started asking for feedback on changing their policy around name squatting - https://rust-lang.zulipchat.com/#narrow/stream/318791-t-crat...


There's a secret effort in the Rust community to supplant Crates.io and create an entirely new package ecosystem with proper namespacing, security, and much better community.

Not naming names, but I know several people working to put Crates.io out to pasture.

There's a level of playing nice with them for the time being (eg. build reproducibility), but it's only KTLO.

Crates.io needs to die for Rust to thrive. They're a bungled, mismanaged liability. New code, new leadership.



Crates.io doesn't need to die necessarily. It needs some competition as a wake-up call.

Once a better alternative is out there, crates.io will either wither and die or improve. If it matches its competition in terms of quality and reliability, everyone is better off. If not, the alternative solution will take over.

I'm eager for this crates.io alternative to land, assuming they don't break too many projects in their improvements.



Why does something like that need to be secret...? Isn't it in the community's best interest?


Drama avoidance and avoiding bikeshedding seem obvious. Much easier to present a working system than a design that will get nitpicked into irrelevancy.


Yeah, that makes sense.


That's quite funny. Just like when some people form a violent militant group to take down a violent tyrannical dictatorship. Of course they promise that they will absolutely disarm right after the dictatorship has been overthrown, and immediately establish a peaceful democratic government with fair elections. They absolutely would, wouldn't they? They would never turn into what they were formed to replace, would they?


If one tyranny has namespaces and the other doesn't, I'll prefer the former.


    pub use std::sic::semper;


     const step_on_snek = false;


I find this interesting, as most namespacing solutions would need the Cargo team involved, and I've heard nothing about this.


This was my first thought too. And there are a lot of questions that will get asked, like, will all crate library names start being prefixed as well? So you end up with

    use bar;      // changing to
    use foo::bar;

I assume the library names that can be overridden in cargo would still be accepted, and then it all gets a little messy. The transition would be very messy.



My preferred syntax route is a new separator in package names, with the lib name populated by everything after it.

Still doesn't solve all of the policy problems with namespacing.



How secret is it now that you've posted on HN about it?


Wait I thought only I could read the secret


yes, HN hides secrets automatically

******* is my password, but you can't see it. Type your password back, and I won't be able to see it. Try it!



I'm not sure it's working the way you described:

hunter2



It’s working. I really can’t see it. All I see is ****


Maybe you are getting tricked into giving out your password...




tr!pt0ph@n3


It's an open secret.


Hilarious.


crates.io is fine.


> they have a no-automation policy

What's that? I have scripts that automate publishing new releases of my crates. And I think many projects do.



Of course that's permitted.

What they stated was only regarding claiming new ownership over crate names:

> Using an automated tool to claim ownership of a large number of package names is not permitted.



Write some automated analysis that looks up popular packages on npm, pub.dev, RubyGems, NuGet. "Rustify" the package names. Add frequently used words, maybe popular names, etc. Then write a script that creates an empty package and registers a name on crates.io every thirty seconds, and after a week you have about 20k package names that nobody can use.


> Maybe it was a reaction against the Java-style reverse DNS notation

I suspect it was less a reaction against anything and more just following the norms established by most other package managers. NPM, PyPI, RubyGems, Elixir's Hex, Haskell's Cabal... I'm having a hard time thinking of a non-Java package manager that was around at the time Rust came out that didn't have a single, global namespace. Some have tried to fix that since then, but it was just the way package managers worked in 2014/2015.



> I'm having a hard time thinking of a non-Java package manager that was around at the time Rust came out that didn't have a single, global namespace

The implication here is that namespaces in package managers weren't a known concept. Outside Java, NPM - probably the biggest at the time - not only supported them but was actively encouraging them due to collective regret around going single-global in the beginning. Composer is another popular example that actually enforced them.

Not only was namespacing a known widespread option, with well documented benefits, it was one that was enthusiastically argued for within the Rust community, and rejected.



NPM added namespaces in version 2, which was released in Sep 2014, just 2 months before cargo was announced. I don't remember anyone making a big deal about using scopes in NPM for several years after that, it was just there as an option. The announcement blog post of v2 only gives two paragraphs to scoped packages and explicitly frames the feature as being for enterprises and private modules [0]:

> The most prominent feature driving the release of npm 2 didn’t actually need to be in a new major version at all: scoped packages. npm Enterprise is built around them, and they’ll also play a major role when private modules come to the public npm registry.

My memory is that the industry as a whole didn't really start paying attention to the risks posed by dependencies in package managers until the left pad incident.

To be clear, I'm not saying that it was a good idea to not have a better namespace system or that they were completely ignorant of better options, just that they were very much following the norms at the time.

[0] https://blog.npmjs.org/post/98131109725/npm-2-0-0.html



The left pad issue was kind of wild coming from the enterprise Java space. Supply chain attacks against open source software were already being taken pretty seriously; my last company had its own Maven repository manager running that was used to stage and vet packages before they could be used in production.


I don't think the left-pad problem was about package namespacing; it was about the ability to unpublish packages, as well as the prevalence of micropackages caused by the lack of a decent standard library.

Also npm's bad policy/decision to transfer control of packages in the name of predictability (this should probably be avoided for packages that aren't malicious; you could argue for seizing broken/trivial and unmaintained packages that have a good name, but even then it might be best to leave well enough alone).

I suppose you're talking about the original dispute which led the developer to unpublish his libraries (which npm stupidly allowed, and Cargo didn't). There's a smaller chance of a company wanting a random package namespace than a package name, but it's not impossible (think Mike Rowe Soft vs Microsoft).



> I don't think the left-pad problem was about package namespacing; it was about the ability to unpublish packages, as well as the prevalence of micropackages caused by the lack of a decent standard library.

It was "about" cavalier approach to the dependency supply chain. A dependency disappearing outright is just one of many failure modes it has.



> The left pad issue was kind of wild coming from the enterprise Java space.

This may be a little off topic for this comment thread, but this is a little misrepresentative. Hosted private repos for enterprise weren't exclusive to Java at the time of left-pad; anyone doing enterprise Node likely had one for npm too and was probably well prepared for the attack. Such enterprise setups are expensive though (or at least take a level of internal resources many companies don't invest in), leaving the vast majority exposed to both JS and Java supply chain attacks even today.



At the time Nexus was free to self-host, and many smaller teams did just that to archive known-good packages for the CI pipeline. I'm not in the Java space anymore, so I don't know if that's still the case.


Yeah it's all a bit of revisionist history here, or I guess a bit ignorant. I had a friend who worked at Sonatype from pretty early days and they were, as I understand it, specifically working in this area of infrastructure for vetting, signing, license checking, etc. for corporate environments that needed to be extra careful about this stuff.

That crates.io launched without explicitly acknowledging this whole problem is either naivety or worse: already by then Java wasn't "cool" and the "cool kids" were not paying attention to what happened over there.

It's not that the industry wasn't paying attention until the 'left pad incident' -- that only holds if one's definition of "the industry" is "full stack developers" under the age of 30; I remember when that happened and I was working in a shop full of Java devs and we all laughed at it...

Maven's biggest problem was being caked in XML. In other respects it was very well thought out. That and it arrived at the tail-end of the period in which Java was "cool" to work in.



It's not revisionist history, the wording I chose was meant to acknowledge that there were segments of the industry that did take dependencies seriously. I'm very much aware that the Java world had a much more robust approach to dependencies before this, but "the industry as a whole" includes all the Node shops that were hit by leftpad as well as all the Python and Ruby shops that were using equally lousy dependency management techniques.

Rust chose to follow the majority of languages at the time. Again, as I noted in my previous comment, I'm not defending that decision, just pointing out that most of the widely-used languages in 2014 had a similar setup with similar weaknesses.



What are the benefits?


Mainly, you can trust that anything under the foo/ namespace is controlled by a smaller group of people, as opposed to the current situation on Cargo, where people pseudo-namespace by making a bunch of packages called foo-bar and foo-baz, and you can't trust that foo-bin wasn't just inserted by someone else attempting to appear to be part of the foo project. It also helps substantially with naming collisions, especially squatting.


If you want to check your dependency tree for the number of maintainers you're dealing with instead of the number of dependencies, this can be done with cargo tree + checking the Cargo.toml/crates.io ownership info for each of the found packages. I don't know if there's a command written to do that already, but I've done that with a small script in the past.
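Not the original script, but a minimal sketch of the idea: collect the unique crate names from `cargo tree`, then check each one's ownership page on crates.io (or via `cargo owner --list <crate>`).

    use std::collections::BTreeSet;
    use std::process::Command;

    fn main() {
        // Run `cargo tree` without tree-drawing prefixes; each line
        // then starts with "<crate-name> v<version>".
        let out = Command::new("cargo")
            .args(["tree", "--prefix", "none"])
            .output()
            .expect("failed to run cargo tree");
        let stdout = String::from_utf8_lossy(&out.stdout);
        let names: BTreeSet<&str> = stdout
            .lines()
            .filter_map(|line| line.split_whitespace().next())
            .collect();
        // Print the crates.io page for each unique dependency;
        // the owners/teams are listed there.
        for name in &names {
            println!("https://crates.io/crates/{name}");
        }
    }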


That's helpful, but doesn't address the bigger problem of people squatting on deceptive names.


Perl’s CPAN was around in the 90’s and all modules were namespaced.


Fair enough. As I noted to another commenter, I'm not trying to say there was no prior art (if nothing else there was Maven), just that they were following the overwhelming majority of mainstream languages at the time.


> just that they were following the overwhelming majority of mainstream languages at the time.

They were trying to do better than mainstream languages in other areas and succeeded. IIRC on this front they just decided Ruby's bundler was the bee's knees.



The same developer who worked on bundler also worked on the initial version of cargo. That’s why they’re similar.

And at that time, it was a good idea. Ruby was popular and bundler + gem ecosystem was a big reason for its popularity. No one was worried that Rust might become so popular that it might outgrow the bundler model. That was only a remote possibility to begin with.



A mistake that many programmers make, as if baking one more feature on top would have made any difference that wouldn't be amortized in just a few weeks... Sigh.


Yes, and all of those have had major security issues caused by their lack of foresight.

"We're pretending security is not an issue." has been the feedback every time this is raised with the Cargo team.

To be honest, it's turned me off Rust a little bit.

The attitude of "Rust is memory-safe, so we don't need any other form of security." is not a good one.



> "We're pretending security is not an issue." has been the feedback every time this is raised with the Cargo team.

Literally nobody has said this.

> The attitude of "Rust is memory-safe, so we don't need any other form of security." is not a good one.

Fortunately it's an attitude that nobody in the Rust project has!



> Literally nobody has said this.

I know of a few people, personally, who have said this.



Ok, but like, were any of them people of note, actively working on the project?

Because it seems like the people who are working on the project aren’t saying that.



The people that are working on the project haven't implemented namespaces, or any other security feature really, so what they say is immaterial. What they do is the only thing that matters.


How do namespaces measurably increase security?


They reduce the risk of supply chain attacks like typo squatting or Dependency confusion.


Funnily enough, they in fact increase it.


Namespaces can't be typosquatted?


I don't believe I said that.

The point is that it's much easier to make a mistake typing "requests" than "org.kennethreitz:requests" (as a pure hypothetical).

It also means that more than one project can have a module called "utils" or "common", which once again reduces the risk of people accidentally downloading the wrong thing.



> The point is that it's much easier to make a mistake typing "requests" than "org.kennethreitz:requests" (as a pure hypothetical).

Sorry, what? It's strictly the opposite: more characters to type means more chances to make a mistake.

In fact, in the general case, namespaces increase the risk of supply chain attacks, because they make package names even less discernible.



> I know of a few people, personally, who have said this

jiggawatts



> "We're pretending security is not an issue." has been the feedback every time this is raised with the Cargo team.

Do you have a specific link where I can read this response, because this is not at all the responses I have read.



Just some random Cargo security-related issues I noticed:

- No strong link between the repo and the published code.

- Many spammed crates were just wrappers around popular C/C++ libraries. There's no indication of this, so... "surprise!"... your compiled app is now more unsafe C/C++ code than Rust.

- Extensive name squatting, to the point that virtually no library uses the obvious name, because someone else got to it first. The aforementioned C/C++ wrappers were easy to spit out, so they often grabbed the name before a Rust rewrite could be completed and published. So you now go to Cargo to find a Rust library for 'X' and you instead have to use 'X-rs' because... ha-ha, it's actually a C/C++ package manager with some Rust crates in there also.

- Transitive dependencies aren't shown in the web page.

- No enforcement or indication of safe/unsafe libs, no_std, etc...

- No requirement for MFA, which was a successful attack vector on multiple package managers in the past.

DISCLAIMER: Some of the above may have been resolved since I last looked. Other package managers also do these things (but that's not a good thing).

In my opinion, any package manager that just lets any random person upload "whatever" is outright dangerous and useless to the larger ecosystem of developers in a hurry who don't have the time to vet every single transitive dependency every month.

Package managers need to grow up and start requiring a direct reference to a specific Git commit -- that they store themselves -- and compile from scratch with an instrumented compiler that spits out metadata such as "connects to the Internet, did you know?" or "is actually 99% C++ code, by the way".



> > "We're pretending security is not an issue." has been the feedback every time this is raised with the Cargo team.

> Do you have a specific link where I can read this response, because this is not at all the responses I have read.

Those aren't people saying security isn't an issue, but examples of concerns you have, which is different.

For some of those, there are reasonable improvements that can be made but will take someone having the time to do so. While the crates.io team might not be working on those specific features, I do know they are prioritizing some security related work. For instance, they recently added scoped tokens.

For some, there are trade offs to be discussed and figured out.



Sure, but all the drawbacks you enumerate are also advantages for gaining critical mass. A free-for-all package repository is attractive to early adopters because they can become the ones to plug the obvious holes in the standard library. Having N developers each trying to make THE datetime/logging/webframework/parsing library for Rust is good for gaining traction. You end up with a lot of bad packages with good names though.


> Extensive name squatting, to the point that virtually no library uses the obvious name, because someone else got to it first.

Maybe the obvious names should have been pre-banned. But I don't see the issue with non-obvious names; either way you're going to have to rely on community recommendations/popularity to determine whether

brandonq/xml is better or worse than parsers/xml



In ASP.NET land, I regularly work on projects where there is an informal rule that only Microsoft-published packages can be used, unless there's good reason.

You don't want to be using Ivan Vladimir's OAUTH package to sign in to Microsoft Entra ID. That probably has an FSB backdoor ready to activate. Why use that, when there's an equivalent Microsoft package?

When any random Chinese, Russian, or Israeli national can publish "microsoftauth", you just know that some idiot will use it. That idiot may be me. Or a coworker. Or a transitive dependency. Or a transitive dependency introduced after a critical update to the immediate dependency. Or an external EXE tool deployed as a part of a third-party ISV binary product. Or...

Make the only path lead to the pit of success, because the alternative is to let people wander around and fall into the pit of failure.



Crates.io has publisher information-- namespacing is not required for that. For example, here are all the crates owned by the `azure` GitHub organization and published by the `azure-sdk-publish-rust` team: https://crates.io/teams/github:azure:azure-sdk-publish-rust


FWIW, when I saw a 3rd-party library pretty much exactly like "microsoftauth" on NuGet, I reported it and it was swiftly removed.

I think we need to encourage a culture that package managers are our shared garden and we must all help in the weeding.



> I'm having a hard time thinking of a non-Java package manager that was around at the time Rust came out that didn't have a single, global namespace.

Technically not in the same category, but Docker Hub (2014) had namespaces.



PHP's Composer [0] had package namespaces in 2012

[0] https://getcomposer.org/



Sorta—it looks like they were mostly just using that system by convention until May 2015, when it finally became enforced [0]. Still, that's a good one that I hadn't thought of, and they at least had the convention in place.

[0] https://github.com/composer/packagist/issues/163#issuecommen...



I'm honestly astounded at how badly many languages have implemented dependency management, particularly when Java basically got this right almost 20 years ago (Maven) and others have made the mistakes that Java fixed. With Maven you get:

1. Flexible version (of requirements) specification;

2. Yes, source code had domain names in packages but that came from Java and you can technically separate that in the dependency declaration;

3. You can run local repos, which is useful for corporate environments so you can deploy your own internal packages; and

4. Source could be included or not, as desired.

Yes, it was XML and verbose. Gradle basically fixed that if you really cared (personally, I didn't).

Later comes along Go. No dependency management at the start. There ended up being two ways of specifying dependencies. At least one included putting github.com/username/package into your code. If that username changes, all your code has to change. Awful design.

At least domains forced basically agreed upon namespacing.



> Later comes along Go. No dependency management at the start. There ended up being two ways of specifying dependencies. At least one included putting github.io/username/package into your code. That username changes and all your code has to change. Awful design.

"github.io/username/package" is using a domain name, just like Java. Changing the username part is like changing the domain name--I don't see how this is any worse in Go than in Java.

If you don't like that there's a username in there, then don't put one in there to begin with. Requiring a username has nothing to do with Go vs. Java, but rather is because the package's canonical location is hosted on Github (which requires a username).

I don't know why so many programmers use a domain they don't control as the official home of their projects--it seems silly to me (especially for bigger, more serious projects).



Note that in Java it is merely a convention to use domain names as packages. There is no technical requirement to do so. So moving to a different domain has no impact whatsoever on dependency resolution. Many people use non-existent domain names.

To be honest I really like how Java advocated for verbose namespaces. Library authors have this awful idea that their library is a special little snowflake that deserves the shortest name possible, like "http" or "math" (or "_"...).



Java did a lot of things right beyond the language and VM runtime, both of which were "sturdy" by the standards of the early 1990s. Using domain names for namespaces was one nice touch. Having built-in collections with complete built-in documentation was another excellent feature that contributed to productivity.


It may be a convention but in practice if you want to publish your package to Maven Central you need to prove ownership of your group ID domain. (Or ownership of your SCM account, which is in essence another domain).


Interesting, I contribute to a project that releases on Maven Central but the package name has never been registered as a domain.


Slight difference is that it wouldn't break existing builds if you changed namespaces in Java. The maven central repo does not allow packages to be rescinded once they are published.

So that old version of package xyz will still resolve in your tagged build years from now even if the project rebrands/changes namespaces.



> I don't know why so many programmer's use a domain they don't control as the official home of their projects

Not only that, but a commercial, for-profit domain that actively reads all the code stored on it to train an AI. Owned and run by one of the worst opponents of the open-source community in the history of computing.

At least move to Gitlab if you must store your project on someone else's domain.



If your project is open source, won't everyone who wants it just scrape it anyway?


Yes, #3 in particular is important for many large corps where one team develops a library that may be pulled in by literally thousands of other developers.


The dependency management side of Maven is great. OTOH, I was astounded to learn today that Maven recompiles everything if you touch any source file: https://stackoverflow.com/a/49700942

This has been solved for C programs since whenever makedepend came out! (I'm guessing the 80s.)

(Bonus grief: Maven's useIncrementalCompilation boolean setting does the opposite of what it says on the tin.)



Build and test caching is one of the few compelling advantages that Gradle has over Maven. It's a shame that this is still an issue.


Maven and Java really don't get enough credit for how well their dependency management works.

So many inferior dependency management systems for other languages have come along later and learned nothing from those that came before them.



100% agree. It's unbelievable what a PITA it is dealing with pip or npm compared to Maven even 10 years ago. The descriptors could get convoluted but you could also edit them in an IDE that knew the expected tokens to make things happen.


What’s so hard about “npm install” and “package.json”. It’s dead simple


No, you see, Java devs have Stockholm-syndromed themselves into believing that a giant stack of XML, or some unhinged mini-language, is actually good, and much better than something the humans involved can actually read and parse easily. And now, with other ecosystems providing 85% of the functionality at 5% of the pain, they've got to find some reason to complain about them.


What's wrong with XML? Maven XML is configuration, not a programming language.


Is this a joke? XML is horrible to work with, more boilerplate than information. Compare your average maven file to a cargo.toml and tell me which is easier to work with...


"XML is more verbose" is a lazy criticism in the same veign as "Python is better than Java because you can do 'Hello World' in one line".

Maven files have a simple conventional structure and a defined schema. Once you learn it, it's a breeze to work with. It's not like you need to write hundreds of lines of SOAP or XLST — which is actually why people started to dislike XML, and not because XML inherently bad.

Edit: I'd also take XML over TOML any day, especially when it comes to nested objects and arrays.



For a descriptor, verbose is superior. It's way clearer what you're looking at. Matching a named end tag is much easier than matching a }. Also, XSD means you can strictly type and enumerate valid values, and you will instantly see if you've written something invalid.


Maven stores every version of every library you've ever needed in a central location. It doesn't pollute your working directory, and it caches between projects. And this is more of a Java thing than a Maven thing, but backwards compatibility between versions is way easier to manage. There are no incompatible binaries because you changed the Node runtime between npm install and running your project.


And NuGet, which was inspired by them.


Is this a joke?


Not even remotely a joke.

The reverse domain name scheme does a really good job of removing the whole issue of squatting from the ecosystem. You have to demonstrate some level of commitment and identity through control of a domain name in order to publish.

I would also say that this puts just enough friction so that people don't publish dogshit.

crates.io demonstrates quite clearly that you either have to go all the way and take responsibility for curation, or you have to get completely out of the way. There is no in-between.



Having tried Java and other languages, no, it's not a joke. Other than XML Maven got a lot of things right.


And I don't particularly think that using XML is that bad. The schema is well-defined and gives you good autocompletion in any competent IDE (such as IntelliJ).

It took some iterations before Maven 3 became "good", so people forget that it wasn't as nice before now! Unfortunately, it seems that the lessons learned there were never really disseminated to other ecosystems - perhaps due to prejudice against "enterprisey" Java. Yet these package managers are now facing the sorts of problems solved in Java.



What the hell is with the XML hate?

What's the difference between changing a lib version in XML and in JSON?



I have no problem with XML in general and even think it's still the better format for many things. But it's not really appropriate for a build config. Thankfully Maven now offers polyglot but I've seen no use of it in the wild.


And you don't even need to use XML with Polyglot Maven

https://github.com/takari/polyglot-maven



I think the correct approach is to do full-real-name_good-package-name. It might not be practical, but it would be legendary.


URLs for packages makes a lot of sense. It works well in the land of Go. It also conveniently eliminates the need for the language to have a global packages database. Upload your package to example.com/your-thing and it's released! (You can, of course, still offer a cache and search engine if you want to.)


No, URLs don't make sense, because your application shouldn't care where on the internet your dependency happened to be hosted when you integrated it. Its location has nothing to do with what it is.

By the time you're going to production, your vetted and locked dependency should be living in your own cache/mirror/vendored-repo/whatever so that you know exactly what code you built your project around and know exactly what the availability will be when you build/instantiate your project.

Your project shouldn't need to care whether GitHub fell out of fashion and the project moved to GitLab, and definitely shouldn't be relying on GitHub being available when you need to build, test, deploy, or scale. That's a completely unnecessary failure point for you to introduce.

Systems that use URL-identified packages can work around some of this, but just reinforce terrible habits.



URLs are well structured and unique, with a sensible default - sourcing the file from the internet - and ubiquitous processes for easily mapping the URL to an alternative location.

I.e., when you're going to run the production build, the URLs are mapped to fetch from the vetted cache and not the internet.

I don't see any downsides to allowing them as a source, or making them the default approach



> and ubiquitous processes for easily mapping the URL to an alternative location.

This seems strange to me because the whole point of a Uniform Resource Locator is to specify where a resource can be located.

It's a bit like saying "My project depends on the binder on shelf 7 in Room 42, sixth binder from the left. Except when I go into production, then use...." Don't tell me what binder it's in, tell me what it is.

I can see a case made for URIs, which is basically what Java did.



This was a big annoyance for me back in the day when I was dealing with XML namespaces. URLs never made sense for that use case, and too many tools tried to pull XSDs from the literal URL, which was generally out of date. Some projects switched to URIs like tag URIs or URNs, and it was much better, IMO.


From my experience, URNs really should be used more often for these sorts of things. One thing that AWS got right almost from the get-go.


Fully qualified domain names (java/maven) aren't URIs. The latter are far more transient. Maybe a form of permalink could work, but that likely places too great a burden on package maintainers. I don't see that working out honestly.


Isn’t that why GOPROXY exists though? Not sure why you would need an internet connection. URLs don’t necessarily equate to the internet. Our internal and external packages are all locally hosted and work regardless of the internet being available.


Isn't that just delegating the problem? URL dependencies do not replace what crates.io does, and a modern language will still want something like it. You'd just end up with most every dependency being declared as crates.io/foo.


URLs form a nice global namespacing system. But yes, I agree that it should be possible to actually get the source from anywhere.

Basically the URL of a package name should be primarily the ID, not the locator (even if it is used for location by default).



> By the time you're going to production, your vetted and locked dependency should be living in your own cache/mirror/vendored-repo/whatever so that you know exactly what code you built your project around and know exactly what the availability will be when you build/instantiate your project.

In the Go world this would be "vendored" dependencies; that is, the dependencies are within your source tree, and your CI/CD can build to its heart's content with no care in the world about the internet, because it has the deps.

The URL is useful for determining which version of a specific project is being used - "Oh we switched to the one hosted on gitlab because the github one went stale"

The advantage of using gitlab, or github, or whatever public code repository is that you get to piggy back off their naming policies which ensure uniqueness.

But, at the same time, there's no reason that the repo being referred to cannot be in house (bob.local) or private.

Having said all of that, the Go module system is a massive improvement on what they did have originally (nothing) and the 3rd party attempts to solve the problem (dep, glide, and the prototype for modules, vgo), but it's not without its edge cases.



It worked for the rest of the Internet.


There is something to be said for separating {unique piece of content} and {hosted location}.

E.g. DOIs: https://en.m.wikipedia.org/wiki/Digital_object_identifier



You can use the `replace` option in the Go mod file to redirect your dependency references elsewhere if you need to.


You can, though. From a random Cargo package I have downloaded to my computer:

    [dependencies]
    uniffi = { git = "https://github.com/mozilla/uniffi-rs" }
    
You can also specify revision/branch/etc.

Alternatively, you can do:

    [registries]
    maven = { index = "https://rust.maven.org/git/index" }
    [dependencies]
    some-package = { index = "maven", version = "1.1" }
    
Obviously Maven doesn't host any Rust crates (yet?), this is just a theoretical example. Very few projects bother to host their own registry, partially because crates.io doesn't allow packages that load dependencies from other indices (for obvious security reasons). The registry definition can also be done globally through environment variables: CARGO_REGISTRIES_MAVEN="https://rust.maven.org/git/index". Furthermore, the default registry can be set in a global config file.

In theory, all you need to do to publish a crate is `git push upstream master`, and your package will become available at https://github.com/username/crate-name (or example.com/your-package if you choose to host your git repo there).

Personally, I don't like using other people's URL packages, because your website can disappear at any moment for any reason. Maybe you decide to call it quits, maybe you get hit by a car; whatever the reason, my build is broken all of a sudden. The probability of crates.io going down is a lot lower than the probability of packages-of-some-random-guy-in-nebraska.ddns.net disappearing.



It doesn't help with the failure mode of dependencies disappearing, which forces people that care about it to vendor, which in turn brings its own set of issues.


Cargo does support URLs to git repos for dependencies. But crates.io is the official platform and almost every search I do on it returns at least one generically named entry with an empty repository that someone snatched away and never used.


I don’t work with rust on the regular, but this is so annoying with package repositories in general. No don’t use http-server, it’s bad, instead you have to use MuffinTop, it’s better. And then you just have to know that. The concept of sanctioned package names would be interesting, but probably chaotic in practice as the underlying code behind this sort of alias changes over time. This will remain a part of being a domain expert in any given ecosystem forever I think, hooray!


(You probably agree with me but I'm going to just write one big comment instead of replying to every slightly incorrect comment in the thread)

Naming things really is one of the hardest problems. This crates thing is a special case of Zooko's Triangle: https://en.wikipedia.org/wiki/Zooko%27s_triangle

Crates.io names are human-meaningful and everyone sees the same names, but it's vulnerable to squatting, spamming, and Sybil attacks.

You could tie a name to a public key, like onion addresses do, but it's unwieldy for humans. (NB, nothing stops you from doing this inside of crates.io if you really wanted)

You could use pet names where "http-server" and "http-client" locally map to "hyper" and "reqwest", but nobody likes those, because they don't solve the bootstrap problem.

It's a problem with all repos because when you say "http-server should simply be the best server that everyone likes right now", you have to decide who is the authority of "best", and "everyone", and "now". Don't forget how much useless crap is left in the Python stdlib marked as "don't use this as of 2018, use the 3rd-party lib that does it way better."

So yeah... probably will be a problem forever. As a bit of fun here are some un-intuitive names, and my proposed improvements:

- Rename Apache to "http-server"

- Rename Nginx to "http-server-2"

- Rename Caddy to "http-server-2-golang"

- Rename libcurl to "http-client"

- Rename GTK+ to "linux-gui"

- Rename Qt to "linux-gui-2"

- Rename xfce4 to "linux-desktop-3"

Then you only need to remember which numbers are good and which numbers are bad! Like how IPv4 is the best, IPv6 is okay, but IPv5 isn't real, and HTTP 1.1 and 3 are great but 2 kinda sucked.

Very simple. If a company as big as Apple can have simple names like "WebKit", "CoreGraphics", and "CoreAudio" then surely a million hackers competing in a free marketplace can do the same thing.



I think if you get three to five developers who are enthusiastic about X language, their collective knowledge will select good packages.


> Perhaps my biggest critique is that crates.io has no namespacing. Anyone can just claim a global and generic package name and we mostly have to deal with it (unless you avoid using the crates.io repository, but then you'll probably have more problems...). Some of these globally-claimed generic packages are not really the best package to use.

It's true that with no namespaces anyone can end up squatting a cool name, but with namespaces you end up in an even worse place: no one ends up with the cool names, and it makes discoverability miserable. Instead of people coming up with unique names like serde, everybody just names their JSON serialization/parsing library json, and users now need to remember whether they should use "dtolnay/json" or "google/json" (and remember not to use "json/json", because namespace squatting is now a thing too), and of course this makes it completely ungoogleable.

We've had the namespace discussion hundreds of times in the various Rust town squares, and the main reason we still don't have namespaces is that they don't actually answer the problems they're supposed to address; if you dig a little, you realize they even make them worse.

Having a centralized public and permissionless repository opens tons of tricky questions, but namespaces are a solution to none of them.



> and of course this makes it completely ungoogleable

Do you really think nobody has ever been able to google a Go import path? Some of these arguments are ridiculous.



While Java made that notation famous, it was already used in NeXTSTEP and Objective-C, which is why you will see it all over the place in macOS and derived platforms, in configuration files and services.


If you have namespaces, can't people just globally claim namespaces instead, like serde/serde or something similar? I feel that if you really don't want people to claim whatever they want, you have to do the Java package style, where namespaces are tied to domain names.


CPAN has the best model IMHO. Hierarchy that starts with categories. You build on top of the base stuff and extend it, rather than reinvent/fork something with a random name. Result is a lot more improvement and reuse, and more functionality with less duplication. Plus it's easy to find stuff.

Perl's whole ecosystem is amazing compared to other languages. It's a shame nobody knows it.



Go made a great decision to namespace packages via this Github style.


And the horrible decision to not make library-level ("module") and code-unit-level ("package") namespacing orthogonal. The former was an afterthought tacked on since the package system was designed to be used only within Google's monorepo and little care was paid to how it would work when it was released to the public and used more generally.


I want to understand what you just said, but I fear watering your language down a bit might be a tall ask with some people. Would you be willing to eli5 what you believe Go did that was a horrible decision with regards to module/package namespacing?


I think what they mean is: if you see a line like `import git.example.com/foo/bar/baz`, that could be package `baz` inside module `git.example.com/foo/bar`, or it could be package `bar/baz` inside module `git.example.com/foo`.

Also, even if you know it's the latter, package namespacing isn't strictly related to directory structure, so `bar/baz` has no specific meaning outside of the context of a go import. They could have used any other separator for package components - `git.example.com/foo:bar:baz` - but instead they chose the slash, making the scheme both technically ambiguous and easy to confuse for an HTTP URL.



Ah, that makes sense. I think Go did stumble a bit in the early days due to this, especially with repositories in GitLab, where GitLab allows essentially a directory tree in which your repository can be nested indefinitely, like `https://gitlab.com/mygroup/subgroup1/subgroup2/repository`.

I still don't think this is a huge issue, to be honest. Not one big enough for me to complain about, for sure. But it's definitely not ideal.



I think he wants them to be treated differently for some reason.


Go packages are yet another design item that I dislike in the language: SCM URLs exposed directly in the source code, and no story for binary caching.


> exposing SCM URLs directly on the source code

Incorrect. You're able to use any URL you control, regardless of where your SCM is located.



Thanks for pointing out that you didn't get what the whole point of the wrong design is about.


Thanks for confirming you'd rather be a hater than someone with justified opinions.


I am quite open about my opinion regarding most of Go's design decisions.


How is this a good decision? You have to alias packages when names collide


What is the better decision to avoid name collisions?


This needs to be resolved by every damned language. Just make signed dependencies a universal default, point to an https page for the package vendor and use the signing key from there.

Neither Node nor Maven ever bothered to solve this, so we end up wandering the Wild West, wondering when it will be that HR, or legal, or architecture comes knocking on the door to ask what we were thinking having a dependency on a dynamic version of Left Pad.

I'd kinda like to see what Cloudflare and Let's Encrypt could come up with if they worked together on at very least a white paper and an MVP POC.



I don't think that's the real solution.

Pay somebody either internally or externally to maintain a repo of all your dependencies and point your code at that. You won't get a left-pad incident. You won't get a malicious .so incident (unless you mirror binaries instead of source code).

Like, if you ran out of screws to make your product with, would you walk around the street and scrounge some up? No, you go to a trusted vendor and buy the screws.



I'm not suggesting scrounging.

I'm suggesting that the guys who've repeatedly proven themselves technically competent around security at scale might have a couple of useful ideas regarding how the industry might go about crawling its way out of this little security clusterfuck.

And perhaps even stop treating something as simple as a BOM as an enterprise feature, given that the overhead on such things is damned near zilch and the security implications are staggering.

https://www.cisa.gov/sbom



There are reasons why Google projects don't go out on the internet to get their 3rd-party deps.

They're all checked into Google3 (or chromium, etc.). One version only. With internal maintainers responsible for bringing it in and multiple people vetting it, and clear ownership over its management. E.g. you don't just get to willy nilly depend on a new version -- if you want to upgrade what's there, you gotta put a ring on it. If you upgrade it, you're likely going to be upgrading it for everyone, and the build system will run through the dependent tests for them all, etc.

And the consequence is more responsible use of third party deps and less sprawling dependency trees and less complexity.

And, additionally, fewer security concerns, as the code is checked in, its license vetted, and build systems aren't hunting around on the Internet for artifacts.



I actually really like this idea. The community just needs to align on a good, simple, standard approach to mirroring repos...


Maven has signing and you can check the signatures on your dependencies if you care to. Most don't, sadly.


To be fair, all the dependencies in the maven central repository require a signature to publish them.


That's a workaround.

I'm not a fan of those - if the engine's in the wrong place, then why the hell did you fucking put it there?

Don't hack a patch into place to stop people from holding it wrong; design it in such a way that it's impossible to hold wrong in the first place.



Agreed. The other thing I don't really like is that you can't split things up in the Rust namespace hierarchy between crates (something that's natural with jars on the JVM). I would have liked to define things so that the Unicode handlers for finl live in `finl::unicode`, the parser in `finl::parser`, etc., but because they're in separate crates, Rust gets upset about `finl` being defined twice, and there's no workaround for it. There are likely pitfalls I don't see in what I want, so I live with it.


We have an RFC for this that has fairly broad support; I just need to bug some people some more so we can approve it.

https://github.com/rust-lang/rfcs/pull/3243



Would re-exporting partially solve your problem?


As I recall, I could do something where I could have a common root crate that would import and re-export the other crates with the namespaces modified on the exports and then control which crates are exported through feature gates, but it just seemed more hassle than it was worth.
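(A sketch of that facade approach, with hypothetical crate and feature names: each sub-crate is an optional dependency that the root crate re-exports under its own namespace, gated behind a feature. Note that hyphenated crate names like finl-unicode become finl_unicode in Rust source.)

    // lib.rs of a hypothetical root crate `finl`

    #[cfg(feature = "unicode")]
    pub use finl_unicode as unicode; // usable as finl::unicode

    #[cfg(feature = "parser")]
    pub use finl_parser as parser;   // usable as finl::parser

Downstream users would depend on the single `finl` crate and toggle features, which is exactly the extra bookkeeping the comment above considers not worth the hassle.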


I don't think the lack of namespaces is that much of a problem. Sure, it is often annoying, but people are creative enough to come up with a short, still-available crate name in most cases. Namespacing only makes sense for a large group of related crates, and even then it wouldn't give much benefit over a flat namespace.

As others mentioned, though, typosquatting is a much bigger problem, and namespacing offers only a partial solution. (You can still create a lookalike organization name, both on npm and on GitHub.)



I'd argue typosquatting a namespace is arguably easier than typosquatting a package name, since people will often ignore the namespace part.


> Some of these globally-claimed generic packages are not really the best package to use.

Or it is just a placeholder from a squatter.



I love this lack of namespacing personally, because it means that whatever crate you see in one project is going to be the same as the crate you see in another. You never need to alias crate names. That happens in Golang all the time, and I really think namespacing packages was a mistake there.


Golang's problems aren't due to using namespaces, they're due to delaying too many decisions until too late.

Go has namespacing mostly because for a long time it didn't have a package manager at all, so people just used a bunch of ad hoc URL-based solutions mostly revolving around GitHub, which happens to have namespaces and also happened to lend itself to aliasing (because a whole GitHub URL is too long).

If you want to look at an actual example of namespacing done well, Maven/Java is the place to look. There is no aliasing—the same imports always work across projects.



I don't see the problem. Even with namespaces you'd have brandonq/xml vs parsers/xml, with no clue whether one is better than the other.

There would also possibly be some confusion over whether things with the same name are forks or not, and it may make things a little more difficult to Google. Why not just have brandon_xml vs xml-parser and keep a community list of the best and most popular libraries?

I guess the only issue is that some generic/obvious package names are bad packages. That could have been avoided if they had banned/self-squatted most of those names. I suppose if you used DNS namespaces and actually tied them to ownership of the domain name it might make sense, but that would also cause issues (what if you forget to renew the domain?).



The advantage is one of trust. If the `abc` developers who built the well-known library `abc.pqr` are well trusted, then I know I can use `abc.xyz` and everything else under the same namespace without (much) vetting.

We could even have `rust.xyz` for crates that are decoupled from `std` but still maintained by the Rust core devs, such as `regex`.



I have yet to see any proof that namespacing has made things better in other ecosystems. Are Go-style links or other types of namespaced imports any less prone to supply-chain risks?

It's definitely a good thing that people choose new unique names for crates rather than dijan/base64 vs dljan/base64.

I do understand the desire to have a crate for audio manipulation called "audio", but at the same time, how often do we end up with "audio2" anyway? It's an imperfect solution for an imperfect world, and I personally think the crates.io team got it right on this one.



> Yet to see any proof that namespacing has made things better in other ecosystems

It's really as simple as this: many libraries are generic, implementing something that already exists. Let's say you want a library to handle the SMTP protocol. On crates.io, of course, someone has already taken the "smtp" crate (ironically, this one is abandoned but has the highest download count, because it's the most obvious name). Let's say you disagree with the direction this smtp crate has gone, and you make your own. What do you call it?

Namespaces solve this problem. You'd instead have user1/smtp and user2/smtp competing on feature sets. You can even be user3/smtp if you don't like the first two.

This is precisely what Java enables too. The standard library is in java.*; if you don't like how the standard library does something, you can make com.grimburger.smtp and do it yourself. If you choose to publish it to the world, all the more power to you. It doesn't conflict with any other smtp implementation.
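(In Maven terms the namespace is the groupId; the coordinates below are hypothetical, but they show how two independent smtp artifacts can coexist without colliding.)

    <dependency>
      <groupId>com.grimburger</groupId>
      <artifactId>smtp</artifactId>
      <version>1.0.0</version>
    </dependency>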



This is a common critique, and although I don't have insight into why the original decision not to have namespaces was made, the current outlook is that until issues related to continuity are resolved, it's a no-go:

https://samsieber.tech/posts/2020/09/registry-structure-infl...



That article starts with the premise that “it’s a feature, not a bug” then goes on to describe a whole bunch of things I consider to be anti-features of a packaging system that has a flat namespace.

The first section says it discourages forking. I consider this to be bad. Nobody’s code should be more important purely because it squatted a better name.

The Identity section actually makes the case that flat registries make naming harder.

The section on Continuity is “we’ve tried nothing and we’re all out of ideas”. Make up an org name and grandfather all packages in the flat namespace into that special org. Also this is already a problem because packages in the flat namespace do get abandoned, then forked, and then we have the associated issues.

The section on Stability seems to take it as a given that crates.io should be the only registry. I don’t. It also seems to conflate cargo with rustc for the benefit of the argument.

The squatting section describes only anti-features and I don’t consider the author’s legitimate use cases to be legitimate reasons to squat.

I think the only legitimate problems that need addressing are the ergonomics of accessing namespaced packages through transitive dependencies and backwards compatibility with non-namespaced code. But the fact that these are real problems does not, to me, make a flat namespace a "feature". It's just easier to implement.

It’s okay for it to be a mistake that takes effort and time to fix.



Another option would be to grandfather all packages into their own org. So serde becomes serde/serde. This way you don't need to manage permission rules in the legacy "all" namespace.

You get some oddities such as serde-derive/serde-derive, but the package owners can choose whether to move to serde/derive or leave it in a separate namespace.



I like that there are no namespaces, it forces people to come up with unique names rather than naming their calendar crate to be `calendar`.


I think most people are in support of namespacing but it's a big change and will take a while to see through.

