(comments)

Original link: https://news.ycombinator.com/item?id=40211891

This article discusses programmers' views on the Go programming language. One commenter initially disliked the language but has recently come to enjoy it, precisely because of its simplicity and its lack of certain features. Commenters suggest implementing type constraints for existing types and the ability to attach receiverless methods to types, to improve the language's capabilities. They compare Go with other data-oriented languages and criticize the absence of these features, noting that it is unusual for such a prominent language to be limited in this regard. They also highlight the benefits these improvements could bring, including better-optimized generated machine code and less boilerplate for sorting custom types. They further note the importance of community acceptance of such proposed enhancements, suggesting that broad adoption would enable a more uniform coding style and consolidate various workarounds. The text also touches on the contentious topic of opinionated versus adaptable languages. While Go's creators aimed for a simple, opinionated approach, a newer generation of developers prefers flexibility and adaptability. Some commenters express disappointment with the Go team's prescriptions about the language's direction and contrast it with more adjustable alternatives such as Rust or Zig. Overall, the discussion reflects developers' journey of learning to accept a language they once criticized, with its distinctive quirks and potential enhancements.

Related articles

Original article


This addresses pretty much all of my least favorite things about writing Go code at work, and I hope--at the very least--that the overwhelming positivity of the responses (by HN standards -- even considering the typical Rust bias!) inspires the Go maintainers to consider/prioritize some of these features, or renews the author's interest in working on the project (as some have commented, it seems to have gone without activity for a little over half a year).

Some of the design decisions seem to me to be a bit more driven by being Rust-like than addressing Go's thorns though. In particular, using `impl` to define methods on types (https://borgo-lang.github.io/#methods), the new syntax for channels and goroutines (https://borgo-lang.github.io/#channels), and the `zeroValue()` built-in (https://borgo-lang.github.io/#zero-values-and-nil) seem a bit out of place. Overall though, if I had a choice, I would still rather write Borgo by the looks of it.



I have to disagree. I'm on record here lamenting Go. I've never really enjoyed writing it; when I've had to use it, I've used it. Lately, though, I've found a lot more pleasure in it, and much of that comes from the fact that it does NOT have all these features. The code I write is going to look like the code written by most others on my team. There's an idiomatic way to write Go, and it doesn't involve those concepts from other languages (for better or for worse). So I'm super hyped that we'd have a "compiles TO Go" language, but I'm not as excited about using it as a catalyst to get new (and perhaps wrong for the language) features into Go.


A lot of people said the same about generics, and some even still do. I could barely stand Go before generics, and still don't think they go far enough.

From my experience, things I think Go could really benefit from, like I believe it has benefited from generics:

* A way to implement new type constraints[1] for existing types, like `impl Trait for T`. This would obsolete most uses of reflection in the wild, in a way generics alone haven't. This isn't about syntax, it's about an entirely different way to associate methods to types, and with both Go and Rust being "data-oriented" languages, it's weird that Go is so limited in this particular regard. It has many other consequences when combined with generics, such as ...

* Ability to attach receiverless methods to types. Go concrete types may not need associated methods like "constructors", but generic types do, and there's no solution yet. You can provide factory functions everywhere and they infect the whole call graph (though this seems to be "idiomatic"), or make a special method that ignores its receiver and call that on a zero instance of the type, which is more hacky but closer to how free functions can be resolved by type. There's no reason this should be limited to constructors, that's just the easiest example to explain, in Rust associated methods are used for all kinds of things. Speaking of which...

* `cmp.Ordered` for custom types. Come on people. We shouldn't still have this much boilerplate to sort/min/max custom types, especially two full years after generics. Even boilerplate aside, `sort.Sort()` and especially `sort.Slice()` generate extremely un-optimized machine code, but it's not "idiomatic" to BYO sort implementations either because no other interface for this is standard. However, we would basically get this for free if both of the above points were solved, but it's also possible we get something else entirely that solves only ordering and not e.g. construction or serialization.

* Enums, especially if the need for exhaustiveness checking could be balanced with Go's values of making code easy to evolve later. When I need them, I use the `interface Foo { isFoo() }` idiom and accept heap boxing overhead, but even the official `deadcode` analysis tool still to this day does not recognize this idiom or support enough configuration to force it to understand. The Go toolchain could at the very least recognize idioms people are using to work around Go's own limitations.
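For readers unfamiliar with the `interface Foo { isFoo() }` idiom mentioned above, here is a minimal sketch of it (type names are illustrative); the unexported marker method seals the interface to the defining package, but note that nothing forces the switch to be exhaustive, which is exactly the tooling gap being complained about:

```go
package main

import "fmt"

// Sealed-interface "enum": only types in this package can implement
// Shape, because isShape is unexported.
type Shape interface{ isShape() }

type Circle struct{ Radius float64 }
type Square struct{ Side float64 }

func (Circle) isShape() {}
func (Square) isShape() {}

// Nothing forces this type switch to cover every variant; that is
// the missing exhaustiveness checking described above.
func Area(s Shape) float64 {
	switch v := s.(type) {
	case Circle:
		return 3.14159 * v.Radius * v.Radius
	case Square:
		return v.Side * v.Side
	}
	return 0
}

func main() {
	fmt.Println(Area(Square{Side: 2})) // 4
}
```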

If we had solutions to these problems, I think most Go folks would find enough value in them that they would still be "Go". In fact, I think people would have an easier time consolidating on a new standard way to do things rather than each come up with their own separate workarounds for it.

This is where I feel "The code I write, is going to look like the code written by most other on my team" the least, because that's only true when a Go idiom has some official status; it's not nearly as true for workarounds that the Go team has not yet chosen to either endorse or obsolete.

[1] Note that I'm not saying only interfaces, but type constraints in general. `cmp.Ordered` isn't an interface, it's a type constraint, so a full solution here has to work for that too.
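As a concrete sketch of the "method that ignores its receiver, called on a zero instance" workaround from the receiverless-methods bullet above (all names here are illustrative, not a standard pattern name): the point is that a constructor reachable through a method can be expressed in a type constraint, which a free factory function cannot.

```go
package main

import "fmt"

type Buffer struct{ Data []byte }

// Go has no receiverless (static/associated) methods, so one
// workaround is a method that ignores its receiver and is invoked
// on a zero value of the type.
func (Buffer) Make(size int) Buffer {
	return Buffer{Data: make([]byte, size)}
}

// Because Make is a method, generic code can require it via a
// constraint and "construct" a T without a factory parameter.
type Maker[T any] interface {
	Make(size int) T
}

func New[T Maker[T]](size int) T {
	var zero T
	return zero.Make(size)
}

func main() {
	b := New[Buffer](8)
	fmt.Println(len(b.Data)) // 8
}
```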



Wow, this is everything I want from a new Go!

Having worked on multiple very large Go codebases with many engineers, the lack of actual enums and a built-in optional type instead of nil drive me crazy.

I think I'm in love.

Edit: Looks like last commit was 7 months ago. Was this abandoned, or considered feature complete? I hope it's not abandoned!



An Enum type has to be on the core Go team's radar by now. It's got to be tied with a try/catch block in terms of requested features at this point (now that we have generics).


Comments like this are what drive me away from Go; comments that enforce a particular belief about how or what features you should or should not use/introduce in your PL. Talking in absolutes is so far removed from logical argument and from good engineering. I would appreciate it if anyone could recommend a language like Go (static, strongly typed, not ancient, good tooling) with a friendly community that won't ostracize the non-believers. Zig?


Go has been a very opinionated language since its inception. We could probably argue for all eternity about code formatting, for instance. But Go went and set it in stone. Maybe it's part of good engineering to keep things simple and not allow hundreds of ways to do something. Maybe the people who use Go are the ones who just want to write and read simple and maintainable code and don't want it to be cluttered with whatever is currently the fashion.

You could look at Lisp. It's kind of the opposite of Go in this regard. You can use whatever paradigm you like, generate new code on the fly, use types or leave them. It even allows you to easily extend the language to your taste, all the way down to how code is read.

But Lisp might violate your set of absolutes.



Thanks for your response AnonymousPlanet. I agree there is value in the pursuit of a minimal set of features in a PL, which brings many benefits. And of course the opposite - an overly feature-packed PL, or one with extensibility as a core feature - has tradeoffs. Over this range of possibilities my preference probably falls somewhere in the middle.

I see an effect where languages whose primary goal is a particular set of language design choices (such as strict memory safety over all else) grow a cult following that enforces said design choices. Maybe in the pursuit of an opinionated language, even if the designers are reasonable at the language's inception, the community throws out logic and "opinionated" becomes an in-group/out-group tribal caveman situation.



> Go has been a very opinionated language since its inception.

True.

> We could probably argue for all eternity about code formatting, for instance. But Go went and set it in stone.

This is part of the story that Rob Pike uses to justify how opinionated Go is, but it's a bit stupid given that most languages do fine, and I've never seen any debates about code formatting after the very beginning of a project (where it's always settled quickly in the few cases where it happens in the first place).

The real reason why Go is opinionated is much more mundane: Rob is an old man who thinks he has seen it all and that the younger folks are children, and as a result he is very opinionated. (Remember his argument against syntax coloring because "it's for babies" or something.)

It's not bad to be opinionated when designing a language; it gives some kind of coherence (looking at you, Java and C++), but it can also get in the way of users sometimes. Fortunately, Go isn't just Rob anymore and isn't impervious to change, and there are finally generics and a package manager in the language!



Rob Pike... and Ken Thompson, and Robert Griesemer.

Firstly, Ken Thompson is a master at filtering out unnecessary complexities and I highly rate his opinion of the important and unimportant things.

Secondly, the Go team were never against generics, the three early designers agreed the language needed generics but they couldn't figure out a way to add it orthogonally.

Go has gone on to be very successful in cloud and networked applications (which it was designed to cater for), which lends credit to the practicalities of what the designers thought as important, HN sentiments notwithstanding.



> Secondly, the Go team were never against generics, the three early designers agreed the language needed generics but they couldn't figure out a way to add it orthogonally.

This is a PR statement that has been introduced only after Go generics landed, for years generics were dubbed “unnecessary complexity” in user code (Go has had generics from the beginning but only for internal use of the standard library).

> Go has gone on to be very successful in cloud and networked applications (which it was designed to cater for), which lends credit to the practicalities of what the designers thought as important

Well, given that the k8s team inside Google developed their own Go dialect with pre-processing to get access to generics, it seems that its limitations proved harmful enough.

The main reason why Go has been successful in back-end code is the same as the reason why any given language thrives in certain environments: it's a social phenomenon. Node.js has been even more successful despite JavaScript being a far from perfect language (especially in the ES5 era, when Node was born), which shows that you cannot credit success to particular qualities of the language.

I have nothing against Go, it's a tool that does its job fairly well and has very interesting qualities (fast compile time, self-contained binaries, decent performance out of the box), but the religious worship of “simplicity” is really annoying. Especially so when it comes in a discussion about error handling, where Go is by far the language which makes it the most painful because it lacks the syntactic sugar that would make the error as return value bearable (in fact the Go team was in favor of adding it roughly at the same time as generics, but the “simplicity at all cost” religion they had fostered among their users turned back against them and they had to cancel it…).



70% of cloud tools on the CNCF landscape are built with Go; Kubernetes is just one of many. Also, since Kubernetes originally started as a Java project, you should consider whether the team was trying to code more with Java idioms than with Go ones.

Nodejs has been more successful than Go in cloud?



Seriously, if you feel patronised by how someone designs a programming language, it might be best to move on. It's obviously not for you. Especially when you feel compelled to bad faith assumptions and ageism over it.

For those who want to feel the wind of coding freedom blow through their hair, I can recommend spending some time learning Lisp. It offers the most freedom you can possibly have in a programming language. It might enlighten you in many other ways. It won't be the last language you learn, but it might be the most fruitful experience.



Can you name a language that provides more freedoms? I used Lisp as an example for that side of the spectrum because I'm familiar with it, having used it for many years in the past. But maybe there are better examples.


What kind of "freedom", precisely, are you talking about? Freedom to write purely functional programs? Well, then you need Haskell or Clojure at least. Freedom to write small, self-sufficient binaries? Well, you need C or C++ then. CL is a regular multiparadigm language with a rich macro system and relatively good performance, but nonexistent dependency management, too-unorthodox OOP, no obvious benefits compared to more modern counterparts, and a single usable implementation (SBCL). If I want an s-expression-based language I can always choose Scheme or Clojure; if I need a modern, flexible, multiparadigm language, I'd use Scala.


It baffles me that so many developers are unable to use pre-commit hooks for their code-formatting tools, which have existed since the 1990s, to the point that gofmt became a revelation.


That's hardly the point. The point is that there is a single format for the language itself and you don't have to argue about spaces vs tabs vs when to line break, whether you want trailing commas and where to put your braces.

You can format on save or in a pre-commit hook. But having a single canonical format for the language itself is what makes it kind of new.



Yes, because there is no one in the room able to configure the formatting tool for the whole SCM.

A simple settings file set in stone by the CTO, such a hard task to do.

The fact that this is even seen as a novelty only confirms the target group for the language.



Aside from "not ancient", Java has everything you want! I'd consider it to have the best tooling (IntelliJ); it's static, strongly typed, has enum-like sum types now (sealed interfaces), composable error handling, null safety with new module flags, etc. Not sure about the community, but the maintainers I've worked with seemed nice enough. I imagine the community has a lot less ego than Rust/Go due to the general perception of the language.


You're reading way too much into what the parent poster said. He just correctly stated the overall sentiment of the community.

That said, suggesting adding exceptions to Go is about as reasonable as adding a GC to Zig. How much effort would you spend arguing against someone bringing that up as a serious proposal?



> That said, suggesting adding exceptions to Go is about as reasonable as adding a GC to Zig.

Suggesting the addition of exceptions to Go is as reasonable as suggesting the addition of loops to Rust. Which is to say that it already has exceptions, and always has. Much of the language's design is heavily dependent on the presence of exceptions.

Idioms dictate that you probably shouldn't use exceptions for errors (nor should you in any language, to be fair), but even that's not strictly adhered to. encoding/json in the standard library famously propagates errors internally using exception handlers.
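For context, the internal panic/recover pattern referred to here looks roughly like this sketch (the names `valueError`, `check`, and `Process` are illustrative, not the actual stdlib identifiers): deep helpers panic with a distinct error type, and the exported entry point recovers it and returns an ordinary error.

```go
package main

import (
	"errors"
	"fmt"
)

// valueError distinguishes "our" panics from genuine bugs.
type valueError struct{ err error }

// check converts an error into a panic, unwinding straight to the
// API boundary without threading err through every return value.
func check(err error) {
	if err != nil {
		panic(valueError{err})
	}
}

func deeplyNested(s string) {
	if s == "" {
		check(errors.New("empty input"))
	}
}

// Process recovers only valueError panics and surfaces them as a
// plain error; anything else keeps unwinding.
func Process(s string) (err error) {
	defer func() {
		if r := recover(); r != nil {
			ve, ok := r.(valueError)
			if !ok {
				panic(r)
			}
			err = ve.err
		}
	}()
	deeplyNested(s)
	return nil
}

func main() {
	fmt.Println(Process(""))   // empty input
	fmt.Println(Process("ok")) // <nil>
}
```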



It doesn't use exception handlers as a primary way of handling errors either, though. Go doesn't specify any error handling mechanism. Using exception handlers is just as valid as any other, and even the standard library does it, as noted earlier.

The only error-related concept the Go language has is the error type, but it comes with no handling semantics. Which stands to reason as there is nothing special about errors. Originally, Go didn't even have an error type, but it was added as a workaround to deal with the language not supporting cyclic imports.

Your pedantry is hilarious and contradictory.



You're absolutely technically correct, in the "spherical cow in a vacuum" sense. In reality though, essentially all Go code out there handles errors through the pattern of checking if the error in a `(value, error)` tuple returned from a function is `nil` or not. That is what the discussion here is about - the way errors are handled in a language in practice, not in theory. Therefore, pedantry.

Basically, discussions have context and I have no intention of prepending 10 disclaimers to every statement I make to preemptively guard against people interpreting my comments as absolutes in a vacuum.



That's a lot of pedantry you've got there for someone who claims it is not welcome. Rules for thee, not for me?

But, if you'd kindly come back to the topic at hand:

> That is what the discussion here is about - the way errors are handled in a language in practice, not in theory.

While I'm not entirely convinced that is accurate, I will accept it. Now, how does:

- "That said, suggesting adding exceptions to Go is about as reasonable as adding a GC to Zig."

Relate to that assertion? What does "suggesting adding exceptions to Go" have to do with a common pattern that has emerged?



It’s not simply a common pattern. It is a way of doing things in the community. The stdlib uses it, the libraries use it, and if you do not use it, people will not use your software.


I would be happy if they added something to the compiler that allowed ignoring error return values, in which case the compiler would just throw an exception^Wpanic from that point. I think it even makes sense for Go purists: handle your errors or get panicked. And I'd just mostly ignore errors and have my sweet exceptions.


I read somewhere something to the effect of this: Some languages solve deficiencies in the language by adding more features to the language. This approach is so common, it could be considered the norm. These are languages like C++, Swift, Rust, Java, C#, Objective C, etc. But two mainstream languages take different approach: C and Go strongly prefer to solve deficiencies in the language simply by adding more C and Go code. One of the effects of this preference is that old (or even ancient in the case of C) codebases tend to look not that different from new codebases, which as one might imagine can be quite beneficial in certain cases. There is a reasonable argument to be made that at least some of the enduring success of C has to do with this approach.


You have the same PL preferences as me. I haven't tried Rust yet, but Kotlin, modern C#, and F# all fit your requirements. Kotlin is closest because it uses the enormous Java ecosystem.


I'd say Scala.

It has its flaws, but the latest version (Scala 3) is really really good. The community is open to different styles of programming - from using it as a "better Java" to "pure functional programming like in Haskell".



Scala is the perfect example of why you want to limit expressivity. It seems so cool and awesome at first, but then you have to support a code base with other engineers, and you quickly come to the view that Go's limited expressivity is a blessing.

Hilariously, I was using a gen AI (phind) and asked it to generate some Scala code, and it no joke suggested the code in both idiomatic Scala and in Java style, and all you had to do was look at it to see that the Java style was 1000X easier to read/maintain.



Well, flexibility has its price. And yeah, if you need to work in a team that uses a very different style, then you won't like it.

On the other hand, if you carefully select your team or work alone, then this is not a problem at all.

Btw, there isn't really "one" idiomatic scala style - therefore I tend to believe that you are not familiar with the language and the community.



Suggesting try/catch indicates that you have virtually no experience using Go. You're standing on the side lines yelling stupid/non-sensical feature requests and getting upset when you're not taken seriously.


Why would you not enforce a particular belief about how a language should be designed? There are languages designed around being able to do anything you want at any time regardless of if it makes sense, and then you end up with everyone using their own fractured subset of language features that don't even work well together. Not every language needs to be the same feature slop that supports everything poorly and nothing well.


Having exception support does not mean the code will be scattered with try/catches - it is not used for control flow, but for ensuring no error slips by silently. And when an exception is thrown, the stack trace is captured so you can get to the code and debug.


    func try(fn func()) { fn() }
    func catch(fn func(any)) {
        if v := recover(); v != nil {
            fn(v)
        }
    }
    func throw(v any) { panic(v) }

    func fail() {
        throw("Bad things have happened")
    }

    func main() {
        try(func() {
            defer catch(func(v any) {
                fmt.Println(v)
            })
            fail()
        })
    }

Sorry.


ESBuild, one of my favourite Go projects, uses panics to handle try/catch exceptions.

The syscall/js package [0] throws panics if something goes wrong, rather than returning errors.

Go already has try/catch exceptions. We just don't use them most of the time because they're a really bad way of handling errors.

[0] https://pkg.go.dev/syscall/js



I would have to go through my comment history for exact numbers. In analyzing a real production service written in Go - multiple dozens of contributors, hundreds of thousands of lines, several years of development - "naked" if-err-return-err made up less than 5% of error-handling cases and less than 1% of total lines. Nearly every error-handling case either wrapped specific context, emitted a log, and/or emitted a metric specific to that error-handling situation.

If you do naked if-err-return-err, you are likely doing error handling wrong.



Also known as The Apple Answer.

Plenty of The Go Way arguments apply to software we were writing from the dawn of computing until the 1990s, and there are plenty of reasons why, with the exception of Go (pun intended), the industry has moved beyond that.



I nearly wrote "you are holding it wrong" to nod to that quote. But it is really true - most errors in long running services are individual and most applications I've worked in ignore this when (ab)using exceptions.

In our Go codebases, the error reporting and subsequent debugging and bug fixing is night and day from our Python, Perl, Ruby, PHP, Javascript, and Elixir experiences.

The one glaring case where this is untrue is in our usage of Helm which, having been written in Go, I would expect better error handling. Instead we get, "you have an error on line 1; good luck" - and, inspired, I looked at their code just now. Littered with empty if-err-return-err blocks - tossing out all that beautiful context, much like an exception would, but worse.

https://github.com/search?q=repo%3Ahelm%2Fhelm%20%20if%20err...



It actually is when debugging, because it makes control flow explicit.

In JS, for example, people don't even know which functions could throw exceptions, and just ignore them most of the time. Fast to write and looks nice, but it is horrible quality and a nightmare to debug.



Can someone help me understand why enums are needed? They only seem like sugar for reducing a few lines while writing. What cannot be achieved without them or what is really a pain point they solve? Maybe it is hard to have a constant with type information?


The issue is that it's more or less impossible to graft onto the language now. You could add enums, but the main reason why people want them is to fix the error handling. You can't do this without fracturing the ecosystem.


> but the main reason why people want them is to fix the error handling

Why do you think so? Maybe I'm an odd case, but my main use case for enums is for APIs and database designs, where I want to lock down some field to a set of acceptable values and make sure anything else is a mistake. Or for state machines. Error handling is manageable without enums (but I love Option/Result types more than Go's error approach, especially with the ? operator).



> but I love Option/Result types more than Go's error approach

The thing is, these don't add much on their own. You'd have to bring in pattern matching and/or a bunch of other things* that would significantly complicate the language.

For example, with what's currently in the language, you could definitely have an option type. You'd just be limited to roughly an API that's `func (o Option[T]) IsEmpty() bool` and `func (o Option[T]) Get() T`. And these would just check whether the underlying pointer is nil and dereference it. You can already do that with pointers. Errors/Result are similar.

A `try` keyword that expands `x := try thingThatProducesErr()` to:

    x, err := thingThatProducesErr()
    if err != nil {
        return {zero values of the rest of the function signature}, err
    }
might be more useful in Go (you could have a similar one for pointers).

* at the very least generic methods for flat map shenanigans



Using an Option instead of a pointer buys you the inability to forget to check for nil.

Just need to make sure the Option exposes the internal value only through:

    func (o Option[Value]) Get() (Value, bool) {
        return o.value, o.exists
    }
Accessing the value is then forced to look like this:
    if value, ok := option.Get(); ok {
        // value is valid
    }
    // value is invalid
Thus, there's no possibility of an accidental nil pointer dereference, which I think is a big win.

A Result type would bring a similar benefit of fixing the few edge cases where an error may accidentally not be handled. Although I don't think it'd be worth the cost of switching over.



How is that better than
    if value != nil {
        // value is valid
    }
    // value is invalid
?

Of course, this is often left out, but you can just as easily do:

    value, _ := option.Get()
So this is just not true:

> Using an Option instead of a pointer buys you the inability to forget to check for nil.



It's better because you do not need to remember to check for nil, the compiler will remind you every time by erroring out until you handle the second return value of `option.Get()`.

> Of course, this is often left out, but you can just as easily do:

Unfortunately it gets brought up pretty much every time in these discussions.

Deliberate attempts to circumvent safety are not part of the threat model. The goal is prevention of accidental mistakes. Nothing can ultimately stop you from disabling all safeties, pointing the shotgun at your foot and pulling the trigger.



> my main use case for enums is for APIs and database designs, where I want to lock down some field to a set of acceptable values and make sure anything else is a mistake

Then what you are really looking for is sum types (what Rust calls enums, but unusually so), not enums. Go does not have sum types, but you can use interfaces to achieve a rough facsimile and most certainly to satisfy your specific expectation:

    type Hot struct{}
    func (Hot) temp() {}

    type Cold struct{}
    func (Cold) temp() {}

    type Temperature interface {
        temp()
    }

    func SetThermostat(temperature Temperature) {
        switch temperature.(type) {
        case Hot:
            fmt.Println("Hot")
        case Cold:
            fmt.Println("Cold")
        }
    }


Enums and sum types seem to be related. In the code you wrote, you could alternatively express the Hot and Cold types as enum values. I would say that enums are a subset of sum types but I don't know if that's quite right. I guess maybe if you view each enum value as having its own distinct type (maybe a subtype of the enum type), then you could say the enum is the sum type of the enum value types?


> Enums and sum types seem to be related.

They can certainly help solve some of the same problems. Does that make them related? I don't know.

By definition, an enumeration is something that counts one-by-one. In other words, as is used in programming languages, a construct that numbers a set of named constants. Indeed you can solve the problem using that:

    type Temperature int

    const (
        Hot Temperature = iota
        Cold
    )

    func SetThermostat(temperature Temperature) {
        switch temperature {
        case Hot:
            fmt.Println("Hot")
        case Cold:
            fmt.Println("Cold")
        }
    }
But, while a handy convenience (especially if the set is large!), you don't even need enums. You can number the constants by hand to the exact same effect:
    type Temperature int

    const (
        Hot  Temperature = 0
        Cold Temperature = 1 
    )

    func SetThermostat(temperature Temperature) {
        switch temperature {
        case Hot:
            fmt.Println("Hot")
        case Cold:
            fmt.Println("Cold")
        }
    }
I'm not sure that exhibits any sum type properties. I guess you could see the value as being a tag, but there is no union.


Unfortunately, this:
    const (
        Hot  Temperature = 0
        Cold Temperature = 1 
    )
Isn't really a good workaround when lacking an enumeration type. The compiler can't complain when you use a value that isn't in the list of enumerations. The compiler can't warn you when your switch statement doesn't handle one of the cases.

Refactoring is harder - when you add a new value to the enum, you can't easily find all those places that may require logic changes to handle the new value.

Enums are a big thing I miss when writing Go, compared to when writing C.



> Isn't really a good workaround when lacking an enumeration type.

Enumeration isn't a type, it's a numbering construct. Literally, by dictionary definition. Granted, if you use the Rust definition of enum then it is a type, but that's because it refers to what we in this thread call sum types. Rust doesn't support "true" enums at all.

> The compiler can't complain when you use a value that isn't in the list of enumerations.

Well, of course. But that's not a property of enums. That's a property of value constraints. If Go supported value constraints, then it could. Consider:

    type Temperature 0..1

    const (
        Hot  Temperature = 0
        Cold Temperature = 1 
    )
Then the compiler would complain. Go lacks this in general. You also cannot define, say, an Email type:
    type Email "{string}@{string}"
Which, indeed, is a nice feature in other languages, but outside of what enums are for. These are separate concepts, even if they can be utilized together.

> Enums are a big thing I miss when writing Go, compared to when writing C.

Go has enums. They are demonstrated in the earlier comment. The compiler doesn't attempt to perform any static analysis on the use of the enumerated values because, due to not having value constraints, "improper" use is not a fatal state[1] and Go doesn't subscribe to warnings, but all the information you need to perform such analysis is there. You are probably already using other static analysis tools to assist your development. Go has a great set of tools in that space. Why not add an enum checker to your toolbox?

[1] Just like it isn't in C. You will notice this compiles just fine:

    typedef enum {
        Hot,
        Cold
    } Temperature;

    void setThermostat(Temperature temperature) {
        switch (temperature) {
        case Hot:
            printf("Hot\n");
        }
    }

    int main() {
        setThermostat(10);
    }


> but all the information you need to perform such analysis is there.

No, it isn't, unlike C, in which it is. The C compiler can actually differentiate between an enum with one name and an enum with a different name.

There's no real reason the compiler vendor can't add in warnings when you pass in `myenum_one_t` instead of `myenum_two_t`. They may not be detecting it now, but it's possible to do so because nothing in the C standard says that any enum must be swappable for a different enum.

IOW, the compiler can distinguish between `myenum_one_t` and `myenum_two_t` because there is a type name for those.

Go is different: an integer is an integer, no matter what symbol it is assigned to. The compiler, now and in the future, cannot distinguish between the value `10` and `MyConstValue`.

> Just like it isn't in C. You will notice this compiles just fine:

Actually, it doesn't compile "just fine". It warns you: https://www.godbolt.org/z/bn5ffbWKs

That's about as far as you can get from "compiling just fine" without getting to "doesn't compile at all".

And the reason it is able to warn you is because the compiler can detect that you're mixing one `0` value with a different `0` value. And it can detect that, while both are `0`, they're not what the programmer intended, because an enum in C carries with it type information. It's not simply an integer.

It warns you when you pass incorrect enums, even if the two enums you are mixing have identical values. See https://www.godbolt.org/z/eT861ThhE ?



> No, it isn't, unlike C, in which it is.

Go on. Given:

    type E int
    const (
        A E = iota
        B
        C
    )

    enum E {
        A,
        B,
        C
    };
What is missing in the first case that wouldn't allow you to perform such static analysis? It has a keyword to identify an enumerated set (iota), it has an associated type (E) to identify what the enum values are applied to, and it has rules for defining the remaining items in the enumerated set (each subsequent constant inherits the next enum element).

That's all C gives you. It provides nothing more. They are exactly the same (syntax aside).

> It warns you

Warnings are not fatal. It compiles just fine. The Go compiler doesn't give warnings of any sort, so naturally it won't do such analysis. But, again, you can use static analysis tools to the same effect. You are probably already using other static analysis tools as there are many other things that are even more useful to be warned about, so why not here as well?

> enum in C carries with it type information.

Just as they do in Go. That's not a property of enums in and of themselves, but there is, indeed, an associated type in both cases. Of course there is. There has to be.



Couldn't the zero value be nil? I get that some types like int are not nil-able, but the language allows you to assign both nil and int to a value of type any (interface{}), so I wonder why it couldn't work the same for sum types. i.e. they would be a subset of the `any` type.


When enums make it from the language to the db, things are now brittle, and it only takes one intern sorting the enums alphabetically to destroy the lookup relations. An enum lookup table helps, but now they are not enums in the language.


Depends what you mean by 'enums' exactly, but now that generics has been added, a small change would be to allow interfaces defined via type disjunction to be used as concrete types:
    type Option1 struct { ... }
    type Option2 struct { ... }
    type MyEnum interface { Option1 | Option2 }

    var myValue MyEnum // currently not legal Go
That doesn't solve all the use cases for enums / sum types, but it would be useful.


I am so tired of reading Java/C++/Python code that just slaps try/catch around several lines. To some it might seem annoying to actually think about errors and error handling line by line, but for whoever tries to debug or refactor it's a godsend. Where I work, a try/catch around more than one call that can throw, or one that includes arbitrary lines that can't throw the caught exception, is a code smell.

So when I looked at Go for the first time, the error handling was one of the many positive features.

Is there any good reason for wanting try/catch other than being lazy?



>the error handling was one of the many positive features.

sounds good on paper, but seeing "if err!=nil" repeated million times in golang codebases does not create positive impression at all



> sounds good on paper, but seeing "if err!=nil" repeated million times in golang codebases does not create positive impression at all

Okay, but other than exceptions, what's the alternative?



Good point.

I only briefly tried Rust and was turned off by the poor ergonomics; I don't think (i.e. open to correction) that the Rust way (using '?') is a 1:1 replacement for the use-cases covered by Go error management or exceptions.

Sometimes (like in the code I wrote about 60m ago), you want both the result as well as the error, like "Here's the list of files you recursively searched for, plus the last error that occurred". Depending on the error, the caller may decide to use the returned value (or not).

Other times you want an easy way to ignore the error, because a nil result gets checked anyway two lines down: Even when an error occurs, I don't necessarily want to stop or return immediately. It's annoying to the user to have 30 errors in their input, and only find out about #2 after #1 is fixed, and #3 after #2 is fixed ... and number #30 after #29 is fixed.

Go allows these two very useful use-cases for errors. I agree it's not perfect, but with code-folding on by default, I literally don't even see the `if err != nil` blocks.

Somewhat related: In my current toy language[1], I'm playing around with the idea of "NULL-safety" meaning "Results in a runtime-warning and a no-op", not "Results in a panic" and not "cannot be represented at all in a program"[2].

This lets a function record multiple errors at runtime before returning a stack of errors, rather than stack-tracing, segfaulting or returning on the first error.

[1] Everyone is designing their own best language, right? :-) I've been at this now since 2016 for my current toy language.

[2] I consider this to be pointless: every type needs to indicate lack of a value, because in the real world, the lack of a value is a common, regular and expected occurrence[3]. Using an empty value to indicate the lack of a value is almost certainly going to result in an error down the line.

[3] Which is where there are so many common ways of handling lack of a value: For PODs, it's quite popular to pick a sentinel value, such as `(size_t)-1`, to indicate this. For composite objects, a common practice is for the programmer to check one or two fields within the object to determine if it is a valid object or not. For references NULL/null/nil/etc is used. I don't like any of those options.



The ability to quickly parse, understand and reason about code is not superficial, it is essential to the job. And that is essentially what those verbose blocks of text get in the way of.


As an experienced Go dev, this is literally not a problem.

Golang code has a rhythm: you do the thing, you check the error, you do the thing, you check the error. After a while it becomes automatic and easy to read, like any other syntax/formatting. You notice if the error isn't checked.

Yes, at first it's jarring. But to be honest, the jarring thing is because Go code checks the error every time it does something, not because of the actual "if err != nil" syntax.



Just because you can adapt to verbosity does not make it a good idea.

I've gotten used to Javas getter/setter spam, does that make it a good idea?

Moreover, don't you think that something like Rusts ? operator wouldn't be a perfect solution for handling the MOST common type of error handling, aka not handling it, just returning it up the stack?

  val, err := doAThing()
  if err != nil {
    return nil, err
  }
VERSUS
  val := doAThing()?


I personally have mixed feelings about this. I think a shortcut would be nice, but I also think that having a shortcut nudges people towards using short-circuit error handling logic simply because it is quicker to write, rather than really thinking case-by-case about what should happen when an error is returned. In production code it’s often more appropriate to log and then continue, or accumulate a list of errors, or… Go doesn’t syntactically privilege any of these error handling strategies, which I think is a good thing.


This. Golang's error handling forces you to think about what to do if there's an error Every Single Time. Sometimes `return err` is the right thing to do; but the fact that "return err" is just as "cluttered" as doing something else means there's no real reason to favor `return err` instead of something slightly more useful (such as wrapping the err; e.g., `return fmt.Errorf("Attempting to fob trondle %v: %w", trondle.id, err)`).

I'd be very surprised if, in Rust codebases, there's not an implicit bias against wrapping and towards using `?`, just to help keep things "clean"; which has implications not only for debugging, but also for situations where doing something more is required for correctness.



I find that knowing where my errors may come from and that they are handled is essential to my job and missing all that info because it is potentially in a different file altogether gets in the way


Sure, I just don't think it's that significant. Humans don't read/parse code character-by character, we do it by recognizing visual patterns. Blocks of `if err != nil { }` are easy to skip over when reading if needed.


The idea that error handling is "not part of the code" is silly though. My impression of people that hate Go's explicit error handling is that they don't want to deal with errors properly at all. "Just catch exceptions in main and print a stack trace, it's fine."

Rust's error handling is clearly better than Go's, but Go's is better than exceptions and the complaints about verbosity are largely complaints about having to actually consider errors.



Try-blocks with ~one line are best practice on codebases I have worked with. The upside is that you can bubble errors up to the place where you handle them, AND get stack traces for free. As a huge fan of Result, I have to admit that that's a possible advantage. But maybe that fits your definition of lazy :).


> Is there any good reason for wanting try/catch other than being lazy?

It's the best strategy for short-running programs, or scripts if you will. You just write code without thinking about error handling at all. If anything goes wrong at runtime, the program aborts with a stacktrace, which is exactly what you want, and you get it for free.

For long-running programs you want reliability, which implies the need to think about and explicitly handle each possible error condition, making exceptions a subpar choice.



Yes, it's the ability to unwind the stack to an exception handler without having to propagate errors manually. Go programs end up doing the exact same thing as "try/catch around multiple lines" with functions that can return an error from any point, and every caller blindly propagating the error up the stack. The practice is so common that it's like semicolons in Java or C, it just becomes noise that you gloss over.


The difference is that all code paths are explicitly spelled out and crucially that the programmer had to consider each path at the time of writing the code. The resulting code is much more reliable than what you end up with exceptions.


Do you really do that in practice, or do you just blindly go 'if err != nil return nil, err'?

Because fundamentally the function you called can return different errors at any point. If you just propagate the error, the code paths are in fact not spelled out at all: the function one level up in the hierarchy has to deal with all the possible errors from two calls down, which are not transparent at all.



I don't see how it's possible to do it blindly unless the code gets autogenerated. If you're typing the `if err != nil` then you've clearly understood that an error path is there.

There's no requirement for the calling function to handle each possible type of error of the callee. It can, as long as the callee properly wrapped the error, but it's relatively rare for that to be required. Usually the exact error is not important, just that there was one, so it gets handled generically.



In Go, no one really blindly returns nil, err. People very clearly think about errors—if an error may need to be actioned on up the stack, people will either create a named error value (e.g., `ErrInvalidInput = errors.New(“invalid input”)` or a named error type that downstream users can check against. Moreover, even when propagating errors many programmers will attach error context: `return nil, fmt.Errorf(“searching for the flux capacitor `%s`: %w”, fluxCap, err)`. I think there’s room for improvement, but Go error handling (and Rust error handling for that matter) seem to be eminently thoughtful.


Go programs generally do not “blindly propagate the error up the stack”. I’ve been writing Go since 2011 and Python since 2008, and for the last ~decade I’ve been doing DevOps/SRE for a couple of places that were both Go and Python shops. Go programs are almost universally more diligent about error handling than Python programs. That doesn’t mean Go programs are free from bugs, but there are far, far fewer of them in the error path compared to Python programs.


I agree, I don't really understand everyone's issue with err != nil.. it's explicit, and linters catch uncaught errors. Yes the ? operator in Rust is neat, but you end up with a similar issue of just matching errors throughout your code-base instead of doing err != nil..


The problem is that you're forced to have four possible states

1. err != nil, nondefault return value

2. err != nil, default return value

3. err == nil, nondefault return value

4. err == nil, default return value

when often what you want to express only has two: either you return an error and there's no meaningful output, or there's output and no error. A type system with tuples but no sum types can only express "and", not "or".



this is true, but not a problem. Go's pattern of checking the error on every return means that if an error is returned, that is the return. Allowing routines to return a result as well as an error is occasionally useful.


I mean, I wish Go had sum types, but this really isn’t a problem in practice. Every Go programmer understands from day 0 that you don’t touch the value unless the error is nil or the documentation states otherwise. Sum types would be nice for other things though, and if it gets them eventually it would feel a little silly to continue using product types for error handling (but also it would be silly to have a mix of both :/).


Yeah, also you almost always need to annotate errors anyway (e.g., `anyhow!`), so the ? operator doesn’t really seem to be buying you much and it might even tempt people away from attaching the error context.


The huge volume of boilerplate makes the code harder to read, and annoying to write. I like Go, and I don’t want exceptions per se, but I would love something that cuts out all the repetitive noise.


This has not been my experience. It doesn’t make the code harder to read, but it forces you to think about all the code paths—if you only care about one code path, the error paths may feel like “noise”, but that’s Go guiding you toward better engineering practices. It’s the same way JavaScript developers felt when TypeScript came along and made it painful to write buggy code—the tools guide you toward better practices.


exactly.. yes, I understand why ? is neat from a type POV since you specifically have to unwrap an optional type whereas in Go you can ignore a returned error (although linters catch that) - so at the end of the day it's just the same boilerplate, one with ? the other with err != nil


> Is there any good reason for wanting try/catch other than being lazy?

In a hot path it’s often beneficial to not have lots of branches for error handling. Exceptions make it cheap on success (yeah, no branches!) and pretty expensive on failure (stack unwinding). It is context specific but I think that can be seen as a good reason to have try catch.

Now of course in practice people throw exceptions all the time. But in a tight, well controlled environment I can see them as being useful.



> In a hot path it’s often beneficial to not have lots of branches for error handling.

This is true but the branch isn't taken unless there's an error in Go.

Given that the Go compiler emits the equivalent of `if (__unlikely(err != nil)) {...}` and that modern CPUs are decently good at branch prediction (especially in a hot path that repeats), I find it hard to believe that the cost would be greater than exceptions.



> try/catch for more than one call that can throw an exception or including arbitrary lines

You generally need to skip all lines that the exception invalidates. That's why it's a block or conditional.



I've never needed either.

Try/catch is super confusing because the catch is often far away from the try. And in Python I just put try/catch around big chunks of code just in case for production.

I think Go is more stable and readable because they force you not to use the lazy unreadable way of error handling.

Enums I honestly never used in Go, not even the non-type-safe ones.

But I'm also someone who used interfaces in Go maybe I think 4 times only in years and years of development.

I just never really need all those fancy things.



I think what this comment is missing is any sort of analysis of how your experience maps to the general Go user, and an opinion on whether, even though you've never needed either, you think they could have provided any benefit when used appropriately.

For example, an option type combined with enums can ensure return values are checked by providing a compile-time error if a case is missing (as expressed in the first few examples of the readme).



I know it can, the compiler can do one more "automatic" unit test based on the type checking system.

But they decided not to add enums because it conflicted and overlapped too much with interfaces.

I just want to add "my" experience that personally, yes maybe you can argue enums are nice, but I never missed them in Go.

I personally agree with the Go team on how they argue and for me it would be a step back if they listened to the herd that does not take all sides of the story into consideration but just keeps pushing enums.

Try/catch is just a really bad thing; all the "hacky solution" alarm bells go off for me if you want to change error handling to giant try/catch blocks.



> But they decided not to add enums because it conflicted and overlapped too much with interfaces.

I'm very curious now about how it might conflict and/or overlap with interfaces.

To reach the goal of an enumeration type (and all the strong type-checking that that brings with it), enums could look as simple as:

    type DayNames enum {
       Sunday
       Monday
       Tuesday
       Wednesday
       Thursday
       Friday
       Saturday
    }
    ...
    func isFunDay (dow DayNames) {
       // This must fail to compile, because there is an unhandled enumeration
       switch dow {
          case Sunday: ...
          case Monday: ...
          case Tuesday: ...
          case Thursday: ...
          case Friday: ...
          case Saturday: ...
       }
       ...
    }
    ...
    isFunDay (0)   // Compile failure
    var x int
    isFunDay (x)   // Compile failure
And I don't see how that conflicts or overlaps with interfaces.


> I think something like when a variable type in an enum was an interface it would destroy the galaxy or something,

Hah :-)

> Here you Go: https://go.dev/doc/faq#variant_types

Not quite the same: Variants are a constrained list of types. Enums are a constrained list of values.

Let's assume that I agree with the reasoning for not having a constrained list of types.

It still doesn't tell me why we can't have a constrained list of values.



Your comment could have been a nice opinion that proves to a drive-by reader that needs can differ drastically between programmers.

But you ruined it with "fancy things" which shows offhand disregard and disrespect.

A question like "what do you need these features for?" would have been a better contribution to the forum.



I actually really have a disrespect for them. I'm in a constant fight against developers that want to translate code in almost the same code but "only using language features from the Advanced book".

I also wanted to add that I used inheritance only ONCE in all my years of writing Python; in all the other millions of lines of code, inheritance was not the best solution.

This is my daily struggle as a CTO. People using waaayy too many "fancy" features of languages making it totally unreadable and unmaintainable.

It's their ego they want to show off how many complex language features they know. And it's ruining my codebases.



While I have no particular beef with Rust deciding to call its sum types "enum", to refer to this as the actual enum is a bit much.

Enumerated types are simply named integers in most languages, exactly the sort you get with const / iota in Go: https://en.wikipedia.org/wiki/Enumerated_type

Rather than the tagged union which the word represents in Rust, and only Rust. Java's enums are close, since they're classes and one can add arbitrary behaviors and extra data associated with the enum.



Very well then: Rust is not the only one to call a variant type / tagged union an enum. It's a nice language feature to have, whatever they decide to call it.

It remains a strange choice to refer to this as the true enum, actual enum, real enum, as has started occurring since Rust became prominent. If that's a meaningful concept, it means a set of numeric values which may be referred to by name. This is the original definition and remains the most popular.



Rust is targeting both users who know the original definition as well as people who don’t. Differentiating between real enums and sum types means the language gets another keyword for a concept that overlaps.

From a PL theory perspective, enum denotes an enumerable set of values within a type. It just happens that sums slot in well enough with that.



I checked the definition. An enum is defined as a set of named constants. I'd argue that a set by definition needs to be constrained; if it lacks the constraints/grouping, I'd argue it is no longer a set.


> While I have no particular beef with Rust deciding to call its sum types "enum", to refer to this as the actual enum is a bit much.

I didn't read GP as saying "Actual enums are what Rust has", I read it more as "Go doesn't have actual enums", where "enum" is a type that is constrained to a specified set of values, which is what all mainstream non-Rust languages with enums call "Enums".

I mean, even if Rust never existed, the assertion "Go doesn't have actual enums" is still true, no?



I am genuinely appreciative that a post like this, a GitHub link to a semi-slow moving, but clearly well considered and sincerely developed programming language, can not only remain on the front page of HN, but can generate a diverse and interesting group of discussions. It’s material like this that keeps me coming back to the site. I’m not sure if anyone needed this comment, but I’m sure my posting it isn’t going to hurt.


Great! Something I've always wanted.

I'd love to be able to use a bit more type-y Go such as Borgo, and have a Pythonesque dynamic scripting language that latches onto it effortlessly.

Dynamic typing is great for exploratory work, whether that's ML research or developing new features for a web app. But it would be great to be able to morph it over time into a more specified strongly typed language without having to refactor loads of stuff.

Like building out of clay and firing the parts you are happy with.

Could even have a three step - Python-esque -> Go/Java-esque -> Rust/C++esque.



> Like building out of clay and firing the parts you are happy with.

> Could even have a three step - Python-esque -> Go/Java-esque -> Rust/C++esque.

We do exactly that with Common Lisp. It compiles to different languages/frameworks depending on what we require (usually sbcl is more than enough, but for instance for embedded or ML we need another step). All dev (with smaller data etc) is in sbcl, so with all the advantages.



We don’t necessarily do good lisp things; we use Common Lisp because macros and easy DSLs allows us to use CL for everything we do while using, for us, the best dev and debugging env in the world. So we want to do the exploration, building, debugging all in CL and after that compile, possibly, to something better depending. We trade for that a little bit of inconvenience (as in; leaky abstraction), but it’s worth it the past 30+ years.

For learning cl, the reddit lisp subreddit is good and has the current best ones on it. Lately there is a guy making a gui (clog) who is doing good work for spreading general lisp love by making it modern. Including tutorials. And there are others too.



Dart? Version 1 was a lot like Javascript/Typescript in one spec (a dynamic language with optional unsound typing). Version 2 uses sound typing, but you can still leave variables unannotated (and the compiler will infer type "dynamic") for scripts.


Sounds like JavaScript and typescript would be a good fit for you. Highly expressive, dynamic and strongly typed, and highly performant both on server side and within the browser.


I do like JavaScript but it strikes a weird balance for me where it's a bit too easy to write and a bit too verbose so I tend to end up with hard to maintain code. Feels good at the start of a project but rarely a few weeks in. Also not a fan of the node ecosystem, I try to use deno where I can (maybe that would be bun these days).


py2many does python-esque to both Go and Rust.

The larger problem is building an ecosystem and a stdlib that's written in python, not C. Use ffi or similar instead of C-API.



I like the idea but in all honesty I have difficulty imagining it working in practice. Once your python code is stable (i.e. You've worked out 99% of the bugs you might have caught earlier with strict type checking) would there be any incentive to go back and make the types more rigid or rigorous? Would there be a non-negligible chance of introducing bugs in that process?


by the time you have your code in its final state (i.e. you're done experimenting) and shaken out the bugs, your types are mostly static; they're just implicitly so. adding annotations and a typechecker helps you maintain that state and catch the few places where type errors might still have slipped through despite all your tests (e.g. lesser-used code paths that need some rare combination of conditions to hit them all but will pass an unexpected type through the call chain when you do). it is very unlikely that you will introduce bugs at this point.


I agree it's a bit of a pipe dream. I'm more thinking of performance here, e.g. web services using Django. You could start off in dynamic/interpreted land and have a seamless transition to performant compiled land. Also lets you avoid premature optimisation since you can only optimise the hot paths.

Also, types are self-documenting to an extent. Could be helpful for a shared codebase. Again, Python is just now getting round to adding type definitions.

At the end of the day good tooling/ecosystem and sheer developer hours is more important than what I'm suggesting but it would be nice anyway. I dream about cool programming languages but I stick to boring for work.



I like the graph at the top of the readme as a summary.

The rest of the readme focuses on the delta between Go and Borgo. It doesn't say much about the delta between Borgo and Rust.

I think the delta there is mainly no lifetimes/ownership?



This seems to achieve a similar type safety<->complexity tradeoff as Gleam [1] does. However, Gleam compiles to Erlang or JavaScript, which require a runtime and are not as performant as Go.

I wonder if Borgo's compiler messages are as nice as Rust's/Gleam's, though.

[1] https://gleam.run/



> are not as performant as Go.

Ymmv, you might be surprised if you actually bothered to benchmark. Depending on the workload, either JS or erlang can ultimately turn out on top.

They're all optimized to a degree that each has a niche it excels at and leaves the others in the dust.

even with a heavily skewed benchmark like techempower fortunes (https://www.techempower.com/benchmarks/#hw=ph&test=fortune&s...) you end up with JS getting ahead of Go with raw requests. And not just slightly, but by 1.5 times the throughput.

In other benchmarks, Golang does indeed win out with similar or even bigger advantages... so the only thing you can ultimately say is... that it depends. It's a different story if you choose other languages though. But JS, Golang and Erlang are all extremely optimized for their ideal use case.



Well hold on a second. The JS impl that you're talking about uses a minimal custom runtime (https://github.com/just-js/just) that you would never use—it barely implements JS. It's basically only used for this benchmark. It doesn't make sense to compare that to Go when we're talking about Javascript vs. Go performance.

Scroll down to the "nodejs" entry for a more realistic comparison.



Feel free to switch to JSON serialization and the same pattern repeats with uwebsockets.js.

I'm not saying that JS is "just as fast as golang" generally. My argument is specifically that it's optimized to a degree that there are cases in which JS, an interpreted language, does end up on top.

The same applies to Erlang and its optimization for efficient concurrency.

On average you'll likely get better performance with go, but depending on the workload the results can differ



JSON is just one tiny part of most programs, sitting on the edge where the program interacts with other programs; it doesn’t permeate the entire codebase.

Structure privacy, OTOH, does. Count me in as someone who really enjoys the case-based approach. It’s not the only one which could work, but it does work.



The single most productive habit I picked up in the last few years is to always use exactly the same name for the same entity across source files, config files, database entries, protocol fields, etc.


That’s funny, I did it your way for years and ended up considering it a big mistake.

Today I use idiomatic names - MyName in Go, myName in JS/JSON, my_name in SQL. There are many reasons but generally speaking, for me, it’s less effort and code is more readable.

Curious what your rationale is?



I have the benefit of writing mostly C++ where there is really no globally agreed idiomatic naming. At $JOB we use snake_case naming for C++ functions and objects (as opposed to types), which also matches the python naming convention we use.

Snake case is not idiomatic for xml, but we still happen to use it for leaf config options.

The main benefit is reducing ambiguity to what maps to what across files. Ease of grepability is also an advantage.



I just ran into this earlier today - it makes navigating code with grep more difficult.

I had a YAML file using `some_property_name`, which was turned into `SomePropertyName`, and it's a small annoyance. It's not a huge deal, but it adds friction where some languages have none. (Or alternately, getting reordered in a separate system like `property_name_some`.)



The issue that I ran into is dealing with lot of code across different languages, like plpgsql, go and JavaScript.

Especially with database code, something that's fine in Go, like EmployeeID, ends up being employeeid in SQL. You can use underscores in Go but that can trigger other behaviours. If you mix your own JSON with JSON from other sources, you get inconsistent capitalisation. And so on.

And when you have hundreds or thousands of identifiers like this, it gets really hard to read.

You can of course capitalise in SQL - even though it's not semantic - but that becomes inconsistent, too. And then of course the lifecycles of each of these things can be different, which adds another layer of complexity - maybe you refactor your Go code before you upgrade the database, so you end up with two identifiers anyway.

Ultimately I switched to using idiomatic names everywhere, and I really haven't looked back. The boundaries between these systems tend to be pretty clear, as mentioned by someone else, so finding things shouldn't be hard regardless of what they're named.

It certainly takes slightly longer to deal with idiomatic names - but you read code way more than you write it, and it's easier to read idiomatic code.



And database col name, and validation and...

The moment you integrate with a third party, your US-centric zip_code field is suddenly coming over the wire as postCode. The conversions are going to happen regardless; at least in Go I can define all of that conversion with ease in one place.



Go's semantic use of case is objectively bad because most of the world's scripts do not have the concept of it. Ideographic scripts, as used across East Asia, have no capitalization. This means programmers in many parts of the world cannot express identifiers in their native tongue.


It looks like something was lost in the middle of your comment. You open with something about it being objectively bad, but then it jumps to something about how it is subjectively bad. What was omitted?


I don't really think the sarcastic tone was called for, but the previous poster is right. "I can't name variables in my native language" is objective, but whether or not that's bad is subjective.


Very true but “bad” is always subjective so at least they came up with an evaluation that is binary — either you have capitalization in your language or you don’t, either the analogy fits or it doesn’t.

(Some linguist will point out that Bongo-Bongo has half-capitalization, or half has capitalization).



So if you have a concept that doesn't have an equivalent in English you just kinda translate it and add a comment for other people of your language to understand what it is?


> It's such a neat way to remove cruft.

I don't disagree; the problem I have with it is that I have to pay for it up front and factor it into my design immediately. This also combines with the fact that the namespace is very flat with no hierarchy, so choosing good public names is something I feel like I spend way too much time on.

Go is the only language that causes me to pull out a thesaurus when trying to name methods and struct members. It's kinda maddening. Although, after going through this exercise, I end up with code that reads like English. I just wish I could refactor my way into that state, rather than having to try to nail it up front.



The casing rules are quite explicit and enforced by the compiler. A build would have immediately failed on whatever mismatch you had. A few hours and you didn't even think to try compiling it?

I'm guessing you are talking about something else entirely, like, perhaps, decoding JSON into a struct using reflection and encountering a situation where the field names didn't match? Indeed, implicitness can bite you there. That's true in every language. But, then again, as you prefer explicitness why would you be using that approach in the first place?



The rules are explicit but the actual changes in code are very small and unique to this language (or unique from the languages I had ever used). It’s one of those things that you can forget about — because it’s a small difference in code and arguably isn’t explicit.

I forget what it was, but basically my code wasn’t working the way I thought it should and it was solely due to a lowercased struct field. It happened twice where I spent at least a little while trying to figure it out.

And yeah I would guess that I tried to compile. Would be very dumb if I hadn’t although wouldn’t be the dumbest thing I’ve ever done



Same. It's ugly, it breaks acronyms, it doesn't work in all (spoken) languages, it doesn't work well with serialization, etc.

Frankly if they insisted on visibility being part of the name, I would have preferred they go with the age-old C++/ancientPHP tradition of using a _ prefix for private members.



That's what C# offers (except true* Rust-style enums).

The latter will be there in one of the future versions and is in an active design phase, which luckily focuses on tagged-union implementation strategy.

With that said, you can already easily use one of the Option/Result libraries or write your own structure - switching on either is trivial (though you have to sometimes choose between zero-cost-ness and convenience).

It already has struct generics, iterator expressions (LINQ), switch pattern matching, good C interop and easy concurrency primitives (no WaitGroup nonsense, also has Channel). Oh, and also portable SIMD, C pointers and very strong performance in general.

* True as in proper tagged unions with either a tag or another type of discriminant and aliased layout, instead of tag + flattening all constituent parts into one jumbo struct. Or making it an opaque pointer to a box (like Java does, or C# if you go the inheritance/interface route). These don't count. I'm curious about Borgo's lowering strategy for enums, but given Go doesn't have those, I'm not holding my breath; I'm expecting something like F# struct unions at best.



As someone who is "C# curious," but haven't been able to keep up with all the horrific number of rebrands of the "new, open, .net, core, framework", what is the C# equivalent of $(for GOOS in linux darwin; do for GOARCH in amd64 arm64; do dotnet build -o thing_${GOOS}-${GOARCH}; done; done)?


You're very kind, thank you

For others wanting to play along at home:

  $ docker run --name net8 --rm -it mcr.microsoft.com/dotnet/sdk:8.0-jammy-arm64v8 bash -c '
  cd /root
  dotnet new -d -o console0 console
  cd console0
  dotnet publish --nologo --self-contained -v d -r osx-arm64 -o console0-darwin-arm64
  sleep 600
  '
although it didn't shake out
  $ docker cp net8:/root/console0/console0-darwin-arm64/console0 ./console0
  $ ./console0
  Killed: 9
I tried with and without --self-contained and the biggest difference was that self-contained emitted a bazillion .dll files while without it just emitted the binary. In case the context isn't obvious, $(dotnet new console) is a skeleton for the infamous WriteLine("Hello, World") without doing crazy whacko stuff


For simple JIT-based but fully self-contained binaries, without adding any properties to .csproj, the command is a bit of a mouthful and is as follows
    dotnet publish -p:PublishSingleFile=true -p:PublishTrimmed=true -o {folder}
(you can put -p: arguments in .csproj too as XML attrs in ...)

This will give you JIT-based "trimmed" binary (other languages call it tree shaking). You don't need to specify RID explicitly unless it's different from the one you are currently using.

For simple applications, publishing as AOT (without opting in the csproj for that) is

    dotnet publish -p:PublishAot=true -o {folder}
Add -p:OptimizationPreference=Speed and -p:IlcInstructionSet=native to taste.

Official docs: https://learn.microsoft.com/en-us/dotnet/core/tools/dotnet-p...



You're also very kind, and I realized that it's possible there were a bazillion "watch out"s on the docs and was just trying the true trick when I saw your comment

However, it seems this brings my docker experiment to an abrupt halt, and is going to be some Holy Fucking Shit to re-implement that $(for GOOS) loop in any hypothetical CI system given the resulting explosion

  /usr/bin/sh: 2: /tmp/MSBuildTemproot/tmp194e0a13157b47889b36abb0ce96cd2d.exec.cmd: xcodebuild: not found


You need an OS for which you are building to be able to compile an AOT binary - it depends on OS-provided tooling (MSVC on Windows, Clang on macOS and Linux, and a system-provided linker from each respective system). In fact, once ILC is done compiling IL to .a or .lib, the native linker will just link together the csharp static lib, a GC, then runtime/PAL/misc and a couple of system dependencies into a final executable (you can also make a native library with this).

Cross-architecture compilation is, however, supported (but requires the same extra dependencies as e.g. Rust).

If you just want to publish for every target from a single docker container (you can't easily do that with e.g. Rust as noted), then you can go with JIT+single-file using the other command.

Keep in mind that Go makes concessions in order for cross-compile to work, and invested extra engineering effort in that, while .NET makes emitting "canonical" native binaries using specific system environment a priority and also cares a lot about JIT instead (there aren't that many people working on .NET compiler infrastructure, so it effectively punches above its weight bypassing Go and Java and matching C++ if optimized).



The sibling comment pretty much sums it up. But if you want more detail, read on:

Generally, there are three publishing options that each make sense depending on scenario:

JIT + host runtime: by definition portable, includes slim launcher executable for convenience, the platform for which can be specified with e.g. -r osx-arm64[0].

JIT + self-contained runtime: this includes IL assemblies and runtime together, either within a single file or otherwise (so it looks like AOT, just one bin/exe). This requires specifying RID, like in the previous option, for cross-compilation.

AOT: statically linked native binary, cross-OS compilation is not supported officially[1] because macOS is painful in general, and Windows<->Linux/FreeBSD is a configuration nightmare - IL AOT Compiler depends on Clang or MSVC and a native linker so it is subject to restrictions of those as a start. But it can be done and there are alternate, more focused toolchains, that offer it, like Bflat[1].

If you just want a hello world AOT application, then the shortest path to that is `dotnet new console --aot && dotnet publish -o {folder}`. Otherwise, the options above are selected based on the needs either via build properties or CLI arguments. I don't know which use case you have - let me know if you have something specific in mind ("Just like in Go" may or may not be optimal choice depending on scenario).

[0] https://learn.microsoft.com/en-us/dotnet/core/rid-catalog

[1] https://github.com/bflattened/bflat (can also build UEFI binaries, lol)



This looks like an interesting sweet spot. Rust is often praised for the borrow checker, but honestly I really only like Rust for the type system and error handling. Go is praised for its simplicity, but hated for its error handling.


Rust without the borrow checker is much less feasible than Go with Result/Option types to address the nil overdose problem. Unfortunately the Go team refuses to acknowledge the common themes coming out of years of user complaints. They don't have to cater to every wishlist, but when nil/enum related complaints are the majority in every discussion about issues with Go, one would think to acknowledge the legitimacy of those shortcomings. Nope, not the Go team and their band of simplicity zealots.


I'm not sure what exactly you mean by acknowledgement, but here are some counterexamples:

- A proposal for sum types by a Go team member: https://github.com/golang/go/issues/57644

- The community proposal with some comments from the Go team: https://github.com/golang/go/issues/19412

Here are some excerpts from the latest Go survey [1]:

- "The top responses in the closed-form were learning how to write Go effectively (15%) and the verbosity of error handling (13%)."

- "The most common response mentioned Go’s type system, and often asked specifically for enums, option types, or sum types in Go."

I think the problem is not the lack of will on the part of the Go team, but rather that these issues are not easy to fix in a way that fits the language and doesn't cause too many issues with backwards compatibility.

[1]: https://go.dev/blog/survey2024-h1-results



I guess I should have been more clear that I mean actions that have resulted from the feedback. Sure, the survey brings out the concerns in a structured form, but to anyone who has seen more than a few discussions about Go, the feedback regarding error handling or enum or sum types etc would not have been news. I can't imagine Go team at Google is stunned by developer demand for these things. Question is why there hasn't been a concerted effort to prioritize these top concerns (I will stand corrected if there is already something underway that I'm not aware of).

One of the proposals you linked has been raised in 2017 and it is still open with "No one assigned"; same fate for the other item. That doesn't inspire confidence in terms of Go team treating these things as top priority.

I think stuff developers are moaning about the most should be top priority but I guess that is just my simpleton thinking.

> I think the problem is not the lack of will on the part of the Go team, but rather that these issues are not easy to fix in a way that fits the language and doesn't cause too many issues with backwards compatibility.

They have made many changes to the language, some significant ones like generics (which I would assume was also not an easy problem to solve), while they have largely left the elephant in the room unaddressed, i.e. error handling - which developers deal with daily, and I would wager a lot more frequently than generics. If I had to gauge their priority, I would go by where they are putting their money instead of surveys and proposals. And their priorities seem to be different from what the populace is asking. And that is my point.



To me the problem is not the language per se but the emergent complexity of a project written in the language. I.e. say I'm familiar with Go and a k8s user. Does that mean that I can understand the architecture of the k8s project and be meaningfully productive in a short period of time? Far from it.

Sometimes I think we focus too much on formalizing the first-order tooling we use, language being one example, while we neglect the layers upon layers of abstractions built on top of it. I wonder whether a meta-language could exist that would be useful in these upper layers. Not a framework that imposes its own logic; more of a DSL that can capture both business logic and architecture.



The word "transpiler" propagates the misunderstanding that there is something special about a compiler that emits machine code, that requires some special "compiler" techniques for special "compiler" purposes that are not necessary for "transpiler" purposes because "transpiling" requires a completely different set of techniques.

There aren't any such techniques. If one were to create an academic discipline to study "transpilers" and one to study "compilers", all you'd end up with is an identical bunch of techniques and analyses with different names on them. (A thing that sometimes happens when diverse disciplines study what turns out to be the same thing; see machine learning versus statistics for a well-known example.)

Even "compiling" to machine code isn't special anymore, because CPUs don't actually execute assembly anymore. They themselves compile assembly into internal micro-ops, which is what they actually execute. So compilers don't even compile "machine language" anymore; it's just another target. This also dodges "is it a 'compiler' or a 'transpiler' if it targets WASM?", which is good, because there is no value in that question on any level.



1. Transpilers output to another programming language that is typically written by hand by others (so not assembly).

2. Transpilers don’t typically optimize code, leaving those transformations to the compiler of the target language.

3. Compilers will typically have an internal representation (SSA) which they operate on to optimize. Transpilers typically operate on the AST (because they don’t need to do any but the most trivial optimizations).

There are exceptions to the rules but these cover the majority of the reasons on why people make the distinction.



These differences aren't inherent to transpilers vs compilers, they're mostly the result of the fact that the vast majority of transpilers are less mature than the battle-tested compilers that you're thinking of.

The average hobby compiler—regardless of target—doesn't optimize code and works directly on the AST because that's simple to get started with. Most hobby compilers also target some other language rather than LLVM or machine code because that's simple to get started with, so the result is that most transpilers are hobby projects that don't optimize. But there's no reason why a transpiler shouldn't include optimization steps that adapt the output to use code paths that are known to be fast, and a production-grade transpiler typically will include these steps.



> the majority of the reasons on why people make the distinction.

You have provided some defining properties that might allow for distinction, but you have not given any reasons for why people make a distinction.

But perhaps we can suss it out. Given the statement "Borgo compiles to Go", what important information is lost that would be saved if "Borgo transpiles to Go" was used instead?



In that statement, it doesn't really add anything.

In the statement "XYZ is a compiler/transpiler", it does. It doesn't hurt to have a word that is more specific than others. Otherwise we should just refer to compilers as an "app" :)



> In the statement "XYZ is a compiler/transpiler", it does.

Okay. What important information is lost in "XYZ is a compiler" that would be gained in "XYZ is a transpiler"?

> It doesn't hurt to have a word that is more specific than others.

It can if the intent is not properly understood. And so far I'm not sure we do have that understanding.



I don't think anyone here is saying we shouldn't have the word "transpiler" at all, just that "transpiler" is a subcategory of "compiler" and there's no reason for OP to try to correct the title of this story.

It reminds me of how my 5-year-old son always corrects me when I tell him to get in the car—"you mean the van!". I have tried to explain to him that a minivan is a kind of car, and he's just about getting it, but it's been a challenge for him to grasp.



>I don't think anyone here is saying we shouldn't have the word "transpiler" at all

This thread chain is in response to jerf's comment "transpiler shouldn't be a word" (simplifying his comment for brevity's sake)



Eh, that's one possible reading, but their actual take is more nuanced than that:

> The word "transpiler" propagates the misunderstanding that there is something special about a compiler that emits machine code, that requires some special "compiler" techniques for special "compiler" purposes that are not necessary for "transpiler" purposes because "transpiling" requires a completely different set of techniques.

In context of the parent comment I read this to be a reaction to someone insisting that we use "transpiler" instead of "compiler"—more an observation of what is happening here than a call to stop using the word altogether.



It doesn't matter but I fully disagree with this. A transpiler emits code the user is supposed to understand, a compiler does not. At least that's the general way I've seen the term used, and it seems quite consistent.


There is a phenomenon I have observed many times where you can get a bunch of people in a room and make some statement, in this case, "Compilers are different than transpilers", and everyone around the table will nod sagely. Yup. We all agree with this statement.

But if you dig in, it will turn out that every single one of them has a different interpretation, quite often fatally so to whatever the task at hand is.

I mention this because my impression has been that the distinction between "transpiler" and "compiler" is that the latter emits some machine code and the former does not. I think if we could get people to sit down and very clearly define the difference, we'd discover it is not as universal a definition as we think.

My personal favorite is when I say a particular term is not well defined on the internet, and I get multiple commenters to jump up and tell me off about how wrong I am and how well-defined the term is and how universal the understanding is, while each of them gives a completely different definition. As I write this it hasn't happened in this thread yet, but stay tuned.

Anyhow, the simple solution is, there isn't a useful distinction between them. There's no sharp line anyhow. Plenty of "transpilers" produce things like Python that looks like

    def f000000001_bch(a0023, a0024, bf___102893):
        __a1 = f000000248_BCh1(a0024, const_00012)
        if __c_112__0:
            f0000000923(__a1)
        else:
            f0000000082(__a1)
and it's really quite silly to look at what can be a very large process and make a distinction only in how the very last phase is run, and on a relatively superficial bit of that last phase too.


2 hours later, I think it's safe to say there are multiple definitions in play that are, if not outright contradictory, certainly not identical.

It seems the term is not terribly useful even on its own terms... it is not as well defined as everyone thinks.

Ultimately, "compiler" isn't a bright shining line either... I can take anything and shade it down to the point where you might not be sure ("is that a 'compiler' or an 'interpreter'?"), but the "transpiler" term is trying to draw a line where there isn't even a seam in the landscape.



> the "transpiler" term is trying to draw a line where there isn't even a seam in the landscape.

I don't think you have proven that it is a seamless landscape. In fact, I think that people's definitions have been remarkably consistent in spite of their fuzziness. The heart of what I have read is that most people understand a transpiler to be an intermediate text to text translation whose output is input to another tool. The common colloquial definition of a compiler is a text to machine code (for some definition of machine code) translation whose output is an executable program on a host platform. You can make an argument that every compiler is a transpiler or every transpiler is a compiler, but I think it requires a level of willful obtuseness or excessive pedantry to deny that there is something behind the concept of a transpiler. This discussion wouldn't even be happening if transpiler were a completely meaningless term.



I've observed this sort of behavior frequently, is there a name for this phenomenon yet?

Something like "Assuming all concepts are universal to one's own peculiar definition"

Maybe "semantic egocentrism" could fit the bill?



And in the other corner you have Chomsky with universal grammar... and in another you have Platonic Forms...

I love the "draw me a tree" idea of a Platonic form: we all have an idealized model of what that is, one that is uniquely our own. With that in mind, isn't everything subject to some sort of semantics?



> A transpiler emits code the user is supposed to understand, a compiler does not.

No, a transpiler emits code that another system is meant to understand (often another compiler or interpreter). Whether a human can understand it or not is immaterial to the objective of transpiling.



> Does that imply that a compiler emits code that nothing can understand?

Bizarre take. No, compilers in the classical sense target byte code and machine code which is meant to be interpreted by a byte code interpreter or a hardware machine.

> Or are you saying that 'transpile' is no more than another word for 'compile'?

Yes. Compilers translate from one language to another. Transpilers translate from one language to another. Both have the objective of preserving the behavior of the program across translation. Neither has the objective of making something intended for humans as a general rule.

That transpiled code (if we draw a distinction) targets languages meant for humans to read/write means that many transpiled programs can be read by people, but it's not the objective.



> Bizarre take.

Bizarre in what way? If compilers are somehow different, then they mustn't target systems, as that's what your previous comment says transpilers do. Which leaves even your own classical definition to be contradictory, if they are somehow different. What does that leave?

> Yes.

But it seems you do not consider them different, which was the divergent path in the previous comment. But evaluating both the "if" and the "else" statement is rather illogical. The evaluation of both branches is what is truly bizarre here.



Yea, not sure I disagree with anything being said here. Though to me, transpiler typically just means it goes from one language I could write to another I could write. I don't necessarily expect to enjoy reading or perhaps even understand the JavaScript output from any lang that builds to JS, for example.


> transpiler just typically means it goes from one language i could write, to another i could write.

What possible compiler target couldn't you write? Compilers are not exactly magic.



Fair, by "could write" I meant one intended for humans to write. I.e. I would not say LLVM bytecode is intended for humans to write by hand. Can they? Sure.

The difference (to the parent comment) in my eyes is that the target language is the thing intended for humans, not the target output itself. As another commenter points out, transpiled code is often not intended for humans, even if the language is.



Machine code is intended to be written by humans. That was the only way to program computers at one point in time. Given a machine code target, would you say it is product of transpilation or compilation?


I would stand by my original statement, as I don't consider that "intended" or common by modern day standards. Humans hand-wrote binary for a while too hah.

If it's not clear, these are just my opinions. Not an attempt at an objective fact or anything.



> Humans hand wrote binary for a while too hah.

Like, as in flipping toggle switches? Isn't that just an input device? They were still producing machine code from that switching.



All compiler outputs are understandable. I suppose you mean with the intent of it being a one-time translation? As in, like when the Go project converted the original C codebase into Go source with the intent of having developers work in the Go output afterwards and the C code to be never touched again?

What is meaningful about such a distinction?



I'm fairly certain that source-to-source transpilers rarely use anything like BURS or any other sufficiently smart "instruction selection" (or instruction scheduling, for that matter) algorithms, because why would they; the compilers for the targeted language already incorporate such algorithms, maybe even of higher quality than the transpiler's authors are capable of writing themselves.


All compilers end up with local considerations. Instruction selection or register allocation is not a consideration special to compilers that "transpilers" do not need to have, they are specific considerations for that particular compiler target. A compiler to Go must consider Go's identifier rules, which does not apply to compilers targeting a CPU. A compiler to SQL must consider valid SQL syntax and have their own optimization concerns that don't apply to Go. And so on.

The middles all look very similar, though, which is where the heart of "compiler" comes from; that process of some sort of parsing and then transforming down to some other representation. This has a distinguishing set of characteristics and problems despite what frontends and backends get slapped on them.



Yeah, the top of the project says "compiles", then the readme says "transpiles". Perhaps the author was just trying to get all the SEO terms in there.

> And it's written in rust. Kinda unholy.

Agreed, it's like, do you really hate writing Go so much that you'll really write all that Rust to get out of it? Haha. Reminds me of the web frameworks which generated the JS so you didn't have to touch it, like GWT of old.

I'm sure it was a fun exercise to create Borgo, though.

My favorite transpiler is Haxe. It targets so many other languages, the surface area is impressive.

https://haxe.org/



Nothing unholy there. It's easier to transpile to a less constrained language. Transpiling to Rust would require using one of the GC crates or refcounting everything. Then you'd have to also satisfy the mutability and Send/Sync constraints. Go needs none of these things, so all the transpiler needs to care about is its own added constraints.


Thanks, I'd always thought targeted transformations -> transpiler, but it makes sense it's really a subset of general compiler functionality, sans binary output.


> but it makes sense it's really a subset of general compiler functionality, sans binary output.

Tell us more about this compiler subset that does not produce binary output. What do they produce? Ternary? Qubits? Nothing?



Thanks for your response, but the Go spec asserts that Go source is represented as UTF-8. UTF-8 is a binary encoding.

We're talking about compilers that produce something other than binary output.
