(comments)

Original link: https://news.ycombinator.com/item?id=40431835

In this thread, commenters express a desire for tools that simplify writing Bash or POSIX shell scripts, drawing comparisons to tools like MJML for HTML email. They criticize the complexity of current approaches and the need for external dependencies, and propose solutions such as letting the user define the environment available to the compiled script, or improving Bash itself. They also discuss the problems that arise when a language called Amber is compiled down to Bash, and some express frustration with people who avoid learning Bash despite being proficient in other programming languages. Overall, many argue for understanding and mastering Bash as the path to effective shell scripting.



I kind of wish this was a tool to author bash, or better yet very compatible POSIX sh (versus using bash as a compile target that a human isn't meant to read).

It's a bit of a stretch, but comparing it to something like MJML for HTML emails. [0] It doesn't make great HTML, but it's not unreadable and standard quality for HTML emails. It just handles all of the weird things you have to consider for maximum email client compatibility. It's basically a bunch of HTML macros, with react being an implementation detail under the hood. It's not like it's bringing any new features to HTML emails, because if it did, the output WOULD be unreadable.

If this was just some type hints and cleaner syntax around sh, that I think would be really useful. In its current state, this is just sort of a tech-debt-generating machine. If something is built in amber, it's because we can only use bash for some reason; but if we can use bash, we could either... use bash, or we could call something else (Ruby, Python, etc.) from a 2-line bash script.

[0] https://mjml.io/try-it-live (click "View HTML" to see outputted source.)

Edit: Idea, it would be cool if you could define the environment available to the final script when using the compiler. That way, if there was something you wanted to distribute, the person building your project could say "this docker container I'm running this in won't have sed, but it will have bc and awk" or something.



It isn't even pure bash, if it uses external commands like `bc` and `sed`. It should probably limit itself to posix shell (or bash, if they need to) and coreutils.

They are only listing `bc` and `bash` as a prerequisite, but the example uses `sed`, so this is also not complete. So a full list of all required tools would be the first step.

But I am working on embedded systems where I write lots of POSIX shell scripts that run in tiny initramfs, etc. so I am very picky with my dependencies. If I had a better language to target busybox, that would be welcome as well.



THIS. Why does this:
    if age < 18 {
      echo "I'm not an adult yet"
    } else {
      echo "I'm an adult"
    }
compile to this:
    __0_age=30;
    if [ $(echo ${__0_age} '<' 18 | bc -l | sed '/\./ s/\.\{0,1\}0\{1,\}$//') != 0 ]; then
      echo "I'm not an adult yet"
    else
      echo "I'm an adult"
    fi
instead of this:
    __0_age=30;
    if [[ ${__0_age} -lt 18 ]]; then
    ...
If you're going to compile to Bash, then use Bash-isms.

EDIT: `-lt` is POSIX, but tbf if there's no input sanitization, then bash (or sh) will choke on the float. In that case, as long as you aren't trying to round, you could use parameter substitution to truncate:

    __0_age=17.6;
    if [[ ${__0_age%%\.*} -lt 18 ]]; then
    ...
EDIT2: TIL that parameter substitution is POSIX [0] Section 2.6.2

[0]: https://pubs.opengroup.org/onlinepubs/9699919799/utilities/V...



There might be reasons to use `bc` like this if you don't know the type of `age` and it could be a non-numeric string. Buuuuut a programming language where a simple integer-comparison leads to two subprocesses is going to be slower than the slowest existing programming languages, by orders of magnitude.
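
A rough way to see that cost on any machine with bash and bc (timings vary, but the gap is typically a couple of orders of magnitude):

    age=17
    # the pipeline the generated script uses: two forks and a pipe per comparison
    time for i in {1..1000}; do
      [ "$(echo "$age" '<' 18 | bc -l)" != 0 ]
    done
    # native arithmetic: no forks at all
    time for i in {1..1000}; do
      (( age < 18 ))
    done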



So you want it to compile to POSIX sh but you're unhappy it doesn't use more POSIX-incompatible features? Maybe full POSIX sh compat is WIP, or in some simpler cases maybe it already is?



As I've just updated my comment to reflect, TIL that parameter substitution is supported (with some exceptions) by `sh`.

More broadly though, yes, there's a tendency for people to pop sub-shells without giving it a second thought. Those aren't free; use shell built-ins whenever possible!
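
A couple of everyday examples of the same result with and without a fork (the `${s^^}` form assumes bash 4+):

    filename=$(basename "$path")     # forks a subprocess
    filename=${path##*/}             # pure parameter expansion, no fork

    upper=$(echo "$s" | tr a-z A-Z)  # two processes and a pipe
    upper=${s^^}                     # bash 4+ built-in case conversion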



The alternative stance is that if it's already bad, can't hurt to make it worse ;)

Something you could do to alleviate this is to interleave comments with the original Amber source, along with line numbers (which would balloon the script size, but probably not a huge deal).

Edit: you could also bundle a tool to recover the original Amber source from those comments, to allow for easy in-the-field script edits and/or audits.
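
A very rough sketch of that idea (the `# amber-src:` marker is made up here, not anything Amber actually emits):

    # the compiler would interleave the original source as tagged comments...
    # amber-src:001 if age < 18 {
    # amber-src:002     echo "I'm not an adult yet"
    # ...and a trivial companion tool could recover it for audits or field edits:
    grep '^# amber-src:' script.sh | sed 's/^# amber-src:[0-9]* //' > recovered.ab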



i think they do this since there's no source map or symbol mapping support on the debugger (which is just bash errors lol)

would be nice if it was the source and script at the same time, as in a comment at the bottom or top that contains the source or something.



It's interesting Amber doesn't seem to be leveraging Bash-native Regexes via:
    if [[ "${foo}" =~ ^someReg[e][x][p][r]$ ]]; then ...
What value exists in going to such great lengths to target super old and antiquated versions of Bash? It becomes very limiting for this poor new language :-s

Maybe still just the earliest of early days? Javascript-style RegExs along the lines of:

    if (s.match(/my[R]egex[H]ere/)) { ..
Would be pretty nice and handy.


To my eye, this does not look good.

I consider myself to be quite proficient in bash. I write a lot of bash professionally. I've written a compiler (transpiler?) for a strongly typed configuration language that outputs bash.

There are three main problems I see here.

(1) Pipelines, subshells, and external dependencies.

The example on the website compiles a conditional `if age < 18` into this monstrosity:

    if [ $(echo ${__0_age} '<' 18 | bc -l | sed '/\./ s/\.\{0,1\}0\{1,\}$//') != 0 ]
Pipelines and subshells are "slow". Not tremendously. But it adds up. Also, what's the point of having types in your language if you're not going to use native arithmetic when applicable:
    if (( __0_age < 18 ))
Perfectly fine to alert users that floats will require an external dependency (such as dc), but integers will be handled in native arithmetic contexts.

Seems mildly insane that a conditional this simple would require a pipeline including `echo`, `bc`, and `sed`. Now the user needs to ensure the correct external CLI tools are installed, and match the expected versions. Hope they account for BSD vs. GNU variants. Eek.

(2) Namerefs.

> Arrays cannot be nested due to the Bash limitations

Arrays absolutely can be nested... with some trickery. Consider:

    # Top level array w/ pointers to children.
    declare -a arr1=( arr2  arr3 )

    # Child arrays.
    declare -a arr2=( item1  item2 )
    declare -a arr3=( item3  item4 )

    for ref in "${arr1[@]}" ; do
       declare -n sub_array="$ref"
       printf '%s\n' "${sub_array[@]}"
    done
The `declare -n` line is where the magic happens: https://www.gnu.org/software/bash/manual/html_node/Shell-Par...

This is a fairly trivial operation that I'd assume the authors would know about. It's fast and allows for the creation of arbitrarily complex objects. Even rudimentary classes.

(3) Just learn bash.

I'm perpetually confused by people taking every opportunity to not just... learn bash. An otherwise skilled programmer will somehow forget everything they've learned when beginning a shell script.

Surely it can't be harder to learn the fundamentals of shell scripting than to learn whatever flavor-of-the-month alternative is?



> Just learn bash.

There are many reasons why it is to be avoided. For me the deal breaker was that one time I spent 2h debugging a script only to find out there was an extra space character that made the whole thing break silently.

I would be willing to learn a sane language, but bash isn't one. If you need to know of a million tiny gotchas to implement even the simplest task safely and portably, then there isn't any reason to _learn_ bash.



> I spent 2h debugging a script only to find out there was an extra space character that made the whole thing break silently.

When I was learning C++, I lost 2 days because of an extra ";" in an if statement. However this does not mean C++ is an insane language. It means it has quirks.

> I would be willing to learn a sane language, but bash isn't one

I kindly, but strongly disagree.

> If you need to know of a million tiny gotchas to implement even the simplest task safely and portably

Then you'd absolutely hate Perl, to the level of pulling your or The Camel's hairs out.

This is how programming languages work. They have straights and curvy parts, and there are some chicanes (quirks). So yeah, if you need to use bash, you need to learn it.



You're right. To be fair, I'm absolutely immune to these kinds of quirks in any technology, and just adapt. This causes me to dislike no languages, and just work with them if they fit to the problem I have at hand.

This trait baffles a couple of my friends to no end.

BTW, That's a great link. Thanks a lot.



As carlinigraphy points out, shellcheck [0] exists, and can easily be put into pre-commits, a CI pipeline, etc. This would have almost certainly flagged your problem immediately.

> I would be willing to learn a sane language, but bash isn't one.

It's a general language that has to be both an interactive interpreter and script executor, and it needs to support a huge variety of architectures and kernel versions, as well as historical decisions. It's going to have some cruft.

[0]: https://www.shellcheck.net/



> need to know of a million tiny gotchas to implement even the simplest task safely and portably

While this is clearly exaggeration, I'm not sure I find much merit in the argument.

C is full of gotchas, footguns, and a tremendous threat vector. But it is also the standard for many uses.

Yes, bash has legacy cruft, and potentially confusing ways of doing things. But for the realm in which it operates, it is extremely effective. We can make an argument for what we believe the perfect shell scripting language would be--but that doesn't exist. Bash is the current tool for addressing this category of problems.

The intention of this bullet point was to illustrate that just as I wouldn't work in an industry/role requiring C tools, and instead turn to some weird, novel, riddled-with-problems transpiler... I'd just learn C.

(P.S., bats[0] and shellcheck[1] address many problems noted in this thread.)

  [0] https://github.com/bats-core/bats-core
  [1] https://www.shellcheck.net/


> While this is clearly exaggeration, I'm not sure I find much merit in the argument.

> C is full of gotchas, footguns, and a tremendous threat vector. But it is also the standard for many uses.

(I'm not GP)

Yes, and the same way there's a plethora of bash-alternatives in the making, there are multiple languages trying to essentially be a "better C", such as Go, Rust and so on, not to mention C++ and its descendants.

I definitely think there's room for an improved bash. The biggest question is whether an alternative can become ubiquitous enough to be a fully qualified alternative.



> I definitely think there's room for an improved bash.

Absolutely! I wholeheartedly agree, and would love for such a project to exist, and see sufficient adoption such that it's a standard at my job.

I may not have articulated my prior point precisely enough.

Too often I'll see a professional JS programmer (for example) who will forego everything they've learned the second they touch a shell script. No useful comments, no functions, poor design & implementation, no care towards idioms, no regard for readability/maintainability.

Given that bash is currently the standard, someone is better served by actually learning it than a worse, novel alternative.

Any successor must encompass the power and feature-set that exists within contemporary shell scripting.



The reason why it is attractive is to enable people who aren't ever going to get proficient at writing/debugging bash/sh to the level that you're at, to write enough bash/sh to get their jobs done both quickly and safely.

The majority of people doing DevOpsey kind of stuff aren't remotely experts with what they're working on, and won't ever be experts, but work still needs to get done.



Maybe if those people started visibly failing at their jobs, they'd either upskill or be forced out, and then the magical hand of the market would raise salaries for the rest of us.



Oh look you're basically me in 2006 thinking that all the sysadmins that couldn't be bothered to learn a real programming language needed to be replaced by people in India who weren't intellectually lazy (because I was a bit of an asshole and wrong).



I have learned a bit of bash, and that's what has always kept me going. There's no point in learning a million little inconsistencies, none of which are even documented anywhere like an MDN or MSDN, if you will.



That's about where I gave up too. You make it sound like you disagree with me when in reality you do. There is no real reason to write any production code in bash or any other shell language.



Bash is for automation, not building services. It's used heavily in sys admin and devops tasks

The reason to learn it is that it can make your dev experience better. Make falls into this category as well. Both are great ways to combine multiple tools together.

That being said, I'm using more Dagger for this purpose these days, which provides you SDKs for your fav langs.



> There are many reasons why it is to be avoided. For me the deal breaker was that one time I spent 2h debugging a script only to find out there was an extra space character that made the whole thing break silently.

This sounds like using `a = b` instead of `a=b`, something you learn in an introductory Bash tutorial. It's not a good argument to say "I didn't learn Bash and it cost me 2 hours, therefore I should continue to not learn Bash."



You know that your shell environment is not forced on you, right? You can just switch to ZSH, fish, etc.

The only time this complaint is valid is if you're in a professional environment and you're not allowed to use alternate shells in your account, for some reason.



> The only time this complaint is valid

I've dealt with bash scripts running in high load production systems and I'm very happy I don't have to any more.

There is a reason my comment was a response and not a standalone statement. The OP was explicitly speaking about using it in a professional setting.



> The OP was explicitly speaking about using it a professional setting.

Note the little asterisk at the end there?

I've worked as a systems administrator/engineer and a DevOps Engineer at quite a few companies. Very few of them cared what shell your personal account used (and usually had zsh, if not also fish). And they wouldn't run shell scripts in production; but if they did allow that, they probably wouldn't care which one you included in your shebang (as long as the package was in our standard deploy set).

Obviously that doesn't represent all places, thus why I added the caveat. However, the number of places that A) allow you to run shell scripts in production and B) are also so overly paranoid as to limit user account shell interface are probably the exception, not the rule.



> ... learn bash.

I've been writing bash code regularly for my entire 20 year career. I'm not convinced that I've "learned bash" yet.

Bash has too many different ways of doing things, and it can be hard to determine which variant is the right way.



Agreed. For instance, the piped bc sed thing's error handling is apparently broken, and no one has pointed that out yet. pipefail and $() don't play nice with each other:
   #!/bin/bash
   set -euo pipefail
   echo $(false|true)
   echo a
   echo $(false)
   echo b
   false
   echo c
Prints 'ab'.
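
For completeness, the failure does propagate if the substitution is assigned to a variable on its own line, which is one way a code generator could sidestep the problem:

    #!/bin/bash
    set -euo pipefail
    out=$(false)   # the assignment's exit status is the substitution's status,
                   # so set -e aborts here
    echo "never reached: $out"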


> Also, what's the point of having types in your language if you're not going to use native arithmetic when applicable:

I would expect it uses bc instead of relying on bash's environment/version-dependent 32/64bit integers.



I've done (some? a lot?) of bash scripting and failing correctly in the absence of dependencies is really important. And once you've incorporated them you've destroyed what was good about targeting bash in the first place, because you're back to giving the user a configuration management problem, and if you were going to do them, you might as well have them install $better_language anyway.
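
At minimum, a generated script that insists on external tools could fail loudly up front; a sketch of such a guard (the tool list is illustrative):

    for dep in bc sed; do
      command -v "$dep" >/dev/null 2>&1 || {
        printf 'error: required tool "%s" not found\n' "$dep" >&2
        exit 1
      }
    done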



> Surely it can't be harder to learn the fundamentals of shell scripting than to learn whatever flavor-of-the-month alternative is?

I've been using GNU/Linux for ~10 years, and Bash never clicked for me. Meanwhile, picking up other (not all) programming languages in the meantime occurred naturally, without much effort. At the same time, some of my friends picked up unix shell scripting just fine, even if typical programming is harder for them.

I suspect some people have trouble with the syntax used by Bash, and languages like Perl. But I can't pinpoint what causes it.



Yeah, I stopped reading at that conversion. If it cannot handle a single "<" imagine more complicated logic, it would make debugging it a true nightmare.



My reaction was that the bash version is a monstrosity indeed and I got shudders thinking about being an on-call operator getting a page and then having to read and understand what the hell is wrong with this script.



I was hoping for a solid way to handle subprocesses, and especially how to handle failures. They have something in the docs, but it seems a bit underwhelming, in terms of how status codes and failures are handled. It’s probably better than straight bash, but I feel like there’s more to be done here.

Also: I don’t see mentioned how pipes affect the exit code propagation (i.e. pipefail).



I love the concept of a language like this, though I haven't evaluated amber in detail.

I want to do more complex things using bash for devops stuff, because it's mostly working with other command line tools. You could directly use APIs and SDKs of cloud providers, but then things become an order of magnitude more complex than just writing the bash script.

However, when you do that you encounter bash's weaknesses, like not supporting nested structures (a Map of Maps, a Map of Lists) and so on. Having to wrangle tools like jq, sed and awk all together to work around the lack of proper data structures and string manipulation is no fun either.

Something like amber could bridge the gap while still being portable across systems.
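
For what it's worth, the usual jq workaround looks something like this (JSON shape and names are purely illustrative), which works but is exactly the kind of wrangling described above:

    config='{"hosts": {"web": ["10.0.0.1", "10.0.0.2"], "db": ["10.0.0.3"]}}'
    jq -r '.hosts.web[]' <<< "$config" | while read -r ip; do
      echo "deploying to $ip"
    done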



One approach I've used for running arbitrary code, with arbitrary dependencies, without having to rely on much support from the host system, is to create an Arx file (a shell snippet followed by tar gz data, which is self-extracting/executing) using `nix bundle`. On one hand it feels like a "poor man's container image"; but it's better in some ways since it doesn't need any daemons, etc. to be running.

For example, we had an EC2 machine whose init would fetch and run a shell script from S3, which spun up a big JVM application. We started getting strange errors from deep inside the JVM, which seemed to be caused by some security update somewhere in the stack (either ours or Amazon's); but that machine image had evolved via snapshots of snapshots, etc. and was a pain to figure out what it was running, let alone update and manage it in a sane way. As a quick fix I bundled a known-good JVM into an Arx file (along with other dependencies like Bash, etc. since it's Nix, so I might as well!) and put that on S3 in place of the old shell script. That got the EC2 machines working again, without having to mess with the (already messy) image; we gained a more sensible way of specifying dependencies (via Nix); and, since we were now building an artifact, rather than slinging a raw script, we could do stuff like running Shellcheck on the code. This definition was also easy to re-use when we eventually switched it to ECS containers (just changing the `nix bundle` options).
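
If memory serves, the basic invocation looks roughly like this with flakes-enabled Nix (hello is the stock example package; details may differ across Nix versions):

    nix bundle nixpkgs#hello   # produces a self-extracting ./hello in the current directory
    ./hello                    # runs on a host with no Nix installed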



There are lots of new shells around that do support everything you're describing. Eg:

- nushell

- elvish (was featured on HN recently)

- powershell

- murex (disclaimer: I’m the author of that one)



If I’m going to install a tool to run my code, why would I pick any of these alternatives instead of going straight to a “real” programming language?

Most of the value in sh to me is that it's ubiquitous. I write a script and the runtime is already available across most devices, operating systems, containers, and everywhere else without any extra steps.



This is a well-informed observation.

It's unfortunate Amber doesn't have a Map data type, pretty big limitation.

I can think of ways of doing it, e.g. something along the lines of:

* Set the constraint that all keys must be of type String

* Then abuse /dev/shm/.Amber/${pid}/${func}/${stackDepth}/
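
A bare-bones sketch of that approach (function names made up; keys restricted to filename-safe strings):

    map_dir="/dev/shm/.Amber/$$/mymap"
    mkdir -p "$map_dir"

    map_set() { printf '%s' "$2" > "$map_dir/$1"; }
    map_get() { cat "$map_dir/$1" 2>/dev/null; }

    map_set color amber
    map_get color   # prints: amber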



I sort of see that, but at the same time why not just compile to python instead? I get that python can be less convenient syntactically for "scripting" type things, but if you're writing in what's effectively a DSL that gets transpiled for a different interpreter anyway...



I am wondering why not take an existing language like python, and compile that to bash?

Also for this to be really unique and good it needs insanity like being able to avoid all external dependencies, or else idempotently installing them in ways that can not affect anything else externally, or even temporarily, in multiple crazy various environments including nix? my mind never shuts up



> I am wondering why not take an existing language like python, and compile that to bash?

You'll run into the Alternative Implementation Problem

https://pointersgonewild.com/2024/04/20/the-alternative-impl...

https://news.ycombinator.com/item?id=40337036 - 10 days ago, 84 comments

Relevant excerpt:

> You may read this and think that the key lesson of this post follows the old adage that “if you can’t beat them, join them”. In some ways, I suppose it does. What I want to say is that if you start a project to try and position yourself as an alternative but better implementation of something, you are likely to find yourself stuck in a spot where you’re always playing catch up and living in the shadow of the canonical implementation.

This concept doesn't exactly map 1:1, but the gist seems correct. Who wants to program in a worse and very limited version of Python? Who's going to keep things up to date in e.g. Amber as Python continues to evolve? Not fun.



i guess the idea is that you can scp the output script to a machine that has only a minimal *nix install and run it there without having to install anything more. so bash, bc and sed but not python or ruby



It’s actually fork() that’s slow, rather than Bash itself.

If you were to write Python code that did all of its business logic by spawning new processes rather than using its core libraries (which are written in C) then you'd find Python would be just as slow too.

But the exact point of Bash and other shells is to make launching new processes easy. This is the point that people miss when they say “write it in a real programming language”. Shells solve an entirely different problem to other languages. And “real” programming languages suck at working with other executables in a quick and convenient way.
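
The fork cost is easy to demonstrate: the builtin and the external command both do nothing, but one pays a fork+exec per iteration:

    time for i in {1..1000}; do :; done          # builtin: a few milliseconds
    time for i in {1..1000}; do /bin/true; done  # external: orders of magnitude slower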



Same for me. There are still plenty of unused gemstone names for programming languages, so perhaps the authors may still decide to change their name.



At first I felt the same way, because as a language designer I feel there are enough good names out there that new projects should be able to go find one that isn't taken.

However, having looked at this case I'm not against it for several reasons.

First, it looks like the project has been abandoned. There hasn't been an issue closed in 2 years, and the latest copyright date is 2019.

That might not be enough to sway me, but then you have to realize the project doesn't even really call itself a language. They are "Amber Smalltalk". This is an implementation of Smalltalk, not a language itself.

  Amber is an implementation of the Smalltalk language that runs on top of the JavaScript runtime.

  Pharo Smalltalk is considered as the reference implementation.
So you see this is an implementation of Pharo Smalltalk that runs on JS.

Finally, and the nail in the coffin, over the past couple years in the mailing list there has been little activity -- nominally a monthly message from a single user advertising the "UK Smalltalk User Group Meeting" which sounds interesting but not really about Amber Smalltalk as a project per se.

No one is using it, no one is working on it, and it wasn't a language in the first place. I would say that it's fine for an actual language project to pick up the name.



I haven't seen a mention of the Oil Shell (https://oilshell.org) project's OSH/YSH yet and I'm quite surprised.

Oils' goal is that OSH is just a new Bash implementation (although not bug-for-bug) but with an upgrade path to the more modern YSH for real programming with type safety etc., but still as a shell language. One of their exciting ideas is using code as data in a somewhat lisp-like manner to allow function introspection in YSH etc.

Based on other comments it seems like Oil Shell is much more principled in working to actually understand Bash as it exists and presenting an alternative language. I would be interested in what differentiates Amber and whether they have a response to Oils?



Osh is stricter than bash. Anything written to osh should run in bash with no issues whatsoever.

Ysh implements new, non-compatible syntax and semantics.

Osh and ysh are two different shell languages from the Oils for UNIX project. The release build generates a single binary, oils-for-unix, and installation creates two symlinks for the two different shell personalities, similar to how the bash binary will behave in its POSIX-compliant mode when it is invoked as sh.

There is no "Oil shell" now, at least not without ambiguity.



It's an entirely different problem; the problem being solved here is not "use a better shell than bash". It's targeting bash as a universal runtime because it's (kinda sorta) ubiquitous.



What’s the use case for that though? Nobody writes shell because they want a shell language. It’s written purely out of necessity because that’s the lowest common denominator installed on all systems. Something that Amber here provides a potential exit path out of.

Oil would have to reach a really critical mass before it can compete, which is sort of a catch 22 situation.



oil's idea is that you do want a shell language (e.g. if a lot of what your code is doing is job control and manipulating unix pipelines I can see shell being the best language to express that neatly in). you just want a better shell language than bash. but there is a ton of existing investment in bash itself, and things like nushell throw that away to start from scratch, and oil says what if we build a better bash instead.



This looks very promising! I've been wanting something like this for years, a nicer shell scripting language which compiles to Bash so I can run it almost everywhere. I don't think Amber is quite there yet, missing some pretty essential shell features, but it's off to a great start. I'll be keeping an eye on this.

Maybe this is still early in development, but I don't see anything for pipes or redirection. Is this a planned feature? That's an essential feature of any shell and I can't see myself getting much done in Amber without it, especially pipes.

I suggest adding more examples to the home page, especially side by side comparisons with Bash. The home page is somewhat lacking in content right now.



I like the idea, but the example suggests that it invokes external programs for trivial stuff, like comparing a variable to an integer (invokes bc and sed). I would prefer if it compiled to pure bash, and only invoked external programs if explicitly called for. But I guess emulating some features in pure bash is hellish; that's exactly why I don't want to write it manually either, though.



I think this is a pretty interesting idea. I could see myself using this on one of my previous embedded jobs; we needed to stuff a bunch of complex logic into our initramfs and the only interpreter we could really fit in there was the sh from busybox. We ended up with a pretty hairy shell script that probably would have been nicer to write in Amber.



I don't love the $...$ syntax for executing commands. Using $ as a string delimiter is very strange to my bash-accustomed eyes.

It's a shame that they provided such weird syntax for the most important thing you tend to do in a bash script, while providing fairly nice syntax for everything else.



and i don't like how it's inconsistent that echo is not using the $ syntax (which makes sense internally, as it's a built-in, rather than executing the $echo command).

Overall, it is cute and neat, but i find that if you are looking to write bash scripts that require this level of programming, you'd be better off writing it in python, or perl. Only in very austere environments can this be utilized, but the requirement of having `bc` installed means you must also have the ability to run package installation, so might as well run the package installation for a full on programming language!



I think that this does fill a niche. You can still compile to bash outside of this austere environment, and run the scripts within it. And python isn't very ergonomic for running external shell commands (or, say accessing environment variables), the syntax for doing so in amber looks much neater.



> the requirement of having `bc` installed means you must also have the ability to run package installation

I don't remember ever installing bc, but I use it frequently and it's always there. Are you sure it's not already part of most base systems?



> I don't remember ever installing bc, but I use it frequently and it's always there. Are you sure it's not already part of most base systems?

In my bash scripts, using `bc` makes my script not work on git-bash under Windows. Almost everything else I do in a script that isn't linux-specific (including netcat/nc usage) runs in git-bash for Windows.



This is interesting, but I don't see why I would use this instead of Python or Go? With those I'd get portability, modern features, etc but with the added bonus of a powerful standard library and ecosystem.

Don't get me wrong, I hate writing shell scripts, so this is definitely an improvement over that. It just doesn't seem like the most practical solution (unless I'm missing something).



Bash is ubiquitous and stable, making bash scripts incredibly portable.

All of the other languages you bring up are great for authoring code, but have a non-zero amount of friction when running code in the wild. Python may be omnipresent, but you can rarely count on a specific version or the presence of specific libraries. Go requires compiling platform-specific binaries. Even JVM- or JS-based software requires installing a separate toolchain first.

If you want to write some code (e.g., a launcher, installer, utility code, etc...) that is almost certainly going to run on any computing device created in the last several decades, bash is your language.



I didn't realize we were going so far back. In that case, Perl may be more convenient than Python/Go, and almost certainly a better choice than bash.

Still,

> If you want to write some code (e.g., a launcher, installer, utility code, etc...) that is almost certainly going to run on any computing device created in the last several decades, bash is your language.

Can you give an example of a "several decades" old device for which you'd want/need to write a launcher or installer?



A few days ago, I tried running some code that hasn't been updated in about 5 years. The python launcher has bit-rotted, so now I need to rewrite it. The other 99% of the project compiles fine.

Things like perl (without CPAN) and bash generally take backward compatibility more seriously than python does.

My experience with python (even ignoring the 2 to 3 debacle) is that you can run code on machines that were setup +/- six months from when the software was written. That's unacceptable for non-throwaway use cases unless your company is willing to burn engineering time needlessly churning software that's feature complete and otherwise stable.



Why doesn’t Amber have equivalent issues? Such as depending on a specific version of bash, or specific executables to be installed.

Also does bash run on windows outside of WSL? Amber seems to argue that it doesn’t support Windows because Windows doesn’t support bash.

That would cut against the idea that bash can target more devices than Python, which runs natively on all platforms.



> Why doesn’t Amber have equivalent issues? Such as depending on a specific version of bash, or specific executables to be installed.

Because it's easy (for most cases) to write backwards-compatible or portable shell scripts and, since Amber is compiled, it can simply generate backwards-compatible code.

> That would cut against the idea that bash can target more devices than Python, which runs natively on all platforms.

The point is that Bash is more ubiquitously available, which is important if you write something like an install script.



"pretty ubiquitous" is what I'm referring to here. OP seemed to imply that because the other options have "non-zero friction" that targeting bash has zero friction. But you have to make sure bash is there, and if it's not, you have to install a toolchain that may include an entire operating system.

I guess I just don't understand how having the user install git bash or WSL is any different from having them install Python or the JVM?



Amber doesn't have equivalent issues because bash and the utilities it uses like bc and sed are incredibly stable. I've found nontrivial shell scripts I wrote decades ago that still run entirely unchanged.



GNU and BSD tooling differs in small, but sometimes breaking ways. One example off the top of my head is that GNU sed accepts `sed -i`, but BSD requires `sed -i ''`, i.e. an empty string to tell it not to back up the existing file. Or GNU awk having sorting capabilities. Etc.
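
Concretely:

    sed -i 's/foo/bar/' file.txt     # GNU sed: in-place edit, no backup
    sed -i '' 's/foo/bar/' file.txt  # BSD/macOS sed: the backup suffix is mandatory (empty = none)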



That only applies to platforms on which those utilities already run. We are talking about portability here, so that means Windows, and those utils don't run on Windows. So you're left with git bash, which isn't bash and isn't running the same utilities; and WSL, which requires installing an entire operating system.

So I ask again, why does targeting bash offer a better portability story than say the JVM?



I suspect we may have different ideas of the use case here. To me Amber is not a language I would develop an application in. I would use it in the same places I currently write bash.

Given that, my production systems are likely a big target. None of my production systems have a JVM/JRE installed, and installing one just to run shell scripts would be (IMHO) a huge increase in attack surface for little to no gain. It would also bloat the hell out of my container images.

If I'm writing a GUI application or a web server or something, then I would agree JVM is more "portable." But if I just want a script that will run equally well on Ubuntu 18.04 and Fedora 40, and across all production machines regardless of what application stack is there (node.js, ruby, python, etc), and regardless of what version of node or python or ruby is installed, Amber feels highly portable to me.



But whenever people talk about writing Bash, you always have someone advocating that you should write POSIX sh instead if you want maximum portability. To paraphrase Qui-Gon Jinn, "there's always a more portable language."

Underlying all of these discussions is an attempt to reduce issues of portability to 0. It's a good goal, but IMO interpreted languages will by-definition never be the solution.

I've started reaching for Go in situations in which I'd usually reach for Bash or Python, and it's a godsend, because I can control for portability at compile time. If you make compiling for various GOOS and GOARCH targets the norm, portability is never an issue again.
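
For example, building the same tool for a few common targets from one machine (package path is illustrative):

    GOOS=linux  GOARCH=amd64 go build -o dist/mytool-linux-amd64 ./cmd/mytool
    GOOS=linux  GOARCH=arm64 go build -o dist/mytool-linux-arm64 ./cmd/mytool
    GOOS=darwin GOARCH=arm64 go build -o dist/mytool-darwin-arm64 ./cmd/mytool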



> In Amber, there is a rule that allows you to use a : symbol where you intend to write only one statement, wherever you can write a block of code. This can be handy when you want to perform multiple conditions with a single statement each.

This feature, optional omission of delimiters in the special case of a single statement per branch, originated in early C and has since facilitated untold numbers of bugs, not least the spectacular Heartbleed bug (which has its own homepage, https://heartbleed.com).



Maybe I lack a bit in creativity: but what purpose does this serve? Almost all machines have a python or even perl interpreter if you really need to go there.



What you think is ubiquitous is actually pretty narrow, and your perspective is biased based on your own experience; by "machines" you could mean containers, VMs (of a dozen different virtualization technologies), on a bunch of OSes and OS variants let alone distributions of same. You probably think something like an embedded Linux on a fleet of soft switches is a "niche case" but in the real world people work on these kinds of things all the time.

I don't think Python is as ubiquitous as you think it is; but regardless, mere presence is not the only problem. What Python are you even talking about? Which version? How is the user supposed to handle dependencies?

If something like this has utility, it's to minimize the configuration management problem you're giving the user. "Use Python" is definitely not the answer to that.

This doesn't seem to be a terribly good implementation of the idea, but it's not a bad idea, on its own.



Both tools look useful (although I lean toward Amber's ECMAscript syntax). I liked this comment from batsh:

> Both Bash and Batch are messy to read and tricky to write due to historical reasons. You have to spend a lot of time learning either of them and write platform-dependent code for each operating system. I have wasted lots of time in my life struggling with bizarre syntaxes and unreasonable behaviors of them, and do not want to waste any more.



Something I noticed when I went to copy/paste something off the web page for this comment: at least some of the text on this page is in the form of an SVG?! That just feels gratuitous to me, particularly given how annoying it is to not be able to copy it :/.

https://amber-lang.com/pipelines/compile.svg

It also, hilariously, feels a bit fitting given my actual complaint ;P.

    if [ $(echo ${__0_age} '<' 18 | bc -l | sed '/\./ s/\.\{0,1\}0\{1,\}$//') != 0 ];
Look: I appreciate that this is a shell script, and they often aren't "fast", but if I'm writing code in a script and have a comparison, it at least might be in some tight loop, and the performance difference of spawning bc and sed in a subshell is frankly brutal.
    if [ "${__0_age}" -lt 18 ]; then


I think a superset of bash with strong types makes more sense than a brand new language that compiles to it. This is what makes typescript so successful (aside from other things) is that you can just drop it into your project and incrementally adopt it.



I don't share the "oh my, why bash, why not English then?" sentiment in the comments.

I've done a bunch of DSLs (CLIs with some elaborate syntax really) that compile to bash as a target. It just works for me. Bash is always available; the constructs you can use are there (coreutils are mostly enough for the primitives, and xargs for parallelization). It has been great so far for basic cases. Where things get complex (as others have said) is handling failures, errors and intrinsic cases. That's when you're reminded why people didn't stick with bash in the first place and we got other scripting languages.
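
The primitives really are there; e.g. parallel fan-out over a host list is a one-liner with xargs (hosts.txt is illustrative):

    xargs -P 8 -I{} ssh {} uptime < hosts.txt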



Worth it.

With Amber (and anything like it) you need an Amber compiler to be able to work on the script. And half of the reason to use a "script" is the ability to edit them if need arises.

Also, since I'm 100% Nix now, I require `nix` installed everywhere anyway, and if there's `nix`, anything else is available in a reproducible way.



I think this is a cool concept and all, but honestly, at this point, why not just use Python3 as your project's scripting language, and reap all the benefits of a simple, typesafe/autocomplete-ready language with broad IDE support, and no intermediate 'compile step' which generates "don't touch" code?



I can see some very niche cases where an embedded system has bash not much else and "taking the most you can out of bash" could end up being useful.

besides that? probably nothing.



This really can only work if the error messages map back to the right line number, right? I used to f with transpilation and if you can't get the error line at least, it's a really huge overhead for not much gain, given that transpilation can only move the semantic needle so far.



Impressive project.

But I always feel that if you reach a certain complexity in your bash scripts, you should rather pivot to any other $lang.

Bash scripts should be simple glue between programs (unix-principle).



> Bash scripts should be simple glue between programs

If a script is literally just running a list of commands and occasionally piping the output from one to another, you may as well make it POSIX compliant (therefore a sh script) since you're not using any features of Bash anyway.

> But I always feel that if you reach a certain complexity in your bash scripts, you should rather pivot to any other $lang.

This is a common critique, but what other lang?

I make Bash scripts all the time for system administration tasks, and they largely just werk™ as long as you have Bash (90% of the time pre-installed) and the necessary commands. From there, any command can be readily called, piped, or saved to variables/arrays, and Bash has powerful (enough) native math and string manipulation capabilities.
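
For example, all of this is native, with no subprocesses involved:

    path="/var/log/app/server.log"
    echo "${path##*/}"        # server.log (basename without forking)
    echo "${path%.*}"         # /var/log/app/server (strip the extension)
    echo "${#path}"           # string length
    echo $(( (7 * 24) + 3 ))  # integer arithmetic without bc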

Meanwhile with Python there's always some hassle installing dependencies with pip(x) and virtual environments, plus the unfortunate rare Python 2 encounter where you either gotta rewrite the whole thing in python 3 or figure out how to install the old Python 2 runtime and 3rd party python 2 libraries used in the script on your distro.

I don't see the appeal of something like the OP though. I can't imagine the "precompiled" scripts are so much more readable that it's worth the incomprehensible mess it appears to generate with the "compiled" script.

Why are people so afraid of Bash?



I have to agree. Bash works fine in its environment, and while there are times when another language might be more appropriate for whatever reason, Bash works. I can run it on any box I have access to and if I stay away from utils that aren't preinstalled, I don't need to worry about dependencies. I've found that python is far more version and dependency driven.



GitBash is a POSIX environment that includes the titular bash, and it doesn't include bc, which is a direct counterexample to your original assertion. Lots of people who are stuck on windows for one reason or another still want or need to use bash, and that's why they use GitBash!

Clearly, if Amber did not have a dependency upon bc, it could then potentially provide value to those users.



I sort of get that but does this hypothetical system not have a c compiler? If it does, you can compile and install lua in probably under 5 seconds and actually have a sane language with respectable performance to target.

It also seems odd to me that a place that would be so rigid about installing a tiny language runtime would be ok with checking in the artifact of an experimental programming language.



Great stuff. I hate having to write bash scripts and this seems like a great way to create scripts that will run on even old linuxes. Seems that there are a few sections missing in the documentation, stuff like the standard library and the extended documentation about handling failure, very alpha indeed :^D. I'll definitely be keeping an eye on this one.



There is a typo in the docs:
    Notice that when executing a single expression, Amber automatically includes it's standard library for you.


Rust, except using existing executables (or dynamic libraries) instead of downloading packages and waiting ages for it to compile? Seems like a great idea.



Adding to your point, I also instantly regarded Amber as an alternative to other higher-level languages. Especially scripting languages that require elaborate ahead-of-time setup. Amber will surely score some points here due to its portability in the Linux world. But other commenters seem to see it as an alternative to writing Bash scripts. So which will it be?



The homepage is good looking and I applaud them for that, but I gotta admit, it's almost too dark for me to see some of its content. I think grey text on a black background is a risky move.



What I really want for scripting use cases is a language that has modern language concepts (easy to use arrays and maps, text formatting strings, etc) but that allows me to call commands as easily as I could call functions.

Maybe there are existing scripting langs that make this so? Amber's approach is not bad but I feel it could be even better.



It's generally quite doable to write functions for that, e.g. in perl -
  use IPC::System::Simple qw(capture);
  use Sub::Install qw(install_sub);
  
  foreach my $command (qw(foo bar baz)) {
    install_sub({ code => sub { capture($command, @_) }, as => $command });
  }
  
  ...
  
  my $output = foo($x, $y); # will throw if calling 'foo $x $y' returns non-zero
(there's a Shell.pm but it's not as helpful with errors; I should probably consider rewriting it in a more modern way)

Note that if you're distributing code p3rl.org/App::FatPacker will produce a single fat script that has (pure perl) dependencies included so you don't need to install anything (except for having a perl VM in $PATH).

There's also multiple libraries that provide a '$' function in JS (which may be why amber picked that delimiter) and then you can do

  let output = await $`foo ${x} ${y}`;
(the template string's variable parts get autoquoted)

Note that bun.sh has a 'bun compile' subcommand that bolts together your (pure JS) code and the static bun binary to produce a single file you can also copy around.

I'd suggest avoiding backticks in ... everything. In shell, $() is nicer, and both perl's and ruby's backticks require effort to use safely.

No idea re python but I see no reason you couldn't do the function generation thing if you wanted to, and somebody's probably librarified it already.



Have you taken a look at PowerShell? It can infer whether you're invoking a command or function.

Disclaimer: I work at Microsoft, but not on PowerShell.



I love the idea, but having spent so much time learning the ins and outs of bash I feel like I’d rather just use bash. To say nothing of the Lindy Effect. Bash will probably be here in 50 years. This probably will not.



> Variable declarations can be overshadowed - this means that you can redeclare the existing variable with different data type in given scope if you need to.

I'm having a hard time deciding why I would want this. It seems more likely to result in bugs than being a useful feature.



It's a common practice in Rust at least, where instead of having a mutable variable which you modify across several lines, you declare a new immutable variable with the same name on those lines. I like it, but I guess it really just comes down to preference and what you're used to.



I quite like that pattern, but I think 'with a different datatype' should result in a small gnome climbing out the back of the computer and hitting the developer with a mallet.



It works the same way in Haskell, eg
  main = do
    let x = 2
    let x = "foo"
    y <- pure 3
    y <- pure "bar"
    putStrLn $ x ++ y
which is really the same as
  main =
    let x = 2
    in let x = "foo"
       in pure 3 >>= \y ->
                       pure "bar" >>= \y ->
                                        putStrLn $ x ++ y
So it works pretty naturally where each assignment is kind of like a new scope. If the type system is good, I don't think it really causes issues.


It's useful in Rust where you might parse input as a string and then convert it to a number.

It's better than having a variable named age and another one named age_num or the opposite, str_age and age.



The Amber installer is itself created from Amber. I suppose that makes sense....

I would prefer that it produces code with a bit more whitespace for readability.



You certainly should. Even if you know some bash and zero python I suppose, because bash is just so "special".

But only if your target environment has python, and that's not a given, not a zero-cost change, in this age of short-lived containers. The use case for a "compile to bash" language would be to compile in a dev-operated environment and commit both source and output, which would then run on later stages in the CI/CD pipeline that might very well not have python. And those stages will absolutely not have your "compile to bash" tool, which would ideally be included along with the source unless it's somehow as near-omnipresent as python. This one appears to be implemented in rust; I'd certainly feel more optimistic about depending on it as a script authoring tool if it was e.g. a python script that could be included with my "pre-bash" source, and/or packaged in, say, npm.



I had done a SQL-like ->Bash "language," and the reason was that I wanted to have access to CLI tools as-is, in my case, curl programs. Had I chosen Python as a target, I'd have to use requests/urllib, which would be much more verbose.

The same applies if one needs awk, sed, etc... The constructs are just there.

Also the fact that you can just pipe to bash, makes development much easier.



And enjoy a mad scramble to rewrite everything after Python 4 releases?

For all its faults, after a Bash script works once, it usually just keeps working.



I totally understand the drive to a more modern shell language. But why translate Amber into bash, why not execute it directly? Why there isn't an Ambersh?

We got some great BASH alternatives: Zsh, Fish, Elvish, Nushell, Xonsh. I did not evaluate them all, but if Amber is even better, why not have Ambersh?

Can somebody please explain?



I wish there was a solution for something like

Download(file.gz)

and that gets converted to either a wget or curl based on what the system has.



    dl_tool=$(command -v curl || command -v wget)
    if [[ ${dl_tool} == *curl* ]]; then
      dl_cmd="${dl_tool} -sLO $1"
    elif [[ ${dl_tool} == *wget* ]]; then
      dl_cmd="${dl_tool} -q $1"
    else
      printf "%s\n" "ERROR: curl and wget not available, exiting"
    fi
Can easily be turned into a function.
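
A self-contained version of that, matching the Download(file.gz) wish upthread (flags kept to the basics):

    download() {
      local url=$1
      if command -v curl >/dev/null 2>&1; then
        curl -sLO "$url"
      elif command -v wget >/dev/null 2>&1; then
        wget -q "$url"
      else
        printf '%s\n' "ERROR: neither curl nor wget available" >&2
        return 1
      fi
    }

    download "https://example.com/file.gz"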


Very interesting. Don’t think I’ve seen anything like this before. How would the speed on something like this be compared to PowerShell, for example? Is it faster because it’s compiled? Could I write a web server in it?



It's not compiled in the way that C is compiled. Transpiled would be a better term (though there are debates on where the line is).

Amber code gets turned into bash code, and run by a bash interpreter. So at best Amber's performance will match Bash's performance.

I've seen people say bash is faster than PowerShell, but I don't have benchmarks to back it up. Even so, I wouldn't recommend using it for performance intensive tasks such as writing a web server.

The great advantage I see for Amber is being able to write scripts in a sane language (bash is not enjoyable to write), and have those scripts be able to run anywhere that Bash is installed.



Why would a person go this route instead of a higher-level scripting language (Python, JS, Ruby, etc.) or a full-blown binary (Go, Rust, etc.)?

I don't get it. Asking not to be rude but to be educated.



I would assume because bash is deployable to more locations with fewer assumptions. Scripting requires environments to be set up correctly to run, and binary makes architecture assumptions.



Looking at the output I find myself thinking: "if this kind of thing caught on, it would only be a matter of time until someone proposes a BASM (Bash ASM)..."



I guess I don't understand the point. Could someone summarize why I would use this rather than writing my utilities in something that doesn't compile to a quirky, limited programming language?



Because the runtime for those utilities is available on just about every Linuxy OS there is. Saves you having to install the runtime separately, eg in your CI pipeline.



It's not, really. The claim would be that it's portable, but bash can be a little finicky in that respect.

The documentation didn't mention what version of bash is required, but it'll be something. And plenty of systems will have old versions of bash - especially once you move out of the world of "my laptop" - obviously.

As has been mentioned elsewhere, it makes many times more sense to just learn bash, than to learn this. You likely already know some, and it excels at the jobs it's designed for - which is chiefly to launch processes and pipe output around. Subshells are elegant, even if making sure everything launched by your script is 100% finished before you exit can be a pain.

I doubt Amber fixes that problem in any case.



Why does the documentation need WebGL?

I have WebGL disabled in my main browser for privacy reasons (super-easy to fingerprint), and I only get to see 'Application error: a client-side exception has occurred (see the browser console for more information).' with an error about ThreeJS initialization in the console.

I assume it's for the fancy animation, and I find that ridiculous. Loading a whole 3D library just to show a fancy animation. This could have been done much simpler by using an animated image format, though admittedly with less functionality - but is it even needed?

I fear the whole industry suffers from overengineering and this is just a symptom.

Even if you're convinced you need the fancy animation, why not gracefully fallback to not showing it and still make the docs usable?



"Disabled" is not metric among those who have disabled it, while WebGL enabled allows identifying different browsers with WebGL enabled.

I know what you mean, but statistically, I don't think that it is easier to guess the person from those who have disabled it versus the list of all browser fingerprints where is that matching or very close unique fingerprint.



Sites can tell if you’re running Adblock too.

You’re always leaking some kind of information that advertisers will pay for. The game is choosing what information you’re okay giving them



It makes you look the same as everyone else with webgl disabled. This happens to be a reasonably large number of people, because it's the default setting in a variety of privacy focused browsers.



> Even if you're convinced you need the fancy animation, why not gracefully fallback to not showing it and still make the docs usable?

Yeah, it's not like adding a try/catch block around the webGL invocation was hard anyway…

But I agree with the sibling comment about disabling WebGL likely being a fingerprinting liability more than a win.



I find WebGL driveby risk and fingerprinting risks remote and hard to compare. To wit, I've never experienced a recognized loss from any kind of drive-by or fingerprinting attack.

However the severity of drive-by risk is so high, just in theory, that I am tempted to dismiss the risk of fingerprinting. But my reasoning is not very well informed.

What principles and information sources do you use when deciding that avoiding fingerprinting is more important than avoiding WebGL drivebys?



If you disable WebGL you're putting yourself in the 2% of devices that don't support it.

That's not fantastic, but with WebGL enabled just the extension list can be used to narrow you down way further than that - according to https://amiunique.org/ 0.35% of users share my exact supported extensions list. My `gl.getParameter(gl.RENDERER)` seems to be unique to me, for some ungodly reason. The output from rendering a single triangle also narrows me down to 0.8% of all users.



This is potentially a very interesting/useful site. My only problem is with some of the questionable statistics it's showing, that might suggest their dataset is not strictly representative. The time zones in particular seem to be significantly skewed compared to their respective population distributions. There's also a suspiciously large number of UTC0 fingerprints, which suggests they might have a large number of crawlers/bots in their dataset.



Thanks for the website, it's very interesting.

From their statistics, it seems that WebGL gives far less information than things like the list of fonts, the Canvas properties, or the audio input, and is more comparable to the effective height of the screen (or even just the user agent in my case, using Firefox on Linux, though I have to admit I'm surprised by that: 0.02% sounds really low).



The reality is that in disabling WebGL, you have an incredibly unusual configuration and the author didn’t test for it. Stop making this a philosophical argument. Why does the documentation need WebGL? It doesn’t, but the web has moved beyond just addressing “needs”. Why do you need to be on HN? You don’t. Yet here you are. I’m not arguing that the juice is worth the squeeze here, but your argument has a false premise that you’re exploiting to make the situation sound more absurd than it is.



I don't think it's incredibly unusual. On sensitive machines in corporate settings you commonly disable pretty much everything on every page that isn't in the 'trusted zone' or whatever.

Personally, by default I shut down all the browser crap that might needlessly allocate RAM, allow memory leaks, or slurp cycles without giving me some kind of value. I think this lets me hold at least a few hundred extra tabs open. For technical documentation I expect text, hypertext, and static images; if you're going to require JS and WebGL and whatnot, I'm going to assume you aren't entirely honest with me about the information we're about to share, and I'll probably sandbox a scrape of the documentation if I really need it.



If you mean blocking all network traffic that isn't in the trusted zone, then sure. But disabling features you personally don't like isn't a good security practice.



Maybe I wasn't clear. First I described a somewhat common practice in corporate settings, which is ostensibly motivated by security policies. Then I described my personal preference, which I motivated with resource consumption and performance factors.

I did not say that the corporate practice is a good one, I just brought it up as an example to refute the suggestion that limited web browsers are incredibly unusual.



Quick note to the creators: the website looks almost exactly like every other "here try this cool command-line utility so we can hook you and start charging a service fee for it" startup website and I instantly distrusted it. The design communicates that you're selling a product, not providing an open-source utility.

That said, very interesting syntax. I agree with others about the $...$ being kind of odd, but other than that this is really cool.



Somehow I didn't get that impression from the website at all.

I thought it was clear, functional, told me everything I needed to know in the order I needed to know it, and also looked nice - tying into the "amber" branding well.

Maybe I just don't spend enough time on websites that are designed to hook me in and start charging a fee? ;)



It's not a big deal, but I second that. I also thought, "let's see how they are making money out of this". I clicked on the GitHub link and found out it's GPL 3.0, all of it, no open core or similar stuff.

Maybe a tiny note about it being open source could help.



GPL 3 isn't great for either. From a commercial perspective, you can only use it if you lock your stuff up in a datacenter, but not if you ship it on a device that customers own. In practice, that means GPL 3 is not only supporting surveillance capitalism, but is also banning use in commercial systems that do not spy on their users. (The US CLOUD Act says that you have to provide the government with access to all machines you have access to, even if they are overseas. In practice, that means that any commercial GPL 3 stuff that has a footprint in the US is globally subject to US-style dragnet surveillance.)

Granted, bash is now GPL 3 (which is why Apple has to ship an obsolete version, and now defaults to zsh), so you can't use Amber on machines where GPL 3 won't fly (unless it can also compile to POSIX shell, zsh, etc.).

Anyway, if you're interested in freedom for your users, I'd suggest AGPL 3, since it prevents people from locking it up inside the cloud or shipping with proprietary operating systems. At least that way, you're not stripping users' right to privacy like GPL 3 (inadvertently?) does.

These days, Apache and BSD-style licenses are looking better than ever to me, at least when I'm at work.

[edit: You can sell machines with Coreboot (and maybe a proprietary BIOS) + bash. However, you can't ship things that use a secure boot mechanism.

From reading the FSF documentation, it's not clear to me if it's OK to ship a GPL 3 userland on a machine with secure boot enabled, even if it can be disabled later. Apple apparently decided that it is not.]



Precisely. Beer costs money, so "free as in beer" means "free as in money", as opposed to "free as in freedom", which means "free as in ideology".

Gmail is free as in beer, but not free as in freedom, while RHEL is free as in freedom, but not free as in beer.



It's taken from "Free as in [Free] Beer vs Free as in [Free] Speech"

Gratis vs Libre - the former is zero cost (i.e., you're not paying for the free beer), the latter is some cost (i.e., there is a price for free speech - we need to defend the right, accept we won't like everything that is said, etc.)



That is the nearest thing to an explanation in all the answers, i.e. pointing out that beer is NOT free. All the others still confuse me, because I read the phrase literally, and "free as in beer" makes no sense on its own, so the whole thing just does not compute.

I do understand what the GPL does and the reasons for it, just not this slogan.



I was put off more by the initial example: Amber being used to check whether an age is less than 18. With sed. It seems almost like an attempt at humor, except for the bit where one would laugh.



This is the compiled-to-Bash part. Sed is used because `bc -l` can return a float where an integer is expected. I'm not sure it can be done better in the generic case, but it isn't unheard of. It can certainly be optimized for some cases (here, for example, Bash's built-in integer arithmetic would do).
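
For the integer case, a minimal sketch (not Amber's actual output) of what the compiler could emit instead, assuming it knows the operands are integers:

    __0_age=30
    # Bash's arithmetic context compares integers without spawning bc or sed.
    if (( __0_age < 18 )); then
      echo "I'm not an adult yet"
    else
      echo "I'm an adult"
    fi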



But why does it use bc in the first place? Bash does integer arithmetic perfectly well on its own.

Amber claims to be type safe, so it should have enough information to avoid the use of bc in this case.

Ah, just looked at the documentation. Amber doesn't have integer types, just a number type. That means that any numerical operation in Amber is pretty much guaranteed to be very inefficient.

It's a very strange choice to make because the underlying 'machine code' supports both integers and floating point.
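
To make the cost concrete, here is a rough, hypothetical way to compare the two approaches yourself (exact timings will vary by machine, but every `bc` comparison forks extra processes while the built-in arithmetic forks none):

    # Compiled-style check: a command substitution plus bc per comparison.
    time for i in $(seq 1 1000); do
      [ "$(echo "$i < 18" | bc -l)" != 0 ] && :
    done

    # Bash built-in arithmetic: no subprocess at all.
    time for i in $(seq 1 1000); do
      (( i < 18 )) && :
    done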



This is the first thing I noticed. Using ‘bc’ and ‘sed’ for integer comparison seems quite odd.

But then they chose this for their front page example, the one that everyone will see, so I assumed this was to demonstrate something clever I just didn’t understand.

I wonder what the motivation was to not have an integer type?



I think the project itself sounds very intriguing and I am sure to follow how it evolves. But I have to agree with you about the website. There are also some little inconveniences in there that are sure to annoy a technically minded person, like the links in the upper right corner (GitHub etc.) that are not in fact links.



I had the exact same reaction. Being an open source project doesn't mean the website has to be ugly or bare bones (though plenty are). But this design sets the wrong expectations about the project.



I think the biggest issue is the image of the floating chunk of amber plugged into computers. It looks 100% like something you'd see from yet another Ethereum hustle.



What does this have to do with their comment? The poster was giving feedback about a concern (maybe not a concern but something that they noticed).
