(comments)

Original link: https://news.ycombinator.com/item?id=40853845

A web bundler is a tool that takes multiple JavaScript (JS) files and their dependencies and combines them into a single compact JS file (comparable to a "fat jar" in the Java world). The resulting file contains all the code needed to run the application. Without a bundler, manually managing dependencies and linking to the required resources can be complex and error-prone. Web bundlers simplify deployment by producing compact, ready-to-use files. In addition, these tools provide features such as tree shaking (code optimization) and handling of newer language features, ensuring compatibility across multiple browsers.



How does it compare to esbuild or SWC? It's good we have alternatives, and I'm still mentally scarred from the JavaScript ecosystem, where almost everything is slow and buggy. But when you compare to an already-native tool (like esbuild) you start getting diminishing returns.



SWC doesn't bundle at all. Esbuild is a pretty good bundler but works well only if your code and dependencies use ESM; it's not as good as other options with CommonJS.



> Esbuild [...] works well only if your code and dependencies use ESM

I cannot attest to that. We are using Esbuild plus CJS at $DAYJOB no problem. Why would that be an issue?



It's an issue because CommonJS allows stuff that's forbidden in static ESM imports/exports, and it was normal to use. Newer code is usually fine, but there are many older backend libraries that can cause issues with Esbuild. Webpack had to learn how to deal with it because it existed at the time CommonJS was most popular, Esbuild didn't.
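
To make that concrete, here's the kind of CommonJS pattern that gives ESM-first tooling trouble (a hypothetical sketch; the library and names are made up):

  // legacy-lib/index.js (CommonJS): exports are assembled at runtime, so their
  // names cannot be discovered by static analysis alone.
  const formats = ["json", "csv"];
  for (const name of formats) {
    module.exports["parse_" + name] = (text) => ({ format: name, text });
  }

  // app.mjs (ESM): a static named import of one of those dynamic exports.
  // Webpack grew heuristics for patterns like this during the CommonJS years;
  // an ESM-first bundler may only offer the module as a namespace/default
  // object, forcing you to write `pkg.parse_json` instead.
  import { parse_json } from "legacy-lib";
  console.log(parse_json("{}"));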



Regarding the current situation with bundlers written in JS, there is no "real" winner in my opinion. Webpack or Rollup, which one wins is a very personal choice. So I think there may be a similar situation with bundlers written in Rust.



Webpack for web apps, Rollup for libraries. It very much depends on what you're doing; the tools usually aren't good at all of them. There are one or two other use cases I'm forgetting.



This is built on swc, and they compare themselves to vite, which is built on esbuild. So the answer to your question is that they claim to be roughly twice as fast as esbuild (-based bundlers) in the benchmark in this article.



This supports all kinds of non-platform-standard features that may tie your project to this specific bundler, but will certainly tie it to bundlers in general.

It would be much better to have projects that work without bundlers, that can use them as an optimization step.



Bundlers also tie clients to developers without them realizing. I work for a webhost, and many people still assume that if they have access to their hosting then they have their "source code". We often see people migrate a site after breaking ties with a developer, only to find that what they have may function but is unmaintainable.



Sounds like a contractual thing, not like a bundler thing. The client should always include a clause into the contract that the client must hand all work over after closing the partnership.



> The client should always include a clause into the contract that the client must hand all work over after closing the partnership.

I think you might have meant that the developer should hand over all source material after the agreement has been fulfilled.



Yeah this is no different than receiving a binary rather than the source, bundled code is close enough for this comparison (though it would be probably easier to unbundle code rather than decompile a binary, it’s still a fair amount of work)



You'd struggle to extract a maintainable codebase from a C# or Golang web server after the fact too. As an industry we've been making simple websites on shared hosting for over a generation, clients who ignore the entire world of information about the dangers and pitfalls on this topic are squarely to blame as negligent. It ranks up there with not paying taxes and then acting shocked when the government comes knocking.

While Javascript could potentially be contractually mandated to be written in a way to facilitate production codebase recovery, if you knew enough to ask for that you wouldn't, you'd require them to use your source control and to provide build/deployment scripts instead.



You can get pretty far by using import maps; you won't have tree shaking or a single bundled file, but it works pretty well. JSDoc can be used to add types to your project (which can be typechecked using TypeScript). I'm currently building a hobby project using preact, htm and jspm for packages. It's pretty nice to just start building without starting a build tool, waiting for it to finish, making sure it hasn't crashed, etc. But indeed, I won't use this for production.

The only thing I'm still missing is an offline JSPM/esm.sh.
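
As a concrete sketch of that JSDoc workflow (hypothetical file and names): the modules stay plain JavaScript the browser can load directly, and you typecheck them with `tsc --allowJs --checkJs --noEmit`.

  // todo.js: plain JS served as-is; types live in JSDoc comments and are
  // verified by TypeScript without any emit or build step.

  /**
   * @typedef {Object} Todo
   * @property {string} title
   * @property {boolean} done
   */

  /**
   * Count the todos that are still open.
   * @param {Todo[]} todos
   * @returns {number}
   */
  export function countOpen(todos) {
    return todos.filter((t) => !t.done).length;
  }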



If you don't have any external library (e.g. npm) dependency you're golden. Unfortunately, this means that you now have to write all your code from scratch, which is ok if you're writing a very light website, but it's unsustainable if you do anything non-trivial.



You mean EcmaScript Modules? The situation is quite complicated. Some libraries don't publish ESM at all (React doesn't iirc), and the ones that do often publish CJS and ESM side by side. In that case, you need to read the package.json and decide which file to use, which is not trivial (see Conditional Exports for example: https://nodejs.org/api/packages.html#conditional-exports). In almost any non-trivial case you need to write tooling to make it work, so you might as well use a bundler.
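
For reference, this is roughly what the exports field of a dual CJS/ESM package looks like (a simplified sketch based on the Node docs linked above; the "import" branch is picked when the package is loaded from ESM, "require" when it's loaded via require()):

  {
    "name": "some-lib",
    "exports": {
      ".": {
        "import": "./dist/index.mjs",
        "require": "./dist/index.cjs",
        "default": "./dist/index.mjs"
      }
    }
  }

Bundlers (and Node itself) have to pick the right branch per environment, and packages don't always get these conditions right, which is where a lot of the tooling pain comes from.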


For offline esm.sh you can use the service worker cache, no?

Also, why not use this config in production? HTTP/2 should give the same performance for multiple small files as for a big bundle, and it's much better for caching.



It works well for a small website. For anything that requires more than a few dependencies, the package management is hell and load time will be insufferable. Also, not everything you grab from npm can just run in the browser even if written in ESM -- things get complicated quickly.



The browser specs largely have.

CSS has advanced enough that I haven't used Less or Sass in many years. Modules make loading easy. Import maps let you use bare module specifiers (or you could use a simple transform in a dev server). CSS modules let you import CSS into JavaScript.

I never use a bundler during development.
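
For example, the "CSS into JavaScript" part looks roughly like this (a sketch using CSS module scripts and import attributes; browser support still varies, and older implementations used `assert` instead of `with`):

  // main.js: import a stylesheet as a constructable CSSStyleSheet and adopt it.
  import sheet from "./styles.css" with { type: "css" };
  document.adoptedStyleSheets = [...document.adoptedStyleSheets, sheet];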



Unfortunately singleton peer dependencies (like react) are quite complicated with esm.sh. When esm.sh rewrites a module to import react from the CDN, it kinda "decides" which version of react it is at the moment the module is built on the CDN for the first time. That's why react is "special" and gets a stable build in esm.sh (essentially pointing to a fixed version no matter which version you specify): to avoid the dreaded "two copies of react" error.



> NOTICE: Plugin system is still under development, and the API may change in the future.

The killer feature of Vite is that it leverages Rollup's existing plugin system.

Do you have plans to build a compat layer for existing ecosystem?

Other build tools are doing it. E.g.: Rspack can use webpack plugins, Farm can use Vite plugins.



Rspack (ByteDance) just shipped 1.0. There's Farm too. This is from Ant Group. Major influx of build tools all built in Rust, made in China.

Turbopack is supposed to be coming, as a total rebuild of bundling. Rolldown seems solid, as a Rust roll-up redo.



Clearly Rust is catching on as a more approachable, safe and performant C/C++.

I personally also think about it as a more-likely-to-make-it-to-production Haskell, with how robust the type system, tooling, and other things are (not to rag on Haskell -- it's a fantastic language and there's lots of overlap in the communities).



They need to create new tracks for promotions and KPIs; reinventing the wheel in Rust will achieve just that. It's referred to as technology investment, but it's really speculation.



I wonder if this is part of the Chinese de-risking/de-coupling. Major Chinese tech companies seem to be spawning their own open source developer tools.



I was confused by the "Rust" mention in the title and presumed it was an alternative build tool to compile Rust for the web (wasm?). It's "yet another" bundler for JavaScript. Built in Rust.



Personally, I appreciate knowing when something is written in Rust. I know it is very likely I can easily install it and try it out immediately and that it is likely faster than any non-native tool I'm currently using. However, I do find "based on Rust" instead of "written in Rust" to be an odd choice of terms.



Just looking at their benchmarks, it's not particularly fast. esbuild looks much better in benchmarks, but it's not written in Rust. It seems they wanted a tool in Rust just because (experience on the team, preference for the language...), and then only compared against those.

As for the language "based on Rust", it's likely bad wording due to them not being native English speakers.



esbuild is written in Go, which has similar "probably quite fast and easy to install" properties to Rust.

Compare that to the expected experience if it was written in C++ or JavaScript or Python or Java or ... All of those are either likely to be slow or painful to use.



One of the reasons few people do that is because the build process becomes much more complicated. It's also much more complicated to do any sort of dynamic loading which is not terribly uncommon.



A native executable includes only ... the language runtime, and ...

How small is that compared to the JRE? Also I guess this means the program cannot load arbitrary classes?



Yeah, that's how I ended up using prisma... until I realised they didn't have joins

All this aside, knowing something is in Rust tells me:
- It's fast
- It's maintainable (imagine the same project but in C)



Sorry, I could have been much clearer. Was typing amidst cooking.

I meant that Prisma got so much traction despite not supporting JOINs early on.

And then there's me postponing projects because 80/20 doesn't cut it for me. I need to get each and every feature completed before launching.



I don't work in web, and possibly live under a rock. I'm a little confused about what bundlers actually do.

I'd sort of assumed it was a TypeScript build thing before. Mako's page gives me enough info to make me realise I'm wrong, but it seems to assume people are working with some base knowledge I don't have.

Any pointers to information of exactly what bundlers do? The emphasis on speed makes it sound like it's doing a whole bunch of stuff, what are the bottlenecks? Package version resolution?



Are you familiar with Java?

If so, a web bundler is like a build tool which creates a single fat jar from all your source code and dependencies, so all you have to "deploy" is a single file... except the fat jar is just a (usually minified) js file (and sometimes other resources like a css output file that is the "bundled" version of multiple input CSS files, and other formats that "compile" to CSS, like SCSS [1] which used to be common because CSS lacked lots of features, like variables for example, but today is not as much needed).

Without a bundler, when you write your application in multiple JS files that use npm dependencies (99.9% of web developers), how do you get the HTML to include links to everything? It's a bit tricky to do by hand, so you get a bundler to take one or more "entry points" and then anything that it refers to gets "bundled" together in a single output file that gets minified and "tree-shaken" (dead code elimination, i.e. if you don't use some functions of a lib you imported, those functions are removed from the output).

Bundlers also process the JS code to replace stuff like CommonJS module imports/exports with ESM (the now standard module system that browsers support) and may even translate usages of newer features to code that uses old, less convenient APIs (so that your code runs in older browsers). And of course, if you're writing code in Typescript (or another language that compiles down to JS) your bundler may automatically "compile" that to JS as well.

I've been learning a lot about this because I am writing a project that is built on top of esbuild[2], a web bundler written in Go (I believe Vite uses it, and Vite is included in the benchmarks in this post). It's extremely fast, so fast that I don't know why one would bother writing something in Rust to go even faster; I get all my code compiled in a few milliseconds with esbuild!
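
For a sense of scale, a whole build script with esbuild can be this small (a sketch using its documented JS API; the file names are placeholders):

  // build.mjs: bundle, minify and tree-shake one entry point into a single file.
  import * as esbuild from "esbuild";

  await esbuild.build({
    entryPoints: ["src/app.ts"], // esbuild compiles TypeScript out of the box
    bundle: true,                // follow imports and inline dependencies
    minify: true,
    sourcemap: true,
    target: ["es2020"],          // down-level newer syntax for older browsers
    outfile: "dist/app.js",
  });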

Hope that helps.

[1] https://sass-lang.com/documentation/syntax/

[2] https://esbuild.github.io/



I already knew what bundlers do, but I’ll just say thank you anyway for writing such an approachable explanation. I might refer to it in the future when someone asks ME what a bundler does :-)



Bundlers take many - usually at least hundreds, often tens of thousands - individual source files (modules) and combine them into one or few files. During that, they also perform minification, dead code elimination and tree shaking (removal of unused module exports).

It's orthogonal to TypeScript - the bundler will invoke a TS compiler during the process. It also usually functions as a dev server, but that's just for nicer DX.

Package version resolution is done by package manager, not bundler.



When you say dead code elimination, do you mean if I import some huge library just to use a single function, the bundler will shimmy things about so only the single function is included in the package and not the big library?

If so, that's amazingly helpful, I'm mostly over in python data land and I wish that existed for applications, although admittedly there's less need.



Yes, exactly. Pulling a huge npm dependency is usually not a problem if they didn't go out of their way to make it super hard to analyze at build time.

This is tree shaking, though; dead code elimination means it will find code that isn't used at all and remove it - for example you might have if (DEV) {...}, and if DEV is statically false at build time, the whole if is removed.

So first it performs dead code elimination, then it removes unused imports, and then it calculates what is actually needed for your imports and removes everything else.
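
A small sketch of those steps on made-up code:

  // utils.js
  export function used() { return 1; }
  export function unused() { return 2; } // tree shaking: never imported anywhere, so it's dropped

  // app.js
  import { used } from "./utils.js";
  if (process.env.NODE_ENV === "development") {
    console.log("debug build");          // dead code elimination: with NODE_ENV defined as
  }                                      // "production" at build time, this branch is always
  console.log(used());                   // false and the whole if-block is removed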



That's very cool! I already knew that this was something compilers did, but somehow never even considered you might do the same for an interpreted language like js.

Makes me wonder why some js bundles are still so big, am I over hyping what dead code elimination and tree shaking might achieve? Do some teams just not use it?

Either way, I've come away from my question with a pretty big reading list. This is exactly what I love about HN.



I think it's not so much about interpreted vs. compiled but more about the delivery of client code to the user - every time any user visits any website the browser may have to download the code (if not cached), then parse it, then execute. The less code that needs to be shipped, the faster time to interactivity and also less bandwidth usage.

Some bundles may still be big if teams don't use it, and some libraries are not structured in a way that facilitates dead code elimination.

Consider libraries that use `class`, such as moment.js: all functionality is made available as methods on the Moment class. If you only use 1 method, you still have to bring in the whole class. Whereas if a library is structured as free functions and you only use one, then only that gets included and the rest is eliminated.
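
A sketch of the difference (hypothetical library code):

  // date-lib-class.js: everything hangs off one class, so importing it pulls in every method.
  export class DateLib {
    format(d) { return d.toISOString(); } // imagine each of these being large
    parse(s) { return new Date(s); }
    diff(a, b) { return a - b; }
  }

  // date-lib-fns.js: free functions; if you only import `format`,
  // `parse` and `diff` can be tree-shaken away.
  export function format(d) { return d.toISOString(); }
  export function parse(s) { return new Date(s); }
  export function diff(a, b) { return a - b; }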



Conditionally yes. There are many libraries that cannot be tree shaken for various reasons. Libraries typically need to stick to a subset of full JS to ensure that the code can be statically analyzed.



Bundling is the equivalent of static linking, typically combined with dead code elimination (which is called "tree shaking" in the web world) plus optionally other optimizations and code transformations.



Dead code elimination is related to but distinct from tree shaking - it also means that unused code branches get removed; for example, constants like NODE_ENV get replaced with a static value, and if you have a static condition that always evaluates to true, the else branch is removed.



In my book that's all covered by the term 'dead code elimination', e.g. removing (or not including in the first place) any code that can be statically proven to be unreachable at runtime. Some JS minifiers (like Google's Closure) can do the same thing in Javascript on the AST level (AFAIK Closure essentially breaks the input code down into an AST, then does static control flow analysis on the AST, removes any unreachable parts, and finally compiles the AST back into minified Javascript). Tree-shaking without this static control flow analysis doesn't make much sense IMHO since it wouldn't be able to remove things like a dynamic import inside an if that always resolves to true or false.



Yep, that's how it works - you first perform dead code elimination and then tree shaking exactly because it wouldn't remove everything otherwise. Agreed that you need both done one after another in most cases; however you can usually disable either one in bundler configuration and it's a separate step.



I'm not a web developer, though I still develop web apps regularly.

What exactly is the point of a bundler in the rapid development cycle? If you want your web app to load up fast, it's better if you only need to redownload the parts that actually changed, so you're better off not bundling them.



In a dev environment you can use the Vite dev server, which serves every module separately, compiles them on the fly as they’re requested, and hot-reloads them when they change. All at the granularity of single files. Bundling then only happens when building the final output.



If you have a lot of files, the initial (dev server) page load time increases linearly with the number of files you have.

With a slow bundler, that tradeoff made sense, but with a fast bundler, it is suboptimal.

Also, typically the application is split into multiple smaller bundles, so only a slice of the application is rebundled on change.



The best solution (as always) is a hybrid. You want to bundle up a bunch of the small files, and split usually along loading boundaries such as page navs.

For development, you don't need to "bundle" at all but you still need to transpile.



You need to use some kind of automation to fingerprint your files for optimal caching.

Where applicable there simply does not exist a better caching strategy than fingerprint plus Cache-Control: immutable
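
For example, with esbuild you can emit content-fingerprinted file names and then mark them as immutable at the web server or CDN (a sketch; the option names are from esbuild's documented API):

  // build.mjs: content-hash the output names so they can be cached forever.
  import * as esbuild from "esbuild";

  await esbuild.build({
    entryPoints: ["src/app.js"],
    bundle: true,
    outdir: "dist",
    entryNames: "[name]-[hash]", // e.g. dist/app-GA2HFXRG.js
  });

  // The server then serves those files with:
  //   Cache-Control: public, max-age=31536000, immutable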



Well but I might have a hundred files, only one of which changed.

The 99 other ones are still in the browser cache and don't need to be re-(down)loaded.

If I bundle everything, then I have to scrap and reload everything, which is probably great for the final user, but not the developer actively modifying it.



A bundler does not necessarily produce a single file. I have not tried Mako. But from the docs it appears to do code splitting just like the others.



Can I kick it off programmatically inside a Cargo build via build.rs? I tried to go down this road with SWC and ... failed.

To be clear: I have JS/HTML artifacts in my repo alongside Rust source. I want to bundle them then ship them inside a produced Rust binary, or at least with it. With one build step using Cargo.



I was looking if it provided a Rust crate as a lib, similar to how esbuild is just a Go lib (if you want to use it like that) but no luck.



Found the same thing with swc. They have all this tooling, written in Rust, but no way to invoke it as a lib so it could be used inside a Cargo build.rs. Not easily at least. I made some progress then gave up.



Well yes, I'm using something familiar to embed the HTML and JS directly. But I want to embed a webpacked entity and have it run through a TypeScript compiler. And I would like something driven from build.rs.



What happens when we reach the tip of bundling? Once you're in ms territory (like esbuild is), then what are the really creative things you can do if say every browser had a little WASM mako or some bundler in it?

It's very cool though and seems like a lot of effort went into this.



It's in the ms for small projects. These improvements are not to shave a couple ms off some small codebase, but would shave seconds off of really large projects. The codebase I'm working on right now isn't really large, about 5 years of development with on average 2-3 developers working on it, and in vite (esbuild) the build time is 20.78 seconds on my M1 MBP. This project claims to be twice as fast as vite, so it would shave off 10 seconds; that's a significant gain. It would probably have a nice impact on our CI/CD pipeline, if the benchmark is representative of real world codebases.



I ripped out webpacker and replaced it with esbuild in a big legacy rails app for the front end, probably 2-3 years ago, and its been fantastic and I haven't looked back. It's more or less made front end bundling an afterthought. Going from 3s to 1.5s on my M2 (esbuild to mako) isn't a gamechanger, so for me it feels like it's already getting close to the peak, whatever that might mean.

But I was more just asking what's the theoretical limit for this kind of optimization, and at the very least with rust. O(n)?



Ah, that would be hard to say. A lower limit would be reading your entire project and the dependencies that are used once, and writing the bundled/minified code once. Possibly some parts of that could be done at the same time as you determine the bounds of the dependencies as you read in the code. So at least O(n), where n is the number of lines of code in your project.

There's probably trade-offs too. Like do you bother with tree shaking to make your end product smaller, or do you not to make your build performance closer to that optimal read-once write-once lower bound.



Note that while Vite transpiles with esbuild, it bundles with Rollup, which is single-threaded JS.

Vite also uses esbuild to prebundle dependencies for the dev server, but this is separate from production builds.



20 to 10 seconds for a production build sounds very insignificant. How often are you building for prod?

For dev, you definitely want subsecond recompiles but prod can take a few minutes.



Time to reset the clock. 0 days since a new web bundler was released (in Rust!!).

So tired of this ecosystem and its ceaseless tide of churn and rewrites and hypechasing.



Feel the need to push back against the predictable nay-saying in here.

Announcing with Rust in the title is not because of a hype train; it's a way to communicate that this bundler is in the new wave of transpilers/bundlers, which are much faster than the old ones (Webpack, Rollup) that were traditionally written in JavaScript and painfully slow on large codebases.

While the JS ecosystem continues to be a huge mess, the solution to the problem is not LESS software development ("Just stop making more bundlers & stop trying to solve the problem - give up!"). Or even worse - solve the problem internally, but don't make me hear about it by open sourcing your code.

The huge amount of churn and development in this space has a good reason... it's a desperate attempt to solve the problems that browsers have created for web developers. Fact is that most business has moved to the web, a huge amount of web development is needed, but vanilla JavaScript can compound and compound in complexity in the absence of a UI framework and strict typing. So now you've added transpilation and dependency management into the mix - and the user needs to download it in less than a second when they open a web page. And your code needs to work on at least 3 independent browser engines with varying versions.

SwiftUI devs are not a more advanced breed of developer than web developers. So why don't you see a huge amount of SwiftUI churn and framework/compilation hell with native iOS development? The answer should be obvious. These problems are handed down from on high.

The browser/internet/javascript ecosystem despite its glaring warts is actually one of the most amazing things humanity has created... a shareable document engine grew into a global distributed computing platform where you can script an application that can run on any device in less than a second with no installation. Not bad.



I fully agree with you, and want to add: JS/TS, due to its accessibility, is one of the largest ecosystems. Hell, whether you are or aren't a developer, you're part of it through using a browser.

People often scoff at complexity in frontend projects, but they need to handle various types of accessibility, internationalisation, routing and state (including storage of those), and due to its popularity the frontend is also very frequently an attack surface. With the advent of newer technologies (I don't just mean web dev ones), that's been put into the browser as well, which compounds complexity even more. There are various authentication and authorisation standards most things need to handle as well (not isolated to JS, but it's not free of it either). Not to mention the versatility and complexity of the DOM and CSS, which sit on some of the most complex rendering engines with layers of backward-compatible standards. Like you mentioned already, these engines are all subtly different. Also you have to handle bizarre OS+browser quirks. And things can move between displays with different DPIs, which can cause changes in antialiasing. There are browser extensions that fuck with your code too. Then there's also the possibility that the whole viewport can change. Networks change. People want things to work online and offline so they don't lose work while on a train... while working in an environment that wasn't explicitly designed to support that.

Christ, I'm exhausted just typing this. Most of these people complaining probably barely understand what they're complaining about.



> People often scoff at complexity in frontend projects

The complexity is there because everyone is trying to reinvent everything.

> accessibility, internationalisation, routing and state including storage of those

Do multi-page apps and most of these are really trivial due to the number of solutions that exist.

> There's various authentication and authorization standards

That's also more of a server concern than a browser one.

> these engines are all subtly different

It isn't the old IE days (which Chrome is trying to replicate). More often than not, I hear this often when people expect to implement native-like features inside a web app. It's a web browser. The apps I trust, I download them.

> People want things to work online and offline so they don't lose work while on a train

Build a desktop app.

> Most these people complaining probably barely understand what they're complaining about

Because it's like watching Sisyphus pushing the stone up again and again. The same problem is being solved again and again and if you want to use the latest, you have to redo everything.



I think you're hand waving a lot of problems away without giving them the thought and attention they deserve. And sometimes using arguments that aren't really unique to the JS ecosystem.

> The complexity is there because everyone is trying to reinvent everything.

That's not just JS. That's literally everywhere. People reinvent ideas in every codebase I've seen. Sometimes it's a boon, sometimes it's a detriment. But again, not something that's unique to JS.

> Do multi-pages apps and most of these are really trivial due to the amount of solutions that exists.

None of these are trivial, even with existing solutions. They're only trivial for trivial cases. Like, I'm sure we both understand people aren't building to-do demos.

> It isn't the old IE days

Probably happened accidentally, but it kinda misconstrues what I'm saying. There are issues between rendering engines and variety in how much/how quickly they adopt some features. Hell, you still need code branches just for Safari in some cases because of how it handles things like private browsing.

> Build a desktop app.

You're trading one world of complexity for another world of complexity (or I guess we could say it's trading one set of platform quirks for a larger set of platform quirks)

> Because it's like watching Sisyphus pushing the stone up again and again. The same problem is being solved again and again and if you want to use the latest, you have to redo everything.

I understand where you're coming from, but just because Svelte was released it doesn't make React (and spin-offs) or Vue less relevant. You're not forced to use them.

Regarding the bundling topic, again you're not forced to switch to a different bundler if you're happy with your existing one, or the project isn't at a scale where it matters.

I think the pressure is internal, not external.



The old joke is that there's a new JavaScript framework every month. That's not really true — we've had the same big three for a decade — but there has been an explosion of new bundlers: vite, esbuild, turbopack, farm, swc, rome/biome, rspack, rolldown, mako. And of course plenty of projects are still using rollup and webpack.

Some competition is a good thing, and it seems to have led to a drive for performance in all these projects which I'm not complaining about, but I wonder if more could be gained by working together. Does every major company or framework need their own bundler?



> The old joke is that there's a new JavaScript framework every month. That's not really true — we've had the same big three for a decade

Yup. I know a few people who were using React 10 years ago and still use it today. What has changed frequently is the tooling. e.g. Bower going away in favor of NPM; Gulp/Grunt going away in favor of Webpack, which is slowly going away in favor of Vite; CoffeeScript going away in favor of TypeScript; AMD/CJS/UMD going away in favor of ES modules, and so on.

ClojureScript has a great deal of stability in both the language itself and tooling, but nowadays I can't give up the developer experience of TypeScript and Vite. The churn in the tooling of the JS/TS ecosystem is wild, but since about 2021 I have found ESM + TypeScript + Vite to provide fast compile times, fearless refactoring, and a similar level of hot-reloading that I enjoyed in Clojure(Script). Can't say I miss Webpack, though!



> ClojureScript has a great deal of stability in both the language itself and tooling

Does it still use Google's Closure (they've chosen it just for the name, right?) compiler? Is that still supported by Google?



Major parts of the compiler have been unchanged since its original public release. It still uses Google Closure Compiler (GCC), but the community understands that was the wrong choice of technology in retrospect. The compiler is still actively developed and used internally by Google. What is going away is the Google Closure Library (GCL), since modern JavaScript now has most of what GCL offered, and it's become easier to consume third party libraries that offer the rest of GCL's functionality.

The reason ClojureScript has not moved away from GCC has to do with the fact it performs optimizations -- like inlining, peephole ops, object pruning, etc. -- that ensure ClojureScript's compiler output becomes relatively fast JavaScript code. The closest alternative to GCC's full-program optimization would be Uglify-JS, but it doesn't perform nearly as many optimizations as GCC does.

For a concrete example, consider the following code. I am intentionally using raw JS values so that the JS output is minimal and can be pasted easily.

  (ns cljs.user)

  (defn f [x]
    (let [foo 42
          bar (- foo x)
          baz (+ foo bar)]
      #js {:bar bar
           :baz baz}))

  (defn g [x]
    (let [result (f x)]
      (when (pos? (.-bar result))
        (js/console.log "It works"))))

  (g 0)

The ClojureScript compiler will compile this code to something like this:
  var cljs = cljs || {};
  cljs.user = cljs.user || {};
  cljs.user.f = (function cljs$user$f(x){
    var foo = (42);
    var bar = (foo - x);
    var baz = (foo + bar);
    return ({"bar": bar, "baz": baz});
  });
  cljs.user.g = (function cljs$user$g(x){
    var result = cljs.user.f.call(null,x);
    if((result.bar > (0))){
      return console.log("It works");
    } else {
      return null;
    }
  });
  cljs.user.g.call(null,(0));

Paste this into `npx google-closure-compiler -O ADVANCED` and the output is simply:
  console.log("It works");

On the other hand, `npx uglify-js --compress unsafe` gives us:
  var cljs=cljs||{};cljs.user=cljs.user||{},cljs.user.f=function(x){x=42-x;return{bar:x,baz:42+x}},cljs.user.g=function(x){return 0

This is quite a bit larger, and possibly slower, than the output of GCC.


Haven't you heard? Rebuilding everything in Rust is the new meta. To be quite honest, call me old-fashioned, but the fact that we need so many bundlers that we are comparing which are more performant is a symptom and not a blessing.



For me, the fact that we need a bundler at all is the underlying issue. I would love for bundlers to become first-class citizens that already come with the JavaScript runtime, similar to how Bun and, to some degree, Deno do it (AFAIK their bundler is intended to bundle apps for use on the server and not in the browser).



I would say not really; at least compilers are an essential component of a compiled language in my eyes. JavaScript is transpiled, and I know you can say the same for all compiled languages in a roundabout way.

Thinking about it, only recently has Go fit in nicely, with fast compile times, for 'builders' - esbuild comes to mind. But Rust... crazy.



With all of them being written in Rust, they could reuse each other's packages (crates). AFAIK the Vite team is writing Rolldown (to replace Rollup) and they are using crates from the oxc project; not sure about the others.



What features do they want? In my experience people always welcome webpack alternatives and not having CPU starvation issues or having to wait for minutes to webpack to work.

The problem is that we already have n-thousand alternatives, so it's a slightly different setup every time - but generally as long as it's not webpack, it's all good.

Recently someone disabled turbopack on a next.js project because one new dependency wasn't supported and the developers started complaining right away the app was unbearably slow. The team couldn't work on latest for a week, they were just reverting the latest changes breaking turbopack support, working and then pushing.



Interestingly, esbuild isn't included in their benchmarks.

Esbuild is the only current build-tool that keeps one sane. The serve-mode is excellent and elegant with no brittle constantly breaking hacks like HMR or file watching.

Sadly, configuring the serve mode in particular is a bit badly documented, and it's not usable via CLI flags if one needs plugins.



> Interestingly, esbuild isn't included in their benchmarks.

I noticed, and was very surprised by that. Surely esbuild is the "standard" fast bundler these days; everyone knows webpack is slow, so doing better, even significantly better, than it isn't a very large claim.



That's what Vite is for: all the zip of esbuild plus HMR that works. Usually works anyway... looking at one project where I never have to reload, and another that I'm doing that every ten saves or so. Much sloppier legacy sources in the second tho, Vite really pays off when you write more modern code from the start.



I'd rather hit Ctrl-R on each iteration than worry about whether I've hit the 1-in-10 buggy state on every change. With esbuild the reload is practically instant.



It was pretty infuriating wondering why my changes weren't taking effect, but like I said, it only hit me for that one project, and now I'm ready for it (I had to manually refresh every time beforehand anyway). It's a legacy codebase, I'm already used to intermittent nonsense like that. HMR never fails on the other project -- but now I've jinxed it for sure!



I don't find hitting Ctrl-R or F5 to be much of a hindrance for iteration. Especially when you don't have to worry whether the system has been left in some incorrect state by HMR.



A couple more:

https://wayland.emersion.fr/mako/

https://makoframework.com/

It can be hard sometimes to come up with names that aren't already in use. I think as long as it's clear in the description what it is, and the same name isn't shared for two projects that do approximately the same thing, maybe it's not so bad. There could also be an issue where command names might be the same so one would have to be changed. I recall this may have been a small issue when the Go language was new, as there was also a game of go available in some distro repositories. I believe that's generally solved now.



I've recently taken a legacy TypeScript client-side codebase that was using webpack to generate tens of JS bundles, and brought the build from minutes down to seconds by using Bun [0] for both dev and build:

For dev I replaced webpack with "bun vite": it loads scripts ad hoc and is thus super fast at startup, and it still supports hot reloading.

For build I use "bun build". I've created a small script where I don't specify the output folder, but just intercept it, and do some simple search & replace things. It works perfectly, although it's a bit of a hack (simplified code):

   const result = await Bun.build({
     entrypoints: [srcName],
     root: "./",
     minify: true,
     naming: `${configuratorName}.[hash].[ext]`,
   });
   for (const r of result.outputs) {
     let jsSource = await r.text()
     jsSource = jsSource.replaceAll("import.meta.hot","false")
     Bun.write(outdir + r.path.substring(1), jsSource);
   }
It might not be pretty, but it works super fast, and it only took me a couple of hours to convert the whole thing...

Update:

For the record, the real script also uses the HtmlRewriter from cloudflare (included by default in bun) to alter some basic things in HTML templates as well...

[0] https://bun.sh



Bun.build actually has a `define:` option that does the same thing as your replace. If you use it, it'll even propagate the value, and treeshake away any `if(import.meta.hot)` you have.



As someone who highly values minimalism and simplicity in software, seeing another web bundler paraded around as if it's something to celebrate does not spark joy.



We have bundlers because for a long time we didn't have a module standard due to browsers hanging on to their minimal and simple model of `script src=`. Even now modules are pretty minimal fare. Plus there's all the transpiling and asset transformation, but hey we should all be using document.write and not those "bloated" frameworks on top of JS, right? Maybe jQuery if we want to get really bougie?



A bundler is a necessary evil and should be thought of and developed that way. Not celebrated. There should be like two or three flavors, e.g. like C++ compilers (gcc/msvc/intel), ideally with a big corp backing, and they should be rock solid and not change much.

The amount of bundlers I've seen in my time is borderline obscene. Nowadays it's even worse, as every javascript framework developer's actual secret fetish is to build their own bundler. Ideally in Rust because that's hip I guess.

Webpack, Snowpack, Parcel, Rollup, Esbuild, Vite, Turbopack... just stop. Enough.



All of these are lessons learned from previous iterations. Also, some of these were probably in development for a while. If you're close to finishing a product, do you just stop and abandon months of work just because a challenger appeared? I wouldn't! Especially if you think you're doing something better than the competition

I've managed to cut down build times from ~1min (sometimes up to 3, but I couldn't even tell you why) when using Webpack and Babel to less than 200ms using just Rsbuild.

So, I welcome the improvement! The fact multiple people/orgs felt the need for this clearly means they felt the pain of slow builds too.



Ha, I think I know what you mean, but I'm not sure we're talking about the same thing. For me personally, I'm glad that Rsbuild didn't stop development when something like esbuild popped up.

I'm sure people tied up in the roll-up ecosystem think the same about Vite and rolldown!

All these do things subtly differently in ways they think are correct. Maybe one will come out on top, or maybe something else comes along and integrates lessons from both, and that eventually wins and all maintenance moves there.

The JS ecosystem is complex (for better or worse), and bundling for it isn't as simple as people believe. So it makes sense there are multiple things trying to tackle the same problem!



Everyone is making the point that using JavaScript on the server was a BIG mistake, with these ongoing rewrites.

We already had our bundlers in Java and .NET land, before nodejs came to be, and life was good.



The fact that people keep releasing new bundlers, minifiers, transpilers, package managers and so on, for JavaScript is a loud and clear warning that something is amiss.

People (re)write such tools either for fun or to solve a problem (or best: both). Apparently after so many rewrites the problems haven't been solved. To me, this indicates fundamental problems. I'm not familiar enough with the ecosystem to know what those would be, let alone how to truly solve them.

But the very fact that we see new builders, transpilers, bundlers every few months is enough to conclude we aren't solving the problems on the correct level or that maybe it cannot be solved at all. Because otherwise one of the many attempts would've solved the problem and "everyone" would be using that.



The situation with the tooling constantly changing isn't nearly as bad as the front-end frameworks themselves. I've been updating my knowledge of front-end, and it's an absolute shambles. The official React documentation(https://react.dev/learn/start-a-new-react-project) is telling me that in order to use their framework, I need to use another framework to solve (quote)"common problems such as code-splitting, routing, data fetching, and generating HTML"... At their suggestion I've picked NextJS, which is a "full-stack" React framework. This means that it has its own back-end which does most of the heavy lifting. So not only will our company have a traditional back-end, we'll also have a BFF (another thing the kids nowadays want), and a back-end that is actually our front-end application. At this point I've forgotten what problem we set out to solve.

NextJS' documentation is also *terrible*. This situation is made all the worse by any material online about NextJS that's more than 3 months old being totally inapplicable because the framework changes so often.



Is this how other people feel about NextJS? I've been trying to keep an open mind about it, but its entire design seems so antithetical to what I'm trying to accomplish. Is there a better mainstream alternative? From what I've seen NextJS is pretty commonly used.



The mainstream alternative is still to not have a "backend-for-the-frontend". If you use something like Rails, django, nodejs, use React connected to them. Or directly to something like supabase. NextJS is the extra complexity nobody needs.

It is marketed as the solution to slow starts, but React is slow, so the solution is terribly over-engineered.

A much better fix is to remove React and use something that is already fast, like SolidJS or Lit. There are much better UI kits in Lit than I have seen in React, and in the end it is just JS, so the same people that can code React can code Lit and SolidJS.



Thank you for the suggestions. Unfortunately, the only reason I'm writing React at all is because so many companies want React experience, and I figure I'd better stay up to date. If I was given the opportunity to choose technologies for myself all of the time, I'd steer clear of React at this point!



Next.js et al. provides a set of opinionated packages designed to enable a specific paradigm. For Next.js, that's server-side rendering. For Remix, that's progressive enhancement.

If you are happy with client-side rendering and do not desire React on the server, there is not a strong reason to use Next.js; it introduces complexity and churn.



I believe we see such diversity because of the unique environment in which JS has come to exist. Folks who are writing all these tools are trying to solve problems from the outside in for their small corner of an incredibly large ecosystem.

It's a snowball rolling down an infinitely long mountain. I believe this may never settle.

Mainstream browsers have already coalesced on a no-build solution, but it's profitable, by fame or fortune, to continue building solutions that require bundle and compile steps. Then others use those because off-the-shelf libs require them and save time and money.



Esbuild did solve a major problem, which was very slow builds.

Vite wraps esbuild. Not sure what it provides itself.

Then there came several specialized (Rust) tools. For the same reason esbuild was made.

I think ultimately they try to solve the same issue:

JS is supposed to be a productivity gain over compiled languages. But with ultra slow builds that goes out of the window.



> JS is supposed to be a productivity gain over compiled languages. But with ultra slow builds that goes out of the window.

But what business problem are we solving? Why do we need compiles, transpiles and so on, in a dynamic language in the first place¹? And if so, is compiling the right solution to that problem?

My point was mostly to question if we are solving the right problem. And if the direction in which we are solving it, is the right one. After some 20+ years we still haven't converged around a single solution. But instead we keep firing out "new" solutions and solutions to the problems that those solutions then introduce on an almost weekly basis.

To me that shows we are either simply looking in the wrong place, or have a much deeper, fundamental problem that simply cannot be solved. And should probably either stop looking for the solution or just abandon the whole stack.

¹ I'm not looking for an answer to this question. I know several reasons why we build, compile, transpile, minify and whatnot in JS. But all those are also solutions to deeper problems. Problems that can be solved in several ways, only one of which is "compile pipelines".



Having the same language on the client and on the server is a huge productivity booster for me. I can’t imagine writing so many things twice again. Have you tried it?



Unfortunately I have to, thanks to the Next.js- and React-only SDKs that many SaaS products now have as their extension mechanism.

Also if it is such a great experience, people wouldn't be rewriting the node ecosystem in Dart, Go and Rust.



It’s not necessary to use the same language for that.

In many cases you can achieve the same with a clearer separation, with data driven methods and by generally not running so much JS.

The typical example is input validation. You want immediate feedback on the client, but obviously you validate on the server.

But instead of running literally the same specific code twice, you can use json-schema or your own general data description of what valid input is. You move the specifics from code into data.
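
A sketch of that data-driven approach, using a validator such as Ajv (the schema object is the part shared between client and server; the names here are made up):

  // signup-schema.js: one declarative description of valid input, shared by both sides.
  export const signupSchema = {
    type: "object",
    properties: {
      email: { type: "string", format: "email" },
      age: { type: "integer", minimum: 18 },
    },
    required: ["email"],
    additionalProperties: false,
  };

  // validate.js: compile the same schema on the client (instant feedback)
  // and on the server (the authoritative check).
  import Ajv from "ajv";
  import addFormats from "ajv-formats"; // needed for "format": "email"
  import { signupSchema } from "./signup-schema.js";

  const ajv = new Ajv();
  addFormats(ajv);
  export const validateSignup = ajv.compile(signupSchema);
  // validateSignup(data) returns true/false; details end up in validateSignup.errors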



Now that's a game I wouldn't mind a rewrite of. Imagine Heroes 3 with a modern engine and modding available out of the box instead of people having to reverse engineer it just to keep it alive today.



Exported in WASM to play directly in the browser, including Horn of the Abyss.

I would pay for this.

You'd probably have to redo the art though, since 3DO is unlikely to do this port and I doubt they would licence the IP.



Can't people figure out some other tooling besides bundlers? I mean, how many do we really need?

It's probably fine, but so are all the others as well. The authors have probably spent a fair amount of time on this project so I don't want to be negative, but it's just hard to be excited when it brings nothing new to the table.

Why should I use this over Vite or esbuild? Because it's written in Rust? I don't understand why that even matters. Even if it was 10 times faster I wouldn't use it, because Vite is fast enough and has all the plugins I would ever need.



None of those tools you quoted are production ready based on my investigation, in the sense that if you manage the JS infrastructure of a company of 2000 developers, you would stick with webpack. Lots of Rust-based tooling is still half baked and missing things here and there, so much so that you wish these people would work together to create one (or at most two) tools comparable to webpack.



> None of those tools you quoted are production ready based on my investigation

This is very true and almost all of them are taking far longer to develop than they initially thought. swc/turbopack is being pushed by Vercel and it has been a huge ongoing disaster.



Yeah okay, but that's not the reason why people write it in the title. They write it in the title because they know that many engineers like Rust and think people will immediately be drawn to it.

But the language itself is not a goal, or at least shouldn't be IMO. Thus it has the opposite effect on me, as I do not care what language my bundler is written in.

If I did, it still wouldn't have any competitive advantage since as you point out Vite will soon also be based on Rust.



I've read some of the FAQ and docs.

Their reason is to have a fast builder with the flexibility needed for their business cases. In other words, they are making internal tooling publicly available.

Being faster than esbuild is not a goal; getting people excited about speed is not a goal. Having control over tooling, flexibility, being fast enough, and being open source are the goals.



I didn't enjoy the Rust hype on here in years past, but I'm always glad of any better tooling. Just an example from the other week... I swapped out NVM for FNM (Rust) and now I don't have to put up with performance issues, especially slow shell startup times.



Just me being curious since I have used nvm for years without any issues. What do you mean by slow shell startup times? In what way do you use nvm in order to experience any slowness?



I followed the standard nvm install process, to get it loaded from my .zshrc

I noticed a second or two in lag between launching the terminal and getting a shell prompt. Commenting out the nvm load as a test removed the delay. I installed fnm, aliased it to be nvm, and everything is snappy. Also nicer if you use tooling to 'nvm use' when changing into a project directory.

There are a few issue threads such as this one : https://github.com/nvm-sh/nvm/issues/2724

BTW, this blog post was great for finding the culprit if there is zsh startup latency : https://stevenvanbael.com/profiling-zsh-startup
