(comments)

Original link: https://news.ycombinator.com/item?id=41215727

The user expresses frustration with current web technologies (e.g., HTML, CSS, and JavaScript), arguing that they have become an obstacle to quality and creativity, mainly because of their arcane layout model, backwards-compatibility baggage, and performance problems caused by JavaScript's single-threaded nature and its garbage collection. The user expresses a desire for innovative solutions to improve web technology, in particular through WebAssembly and a potential new language called Pax. They acknowledge the benefits HTML, CSS, and JavaScript provide, praising the browser's zero-trust execution environment and the importance of backwards compatibility. However, they stress that rejecting innovation out of backwards-compatibility concerns leads to stagnation and blocks progress. Finally, Swift is mentioned as a potential alternative, though with reservations about its readiness and suitability for web applications. 100-word summary: The commenters call for advances in web technology to overcome the limits imposed by current standards, especially HTML, CSS, and JavaScript. They suggest emerging tools such as WebAssembly and languages such as Pax to address complexity, performance problems, and backwards-compatibility challenges. They note that reluctance to innovate can lead to technological stagnation, and discuss Swift as a promising but imperfect alternative for web applications.


Original thread


So we will, in ~5 years' time, have two new browsers, one in Rust and one in Swift.

I hope in the process of doing it we will find new ways of doing things.



> I hope in the process of doing it we will find new ways of doing things.

HTML & CSS themselves have become a major bottleneck to quality and creativity. The arcane layout model, the baggage of backwards compatibility, the cognitive dissonance — played out over decades of design-by-committee — between "this is a document engine" and "this is an app engine."

The rendering-thread-is-the-main-thread architecture of JS plus the JS garbage collector cause jitters & frame drops that most people don't consciously recognize, but everyone subconsciously recognizes. These little bits of jank are why consumers can recognize webview-wrapped apps vs. native apps.

Don't get me wrong — HTML and CSS and JS have brought us far, and the zero-trust execution environment that is the browser is an amazing feat of humanity.

I hope the "new ways of doing things" you describe include a major innovation on HTML and CSS and JS. WebAssembly makes this possible today — and I have dedicated the last few years of my life to proving this concept[0], and I hope others explore similar avenues. We deserve a better substrate, and this can be done without reinventing the browser.

[0] www.pax.dev



`Waiting for fonts.googleapis...`: the Pax website was blank for a couple of minutes. You might want to consider making third-party fonts non-blocking on your website so it doesn't appear broken/empty when a third-party CDN edge is slow.



I disagree. There has never been a better markup language. HTML allows precise control of atomic elements that make up a component. It has been the best thing for accessibility we ever came up with.



It sounds like we agree about the good parts of HTML.

> There has never been a better markup language.

> It has been the best thing for accessibility we ever came up with.

I observe past tense in both sentences above.

So perhaps we agree. The salient question, with a focus on the future, is: is it the best thing we can come up with?

I argue we can do better, while celebrating and building off of what's great about the past & present.[0]

[0] https://docs.pax.dev/intro-priorities-and-prior-art/#:~:text...



I can imagine no better way for readability and accessibility than a text-first representation of content, which is what HTML is. I will even argue that having the modern web emerge from a text-content first approach is the best thing that could have happened to us.

All other content-representation infrastructures that weren't text first, like Java applets or Flash, died because they were no match for HTML.



> These little bits of jank are why consumers can recognize webview-wrapped apps vs. native apps.

That and, of course, the half-page box telling you how much better off you'd be if you were using the app.



Care to point us to a better combination than HTML/CSS when it comes to laying out flexible interfaces?

Don't get me wrong, I also could imagine better markup languages for that purpose. But everything I have seen in the wild was worse in multiple, show-stopping ways.

Maybe instead of coming up with a new thing, we just need a clear way to solve the pain points and put the solutions into CSS4 and HTML6



> Care to point us to a better combination than HTML/CSS when it comes to laying out flexible interfaces?

HTML/CSS is pretty good!

SwiftUI's HStack and VStack are solid, though they suffer from ambient API complexity & Conway's law.[0]

Figma's autolayout is a strong reference, but it's not a tool for creating software.

> Maybe instead of coming up with a new thing, we just need a clear way to solve the pain points and put the solutions into CSS4 and HTML6

Why not both? Innovate freely and independently (the prerogative of any builder) and adopt as spec any innovations that pass committee.

[0] arguably same diagnosis, different course of disease vs. HTML/CSS



Both Figma autolayout and HStack / VStack are careful subsets of flexbox; not quite the same thing. Consider whether a marble statue is "just a subset" of the block that contained it.

None of these is a perfect technology, but I believe the most practical answer to the question driving this thread[0] is to study/understand the landscape, then build towards a better future.[1]

[0] > Care to point us to a better combination than HTML/CSS when it comes to laying out flexible interfaces?

[1] www.pax.dev



I think CSS is excellent and pragmatic.

Alternate layout engines for the web might be a fun experiment, PhD thesis, or talent retention program, but it's not practical.



> Alternate layout engines for the web might be a fun experiment, PhD thesis, or talent retention program, but it's not practical.

Flexbox was once an "alternate layout engine for the web," as was Flash player, as is Figma. Framer, Retool, and Squarespace all offer alternate layout engines tailored for visual building. All of these seem practical to me.



Doesn't the rendered page run in a different thread to its JS? If the website does use some SPA framework to lazy-render the components, load the new view, or execute some logic, maybe then, but when I, say, run `(() => { const s = Date.now(); while (Date.now() - s < 10000); })();` and try to scroll the page, it doesn't freeze.



Don't break backwards compatibility.

It's one of the core things that has enabled the web to be as useful as it is. One of the things that draws people in, and keeps them using it.

Yes, there are problems with what we have. But if you break compatibility, you'll either not be adopted, or part of the crowd that the audience yells at for taking away their favourite things. You'll kill efforts and bury knowledge bases.



I don't think the world should abandon HTML, nor break backwards compatibility across the HTML spec. The first cars drove on roads designed for horses, and horses are still around. At no time did we gather a committee and decree that horses were deprecated.

Taken to an extreme, "don't break backwards compatibility" has an insidious failure mode, which is "don't innovate." The car could not have come about without a willingness to break backwards compatibility with horse drawn carriages, plows, mills, hitching posts, etc.

The adoption of a radically new technology like this is voluntary, collaborative, and progressive. Provided it offers enough value to exceed the switching costs, there's no need to kill efforts or bury knowledge bases.



Interesting project. Have you tested any non-XML markup languages for the user-interface declaration, so that metadata doesn't take as much space as the data in

Hi {firstName}

and also doesn't require double the number of brackets?

There are modern, cleaner markup alternatives like KDL.



We chose an XML-like syntax because it's clear & explicit & established (HTML) — you can know where you are in a hierarchy clearly at any point thanks to closing tags. The major downside is verbosity, both for reading and for typing. Pax's closing tags compile away so they aren't transmitted across the network like they are for HTML.

We predict 95%+ of Pax will be written by machines, especially visual design tools and LLMs, so verbosity becomes less important (LLMs may even benefit from explicit closing tags.) We're innovating on multiple axes, including language, so we chose to make the syntax itself as boring and uninnovative as possible.



But clarity is exactly what's lacking, because it's verbose: it obscures content and offers no benefit over simple matching-bracket highlighting.

Also, will it be read 95% of the time by machines during design? That's not what the demo shows with the side-by-side XML and output, which I expect is a much more common workflow than 5%.

The innovating part is what puzzled me and prompted this question - why cling to the old garbage when you're doing new design?



Point taken! And yes, especially for developers (our core target demographic), you are right that hand-writing should account for >5% of code volume.

Syntax is fairly contentious. "One man's trash is another man's treasure," on your note of "old garbage." It's hard to please everyone, however:

We could offer syntax "skins," since the data storage mechanism is a layer separated from the AST (thus different ASTs/syntaxes could de/serialize from the same persisted data.) So folks who want closing tags can have them; folks who want a YAML-like format could have it; folks who want a KDL-like language could have it.

At the language level, Pax's distinguishing characteristic is that it's the union of an expression language and a markup language; this is the reason we couldn't use an off the shelf markup language / parser. But again, KDL or YAML could be extended with PAXEL to make pax-kdl and pax-yaml flavors alongside pax-xml.

We had to start somewhere, and starting as close as possible to the markup lingua franca (HTML) made sense to us. It'd be a dream for Pax to be loved / adopted enough that we or anyone else cares to make a syntax skin.



syntax flavours / "skins" sounds like a great idea if Pax isn't so tightly coupled with the current one

(HTML makes sense as the target to start with, but I'm not sure most people "speak" raw HTML, they'd design apps/pages in other languages)



Swift compile times are pretty next-level compared to Rust's. Rust does seem like a fairly constant level of slow that scales linearly as the project gets larger. Swift tends to have huge degradations on seemingly innocent changes that make the type inference go off the rails, although at least if you spend the time to track these down you can fix the worst offenders.



Pardon me for not getting your context, but are compile times a big issue in software development? I have never programmed professionally, and all my experience with code is from a couple of classes taken in college a decade ago.



When doing UI work, a common workflow is to have the code and the app side by side, make a small tweak in the code, recompile/hot-reload, look at the result, repeat. A long compile time makes that workflow a pain.



There's been a lot of work on this in the last few years with Windows support; preliminary Android support is also being worked on and should appear at some point in Swift 6.



There probably won't be a cross-platform UI layer and many Apple frameworks won't work, although apparently the Foundation Swift rewrite already works. For a lot of companies though, simply being able to share business logic will be a big plus. At my current company we have a lot of talented Swift people and existing code written in the language so being able to share anything would be a huge advantage.

I'm biased, but I personally find the language really productive to work with; it runs fast enough for my needs and lets me target more and more platforms.



Actually, it sounds like they only just decided to use it in the future once Swift 6 is released. I would guess much of it will still be in C++ for a while.



For everything. Swift is—at least in theory—a general-purpose language; it's not exclusively for Apple technologies.

> First off, Swift has both memory & data race safety (as of v6). It's also a modern language with solid ergonomics.

> Something that matters to us a lot is OO. Web specs & browser internals tend to be highly object-oriented, and life is easier when you can model specs closely in your code. Swift has first-class OO support, in many ways even nicer than C++.

> The Swift team is also investing heavily in C++ interop, which means there's a real path to incremental adoption, not just gigantic rewrites.

https://x.com/awesomekling/status/1822236888188498031

https://news.ycombinator.com/item?id=41208836



> > First off, Swift has both memory & data race safety (as of v6)

> But v6 is not released yet, right?

As of Swift 5 there is zero data race protection, and the process model (DispatchQueues, if memory serves) is woefully inadequate: no advantage over fork, and much more convoluted.

I am well clear of the Apple development world now (thank goodness) but the tools were of very poor quality, albeit very nice looking, as of this time last year

For the sake of my friends still in the world I hope Swift 6 is better, but I fear the dumpster fire that is Xcode is too far gone to rescue.

The comparison with Rust demonstrates the utility of "design by committee". Rust is far from perfect, but feels like the future, whereas Swift feels like warmed-up Objective-C.



> As of Swift 5 there is zero data race protection, and the process model (DispatchQueues, if memory serves) is woefully inadequate: no advantage over fork, and much more convoluted

Swift Concurrency is the replacement for Dispatch and has been around since Swift 5.5 in (IIRC) 2021. It’s a completely different system, uses lightweight tasks (a la tokio in rust, or goroutines in go, etc), has a concept of “Sendable” for thread-safety a la rust’s Send, async/await, and a native `actor` type, among other things.
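
For readers who haven't used it, here is a minimal sketch of those pieces working together (the `Event` struct, `Counter` actor, and `Demo` entry point are made-up names; Swift 5.5+):

    // A value type that is checked as safe to pass across task boundaries.
    struct Event: Sendable {
        let name: String
    }

    // An actor isolates its mutable state; callers reach it through async calls.
    actor Counter {
        private var total = 0
        func record(_ event: Event) { total += 1 }
        func current() -> Int { total }
    }

    @main
    struct Demo {
        static func main() async {
            let counter = Counter()
            // A lightweight task, similar in spirit to a tokio task or a goroutine.
            let task = Task {
                await counter.record(Event(name: "launch"))
            }
            await task.value
            print(await counter.current())   // 1
        }
    }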

Swift 5.5 didn't get all the way to Rust-style data race safety: a few checks had to be warnings instead of errors (to avoid breaking existing code), and keywords like `@preconcurrency` were introduced for importing frameworks that predate Swift Concurrency, to facilitate incremental adoption. They've also been adding more checks in each minor release to tighten things up.

IIUC Swift 6 is mainly going to turn all the warnings into proper errors and tweak some defaults so that you get proper data race protection on a default compile.

Point is, it’s totally inaccurate to say that Dispatch Queues is all that exists in Swift 5. You’ve had much better stuff for a while now (although SC still has a ton of issues worth discussing.)



> Swift 5.5 didn’t get all the way towards rust-style data race

When I experimented with it, it was trivial for one thread to interfere with another. So Swift got nowhere towards data race safety. Still stuck in the 1990s.

I know not what you mean by "Swift Concurrency". When I was doing it, all we had was DispatchQueue, which was an obfuscation of `fork`. Quite shameful really.

I think the main point is that Swift is a failure.

"although SC still has a ton of issues worth discussing" once I would have cared, but this year (decade, century) I am just very glad putting meat in my fridge no longer depends on those rouges from Apple who treated me so badly when I was sweating so hard making software for their platforms (not to mention paying them so much money). In 2024, for a company like Apple, for their flagship developer offering, why would anyone still have "a ton of issues" with it?

Apple is now an example of why popularity is a terrible metric to estimate quality of technical offerings. What a shame. How far the mighty have fallen



Okay, so you're saying you don't know what Swift Concurrency is, and then saying Swift is a failure.

Swift has async/await built into the language with many compile time guarantees of thread safety.



> Okay, so you're saying you don't know what Swift Concurrency

I just looked it up. It is Swift's version of async/await. That is a different thing from threads. I know what that is, used it a lot, because using threads was such a nightmare in Swift.

> language with many compile time guarantees of thread safety

From two separate threads you can access the same memory. No trouble (apart from crashes, memory corruption, ...) at all.

Async/await is always a bad idea, and without a garbage collector it is a nightmare (look at the mess Rust has gotten into). Whatever, async/await is no replacement for parallel programming with threads. It is a different beast.



> I just looked it up. It is Swift's version of async/await.

Since you “just looked it up”, maybe don’t make blind assertions about something you clearly don’t know very much about?

It’s a lot more than async/await. It is a way to offer compile time guarantees about thread safety (through Sendable, which is part of SC), it’s an actor model (allowing shared mutable state to be isolated, leveraging async/await for coordination so that callers can pause if the actor is currently servicing another message) and a bunch more stuff.

I explained all this in my post you replied to, maybe read the whole thing before making wrong claims about stuff you spent 1 minute looking up?



> Whatever, async/await is no replacement for parallel programming with threads.

Is it not for the vast majority of use-cases?

Sure, you can use async/await without parallelism, via a single-threaded runtime, to get just single-threaded concurrency, but with a multi-threaded worker pool, async/await-style tasks or fibers mostly cover the use-cases you'd have for parallelism, I think.

You have to make sure that you don't, e.g., starve other tasks by having no yield points in a task that does a lot of computation (if you're doing cooperative tasks, which Swift is doing, IIRC), but that's not a big one, and can mostly be solved by the runtime too (e.g. Go had cooperative fibers for a long time, until they chose to introduce preemption).
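
As a sketch of that yield-point concern on Swift's cooperative pool (the `checksum` function below is made up for illustration):

    // A long compute loop on the cooperative thread pool should yield now and
    // then so that other tasks sharing the pool are not starved.
    func checksum(of data: [UInt8]) async -> UInt64 {
        var sum: UInt64 = 0
        for (index, byte) in data.enumerated() {
            sum &+= UInt64(byte)
            if index % 100_000 == 0 {
                await Task.yield()   // explicit suspension point
            }
        }
        return sum
    }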



> Whatever, async/await is no replacement for parallel programming with threads. It is a different beast.

Have you missed Tasks and Task Groups as well? And Actors? For now, they are an abstraction over threads, and IMO a good one. It’s actors + structured concurrency, borrowing from Kotlin‘s Coroutines and sprinkling some Erlang on top. Additionally, in Swift there is AsyncSequence + AsyncStream, a (woefully incomplete) Kotlin Flow alternative.
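
For illustration, a rough sketch of the task-group and AsyncStream pieces mentioned above (the word-counting work is just a stand-in for real jobs):

    // A task group fans work out to child tasks and collects the results;
    // it does not return until every child task has finished.
    func combinedLength(of words: [String]) async -> Int {
        await withTaskGroup(of: Int.self) { group in
            for word in words {
                group.addTask { word.count }   // each child runs concurrently
            }
            return await group.reduce(0, +)
        }
    }

    // AsyncStream bridges a push-style producer into a for-await-in loop.
    let ticks = AsyncStream<Int> { continuation in
        for i in 1...3 { continuation.yield(i) }
        continuation.finish()
    }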



Async/await may or may not be a replacement for highly concurrent and parallel programming, depending on what is the execution model of the async runtime.

If Swift's model is anything like .NET's lightweight Tasks + async/await or Rust's async/await Futures and Tasks as implemented by Tokio or async-std, then it is such a replacement.



Xcode 16 rewrote much of the autocomplete behavior, and the new engine is blazing fast in terms of UI blockage compared with 15, and stable. That covers everything to do with tagging etc., not just completion itself.



I don't know if this is irony or whether you've been in the Apple ecosystem way too long.

I had to do a very simple macOS app for my personal consumption recently, and the Xcode DX is nothing to write home about. The only reason I finished the project there was that I couldn't set up VS Code quickly for a Swift/Cocoa project. I had to endure the slow compile times and the slow reaction time of the IDE UI. You make a change and keep seeing squiggly lines for a while; it's as though the UI is booting up each time. It was a horrible experience coming from daily use of IntelliJ and VS Code.

The computer is a 32 GB Apple M1 Pro. Imagine what will happen on some 8 GB i5 MacBook.



Can you get the profiler to do anything useful? Even comprehensible?

We tried and tried but there was nothing we could use amongst the blinking lights and wonderful graphics.



You can disambiguate two types with the same name from different libraries, e.g. `Factotvm.URL` and `Foundation.URL`. Do you mean something more full-featured? You are not prefixing types with three letters, if that's what you think has to be done.

I don't know if it's still the case, but there was an annoyance where you couldn't have a type with the same name as the package. But that is hardly a lack of namespaces.
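
A small sketch of the mechanism being described (a local `URL` type stands in here for a second library's type):

    import Foundation

    // A local type that shadows Foundation's URL for unqualified lookups...
    struct URL {
        let path: String
    }

    // ...so Foundation's version is reached by qualifying it with the module name.
    let local = URL(path: "/tmp/cache")                        // the struct above
    let remote = Foundation.URL(string: "https://servo.org")!  // Foundation's URL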



Objective-C had some minimal adoption outside of Apple (probably due to NeXTSTEP nostalgia), so if Objective-C managed to get some traction, Swift will probably do so too.

However, Apple's history is very much stacked against Swift becoming a mainstream language outside of Apple's platform.



You're right but this touches the hot stove.

The HN majority doesn't like hearing that Ladybird et al. might just be wandering around, even if the goal is catnip for the bleachers, and that we should be skeptical this is the year of multiplatform Swift, because it wasn't last year if you actually tried it. Or the year before last. Or the year before that. Or the year before that year. Or the year before that one.



I’m slightly more ambivalent than you about it. Swift is a nice language and has better ergonomics than C++ and I imagine a Swift codebase might find more contributors than a C++ one (maybe I’m wrong about that!)

I also think it’s separate from the dream of “multiplatform Swift”. For that you need a healthy package ecosystem that all works cross platform, Swift doesn’t have that. But a lot of Ladybird is written at a low enough level that it won’t matter so much.



I wonder what convinced Andreas Kling to abandon his own language Jakt [1] in favour of Swift.

In the long run, it would be good to have high-level languages other than Java that have garbage collection (at least optionally) and classes, and that are still capable of doing cross-platform system development. I don't know if Swift fits that bill; besides a cross-platform ecosystem (a la Java), submitting the language for ISO standardization (not just open-sourcing one implementation) would be a good indication of being serious about language support.

[1] https://github.com/SerenityOS/jakt



> In the long run, it would be good to have high-level languages other than Java that have garbage collection (at least optionally) and classes, and that are still capable of doing cross-platform system development.

C#



One of the major differences between Ladybird as part of SerenityOS and Ladybird the separate project is the use of third-party libraries. When what you are building is for fun and you build everything yourself, it makes sense to also build a language.

Ladybird as a separate project, though, has the goal of being usable in the shorter term. So, similarly to switching to third-party libraries for other things, I don't think it makes sense to spend potentially years building the language first, before building the browser.



You might be powerfully dry here, imparting a Zen lesson. If not, or if you, dear reader, don't see it: it is worth meditating on the fact that there was a language, an OS, and a web browser. Then, on the common characteristics in decision-making that would lead to that. Then, consider the yaks that have to be shaved; the ISO standardization / more-than-one-implementation point hints at this.



The problem is that Swift engineer supply is low; there's not a viable business case to learn Swift, because it's not actually viable for cross-platform development unless you have $X00 million to throw at your macOS/iOS team to build it from scratch, platform by platform (to wit, the sibling comment re: Arc Browser).

So the best case we're looking at is: Swift isn't ready yet, the next major version will be, and we can't build UI with it, so we'll put in the effort to bootstrap a cross-platform ecosystem and UI frameworks. Or maybe we'll just do our business logic in it? It's a confusing, irrational mess, even with great blessings of resources: with the $X00M Arc has obtained, they got one incremental platform after a year. And "all" they had to do was write Swift bindings for WinRT and connect them to the existing C++ engine.

All of this is easy to justify if we treat it as an opportunity to shoot for how we wish software could work in theory, instead of practice. I hope I'm wrong, but after being right the last few years, I'm willing to say out loud that it's naive wishcasting, even though that's boorish. I see it as unfortunately necessary; a younger me would have been greatly misled by the conversations about it on HN. "We've decided to write the browser in Swift!" approaches parody levels of irresponsible resource management and is a case study in several solo-engineer delusions that I also fall victim to.

It's genuinely impossible for me to imagine anyone in my social circle of Apple devs, going back to 2007, who would think writing a browser engine in Swift is a good idea. I love Swift, used it since pre-1.0, immediately started shipping it after release, and that was the right decision. However, even given infinite resources and time, it is a poor fit for a browser engine, and an odd masochistic choice for cross-platform UI.



> Problem is Swift engineer supply is low, there's not a viable business case to learn Swift because it's not actually viable cross-platform for development

The Swift business case is that in many situations native is strongly preferable to cross-platform. Excluding some startups that want to go to market super fast and consulting companies that have to sell the cheapest software possible, the benefits of native usually outweigh those of cross-platform.

For this reason now there are plenty of companies of all sizes (faangs included) that build and maintain native apps with separate iOS/Android teams. There are very good business reasons to learn Swift or Kotlin in my opinion.



> and has better ergonomics than C++

It is unfair to compare a twenty-first-century language with one from the 1980s.

Rust is the proper comparison

The only advantage Swift has is an easier learning curve, but the ergonomics of Rust are far superior to Swift's.



It’s not really a question of fairness. The existing codebase is C++, the new stuff is Swift. Hence the comparison.

I’ve written both Rust and Swift while being an expert in neither. I wouldn’t say Swift has no pluses in comparison, reference counting is often a lot easier to reckon with than lifetimes, for one. I’m actually curious what a large multithreaded Swift codebase looks like with recent concurrency improvements. Rust’s async story isn’t actually that great.



> Rust’s async story isn’t actually that great

I agree. People's perspectives differ. I abhor `async/await` in Rust. It has poisoned the well, IMO, for asynchronous Rust programming. (I adore asynchronous programming - I do not need to pretend my code is synchronous.)

But that is taste, not a comment on poor engineering!

The lack of the borrow checker in Swift is what makes it approachable for newcomers, as opposed to Rust which is a harsh mistress.

But reference counting is such a silly idea. Swift really should have a garbage collector with a mechanism or subset to do that very small part of programming that cannot be done with a garbage collector. That would have been design!

I fear that Swift is going to get a borrow checker bolted on - and have the worst of both worlds....



Reference-counted objects have deterministic lifetimes, like Rust or C++. Garbage-collected languages don't have that. Essentially, a reference counter automates some of the ideas of the borrow checker, with a runtime cost as opposed to a compile-time one.

The great thing about automatic reference counting is you can elide the incrementing and decrementing a lot of the time. Stronger guarantees, such as saying "this object is single-threaded" lead to even more optimizations.
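
A tiny sketch of that determinism (the `Handle` class is made up):

    // With reference counting, deinit runs at a predictable point: the moment
    // the last strong reference goes away, not whenever a collector next runs.
    final class Handle {
        let name: String
        init(name: String) { self.name = name; print("open \(name)") }
        deinit { print("close \(name)") }
    }

    do {
        let handle = Handle(name: "config")
        _ = handle
    }                      // "close config" prints here, deterministically
    print("after scope")   // always ordered after the deinit above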



I don’t think Apple would want to sacrifice the determinism that refcounting provides for garbage collection. iOS apps are still more buttery smooth than Android in many cases.



Ladybird exists and is funded because the main author is an extremely skilled software developer with a background in building web browsers.

He also built a totally from scratch operating system, and built a very productive and skilled community around it, which branched into building their own web browser as SerenityOS has an 'everything from scratch' policy. How is Serenity not an interesting feat of engineering?

You seem to call it a 'toy' because of where the development is at - this is grossly unfair criticism. By your measure, is all software that isn't at a 100% finished state a toy?



Why is it weird that a company that built their entire business on the web wouldn't want there to be a web browser monopoly controlled by an advertising company? Shouldn't you fund projects that will be beneficial for your company in the future?



Is there a good software-engineering resource about the complexity of a web engine? I mean, we all know that it is complex, but what are the critical areas? Performance is one, compatibility another.



This is a good post; as far as Ladybird is concerned, while they may have started as a fun project, they seem to have taken a turn towards seriousness recently.



The problem with these smaller web browsers is that you have nobody to sue if the browser turns out to leak your personal information or credentials, etc. Therefore it is better to stick with browsers made by big corporations.



I'm curious what exactly you think suing Google or Apple in such a case would accomplish?

On the (long) odds that you get all the way to a successful class-action suit, the lawyers get rich, and the class members eventually end up with a free year of credit monitoring

(we have seen this over and over again in settlements for high-profile data breaches)



Some people just get warm fuzzies at the idea of theoretically being able to sue someone if they need to. Usually these are people who've never actually looked into what it would take to even start a lawsuit.



That was true at the start, but it's most definitely funded development now and that's what will probably get them over the finish line.

If the development work went into debating, specifying, and expressing required behavior as a written spec more exactly (beyond W3C specs and towards the more pragmatic reality of what current browsers actually do), then in the very long term we could probably have engines that are AI-built [or just more easily developed by humans] from a combination of the written specs and the set of tests they need to pass.

Using AI for adversarial development (e.g. one group tries to break and hack it, the other group defends and refines) could get interesting and wasn't really an option before. Anything that's now available to reduce the human resource cost of development could make a big difference.



Itappearsthatspacesdon'tworkformewhentyping,butitotherwiseseemsthatevencommentingonhnworksinVerso.Editingappearstoworkaswell,thoughlinebreaksdon't(maybebecauseofspaces...).

Edit (with Firefox): the above text is from Verso. Logging in works (though the session is not stored across restarts), as does both commenting and editing comments. Since space can't seem to be entered, word wrapping doesn't work with text entered in Verso, though it does seem to work when there is text with spaces (eg, this edit). A cursor also doesn't appear for me, making editing a challenge.



New Reddit rendering “properly” is as useful as a chocolate fire guard. Do you want to view this content on the app? You Can Only View This Content In The App, THE APP is the WAY, JUST FUCKING DOWNLOAD THE APP!!!

At least on mobile.



Can you imagine being on the team responsible for the mobile web Reddit experience? Having to implement a feature that tells every user, repeatedly, that their work is subpar and to please use something else.



The newest Reddit - not the React version, but the Lit version that's replacing it - uses very, very modern HTML and JavaScript. I'd love to see Servo get to the point of rendering it correctly!



TIL. new.reddit seems to be a return to old.reddit in a lot of ways. It also seems really really fast unlike the current www.reddit. If they remove some annoying ads I might even be convinced to stop using old.



The only downside is that new is still being developed. Old is nice because it is shielded from their post-success incompetence. Whatever bad ideas they decide to implement will hit new first.



new.reddit seems almost... sane. Like they kicked out whoever pushed/implemented reddit aka the current version and figured out that old.reddit had a lot of virtues.



It is so crazy. I normally use "reddit.com", but a few weeks ago I followed a link to an article on "old.reddit.com" and saw I had notifications. I clicked the item and it was all notifications from days ago. So the read status of notifications on old Reddit and current Reddit isn't linked?



Most pages, logged-in and logged-out, seem to be using the Lit version for me. I don't think it contained many visual changes, so you have to look for the presence of web components.



Pretty amazing how a company can overengineer a basic feed that displays images and text (and ads). New Reddit is a dumpster fire, can't imagine another rewrite is going to fix it.



Yeah, one of the weirder things is that if someone blocks you, you are not allowed to reply to any comment in the thread (for example, if someone posts a toplevel comment and doesn’t reply to anything, and you reply to some comment a dozen layers below and have an exchange with some third party, if the toplevel commenter blocks you, you can’t reply to anything including comments left by others in response to your own comment) and you get a generic “internal server error”-type pop up with no information about why the request failed.

This is most likely a server “feature” and thus a rewrite of the frontend won’t fix it but it seems like at very least the frontend could display a sane error message (assuming the backend forwards some information about the cause of the error).



Rust can absolutely crash. It crashes ("panics" in Rust terminology) in a memory safe way but it's still a crash (please don't try to redefine the word "crash" to be more specific than it actually is).

And Rust can still have unsafe code so it can crash in memory unsafe ways too (though it is very unlikely unless you're doing things very wrong).



> And Rust can still have unsafe code so it can crash in memory unsafe ways too (though it is very unlikely unless you're doing things very wrong).

From when I grokked the code a bit (back in 2017), there was a non-trivial amount of unsafe code, especially related to the integration with SpiderMonkey (the JS engine), so it wouldn't even be particularly surprising to see segfaults in Servo, unlike in most Rust projects.



Most crashes I've seen trying this out seem to stem from panics. As an end user I would call that a crash, but from a programmer standpoint you could say it's an unexpected normal shutdown.



I just compiled it following the Linux instructions. I honestly thought Servo was better than this. Maybe there's something up with the Linux builds? I'd much rather use Ladybird in all its pre-beta glory than this.

There are two window title bars, one by the OS and one from the application. The text in the URL bar is misaligned and is shifted down by half its height. There's a black bar between the browser chrome and the web view. Entering a domain name without http or https and hitting enter crashes the entire application.

Clicking refresh spawns a new window that sort-of-but-not-really shares the same website being rendered.

Very few websites work. Anything with a cookie banner just plain breaks. I can't tell how to edit the URL bar after failing to load/loading a page. Google.com is very wonky. The search box on Google doesn't seem to take space bar for some reason.

For the websites that do work, rendering is very fast and scrolling is pretty smooth. I can see the potential, but there's a lot of work to be done.



I don't understand what the advantages are over Servo's inbuilt web browser¹. So far they look the same when opening but the inbuilt web browser is more stable (I see rendering bugs with Verso and it panics when entering a domain without the http(s):// prefix).

¹ https://servo.org/download/



Different project scopes. Servo's browser is just intended for testing. It recently got an address bar. Verso seems to aim for more features.



Does this do anything to improve the browser as a user agent? That is an agent that obeys the user over the server. None of the current browsers are user friendly and none are scriptable except by experts wielding large external programs.

It should be possible to write a simple shell script to navigate the web, to log in to web sites, to extract information. Or something like Visual Basic.



That's really an interesting case: most Puppeteer and Playwright deployments use Chrome. If Verso turns out to be faster, they might have an advantage in crawling/scraping/QA.



"Let me throw shade on this open source project that does incredibly ambitious thing X and that tons of people are devoting lots of time to, by suggesting they should instead do this other esoteric thing Y that 99% of users don't care about but that I, the entitled power user, think they should be doing instead."



I'm happy to see work being done on integrating Servo into custom browser chrome. My dream browser would be a Servo-based qutebrowser. I can only hope.



There are two simple solutions to this:

1. Translate the word to another language.

2. Get creative and make up an original name. Mixed translations, word-bashing, not-actual-words, there's a lot of options!

Okay, so the latter isn't super easy but can be a lot of fun to do!

In this case, it seems more like a play on the core dependency's name than a choice of the actual word "verso": servo -> verso.



You can say that “comment” is a French word. But it’s also an English word. Every language that we speak today draws on the vocabulary of others.



Have a read about Servo; it binds to SpiderMonkey as its JS engine.

Is there some chance that Servo could be decoupled from SpiderMonkey? If not, I don't think anyone could tell the difference between Firefox and another browser that uses Servo.



There's a lot more to a browser than the JS engine. And in general, we would want all of the browsers' JS engines to be essentially the same. I think starting with an existing engine is the most logical approach.



That still leaves you with at least the layout engine, all the DOM APIs, all the networking, the multi-process model & sandboxing, and web extensions support, all of which are different implementations.



Interesting. Kinda reminds me of the 360 Safe browser that's one of the most popular browsers used in China. It can render a website using WebKit, Blink, or Trident (Internet Explorer)



Apple makes it relatively difficult to support old versions of their OS. You pretty much have to keep an old computer around and just not update its software. So it might not have been a conscious choice, rather they just didn't go to the extra effort to support older versions.



Their build and testing platform is on Github Actions which supports macOS 12. Furthermore, Apple supports running macOS in a virtual machine or a dual-boot setup.



Stylo (CSS Engine) and WebRender (compositor) came from Servo and are still used in Firefox. I believe the Mozilla team still upstreams patches as well.



It might be the head cold I'm recovering from, but it took me about 3 passes to comprehend their tagline:

> A web browser that plays old world blues to build new world hope



I'm healthy at the moment and I'm still not sure I understand. Poetic perhaps, but a bit nonsensical.

Old world as in the past? Older technology? Older ideas? Bad ideas?

Blues, as in the musical genre? Or the feeling it conveys? Are we riffing on it here? Plays strongly suggests music, but the blues originated from specific cultural roots tied to the end of slavery (which is implied even further by 'old world blues').

New world, as in a better tomorrow, or something more akin to a new world order?

Hope I think I get.



Is there any post or article that explains what this will do better or differently compared to existing engines like Chromium or WebKit once it's feature-complete?



It'd be nice if people stopped recommending "yet another new and exciting package manager for Windows" that you have no idea whether you can trust or not.

Git, Python, LLVM, CMake, and curl all have perfectly normal Windows installers available from their own websites, and if you're a programmer who has to, or chooses to, work on Windows, it's a good bet you already have most or all of these installed, making the job of completing your bonus objective probably one, maybe two installs at most.



That was my favorite part of this (especially since the build didn't work). Whatever scoop is, it actually worked and installed those things without any complaint... might be my new favorite package manager



I’ve typically used chocolatey. I think it typically uses the regular installer for a package, just in a headless/unattended mode configured by the maintainer.



Ideally, winget would be the way to go, but I haven't had good luck with it. There are a number of limitations and it doesn't feel like a polished and curated repo. It just feels like a wrapper around chocolatey, which has its own problems.

Microsoft is the reason alternative package managers exist



I'm sure, but it's also "yet another package manager" because everyone has their own favourite package manager. It's a nit, but it's nice when a README.md goes "you need X, Y, and Z" and doesn't pretend you need a specific method for that. Tiny phrasing change, zero real world difference of course, but it's a nice little sign that the folks running a project know that you know what you're doing on the OS you're working with (either by choice, or by paycheque =)



I've been using Windows as a dev platform since it was called "DOS but don't look too closely". I know chocolatey, ninite, winget, and PowerShell's own built-in nonsense, and yet I had never heard of scoop until just now. So... that really just tells us that any application manager we think is popular, ubiquitous, and the obvious choice is still really just a niche program =(

Contrast that with brew on macOS: even non-devs know about brew.



And from all of those, which I'd heard of, I think scoop is the only one to allow a package author to just create a git repo, and publish a package that way.

Like brew does. That's why I use scoop.

Chocolatey would rather charge money for that, for some reason, and people are still willing to donate them their free time.



Genuinely interested, if the ' represents skipped letters, how does this read to you?

As an aside, if you have the sentence "This is Lewis' reply to the parent comment" the ' at the end of Lewis is used to avoid Lewis's with the extra s at the end.



As an acceptable compromise we can separate all punctuation and put it at the end of the paragraph Although some oldfashioned enthusiasts of the Chicago Manual of Style will object But I think its good to bring some fresh ideas to our old and worn orthographyevery now and again,.-…’—.



There are still zero-day exploits found in Chromium; wouldn't using this put you at a huge risk of running into malware in the wild that this browser can't protect against?



Malware generally can't just "hijack" any arbitrary browser. You have to have exploits for a specific browser / browser family / JS engine. Chromium has defense in depth techniques to try to raise the difficulty bar and to try to limit the fallout of an exploit in various parts of the tech stack, but that still requires a Chrome-specific attack in the first place. Attacks for Safari / WebKit / JavascriptCore would similarly require a different vector for the exploit (even if some techniques are common, the specific details should be quite different).

There is a newer class of generic malware that exploits CPU bugs (e.g. Spectre) - are you perhaps referring to that? If so, that's a fair concern but unlikely to matter much in practice. For Spectre itself, I believe the mitigations were applied within the major JS engines directly (or at least for v8 they were).

Anyway, security issues are best compared once there's a lot more attention on your browser. But given that a huge number of exploits depend on buffer overflows, which are simply impossible in the first place with Rust, this browser is likely to suffer mostly from architectural issues and fewer implementation issues, whereas other browsers will still have architectural issues plus implementation issues to boot that prevent them from addressing them. Yes, a newer browser likely means a more immature architecture, but at the same time there are fewer implementation issues to worry about in terms of exploiting those architectural issues in the first place.



I somewhat wonder—Firefox and Chrome are in a constant race to have the best JavaScript performance.

In general, the sites I want to browse use minimal JavaScript, prudently, if at all, just where it is strictly necessary to add little dynamic features. So, I don’t really care about JavaScript performance at all.

Optimization sometimes introduces additional complexity, which might open up the possibility of security holes (at least it seems to be the case to me, as a not-security-related programmer. I don’t know anything about security on a technical level, so I’m interested in other perspectives on this from people that actually work in those sorts of fields). I wonder if there’s room for a browser engine that ditches performance and just focuses on correctness and safety.

Rendering documents ought to not be computationally intensive, right? Advertisements of blazing fast JavaScript performance make me worry what corners have been cut.



> wonder if there’s room for a browser engine that ditches performance and just focuses on correctness and safety

Isn't this just NoScript, which breaks most sites to a degree where they're impossible to use or load?



I assume he means something like turning off the JS JIT, not turning off JS completely. IIRC iOS turns off Safari's JIT when in lockdown mode. Ladybird browser also abandoned its JIT apparently due to security concerns. JS JIT is one important example but also in general if you write your code to only focus on correctness and not performance then you will get safer code (all else being equal).



Haha, well that’s what I use now. I think it is the opposite though. I’d like a JavaScript implementation that doesn’t break any sites, but which makes absolutely no security compromises, even if that means they have to give up a lot of performance.

Sometimes, I just have to load a site that has JavaScript running. It's unfortunate, but some work sites don't work without it, etc. I'm fine with those sites being slow (I'll minimize my use of them naturally), but totally blocking them is slightly inconvenient.



Disabling (all of) the JITs is a decent approximation of this. It's very site-dependent as to how much of a performance impact it makes, but for many sites it'll be fine.

Obviously this isn't the same as making "absolutely no security compromises", but in practice most JS-related security exploits go through the JIT iiuc. Your JS will be executed with a safe interpreter, where by "safe" I mean the dispatching and basic value manipulation are going to be simple enough to be bulletproof, and also slow enough to prevent most timing attacks. The underlying implementation of all of the built-in methods is still going to be more vulnerable, but those tend to be relatively safe as compared to JIT-optimized versions of them. They also don't change much, so have been tested for much longer as compared to the JITs that tend to get refactored and rewritten relatively frequently.



There's ungoogled chromium for that.

Also doing that doesn't solve the standards problem we currently have.

Google is able to push through any "standard". And those standards "unwittingly" help them maintain their search/ad dominance or prevent competitors. For example see manifest v3 or FloC (a.k.a Topics API).

Once they gain enough traction and become indispensable, you either implement them or risk losing users.



They don't have to. But we have seen how Google can play this game of open core & official versions with AOSP and blessed Android versions with the Play Store.

There will be two classes of products: one officially sanctioned version, and the others used by enthusiasts. Apps or sites, in this case, may choose to work on one and not on the others. Imagine the new wave of "works best on IE" becoming "best viewed on Chrome variations a, b, c". It's not far-fetched, as some sites already do this.

As much as it is easy to maintain a fork, it is just as easy to give up or change path and accept upstream changes.
