(comments)

原始链接: https://news.ycombinator.com/item?id=41571606

* Swift 5.7 allows retrieving the first occurrence of a substring via `string.firstRange(of: substring)`, bypassing the older approach that required loop iteration.
* The answer to the third question explains how string indices behave and offers alternative ways to perform common string operations, which addresses the first and third questions.
* The fourth link explains the use of the `inout` keyword on function parameters, clearing up confusion about parameter modification.
* The post notes that performing basic string operations in Swift can be complex once all possible string representations are taken into account.
* It recommends the `utf8` or `utf16` views, which let a String be treated like an array and indexed normally, though with limitations around complex characters, different language encodings, and uncommon symbols.
* Three responses target specific areas of string manipulation; even so, they are considered difficult, even overwhelming for beginners.
* The author concludes that choosing the "right" way to represent the initial string in Swift may not meet expectations, noting that handling every possible string representation can be overly complex.
* The post also discusses the reference-parameter mechanism via the `inout` keyword, explaining its importance in preventing problems caused by unintentionally shared state. The author argues that adhering to this mechanism encourages deliberate coding decisions and reduces overall complexity.
* Finally, the author touches on visual layout editors versus source code, praising the former while recognizing it is currently impractical for some users, including those managing complex projects. They welcome the introduction of SwiftUI, appreciating its potential to simplify graphical interface creation.



Swift would be perfect if it wasn't dying a death by 1000 cuts thanks to the inherent conflict in its governance.

Swift is caught between two clans: the Swift Working Group™ open-source community, and the Apple corporate entity who pays most of their salaries. Both have their own incentives and their own imperfections, but you can guess who has the majority influence.

Ridiculous, permanent, tech debt such as hardcoded compiler exceptions are permanently living in the compiler codebase. Even worse, half-baked concepts such as result builders are pushed through without any real discussion because Apple wants the SwiftUI syntax to look pretty.

It's an amazing language still, but I can't see it surviving as nicely in the next 10 years if Apple doesn't learn to let go.



FWIW, that answer (to the second link, after edit) is really old, and you can do string.firstRange(of: substring) since Swift 5.7.

The top answer to your third question gives a pretty good explanation of why Swift string indices work the way they do (as well as showing nicer ways to spell a lot of the operations on them), which mostly addresses the first and third questions. It really seems that your last link is just asking for the `inout` modifier; I'm not sure why that one is especially confusing.

Obviously, there's always stuff that can be further improved, but none of these are especially onerous (once you get past the "string indices are not just integers" step, at least--for people who really just want to pretend the world uses ASCII or work with UTF8 bytes directly, string.utf8 may be a nicer interface to use).
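
A quick sketch of those APIs, for illustration (Swift 5.7+; the output comments are what I'd expect):

  let text = "Hello, Swift strings"

  // firstRange(of:) instead of a manual index loop:
  if let range = text.firstRange(of: "Swift") {
    print(text[range])                 // "Swift"
  }

  // String.Index-based slicing, the Unicode-correct way:
  let start = text.index(text.startIndex, offsetBy: 7)
  print(text[start...])                // "Swift strings"

  // The UTF-8 view for byte-oriented work:
  print(Array(text.utf8.prefix(5)))    // [72, 101, 108, 108, 111]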



The thing is, each string-related answer ended up extending String with some methods that everyone wanted it to have in the first place, and the top-voted comments are like "why do we have to do this." It also shouldn't have required multiple updates to each answer.

The time I was doing a lot of string manipulation in a team Swift project, we had to write our own wrapper that basically stored strings as arrays of printable characters because the native one was too annoying. This also protected us from all the breaking changes Apple made to strings across Swift versions.

The inout one is different. It's confusing that arrays and dicts are structs, which have different rules from regular objects, and the syntax takes some getting used to:

  func addItem(_ localArr: inout [Int]) {
    localArr.append(4)
  }
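
The call site then has to opt in with `&`, which at least keeps the mutation visible (hypothetical usage):

  var numbers = [1, 2, 3]
  addItem(&numbers)
  // numbers is now [1, 2, 3, 4]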


> It's confusing that arrays and dicts are structs, which have different rules from regular objects

As a long-time assembly and C programmer and now Swift programmer, I would say that structs _are_ regular objects, and things with reference semantics are weird. It all depends on your point of view!



The answer you link to there is from the Swift 1 days in 2014. It was absolutely true then, but array has had true value semantics since shortly after that answer was written.



I'm fine with either approach if it's easy to know which one I'm using at any particular call site. C# started the tradition of moving this information to the type level, far away from the call site. Swift has adopted this terrible idea.

The philosophy of making code look "clean" at the cost of hiding critical information in some far away place is the biggest mistake in programming language design. If code happens to look clean it should be the result of being simple rather than being cleansed for effect.

Other bad ideas in the same vein are computed properties and function builders.
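
For illustration, here's the kind of thing I mean with computed properties (made-up example): at the call site `account.balance` reads like a stored field, but it can hide arbitrary work.

  struct Account {
    var transactions: [Int] = []
    var balance: Int {              // looks like plain field access at the call site
      transactions.reduce(0, +)     // but runs this on every read
    }
  }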



In C and C++ (and many other languages) the distinction between value and reference/pointer is part of the variable declaration. In C# and Swift that information belongs to type definitions (except for inout parameters). Structs are value types and classes are reference types.

Type definitions are usually further away from the call site than variable declarations. Also, in C, accessing a member of a struct through a pointer requires the -> operator providing yet another clue as to whether you're using a pointer or a value.

In my opinion, this distinction is absolutely key. It should always be obvious at the call site. I don't think there is anything that has caused more bugs in my Swift code than getting this wrong.

Change a type from class to struct or vice versa and chances are your code still compiles when it really shouldn't because it's now completely broken.
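
A sketch of that failure mode, with a made-up type:

  final class Point {     // flip this to `struct` and it still compiles,
    var x = 0             // but the last line silently changes from 10 to 0
  }

  var original = Point()
  var copy = original
  copy.x = 10
  print(original.x)       // 10 for a class (shared reference), 0 for a struct (independent copy)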



If there are identity-having classes (reference/pointer) that may be mutable, and value types that are always immutable, then I think it can be an “implementation detail”, part of the type, not changing semantics.

If you can’t change a struct’s field, only the whole struct, then I believe it’s fine - and the compiler may decide to copy it or change it in place depending on the available context, is that not the case?



Three of those really are very specific to string manipulation, and doing it "right" (with all the possible representations of what a string can be) is inherently complex. I think Swift landed on the wrong side of defaults for this API, opting for "completely safe and correct" over "defaults to doing what I expect 99% of the time"

You can get a `utf8` or `utf16` "view" of a string, and index it like a normal array (`myString.utf8[0]` gets the first utf8 byte). But it's not going to work with complicated emoji, or different languages whose characters need multiple code units in utf16, etc. Again, I think the vast majority of people don't care about complete correctness across all possible string representations, and possibly Swift has gone too far here — as noted by all the Stack Overflow posts and clunky API

On the array-pass-by-reference, I'd argue that it's valuable to learn those semantics and they aren't particularly complicated. They offer huge benefits relating to bugs caused by unintentionally shared state. Marking the parameter `inout` is a small price to pay, and really forces you to be explicit about your intentions



Swift was designed around emojis it seems. First page in the manual shows how you can use emojis as variable names. I get why Apple wants to be clear how there are different ways to index into strings (even if this is motivated 99% by emojis like "family: man woman boy boy skintone B"), but still, the API didn't have to be this confusing or have so many breaking changes after GA.

About structs and pointers, I'm familiar with them in C/C++ where the syntax is clear and consistent. It's not consistent in Swift. And arrays don't even act like regular structs, I forgot: https://stackoverflow.com/questions/24450284/conflicting-def...



Because they rely on variadic generics, they can make type inference very slow or just fail completely, leading to the infamous and singularly unhelpful compiler error, "The compiler is unable to type-check this expression in reasonable time; try breaking up the expression into distinct sub-expressions".



Swift is the only language where I've had to fight the compiler to do its job. In earlier versions like 1.x and 2.x, it would often segfault. By 3.x it was still really slow to build. I regretted moving a project off ObjC back then.

I thought maybe that was all fixed by now, but guess not?



On paper Swift has a lot going for it. In practice it's easily the worst devx out of the modern languages. And SwiftUI is still so full of bugs and performance pitfalls I'm actually quite pessimistic about the future of native apps on Apple platforms.



To be honest, the way they are damaging their brand/products/OS just to make a bit more money is enough to be pessimistic about Apple.

But it's very true that the state of the language can be felt in their native apps, that tend to suck pretty bad recently. I still can't get over the nightmare that is the split up of iTunes; at least we knew that it was clunky because of old age, the new stuff is just bad.



Yeah there's a reason people go to all that effort with React Native to avoid writing Swift code or dealing with Apple's UI frameworks, and it's actually a reasonable approach for the majority of apps.



My main app is a cross platform Flutter app. I've considered rewriting it in Swift because most of my users are on macOS or iOS but all the prototypes I've written are actually slower even after extensive performance work and the development experience makes me want to tear my hair out.



I'm actually surprised at this because while UIKit is hard to use, at least it's fast. Though I remember the concurrency model being confusing, so you could accidentally block your UI thread.



UIKit is pretty fast although a major step down in dev velocity.

AppKit on the other hand seems to be pretty intrinsically slow and the controls are looking increasingly dated.



Odd criticism.

UIKit is the iOS counterpart to macOS’s AppKit and both are implemented as convenience wrappers around CALayers. They are also infinitely customizable. You can subclass UI/NSView and draw vector-pen style on a blank canvas or render whatever you want into a GPU frame buffer. This is how MapKit, Safari, and the Camera view are implemented.



It’s a criticism from recent experience trying to build AppKit based UI. The examples you list barely use the stock widgets.

There’s decades of accumulated cruft in Cocoa that Apple discarded when implementing iOS.



Yeah I worried about that going in too but in fact I've found it much easier to get good performance with Flutter than SwiftUI, especially for large collection views and especially on the mac.

The work the Flutter team did on Impeller seems to have paid off.



You should try to implement the iOS photos.app in flutter and see how that goes. This requires scrolling through gigabytes of photos as fast as your finger can swipe without a single hint of slowdown or a loading spinner. And it’s been that fast since.. iOS 7?

Yeah it’s not the language or the SDK that’s slow. Rather it’s inexperienced, lazy, or overworked developers that can’t/won’t write performant software.



I’ve been building iOS apps since before Swift existed. Sure like I said if you code directly to UIKit and take a little care performance is good. It’s also very fast in Flutter with even less care. Rendering images in a grid isn’t hard unless your core abstractions are all wrong.

Now try that in SwiftUI. You’ll be forced back to UICollectionView.



That’s cool. I’ve been developing on Mac before Objective-C 2.0 and iOS since the AppStore was released. Millions of downloads, featured in the store, worked on dozens of projects from video games to MFi firmware, and have been invited to Cupertino to meet with teams.

I’m not defending SwiftUI. I mostly use it as a wrapper around NS/UIKit because it’s still buggy and not as flexible.

By the way, SwiftUI is also implemented on top of CALayers just like NS/UIKit. It can be fast in theory, but you have to know exactly where the pain points are and most developers don’t know how to do that.



I don’t think it’s impossible with proper caches of smaller-dimension versions (which supposedly Apple already generates/has access to - like they are doing a bunch of processing, like object recognition, etc).



I have made a lucrative career by porting fragile, slow, bug-ridden react-naive disasters to native code bases. There is a lot of demand for this from startups that took the cross-platform shortcut and the MVP became the product.



You can make a disaster in any framework. SwiftUI is a mess, for example, and slow.

React Native took a while to mature, but with the right tooling you can ship amazing UX now.

I don’t doubt there’s a ton of crap out there.

But you’re wrong if you think you can’t make seriously great stuff with it. It’s matured quite a lot.

And the React programming model is untouched, hot reloading and dev tools far ahead, and code share is worth it with something like Tamagui that actually optimizes to each platform. If I never had to touch an ObservableObject again that would be great.



Like you don't have to know native components anyway?

In one way you centralise as much logic as you can and are encouraged to write clean code that doesn't depend on platform quirks. In the other way you... give up and just do whatever.

I can see how some devs find it hard to not give up and just write the same logic in multiple languages, great job security!



Ok but these are mainly academic research languages. Swift has the backing of the most valuable company in the world and is what they're pushing as the right way to develop for their platform.



Haskell is definitely a real industrial language!

Many of the other languages in the formally verified/dependent type space are academic, but there's government interest in things like Ada too because they don't want their planes to crash. Couldn't say how good its error messages are though.



I've seriously used Erlang for a while, and Haskell looks kinda similar. Ingenious ideas there, cool features, but in the end it's cumbersome to use. So I can see why these are niche and wouldn't consider them next to big ones like Swift or C++.



If Rust is one, yeah I have to fight that compiler but it's because it's doing its job and not letting me do invalid things. Not because the compiler has some feature "not yet implemented" or has bugs.



Also, is anyone familiar with the weirdness with tuples in Swift? All I remember is they never worked the way I expected, and for some reason something in our code specifically worked for tuples up to size 5.



Swift only got variadic generics fairly recently, and before that you couldn’t write code which was generic over tuple size. Instead you had to codegen versions for each size of tuple you needed to support.
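
For anyone curious, this is roughly what the Swift 5.9 parameter pack syntax (SE-0393) looks like; before that, you'd generate an overload per arity:

  // Generic over any number of elements, of any types:
  func tuplify<each T>(_ value: repeat each T) -> (repeat each T) {
    return (repeat each value)
  }

  let t = tuplify(1, "two", 3.0)   // (Int, String, Double)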



> Ridiculous, permanent, tech debt such as hardcoded compiler exceptions are permanently living in the compiler codebase.

A little search-engining didn’t surface info about this, could you point me in the right direction?



I get what you’re saying and largely agree, but without result builders SwiftUI wouldn’t exist, let alone “look pretty”. You seem to be devaluing the syntactic sugar it brings, which in this case makes a massive difference.



That is a fascinating take and approach to contemplating a language.

Can you think of any other languages that share a duality like Swift? I mainly play in the python ecosystem, but I am looking to branch out and really learn a compiled language. What you wrote about Swift makes sense and would be concerning if I had recently picked it up.

"Yadda, yadda..." regarding picking the right tool for the job aside, I don't want to waste time on a language that can be usurped by a giant corporate borg anytime they see fit.



C# for Microsoft, and for a long time people were afraid of Amazon's influence on Rust.

But the reality is that many major languages already have very outsized corporate influence. Either at the language level or the compiler level.

Swift is open source and has been separating from Apple ownership as of this year.



> I am looking to branch out and really learn a compiled language

Tip from a friend - just learn C++. It's not sexy, it's not new-fangled, but it's rock solid, runs half the world, and you will never struggle to find a job.

You'll also understand what it is that all of the new-langs are trying to improve upon, and will be able to make an educated judgment about whether they succeed or not.

A good resource for modern C++ (albeit it seems to be down rn?) is https://www.learncpp.com/. I'm not affiliated, it's just good.



I’ve been meaning to do this for years, and just played around with rust a bit (liked it, but the wrappers around some c++ stuff i wanted to use were half baked). Learning rust, there was this “rustlings” thing [0] that was a set of guided exercises to go alongside the rust book. Fastest I’ve ever picked up a language, thanks to that. Do you or anyone know anything similar for c++?

[0] https://rustlings.cool/



C++ has a lot of things I would call new-fangled, in addition to many old ways of doing things, with no good ways to settle on which iteration to use so devs can avoid learning all of them. And some things simply require templating nightmares to work at all.

I would also not use "rock solid" in comparison to how easy it is to hit undefined behavior.

Used all over and easy to find jobs, yes.



C++ gives you a garage full of tools and lets you decide what to do with them. Ultimately, it does this because the years have shown that different people need different tools for different use cases. Few people need or use them all at once. But when you do need a specific tool, C++ has it ready for you. Some consider that a con, I consider that a pro.

I find that a lot of the newlangs take the view that since most programming only uses 20% of the toolkit, they can just dispense with the remaining 80%. Which is great, until you discover that virtually every sophisticated project needed one of those things from that remaining 80%, and they all needed a different one. A nice language for writing 'Hello world's isn't going to cut it. And so either the language insists on its simplicity and stagnates, or it just reinvents all the complexity of C++.

At which point, you were better off just taking the garage full of tools that you already had, rather than going all in on some newlang, getting blocked, and stalking a github ticket for years waiting for the lang to get that feature you need. (If you spent the time in C++ instead, its full garage wouldn't look so imposing over time!)

What's the famous quote? 'There are only two kinds of languages: the ones people complain about and the ones nobody uses.' :P

Re generics, aren't C++'s virtually the same as Rust's? Especially now that C++ has 'concepts'?



There's a lot of redundancy and things you probably should never use in C++, though. It's not complexity that needs to exist other than for backwards compatibility.

> Re generics, aren't C++'s virtually the same as Rust's? Especially now that C++ has 'concepts'?

I'm not worried about generics when I talk about template nightmares, that's more about rvalue and const overloads and vararg templates and careful manipulation of SFINAE, all coming together.



Something I’ve been curious about recently, is how did Linux get away with straight C for so long, considering how complex of a project it is. Did they end up reimplementing a bunch of C++ features?

Actually, regarding sophisticated projects, there’s quite some complicated projects that succeed without C++ power, like Postgres and Python.



I didn't mean to imply, if I have, that C++ is always and in all circumstances the best choice for any given software project.

The question was about the first compiled language someone should learn, and for that, C++ is great. It's going to cover most of the use cases for compiled languages, while providing relatively familiar abstractions and semantics.

C is fantastic when you need to eke out every single cycle of performance, which is why it's a great choice for Python and Postgres. But you do this by effectively writing assembly instructions, which is why it's a terrible choice for someone coming to compiled languages for the first time.

C++ gives you equivalent performance to C, for a fraction of the effort, in about 90% of the use cases. For the remaining use cases, C is a better (and often only) choice, but no one who is only learning compiled languages for the first time is anywhere near being able to write that kind of C code. (After a few years of C++ they'll be in a much better place to do so!)



I use C++ at work but am glad I didn't learn it before. Yes it's a good language for what it's made for, but there are so many features that anywhere you work, you will use it differently from how you used it before. Better to learn it on the job imo.

Just getting good and greasy with Python and JS with all the typical libs has been more rewarding for me. Nobody taught me, but it was useful.



> Tip from a friend - just learn C++.

Is that good advice for certain domains only? For example, you likely wouldn't want to use C++ for web server backend? You could, but may not want to.



Definitely. Use the right tool for the job.

But if you're looking to learn a compiled language - presumably because you want to write applications, games, or systems - then C++ is a great one to learn.



I can certainly think of a web browser that has that same conflict of interest... in fact we are actually right in the midst of a "leopards ate our face" moment as ad blocking becomes undesirable to Google in Chrome.



You can apply the same complaint to the Java and .NET ecosystems, both doing quite well, despite not everything being open source, or being managed by entities that FOSS folks would rather not deal with.

"Oracle, IBM, Azul, AWS, Google..... this and that."

"Microsoft this and that."



I was always puzzled by the SwiftUI syntax. Thanks for pointing me to result builders, I understand better now. I can't help but think the whole thing could be re-implemented with macros now? (result builders appeared in 5.4, macros in 5.9)



Apple never set out to create Swift. Chris Lattner worked on it for fun in his spare time, and when he realized it could go somewhere he went through the channels to get it turned into an official project. With that being said, it was tailor-made to address important issues for Apple developers. For example, it has great interop with Objective-C, which was a requirement to get it adopted, first at Apple and then with the wider community. It is also built with safety and security in mind, removing lots of undefined behavior and adding null safety. It is also statically typed while JavaScript isn't. There are a whole host of other goodies that make it better for Apple developers than just adopting JavaScript. Chris Lattner has lots of talks where he goes over the reasoning.



I'm still fairly new at Swift and like it "OK" so far. One thing that I find particularly annoying however is how you very often run into "Compiling failed: the compiler is unable to type-check this expression in reasonable time; try breaking up the expression into distinct sub-expressions" when using SwiftUI...

It's just shocking to me that the compiler is asking me to help it out.
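
For what it's worth, the workaround is usually mechanical: pull pieces of the expression out into explicitly typed constants. A contrived sketch (not from a real project):

  let a = 1.0, b = 2.0, c = 3.0

  // One long chain mixing literals and operators can trip the type checker,
  // especially inside a SwiftUI body:
  // let value = 1 + a * 2 + b / 3 - c * 4 + a - b + 5 * c - a / 6

  // Split into typed sub-expressions, each step is trivial to check:
  let first: Double = 1 + a * 2
  let second: Double = b / 3 - c * 4
  let third: Double = a - b + 5 * c - a / 6
  let value: Double = first + second + third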



I think SwiftUI is actually causing Swift a lot of reputation damage. I rarely see the issues common to moderately complex SwiftUI views when writing Swift with code-only AppKit/UIKit, for a CLI program, etc.



I jumped directly into SwiftUI for a few macOS apps I published, and the weird performance issues / UI lag / Compose complexity for even simplistic apps were hard to navigate, even after a year.

Yet rewriting components in AppKit/UIKit feels regressive since even Apple's docs migrated to SwiftUI, but sometimes that was the right answer to regain some control over what was going on.

Though some of that was limited to macOS which seems to get a lot less attention from Apple than iOS, or because you're naturally building more complex layouts on macOS. But it always makes me appreciate how much better the kit is for building web frontends.



Most of the layouts I've built on macOS are actually simpler than their iOS counterparts because the mode of interaction is so different. Getting data onto the screen in a somewhat natural way is usually less of a contorted process because one doesn't need to deal with a software keyboard, a narrow viewport, lots of scrolling, etc.

The problem with SwiftUI on macOS in my opinion is that it's just not well-suited to the types of layouts common on desktop and especially macOS. It's best at uniform grids and lists which is more of a mobile thing. On macOS you want things like visually balanced whitespace, optical centering of labels+controls in forms, etc which are often rather arbitrary and exactly the thing that SwiftUI is worst at.



Eh, I'm not too bothered by it. SwiftUI has its issues but I've put the result builder feature powering it to use for things vastly more simple than SwiftUI, for which it works very well. It's a net positive overall and I prefer it to having to rope in libraries for every little thing.



I've heard about SwiftUI just some months ago, and was initially excited about the demonstration in WWDC, so I started to pay more attention to it. Now I've been reading comments on how it doesn't handle well more complex UIs, but haven't really read anything in depth. Can you recommend any resources on that?



I use it full time, I’d say if you’re targeting iOS17+ you’re probably good. On macOS it’s completely busted still and I wouldn’t bother with it. The future of mac apps looks even bleaker than it did before.



SwiftUI errors in general are mostly unhelpful.

It will go so far as to suggest a change in a thing only for the developer to find out the cause was completely unrelated or even in a model in a different file.

Helpful error messages for SwiftUI would go a long way



In my case, Metrowerks CodeWarrior student edition in the late 90s, when I was going though "C for Dummies" in highschool.

Miss a semicolon, and it would flag every line after, but not including, the one with the error.

ResEdit was better in the 90s than SwiftUI was in Xcode last year. Hoping to find that standard will be re-achieved when I install the update…



Admitting SwiftUI to have been a mistake and reinvesting back into AppKit and UIKit would go a long way.

In what other industry does anyone use text to create visual layouts? We still do drag and drop to design the layouts anyway, followed by re-creating the drag and drop version... using text...

It's all quite maddening if you try and make sense of it.



I think a text-based, declarative rendering API is the right choice. This approach has been vindicated on the web.

But the actual implementation of SwiftUI is terrible. Long compilation times, cryptic compiler error messages, bad performance, and a baked-in two-way binding observable state API that's great for code snippets and WWDC talks but makes building a coherent state model in a real app very difficult add up to a big mess.



iOS and macOS had a visual editor forever. Interface Builder was a really interesting take on visual layout, instantiating real objects in the interface and serializing them into nib / xib / storyboard files ready for use in your app.

Most developers I know in the ecosystem tried and tried to like the visual editor, but ended up falling back to code. Why? Source code is pretty great. It’s easy to read, write, share, diff, and organize.

SwiftUI has, again, a very modern and interesting take on visual editing. The code is the truth, but there’s a rich, realtime preview canvas that makes roundtripping possible. For reasons, it’s unfortunately all but unusable for a lot of folks with complex Xcode projects, but it’s a very good idea.

In summary: Visual layout editors and text are both pretty great. The devil’s in the details.



I find that Interface Builder, while not great for editing XIBs on iOS and storyboards on either platform (it's slow and buggy in those cases), is actually still fairly pleasant when working on Mac XIBs. My go-to is pure code UIKit on iOS, but I still reach for XIBs frequently for Mac stuff.

If IB's performance and stability issues could be fixed, I think its biggest problem is the near-unintelligible spaghetti XML that it generates for XIBs and storyboards, which is a huge pain for VCS and frequently a cause of merge conflicts (which must be manually corrected). If they just cleaned up the XML output or switched to something more friendly to manual editing, that'd help a lot.



> the compiler is asking me to help it out

Yeah, this sucks, but it's also mind-boggling. The SwiftUI body result builder returns "some View" which means the compiler has to infer the specific type that the function produces, as a complex combination of whatever different if/then and while loops, along with their transitive function applications and closure contexts (where the type inference can be outside-in instead of inside-out).

Then layer in the @Observation macro magic for observing data types and re-building views when data changes, Swift Data pulling with fully typed predicates from a data store automatically replicated across devices...

It's like rocket thrusters from every angle, and pretty easy to start tumbling off the happy path.
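
A trivial illustration of the inference work involved (hypothetical view): the result builder has to infer an opaque type such as `_ConditionalContent<...>` from the branches, and every extra branch, modifier, and closure multiplies the search space.

  import SwiftUI

  struct StatusView: View {
    var isOnline: Bool

    var body: some View {          // concrete type is inferred, never written out
      if isOnline {
        Text("Online").bold()
      } else {
        Image(systemName: "wifi.slash")
      }
    }
  }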



Swift 6 is a major leap forward for cross-platform batteries-including development IMHO. Foundation is the big win here, because it provides so much out of the box.

Swift 6 is the first version with enough low-level improvements and cross-platform capabilities to make me curious if the Swift team is trying to aim for long-term replacing C, C++, Rust, Zig, etc.



Long-term replacement of C, C++, and Objective-C on Apple's ecosystem has been on Swift documentation as mission statement since day one.

I don't get how people still get surprised by this.



It’s too slow. I can’t stand waiting for builds on what I would call a medium sized project.

I haven’t given 6.0 a full effort yet, but a couple test compiles without any tuning showed only incremental improvements.



Even if it does, it isn't like using the NDK and going through JNI for anything useful outside game development is that great an experience.

People keep doing that because I guess their hatred for Java and Kotlin is higher than the pain of using the NDK.



I feel like the whole issue with c++ is that in order to create a language able to do everything you more or less have to include a feature set that is 'unharmonious' to say the least.

To me it kind of feels like swift's place here is going to replace the subset of c++ that apple is mostly interested in. But not necessarily the c++ that they aren't and then likely not the c++ that rust et al are able to replace (although I guess we'll see what they have in mind as time goes by).

I suspect they'll be disappointed if they try to replace c++ in totality. As the end result will likely not be particularly habitable.



> I suspect they'll be disappointed if they try to replace c++ in totality.

Why is that? That is their aim, I think they’ve made it clear.

They’re writing all-ish new code in Swift (not always practical in existing code) from what I understand. They’re already using it for the code that runs in places like the Secure Enclave or other embedded processors.

Can it replace C++ 100% everywhere today? Probably not. I don’t know enough to know why. But that absolutely appears to be the goal.



Been using Swift since the pre-release announcement that it existed.

O'Leary's Law of Swift Comments on HN, much like Betteridge's Law of Headlines, says the answer to Swift speculation on HN is always no.

If there is any global aim, it is to migrate internal teams still working in ObjC/C++ to Swift.

If this sounds close-minded, it's worth talking with an iOS developer 1:1 and get info on the day-to-day experience of SwiftUI 5 years in, and concurrency/actors 3 years in. They have a lot of fires to fight to keep the house standing.



> Apple is not trying to potentially attempt to aim for maybe replacing every other programming language with Swift

Apple's stated goal is to make Swift a viable language up and down the technology stack, which goes way beyond Objective-C. They are actively working on making Swift viable in baremetal environments, firmware, drivers, etc. IIRC they even referred to it as a C++ successor language at WWDC this year.

I agree that they're not trying to "replacing every other programming language," but they're investing in it becoming a viable language choice in most environments.



I think it makes sense for them to want it to be viable all over the stack, and it directly benefits them as well. I am assuming a lot of the motivation for embedded Swift is being able to use it in their own stacks. I think even the presentation at WWDC gave examples of it being used in some pieces of hardware like the AppleTV?



I think the goal of Swift on the server, or embedded Swift, is mainly to offer Mac and iOS developers a way to write everything in the same language.

Right now, my Mac app depends on a few simple web services that are written in a different language. It would be neat if those services could be written in Swift, so that I could just use the language I already know instead of relearning how to decode JSON in Python/Ruby/PHP.

Swift on the server doesn't have to become widely used. As long as there is a simple way to write basic web services, it is useful.



> It would be neat if those services could be written in Swift, so that I could just use the language I already know

Then somebody else would have to learn Swift even though they write in a different language: Android developers (Kotlin/Java), React Native devs (JS/TS), Windows devs, Linux devs, etc. As long as Apple doesn't invest more in official cross-platform tooling, Swift is not gonna be mainstream (even though I like Swift). They have to bless other competing platforms.



I was more thinking of smaller teams or individual developers that are Apple only.

If you have a bigger team, then it doesn't matter as much, because you have different people who do the iOS app, people who do the website, people who do the back end etc.

Swift on the server doesn't have to go mainstream to be useful. If you just need some basic web service that syncs highscores or verifies in-app purchases then it would be neat if you could write that in the same language as the app itself.



> They are actively working on making Swift viable in baremetal environments, firmware, drivers, etc.

I don’t think Swift has a place in these niches FWIW. Writing low-level Swift code is very verbose and unnatural. Personally I just don’t think it’s practical to have a single language excel across the stack.



Do you have any evidence they’re not targeting it as their main everywhere language?

It’s already been used in their libraries, their OSes, and even firmware on embedded processors.



I think this conversation has a lot of parallels to their public views on SwiftUI. Are they working on it? Yes. Will it maybe end up being what they consolidate around sometime in the future? Maybe. Does that mean you should believe them when they say it’s the biggest thing you should invest in right now? No.



They’re pretty clear on both. UIKit, AppKit, and Objective-C(++) aren’t getting new features except in rare cases.

This is the way forward. And they’re dogfooding it. Even in some of their embedded processors.



> they even referred to it as a C++ successor language

Nice!

I'd gently warn against parsing that too closely: having been in the community and the weeds starting in 2014(!).

ex. the cross platform Foundation was announced and open sourced in 2016.

I'm sure a lot of things were upgraded or made more consistent and it's better etc., but it's painful to remember what was expected and communicated at the time, and how much later it is.



There is code in the kernel to enable treating types like `os_log_t` and Clang blocks as Objective-C objects higher up the stack, but the XNU kernel itself is almost entirely C and C++.



At the time they thought it would be too slow. In retrospect it would've been fine, I think. IOKit is a pretty strange subset of C++, but at least it doesn't use all those templates.



The most batteries included multiplatform language right now is _by far_ Kotlin and nothing else is remotely close. The ecosystem is amazing, the language itself is awesome and follows sane rules, and you can choose between native code, JVM, WASM, or JS-IR, or all of them as backends to compile to (for the exact same code) depending on your use case. Compose Multiplatform is also wonderful and by far my favorite UI library now and I’ve shipped a fair bit of code with it. It’s a single UI library that lets me share my UI across every platform (including iOS and Android), and even seamlessly interop with native widgets on platforms like iOS if I need to, like the camera viewfinder for example.

Kotlin’s real strength is the compiler’s ability to target as many different backends as you want and cross compile to any platform you want. I have an app in production that, save for about three lines of Swift declaring the view controller for iOS, shares its entire codebase in Kotlin between Android, iOS, Mac, Linux, and Windows. I could add a line to the build file to add web as a target and turn it into a web app.



The comment you are replying to was focusing more on the low-level features of Swift for systems programming (hence comparing it to systems languages). Kotlin is cool but it is not a system programming language and the native code compilation from Kotlin is not aimed to be.



That earlier comment didn’t actually mention “systems” at all (although it does include “low-level”).

I feel like “systems programming” is getting increasingly ill-defined these days, anyway. If Kotlin and Java aren’t allowed, how about Go? Are you ruling out all languages with garbage collection?



A basic litmus test I use for systems programming languages is: “can I imagine the Linux kernel including this language one day?”

Currently that’s C, Rust, and some assembly.. obviously there are more languages used for systems programming tasks too. Memory management, concurrency, and low-level access are also super important in a system language. How would you define a systems language?



C++ would sob in a corner, but the Sob static object was instantiated before the Tears static object because the link order changed and so there's a dangling reference and nothing runs any more.



> That earlier comment didn’t actually mention “systems” at all

It doesn't really need to mention "systems" specifically, since the comparison to other common systems languages, in addition to the words "low-level" implicitly imply systems programming languages.

Usually, languages with GC will not be considered contenders for systems programming.



I think systems programming, like system tools or CLI tools for example, can also be done with a garbage collector, e.g. in Go or OCaml. For low-level development you would usually expect the program to manage memory itself, not some garbage collector. This is not black and white; some approaches, like reference counting, might be suitable for some low-level development. Or in some languages you can choose, like Nim, D or, if I understand correctly, now Swift?



> The most batteries included multiplatform language right now is _by far_ Kotlin and nothing else is remotely close.

I've been using C#/.Net lately and I've been very impressed. Very large ecosystem and community. Perhaps larger than Kotlin. And you are not stuck with one IDE from one company.

Microsoft also has dotnet MAUI that sounds similar to Compose Multiplatform. I have not used neither so I can't speak to any strengths or weaknesses in the UI libraries.



I’ve gotten the impression that MAUI might be a dead-end, with MS pushing Blazor instead.

I’m amazed people like C# so much. I think it really shows its age when you compare it to Swift or another modern language.

Some things I’ve been frustrated with:

- throwing isn’t a contract made at the function level. Anything can throw, and you can ignore it if you want. And the APIs are inconsistent because of it

- nullable types and default values are weird and seem like two solutions to the same problem

- Blazor bindings are very boilerplate heavy

- hot reload doesn’t work most of the time, and iteration times are bad compared to every other stack I’ve used



Kotlin isn't bad, but having written plenty of both it and Swift, I'd rather write Swift if I had the choice. Their syntaxes share a lot of similarities, but subjectively I find that Kotlin is less ergonomic, not as conducive to code that reads smoothly, and is finicky about odd details in comparison. I'll take SPM over Gradle any day too.

On the more minor side of things, Swift's built in JSON serialization is super handy. It's really nice to not have to increment the dependency counter to parse JSON and makes spinning up projects that much faster.
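
For example, decoding JSON is just Codable plus JSONDecoder, no third-party dependency (types and values made up):

  import Foundation

  struct User: Codable {
    let id: Int
    let name: String
  }

  let json = #"{"id": 1, "name": "Ada"}"#.data(using: .utf8)!
  let user = try! JSONDecoder().decode(User.self, from: json)
  print(user.name)   // "Ada"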



I've been using Flutter and Dart for multiplatform apps, as I found Compose Multiplatform to be far behind Flutter thus far in support. Dart also compiles to a VM, WASM, JS, and native code, although not the JVM but I don't know how useful that is if you already have the previous three.



> The most batteries included multiplatform language right now is _by far_ Kotlin and nothing else is remotely close.

Kotlin is nowhere close to Java in this comparison. For all intents, there is just one realistic IDE for Kotlin. And realistically only one build system. And the community is very small. Java, by comparison, has many IDEs to choose from, many build systems, and a very large community to answer questions.




With safe concurrency and typed throws, Swift is starting to look a lot like a friendlier Rust to me. Honestly pretty excited to take a look at it again, though I doubt it will become my daily driver due to the smaller package ecosystem. Hopefully cross-platform Foundation is a step towards improving that though.



> Swift is starting to look a lot like a friendlier Rust to me.

That’s what I thought, and I rewrote my CLI util in Swift. Ran great on macOS; tried to build for Windows and found out there’s no well-maintained, actively developed HTTP server for Swift on Windows.

Don't let these wooing crowds fool you.



> story for web development

Under the hood, Swift-NIO and async Swift is a pretty powerful basis for writing performant servers. Aside from Vapor, there are other small/fast containers like hummingbird.

Not mentioned (surprisingly) is Swift support for wasm/wasi, for deploying code directly in the browser.

Also, "some say" that macros could revolutionize both static and dynamic generation by moving a good portion of site generation to compile time. I'm not aware of any libraries realizing that promise yet.

Finally, Swift concurrent actors have supported distribution for some time, so you can have all-Swift distributed systems, where the client code works with both local and remote servers.



You can't compile for Linux from Xcode (the de facto IDE for all things Apple) and all web dev runs on Linux.

If you like having an IDE instead of scrolling multi-page compiler error dumps in your terminal window - this is a complete non-starter.

The leading Swift web framework (Vapor) suggests you use Docker to build for Linux. I gave it an honest try - their empty starter 'hello world' web server takes more than a minute to compile. Ok, but surely it'll be faster after I make a one liner change? No - their docker workflow has 0 compiler caching - you'll be waiting more than a minute every time.

Complete non-starter.

I ended up installing a VM, installing the swift compiler and that only takes 2-3 seconds to re-compile a 1 liner change (in a print statement, in an empty project, lol). Consider me very deeply unimpressed.

By comparison - a visual studio code + docker + python/ruby/javascript setup is a well oiled, working machine.



Is it possible to write code in Xcode, press compile, and have the debugger show me where an error is when compiling for Linux?

If yes, please show me the way because I've failed and I've given it an earnest try.



For fast web servers, you could use .NET, especially if you care about Windows. It gives you good ecosystem and consistent experience across all platforms. Even FreeBSD support has improved considerably as of lately. It is already built on top of what Swift, Java and others call "NIO". In .NET it's just "SocketAsyncEngine" users will never know about unless they look for it :)



Swift does not use a virtual machine or garbage collection; it competes more with C++ and Rust, and if Apple is serious about pushing it cross-platform that's definitely a welcome move. In fact, I can't wait even though I have never programmed in Swift. The main point is that it's memory safe, and seems much easier to code in than Rust.



Man, the term must have changed since I was in school; I thought garbage collection was a much more general concept than a specific tactic to achieve this end of automatic memory collection. Pity, it was a useful term.

It's worth noting many others also consider automatic reference counting to be a form of gc, albeit one with different strengths and weaknesses than stack- and heap-scanning varieties



Maybe he should then read a book on garbage collectors that all start with ref counting..

Also, is it “pause execution and clean up” together? As ref counting obviously has to clean up, that’s the whole point - and it actually does so by blocking the mutator thread (the actual program written by the user). Then we didn’t even get to the point where syncing counters across threads are possibly the slowest primitive operation a CPU can do, so if we can’t know that an object will only ever be accessed from a single thread, ref counting has plenty shortcomings. Oh also, nulling the counter in case of a big object graph will pause execution for considerable amount of time (particularly noticeable in case of a c++ program exiting which uses a bunch of shared ptrs)



Perhaps? Most scenarios that explicitly involve .NET's GC vs Swift's ARC display much better performance of the former, to the point where the fact that ARC does not have GC pauses does not help if the whole things is multiple times slower, in many ways it's like Go's """low-pause""" GC design discussions that completely ignore allocation throttling and write barrier cost.

Swift lacking proper performant GC is a disadvantage. Upcoming features solve it by likely enabling more scenarios to sidestep ARC, but their impact on the Swift as a whole, and user applications that use them, is yet to be seen.

It's important to always remember - there's no free lunch.

I'm sad that Miguel de Icaza seems to have a bone to pick with C# nowadays, but it's not surprising given Xamarin story.



> Perhaps? Most scenarios that explicitly involve .NET's GC vs Swift's ARC display much better performance of the former

By which you mean "less CPU cycles on a desktop machine with plenty of memory"?

That's not when ARC is more performant; it's better on smaller devices that are under memory pressure and have swapped out some of your memory. In which case you have to swap it back in to go scan for pointers. And if you're a low-priority daemon then you evict higher priority pages in the process.



Perhaps? You assume GC takes unreasonably more space. It's purely a function of a tradeoff between running it more frequently, tuning heap sizing algorithms, choosing to run them as part of allocation calls on the same thread, sacrificing throughput in the process. GC can be more compact than what you assume. Modern good GC implementations are precise and don't have to mark dead GC roots as live, even within the scope of a single method. .NET and I assume Java GC implementations work this way - that's what "precise" means in "precise tracing GC".



It's not that it takes more space, it's that it has to read memory more often. Not all memory pages have the same cost to read.

Most memory swapping on most people's home computers is from web browsers for this reason; it's part that everyone uses them, but it's also because they're running JavaScript. And they're pretty well tuned, too.



> it's that it has to read memory more often

Wait until you learn about "reads become writes with ARC" :)

ARC as implemented by Swift, on top of ObjCs retain and release, is design that has an advantage in being more simple, but at the same time worse at other key aspects like throughput, contention, memory traffic and sometimes even memory efficiency. Originally, Swift was meant to use GC, but this failed because Apple could not integrate it well enough with existing Objective-C code, leading to a very crash-prone solution.

Also, JavaScript has nothing to do with the lower in abstraction languages discussed in this chain of comments.



You're lecturing me about my job here. I don't need to learn nothin'.

> reads become writes with ARC

That's not a big problem (it is a problem but a smaller one) since you can choose a different tradeoff wrt whether you keep the reference counting info on the same page or not. There's other allocator metadata with the same issue though.

A more interesting one comes up with GC too; if you're freeing all the time, everyone compresses their swap these days, which means zeroing the freed allocations is suddenly worth it because it compresses so much better.

> Originally, Swift was meant to use GC, but this failed because Apple could not integrate it well enough with existing Objective-C code, leading to a very crash-prone solution.

It was Objective-C that had the GC (a nice compacting one too) and it failed mostly for that reason, but has not come back because of the performance issues I mentioned.

> Also, JavaScript has nothing to do with the lower in abstraction languages discussed in this chain of comments.

Oh, people definitely want to use it in the same places and will if you don't stop them. See how everyone's writing apps in Electron now.



> A more interesting one comes up with GC too; if you're freeing all the time, everyone compresses their swap these days, which means zeroing the freed allocations is suddenly worth it because it compresses so much better.

Moving GCs solve it much more elegantly, in my opinion, and Java is just so far ahead in this category than anyone else (like, literally the whole academic field is just Java GCs) that not mentioning it is a sin.



If your point of reference are Objective-C and Swift only, and you have not looked at how .NET's or Go's (which makes very different tradeoffs w.r.t. small memory footprint) GCs work, it might be interesting to re-examine prior assumptions in light of modern designs (I can't say Go is modern per se, but it is interesting nonetheless).

Also, .NET tends to heavily zero memory in general, as the spec dictates that fields, variables, arrays contents, etc. must be initialized to their default values before use (which is zero). Compiler can and will elide unneeded zeroing where it can see, but the point is that .NET's heaps should compress quite well (and this seems to be the case on M-series devices).



It has nowhere near the performance characteristics of those languages. It could, but it doesn’t. Look up a variety of language benchmarks. It’s typically ranked around Python/Javascript. You can get as fast as C but the code is very atypical.



The thread you just posted has a bunch of posts indicating this was not the actually the same program in Python and Swift; further, the Swift version was written poorly. Plus, the graph in the final post shows whatever Swift version someone ran tests on as much faster than Python.



> garbage collection

Reference counting is garbage collection, and it performs significantly worse from a throughput perspective than tracing GC, which is possibly the most common metric for web server type workloads.

It really is not nitpicking, we should just really use tracing GC when we mean it.



Which is reference counting, not garbage collection. Ref counts free when count = 0. Garbage collection scans all object pointers and looks for loops / no missing pointers.



Of course reference counting is not tracing garbage collection. I never said it was. The comment I replied to claimed reference counting was not garbage collection at all and seemed to think tracing garbage collection was the only kind of garbage collection. Reference counting and tracing garbage collection are two different types of garbage collection.



Usually, it's the other way around: https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

binary-trees is almost completely dominated by the time spent in allocator code, and stresses its throughput. This benchmark showcases how big of a gap is between manual per-thread arenas, then tracing generational multi-heap GCs, then ARC and more specialized designs like Go GC. Honorable mention goes to BEAM which also showcases excellent throughput by having process-level independent GCs, in this case resembling the behavior of .NET's and OpenJDK GC implementations.



A tree is indeed a bad fit for RC; so is anything else where you have multiple references to something but know there is a single real owner.

I'd suggest keeping strong references to all the tree nodes in an array, then having everything within the tree be unowned. Basically fake arena allocation.

Actually, the way it's written:

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

is a common way you see toy data structures code written, but it's inefficient (because pointer chasing is slow) and there's better patterns. If you use the arena method above, you could use indexes into the arena. If not, intrusive data structures (where the references are inside Node instead of inside Tree) are better.
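
Roughly what I mean by the arena approach (sketch only): nodes live in one array and point at each other by index, so the array is the single owner and traversal touches no reference counts.

  struct Tree {
    struct Node {
      var value: Int
      var left: Int?     // index into `nodes`, not a reference
      var right: Int?
    }
    private(set) var nodes: [Node] = []

    mutating func add(_ value: Int, left: Int? = nil, right: Int? = nil) -> Int {
      nodes.append(Node(value: value, left: left, right: right))
      return nodes.count - 1
    }

    func sum(from index: Int) -> Int {
      let node = nodes[index]
      return node.value +
        (node.left.map { sum(from: $0) } ?? 0) +
        (node.right.map { sum(from: $0) } ?? 0)
    }
  }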



Pointer chasing is irrelevant here. It takes

Please read https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

The requirement to implement the same algorithm with the same data structure is what makes this benchmark interesting and informative. Don't tell me "allocate parent object A, allocate child objects C, D and E and assign them to A's fields, then allocate array F and G, and assign them to D's fields" isn't the bread and butter of all kinds of application logic, something that this benchmark stresses.



Some CPUs are good at it, but most aren't. (Apple's are.)

But that's not the actual issue; the issue is that pointers are big (8 bytes) and indexes are smaller, so now you can fit more in the cache. It would also help GC because it doesn't have to trace them.

Also, I don't recommend intrusive structures merely because they'd be better for language games. I think they're better in general ;)



> But that's not the actual issue; the issue is that pointers are big (8 bytes) and indexes are smaller, so now you can fit more in the cache. It would also help GC because it doesn't have to trace them.

Please read 'binary-trees' description and submission rules (#2). You are missing the point(er).



Isn't Kotlin based on JVM and Swift is natively compiled? That's a pretty significant difference and I'm not aware of any "to native" compiler for Kotlin like the NativeAOT approach exists for .NET...



There are in fact two "to native" compilers for Kotlin, the one for Kotlin only is called Kotlin Native but you can also use graalvm native-image to compile any JVM language to native.



Edit: Kotlin is perfectly fine for _just_ web-servers, Vert.X is great. On systems programming, read below:

All JVM languages are by definition not viable for this domain. The object-oriented and heavily abstracted nature of the underlying runtime implementations prevents their effective use in systems programming, due to the lack of fast FFI and structs (particularly with custom layout), and the ecosystem's historical aversion to low-level features.

Kotlin native does not count because presently it has 0.1-0.001x performance of OpenJDK, it is that bad, and I assume is subject to the common subset of features that must also be expressible with JVM.

.NET, especially with compilation to native statically linked binaries (NativeAOT) is an option, and I believe, due to ecosystem maturity as well as very heavy focus on performance in all recent .NET versions as well as continued improvement of low-level features (portable SIMD, byref pointers with simple lifetime analysis, static linking with C/C++/Rust/etc.), it is a strong contender. Unlike Java, C# has great systems programming story, after all, it was influenced as much by C++ as it was by Java, sadly many only ever think about the latter.

However, I'm looking forward to Swift 6. Once it is out, I'd love to see it offer more opportunities at ensuring static dispatch and generic monomorphization (in .NET, generics with struct arguments are always monomorphized like in Rust, so you have tools for zero-cost abstractions) and happy paths allowing to bypass prohibitive cost of ARC with new annotations. By using LLVM, Swift has theoretically great performance ceiling, even if it does not deliver on it just yet, losing to C# by a good margin on the more complicated code due to ARC and dynamic dispatch. But because Apple seems to be invested in using it for tasks that will require addressing these shortcomings, it is pretty exciting to see where they will take it.



Sneakily hard, actually. There are different versions of HTTP (of course), so pick your target. But when you hit HTTP/2.0, it's not a simple request/reply model (if HTTP/1.1 can even be described as such). There's the intermixing of various client headers and what they imply for server behavior, handling of streams and when to open vs. when to close them, HTTP/2 multiplexing, etc. Don't forget HTTP/3, which uses the QUIC protocol (UDP based) instead of TCP.

Interestingly though, a trivial HTTP server is actually very easy to implement as well. A very crude HTTP/1.0 server (or maybe even a limited scope HTTP/1.1 server) can actually make for a fun afternoon project. Like minimal (or no) concurrency support, all TCP connections closed after the request/response cycle, GET only (or maybe POST only), etc.

So it's a mixed bag of what you want and how you define an HTTP server.



I can't think of a good reason to want to implement the complex parts. Write an HTTP service if you must, but make it HTTP/1.0 (or 1.1 for keepalive) and stick it behind nginx to do the newer versions and SSL termination.

(I also think all HTTP services should if possible be written in a shared-nothing CGI type language, and not embedded in your random stateful program. This way you can't accidentally leak info across user sessions.)



I’ve migrated to swift for some nontrivial projects that were formerly C++. Quite happy so far, and didn’t find rust nearly as pleasant when I tried the same thing there. I don’t want explicit memory management very often, so ARC works great for me. Haven’t had any issues with portability either, although I don’t test often on windows so I’m not confident there but Linux and Mac have been great.



I'm glad you found something you like. I just want to make it clear that the things about Rust that make it "unfriendly" are also the things that make it able to do things other languages can't do, like compile-time memory safety. Depending on what you are making, that might make little difference. I just wanted to make sure you appreciated what Rust can do that other languages can't.



Curious! In what ways do you see Swift as friendlier than Rust? I perceived it as functionally equivalent, although Swift had a lot more "magic" involved and it was less clear how things might work memory-wise.



To me, Swift has better ergonomics for most people.

Ref counting by default makes most programs easier to deal with.

Guard let (though something similar was recently introduced in Rust) is a much friendlier way to unwrap optionals.

Being able to provide parameter defaults and aliases.

Easy passing in of callbacks. Easier async.

Optional chaining with the question mark operator.

I really like rust, but it’s much faster to get to a working program in Swift.

And with the new CXX interop, I now reach for Swift by default since I have to deal with a lot of C++ libraries.
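
A tiny sketch of a few of those ergonomics (hypothetical types, just to illustrate guard let, optional chaining, and parameter defaults):

    struct User {
      var name: String
      var nickname: String?
    }

    // Parameter default plus an external argument label.
    func greet(_ user: User?, punctuation: String = "!") -> String {
      // guard let unwraps the optional or exits early.
      guard let user else { return "Hello, stranger" + punctuation }
      // Optional chaining with ? and nil-coalescing with ??.
      let display = user.nickname?.uppercased() ?? user.name
      return "Hello, \(display)" + punctuation
    }

    print(greet(User(name: "Ada", nickname: nil)))  // Hello, Ada!
    print(greet(nil, punctuation: "?"))             // Hello, stranger?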



> Easier async.

I was on board until this one. Async is a rough spot for rust, but I find the async strategy swift went with absolutely baffling and difficult to reason about.



> More magic (thus less required explicitness) and less involvement with memory management are typically considered as friendly traits in programming languages.

Really depends on the context. I really, really, really hated this instinct in the ruby on rails community when I was still doing that. Magic is nice until it doesn't work the way you expect, which is when it becomes an active liability.

I really don't spend much time thinking about memory management in Rust, but I can certainly understand why one might be happy to not have to design around ownership and lifetimes. I really like the explicit nature of it, though, makes it super easy to read and reason about code you've never seen before.



I'm incredibly excited for this. I thought swift was basically going to be stuck on macOS.

Last time I converted the Swift compiler from the Ubuntu package to work on Debian, things looked really awry. Most things worked, but not simple things like SIGTERM signals.

Swift is a fantastic language. I think the most advanced and smart language today. And I say this having used over 20 languages professionally over 25 years.

Just look at how SwiftUI is implemented. It's not even a DSL, it's just Swift, naturally! Compare it to Flutter and you'll see how incredible it is. (I do like Dart too, though.)

As for the language, it's full of clever features and advanced ideas that don't suck to use and that consider the developer's real-world use of the language.

Two things really suck in swift though; compiler error messages are straight out of the university of assholery and documentation was crafted in Mordor probably.

Of course most libraries probably won't work well on Linux yet, but there is a future here with the right balance between safety, speed, and the joy of developing.



I recently started learning Swift and SwiftUI and was surprised at how complicated the language is. Especially regarding reactive instance variables, e.g. @ObservedObject / ObservableObject. Didn't understand it. There are like five different ways to do it. Ended up playing whack-a-mole until it worked.



TBF, reactivity in UI is still basically an unsolved problem with frameworks going in circles between data-binding, dependency-tracking, memoization & compilation.

SwiftUI initially promoted their ReactiveX alternative.



If you’re targeting newer OSs you can try the @Observable macro instead of ObservableObject. It fixes a lot of the weird problems with the latter (although does introduce some new, weird edge cases).
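
A minimal sketch of the newer approach, assuming a deployment target of iOS 17 / macOS 14 or later (names here are made up):

    import SwiftUI
    import Observation

    @Observable
    final class CounterModel {
      var count = 0
    }

    struct CounterView: View {
      @State private var model = CounterModel()

      var body: some View {
        // The view re-renders when a property it actually reads changes;
        // no @Published or objectWillChange bookkeeping needed.
        Button("Count: \(model.count)") { model.count += 1 }
      }
    }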



The download sizes are quite large. 775 MB for swift-6.0-RELEASE-ubuntu22.04.tar.gz ~500MB for windows.

Is this shipping an entire copy of LLVM? What could possibly make this so large?



Yeah, they contain binaries for Swift itself (the driver interface, the compiler, the repl, the package manager, swift-format, etc.), as well as the toolchain's dependencies (llvm, lldb, clang, the sanitizer runtimes, etc.).

There are other (also large) downloads to enable producing static binaries on Linux, and those contain the required system libraries (musl libc, libicu, libcrypto, libpthread, etc.). Those are about twice as big as they need to be, because they bundle x86 and aarch64 together.



It's super nice that they support moving up incrementally.

Moving to Swift-6 mode with full data-race safety checks can be daunting. They wrote a separate post on that, and Holly telegraphed that they're still actively reducing noise, i.e., warnings where the compiler could use a bit more analysis to prove there's no race.

The really nice thing is you can use the new tooling, but stay with the 5.10 version of the language that works for your code, and feature-by-feature add Swift 6 checking. You can build the same package under both language modes, so libraries can move ahead in their versioning while still supporting clients who are not ready.



I’m sort of in two minds. On paper, I think for greenfield projects data-race safety checking looks great. If I switch to thinking about my day job, I know we will likely never adopt Swift 6 for some of our components because they’d need to basically be rewritten, at which point we’d probably consider something more cross platform like Rust or Kotlin. So despite liking the feature on paper, I feel like the overhead it introduces in practice is kind of pushing me away from the language.

It’s hard to say at this point though, adoption might get a lot easier with subsequent point releases.



I agree! I'm a Go programmer, and while I do wish it had some more features at times, Swift is an example of how it can easily go out of control and ruin a promising language.

For example, tests: there's so much magic. How do I know it runs the test for each item in the arguments array? What if there were multiple arguments? After using Go for close to a decade now, I'm really seeing the wisdom of avoiding magic, and of writing your testing code in the same language as your building code! Compare:

Swift:

    @Test("Continents mentioned in videos", arguments: [
      "A Beach",
      "By the Lake",
      "Camping in the Woods"
    ])
    func mentionedContinents(videoName: String) async throws {
      let videoLibrary = try await VideoLibrary()
      let video = try #require(await videoLibrary.video(named: videoName))
      #expect(video.mentionedContinents.count <= 3)
    }
Go:
    func TestMentionedContinents(t *testing.T) {
      tests := []struct{ Name string }{
        {"A Beach"},
        {"By the Lake"},
        {"Camping in the Woods"},
      }
      for _, tt := range tests {
        video, err := library.FindVideoByName(tt.Name)
        if err != nil {
          t.Fatalf("failed to get video: %v", err)
        }
        if len(video.MentionedContinents) > 3 {
          t.Errorf("video %q mentions more than 3 continents", tt.Name)
        }
      }
    }


Go with timeout handling in case the FindVideo function takes too long (idk Swift magic well enough to know if it'd do this automatically!)
    func TestMentionedContinents(t *testing.T) {
      tests := []struct{ Name string }{
        {"A Beach"},
        {"By the Lake"},
        {"Camping in the Woods"},
      }
      for _, tt := range tests {
        t.Run(tt.Name, func(t *testing.T) {
          ctx, cancel := context.WithTimeout(context.Background(), 30*time.Millisecond)
          defer cancel()

          video, err := library.FindVideoByName(ctx, tt.Name)
          if err != nil {
            t.Fatalf("failed to get video: %v", err)
          }
          if len(video.MentionedContinents) > 3 {
            t.Errorf("video %q mentions more than 3 continents", tt.Name)
          }
        })
      }
    }


@Test, #require and #expect are just macros. You can expand them if you want to see what they do (or just look at the swift-testing code itself).

Perhaps I'm just used to Python unit testing with similar decorators. Presumably, if you need to pass in two arguments, you'd either pass arguments: an array of tuples or a tuple of arrays for combinatorial testing.



> How do I know it runs the test for each item in the arguments array?

At the risk of coming across a bit rudely: this feels analogous to asking “how do I know `for _, tt := range tests` loops over every element in the array?” Both are language/syntactic constructs you have to learn.



> How do I know it runs the test for each item in the arguments array

I mean the APIs aren’t magic; you can “inspect macro” to see what code is generated at compile time which boils down to something similar to the Go code with better ergonomics.



Since you asked:

The provided `drinkable` example I think is pretty bad and it's very surprising to me that this is a headline feature.

  protocol Drinkable: ~Copyable {
    consuming func use()
  }

  struct Coffee: Drinkable, ~Copyable { /* ... */ }
  struct Water: Drinkable { /* ... */ }

  func drink(item: consuming some Drinkable & ~Copyable) {
    item.use()
  }

  drink(item: Coffee())
  drink(item: Water())

Here we have a drink() function that either accepts something `Copyable` OR non-Copyable (uh, I mean, `~Copyable`) and either consumes it...or doesn't? That seems to me like a fountain for logic errors if the function behaves completely differently with the same signature (which is, in fact, explicitly labeled `consuming`). It seems like it should not just compile if you try to call this with a `Copyable` type like Water, but it does.

The syntax for representing this complex and weird concept of "maybe consuming" being `consuming some Drinkable & ~Copyable` is also just totally gross. Why are we using bitwise operation syntax for some weird and logically incoherent kludge? We cannot apply these & and ~ operators indiscriminately, and they do not mean the same thing that they logically mean in any other context, but this function definition definitely implies that they do.



Here’s my take. I haven’t used this feature yet so I haven’t dug in too deep.

drink() takes a Drinkable. A Drinkable can be non-copyable.

Copyable is the default, so it has to mark itself as accepting non-copyables.

Coffee is non-copyable. Water doesn’t say which means it’s copyable (the default).

You can use a copyable anywhere you’d use a non-copyable since there is no restriction. So since drink can take non-copyables it can also use copyables.

I’m guessing the function definition has to list non-copyable otherwise it would only allow copyable drinks since the default is all variables are copyable.

“consuming some” means the function takes over the ownership of the non-copyable value. It’s no longer usable in the scope that calls drink().

For the copyable value I’m not sure but since they can be copied I could see that going either way.

On syntax:

Yeah it’s a bit weird, but there was a big debate about it. They wanted something easy to read and fast to use. NotCopyable is really clear but typing it over and over would get real old.

~ is not special syntax. My understanding is that "~Copyable" is the name of the type. You can't just put ~ in front of anything, like ~Drinkable. But since that's the syntax used for bitwise negation, it's pretty guessable.

& is existing syntax for multiple type assertions. You can see the evolution in this Stack Overflow answer:

https://stackoverflow.com/a/24089278

Seems to read like C to me. It has to be Drinkable and not Copyable.
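
For reference, here's a tiny sketch of that pre-existing & composition syntax on its own (hypothetical protocols, separate from the ~Copyable example):

    protocol Drinkable { func use() }
    protocol Disposable { func toss() }

    // `&` composes requirements: the argument must conform to both protocols.
    func finish(_ item: some Drinkable & Disposable) {
      item.use()
      item.toss()
    }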

Like I said I haven’t gotten to use this yet, but it seems like a nice improvement. And I know it’s a step in the path towards making it easier to do safe asynchronous programming, object lifetimes, and other features.



Yeah after thinking about it a bit more it does make more sense to me. The primary gap I had was, as you allude here:

> You can use a copyable anywhere you’d use a non-copyable since there is no restriction.

Effectively copyable always conforms to non-copyable, just not the other way around.

And the compiler effectively automatically notates literally everything with Copyable, so you need the explicit (& ~Copyable) in the function definition so you're still able to define functions within a ~Copyable protocol that have Copyable semantics.

It's very in the weeds, and I still don't like it (I would have preferred NotCopyable since the ~, especially next to the &, directly implies something like bitwise operators), but I guess custom ownership is itself very in the weeds and you will have to think about it hard no matter what approach is taken. I would have expected custom ownership to be fundamentally incompatible with Swift, but clearly it's here; I should probably read about it more so I have a more clear understanding.

(I also didn't realize & was extant syntax).



Yeah, I’ll admit it’s hard to get your head around. I had to think about it a couple of times just writing that explanation.

It took me a couple of minutes to figure out why it was in the function definition. I guess it had to be but that wasn’t obvious to me at all at first.

> And the compiler effectively automatically notates literally everything

Right. Just like how all classes in Java extend Object even though you don’t have to literally write it.

I believe they’re still working on a more Rust-like borrowing system, but I could be wrong. I know this helped them implement parts of the standard library much better because they could make assumptions that you can’t make with copyable objects.

I do get your point about just calling it NotCopyable. I don’t actually know why they settled on the name they did, I didn’t ever try to look that up. Maybe it’s because it’s a special thing that requires compiler machinery and there’s no way for you to make an equivalent?



I recently tried SwiftGodot, because I found Swift for game dev very interesting in comparison to C#'s GC stalls. Sadly it does not yet support sharing pre-built Windows development libraries[1]. Compilation does take quite some time, even on my beefy 16-core Ryzen. The changelog did mention multi-threaded Swift package manager improvements on Windows though.

[1] https://github.com/migueldeicaza/SwiftGodot/issues/521



I've been using SwiftGodot lately too, and I'm quite enjoying it! iOS development is my day job, so it was great to port my old GMS2 project's codebase over to Swift, and suddenly have type-safety and no GC hiccups. During the porting process, the Swift compiler even discovered a couple of latent bugs for me!



Every time I read about swift it seems like a nice language, but it's a shame its ecosystem is so limited. It's great to see it supporting more platforms now. Is the ecosystem in the package manager/index also heading the same direction?



For the question, I think it depends on what you mean. I've been able to use SwiftPM on Linux without a whole lot of issue, so SwiftPM itself doesn't seem to be the blocker here; at least I've not had issues in my experimentation. What is maybe more of a blocker is the number of useful libraries that are compiled and tested on Linux. For that, the solution is just for people to write libraries or upstream Linux support to existing ones.



Making more and more of foundation available (a big part of 6) should also help a lot. Fewer external dependencies needed, and fewer per-platform libraries if you support multiple platforms.



I really love the Swift language. I think it is really nice to program in. When it is really good, it just is so satisfying. I can get that "ahh" feeling. I feel like it's this weird blend of Ruby, Haskell, Javascript, and some C# influence. And of course there is always Objective-C baggage lurking in the background.

I agree with most of the comments on this thread. The governance of the language really is a problem. You really can't run two models on the same project at once. The incentives just always get screwed up. Better to just be honest about what's going on, and then everybody can work together. But sometimes that means that people need to start getting paid, and companies love to not pay money if they can avoid it.

Also, Apple's tooling is just truly horrendous. It is just awful. I don't understand why they won't do anything about it. Xcode, there are some things that are nice about it, but compare it to really using a tool like IntelliJ IDEA well and Xcode just fails so hard.

And certain UX problems have literally persisted for a decade, and Apple has not done anything about it. It really is kind of unconscionable that Apple does this to their developers. I know on some level I am being ungrateful, because there are mountains being moved behind the scenes by Xcode, and I really do appreciate that. And from watching WWDC talks, I know that there are some really good engineers at Apple that are trying to do the right thing. And some of it is just corporate BS that is unavoidable. But they really do need to get better.

In any case, I hope that this update makes everybody's life better. When it is really nice, I think that Swift has some of the best developer ergonomics out there. That's just one person's opinion, of course.



Swift throws are results under the hood, but now with specific, generic, and specializable error types.

To illustrate, Swift has a nice `rethrows` feature that helps with function composition.

If your function takes a function parameter that can throw, it can use 'rethrows' to say "I only throw if this parameter does". Then when passed a function that doesn't throw, your function need not be invoked with `try`.

This plays nicely with generics over throwing type, since the bounds propagate back to the function type. If the function parameter only throws a given type, that's what the function will throw.

Also helpful for reducing error boilerplate, `try` has the convenience form `try?` which means "just swallow the exception and return nil", and applies to a whole chain: `let f = try? this() ?? that() ?? "skip"` means f will be the result of either (throwing) function or a literal otherwise.
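
A minimal sketch of rethrows and try? in that spirit (hypothetical function, not from the post):

    struct ParseError: Error {}

    // `rethrows`: mapTwice only throws if the transform it was handed throws.
    func mapTwice<T>(_ value: T, _ transform: (T) throws -> T) rethrows -> T {
      try transform(transform(value))
    }

    let doubled = mapTwice(2) { $0 * 2 }        // non-throwing closure: no `try` needed
    let parsed = try? mapTwice("7") { (s: String) -> String in
      guard let n = Int(s) else { throw ParseError() }
      return String(n * 2)                      // throwing closure: call site needs try / try?
    }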



Throws in Swift are not traditional exceptions. A throwing function in Swift is actually a function that can return an error instead of a result. This is hidden by the language.

So something like

`func foo() throws -> Result`

Is actually

`func foo() -> Result | Error`

The compiler also forces you to handle any returned errors using `try`. So to call our example `foo`, you'd do:

`let result = try foo()`

You must either handle any thrown error or include the call in an enclosing throwing function.
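
A small sketch of the two ways the compiler lets you satisfy that (hypothetical error and function):

    struct NotFound: Error {}

    func foo() throws -> Int { throw NotFound() }

    // Option 1: handle the error locally with do/catch.
    func caller1() {
      do {
        let result = try foo()
        print(result)
      } catch {
        print("failed: \(error)")
      }
    }

    // Option 2: mark the caller as throwing and let the error propagate.
    func caller2() throws -> Int {
      try foo()
    }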



>Throws in Swift are not traditional exceptions. A throwing function in Swift is actually a function that can return an error instead of a result. This is hidden by the language.

Implementation detail.

The two features are equivalent.



Using the normal return path vs. nonlocal returns is, I think, not equivalent unless you have a GC. Otherwise you need all that "exception safe code" stuff.

But the main difference is it's not hidden by the language; you have to 'try' any method that can return an error. Straight-line control flow in an imperative language is a good thing IMO. …but too bad about those defer statements.



>Using the normal return path vs. nonlocal returns is, I think, not equivalent unless you have a GC.

I will repeat, whether exceptions are implemented as "nonlocal returns" (like setjmp/longjmp with stack unwinding) or as syntax sugar with sum return types, is completely irrelevant; an implementation detail. The generated machine code is different, but the behavior, the user experience is exactly the same.

>Otherwise you need all that "exception safe code" stuff.

In both cases, you need to write "exception safe code". Example of unsafe code in Java (a language that implements exceptions as non-local returns):

  void bar() throws Exception { ... }

  void foo() throws Exception {
    mutex.lock();
    bar();
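    // if bar() throws here, the unlock() below never runs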
    mutex.unlock();
  }
Example of unsafe code in Swift (a language that transforms errors into sum return types):
  func bar() throws { ... }

  func foo() throws {
    mutex.lock()
    try bar()
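    // if bar() throws here, the unlock() below never runs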
    mutex.unlock()
  }
>But the main difference is it's not hidden by the language; you have to 'try' any method that can return an error. Straight-line control flow in an imperative language is a good thing IMO. …but too bad about those defer statements.

Whether the language makes you prefix throwing calls with "try" is completely orthogonal to how they're implemented (nonlocal return vs sum return type). It's just a matter of syntax.
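
For completeness, the usual fix on the Swift side of that example is defer, which runs on every exit path (a sketch):

  func foo() throws {
    mutex.lock()
    defer { mutex.unlock() }  // runs even if bar() throws
    try bar()
  }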



The real answer IMO is how they (don't) integrate with generics, first-class functions, and the type system in general. If you try using a checked function inside a stream() call you'll know exactly what I mean. Yes, it's technically possible to make a "functional interface" with a fixed number of checked exceptions, but in practice it's a huge PITA and most functions just don't allow checked exceptions at all.

Compare to ADTs, where any generic function taking a T can also be used with Result exactly the same. (Not a perfect comparison, but there are lots of other scenarios where ADTs just compose better.)



It’s often just the boilerplate. As a Java dev people often end up ignoring it and changing everything to “throws Exception” to avoid having to list 4 exception types all over a call stack.

Or they catch and wrap everything in some CustomSysException type so they only have to list one type and it’s not Exception, but then that’s really the same thing isn’t it?

I think it’s kind of a combination of not dealing with things and throwing them up the stack combined with too many exception types and maybe using exceptions when a result type would just be easier.



It's useful to have errors which are less typed than your "actual" results, because if you write a library then you don't want every possible error to be part of your API. If you do, and you call some other library, then either you have to swallow its errors or commit to having all of its error APIs be part of yours too.

And in the end many errors can't be handled automatically, the best you can do is just show them to the user.



And the designers and the Java stdlib itself, seeing as it's pretty much all runtime exceptions in everything introduced in Java 8 and later

Also, JVM guest languages like Kotlin and Scala treat all exceptions as runtime exceptions.



Used typed throws for a bit and I like them. I remember from long ago the Java implementation forcing me to catch them but in Swift it’s just a bit of extra compiler-enforced documentation about what and what not to expect in terms of errors.



Mostly by Java haters, who even miss the point that checked exceptions appeared first in CLU, Modula-3, and C++ before Java was even an idea.

Forced checks on sum types with automatic unwinding are another way of doing checked exceptions, which apparently those same haters love so much.



Swift still lacks some concurrency features, even with Swift 6, but it's nice to see that atomic operations have finally been added. Besides that, I often encounter an error without knowing why: 'Failed to produce diagnostic for expression; please submit a bug report.'



It's a fair point, but radically different 8 years on.

Swift type inference is two-way, so you can often improve things by annotating types instead of using inference.
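
A contrived sketch of what that looks like in practice (types and values made up; real slowdowns usually involve much larger literal and closure expressions):

    struct Item { let name: String; let price: Double }
    let items = [Item(name: "tea", price: 3.0), Item(name: "coffee", price: 4.5)]

    // One big inferred expression gives the type checker more to search:
    let report = items.map { "\($0.name): \($0.price * 1.2)" }.joined(separator: "\n")

    // Annotating intermediate results narrows the search space:
    let lines: [String] = items.map { "\($0.name): \($0.price * 1.2)" }
    let report2: String = lines.joined(separator: "\n")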

Also, it's very easy to have complex builders and nested closures that complicate inference, so it's a bit of a side-effect of how much people use the powerful features.

The big compile-time gotcha now is using macros. To be safe, each macro runs in its own process hogging cores, and the new Xcode compile scheduler isn't quite sensitive to that, so sometimes command-line builds with less parallelism are a fair bit faster. The solution there is judicious use, particularly of libraries depending heavily on macros.



I can only imagine how terrible things must have been if the current state is radically improved. I can lock up xcode by doing nothing more than backspacing over a few characters.



I’m honestly not usually that interested in feature lists for languages I don’t use, but this really does look quite compelling. Ownership semantics and C++ integration? Sign me up.

I’d be interested to know how good this integration is in practice e.g. could it be used to integrate with Godot directly?



I guess my question is how much a custom bridge is needed with the C++ integration. (I understand the bridge will always be better in some sense, but I'm also interested in what it would be like to just ignore it and target the C++ API directly.)



Seconding this! I've ported a GMS2 project over to SwiftGodot, and it's been great to work with. As I ported over the code, the Swift compiler even caught a bug or two that I'd missed back when it was written in GML!



I find this bid for cross-platform recognition somewhat schizoid given that Swift isn't even backwards compatible on macOS. I had to upgrade to Ventura a while back just to get Swift regexen working.



This is somewhere that Swift is better on non-Apple platforms. You have to upgrade macOS to get Swift bug fixes because the runtime is part of the OS, while on other platforms it’s a separate thing that can be upgraded on its own.



Many platforms have an instruction to get the upper 64-bit word of a widening 64×64-bit multiplication. But it can be difficult to (performantly) access such instructions from anything higher-level than assembly, so well-optimized language support for 128-bit integers can open this up.
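
In Swift that operation is already exposed through FixedWidthInteger; a rough sketch (whether it lowers to a single mul-high style instruction depends on the target and optimizer):

    let a: UInt64 = 0xDEAD_BEEF_0000_0001
    let b: UInt64 = 0xFEED_FACE_0000_0002

    // Full 64x64 -> 128-bit multiply; `high` is the upper 64-bit word.
    let (high, low) = a.multipliedFullWidth(by: b)
    print(high, low)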



As a side note, zig supports arbitrary width integers. I'd like more languages to support that. You can have i7, i3, i1024 or whatever you want (I guess up to some limit).

Makes it a lot easier to handle packed binary files etc.



(Author and implementer of the Swift Int128 proposal here)

There's lots of one-off uses for 128-bit integers scattered around systems programming (and programming in general, I just happen to come from more of a systems background). Individually using a pair of 64b integers for each case is totally workable, but eventually life is easier if you just have an Int128 type. There's not really any one use case that drove their inclusion in the standard library, rather lots of times that we were saying "it would be kind of nice if we had this".



I think the complaint is the global runtime namespace, not source modules. Statics live forever, and extensions in any library on a type apply to all uses in the runtime (with no guarantees about conflicts).

Mostly that's minimizable with good practice, but can be annoying to those used to hierarchical namespacing and static memory reclaimed after the class becomes unreachable.



Programmers can't be trusted with submodules, as you can see from C#/Java and its standard libraries where everything is named like System.Standard.Collections.Arrays.ArrayList.

Of course, taking them away doesn't stop them from other kinds of over-organization like ArrayFactoryFactoryStrategies, but it helps a little.



"Programmers can't be trusted with..." isn't the best argument here IMO. You already gave one reason why. Programmers will create a mess regardless IMO, despite how nice the language is. Adding to that (1) among all he things I didn't like about Java, nested modules were least of it. (2) Lot of it has to do with how reference code in that ecosystem are written which are then adapted as standard practice. Its all good for stuff that do one thing, but if you are building things in same repo with several components, they are nice to have. Rust/Python/Racket are few languages I can think of which have submodules/nested modules and I've not heard people complain about that there.



The unqualified imports are definitely a major pain point. In a large Swift app I find myself having to be eternally vigilant organizing my files so that I'm not dumping surprises into the import site.

Makes me appreciate how even Node.js got it right so early on.



I also disliked unqualified imports. I can never get used to languages where you can 'import foo' or 'use foo' and it just dumps a bunch of symbols into the current scope. That combined with Swift's 'implicit member expressions' makes it difficult to read code outside of an IDE (although I understand why they made that tradeoff given some of the long identifiers Swift inherited) [1].

[1] https://github.com/Quotation/LongestCocoa
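
For anyone unfamiliar, a tiny sketch of an implicit member expression (hypothetical enum): terse to write, but you need to resolve the base type to read it.

    enum Theme { case light, dark, system }

    func apply(_ theme: Theme) { /* ... */ }

    // The base type (Theme) is inferred from the parameter's type.
    apply(.dark)  // same as apply(Theme.dark)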
