Swift was always going to be part of the OS (2022) (belkadan.com)
182 points by harporoeder 6 months ago | 105 comments



I find this interesting, considering that .NET went the other direction. The .NET Framework was operating system bound, while the modern .NET (Core) is a separate installation or ships with the app or is even AOTed.

Maybe Apple is more aggressive in adopting Swift as its operating system language because Microsoft has always had this split: the Windows division was C++/COM, and .NET never really fit into their world.


The raisons d'être of the CLR (and C#) and Swift are entirely different.

Apple has explicitly set out to adopt Swift as a successor language to C, Objective-C, C++, and Objective-C++[0][1]. This stands in stark contrast to Microsoft's vision for the CLR, which was… to be a better Java, more or less? (Does anyone actually know what the .NET initiative was all about? Microsoft went absolutely ham on it in their branding, but the momentum unceremoniously fizzled out before Server 2003 went RTM.)

That said, apropos .NET fitting into Windows, the work on Singularity[2] by MSR had a lasting impact. I'm told that a System C# dialect is in use that generates native object code via a more production-grade version of Bartok[3].

0: https://youtu.be/lgivCGdmFrw

1: https://developer.apple.com/swift/#:~:text=Swift%20is%20a%20....

2: https://www.microsoft.com/en-us/research/project/singularity...

3: https://en.wikipedia.org/wiki/Bartok_(compiler)


The original plan for .NET was also to replace everything; .NET was going to be the next COM, which is why there were still plenty of configuration settings named COM_ when the open source efforts started.

That's also why, since day one, it was polyglot: COM was part of the stack (as the next step from VB 6 and MFC/ATL), and Managed Extensions for C++ were included (later replaced by C++/CLI in .NET 2.0).

The problem is that Microsoft isn't Apple. WinDev couldn't care less about this and stayed true to their COM/C++ tooling. Similarly, when the Longhorn effort failed (mostly due to sabotage from the Office/WinDev teams), Vista redid most of the .NET-based ideas in COM/C++, and since Vista, COM has been the main delivery mechanism for new Windows APIs (WinRT is yet another take on COM).

Bartok was used in the Windows Store compiler for Windows 8.x, via the MDIL linker.

.NET Native on Windows 10 grew out of Project N, which was influenced by the System C# used in Midori, not Singularity.


Yeah, sigh

Technically the ".NET initiative" was all about web services. But not the modern kind, it was about SOAP. And I guess also Code Access Security-- a way to run untrusted (and even remote) code securely. Ahem, "securely".

The .NET Framework, CIL, and CLI/CLR and its initial languages, C# and VB.NET, were in support of those technologies, but ended up being the only parts with real staying power.

I am a big fan of the .NET Framework (and its evolution in .NET Core / .NET 5)-- so many rough edges of the Java runtime concepts and Java language concepts were polished, and a lot of new innovations were introduced (proper annotations, a framework for handling code isolation, a better reflection API, real runtime generics in .NET 2.0 and more).

There's definitely things they tried to improve on that... weren't really improvements. The way "assemblies" are matched in .NET is much more sophisticated- the goal there was to try to kill DLL hell. It evolved into the Global Assembly Cache, which is sort of the Windows Registry of DLLs. Not a huge fan of those bits.

> That said, apropos .NET fitting into Windows, the work on Singularity[2] by MSR had a lasting impact

Firstly, of course C# is used quite a lot within Windows in the user space.

I worked on some open source stuff around kernel-level C# that was inspired by MSR's work here-- Singularity still used a very small core written in C++ IIRC; we were interested in reducing the amount of unmanaged support code as much as possible, as well as replacing the unmanaged C++ bits with unmanaged C#. That work grew into SharpOS. There was also a competing project called CosmOS. Lost to time now.


One of the guys who worked on Singularity's successor, Midori, has a nice multipage post-mortem write up.[0] Many design concepts have found their way into C#/.NET in the years since. However, unlike Singularity, Midori's source code was (unfortunately) never released.

[0]: https://joeduffyblog.com/2015/11/03/blogging-about-midori/


> There's definitely things they tried to improve on that... weren't really improvements. The way "assemblies" are matched in .NET is much more sophisticated- the goal there was to try to kill DLL hell. It evolved into the Global Assembly Cache, which is sort of the Windows Registry of DLLs. Not a huge fan of those bits.

The Global Assembly Cache did not make the jump to the modern .NET (Core). There was a thing called `dotnet store`, but it’s been broken since .NET 6: https://github.com/dotnet/sdk/issues/24752

The assembly redirection hell has also been greatly reduced there.


There is something similar to the Global Assembly Cache for .NET Core. This allows WSUS to automatically upgrade the .NET runtime installed on the system.

https://devblogs.microsoft.com/dotnet/net-core-updates-comin...


Not really. You can upgrade the framework with Windows Update, or by just going to dotnet.microsoft.com and downloading the latest installer. Applications can ship their own copy of the framework or depend on the system copy. The framework does include some core/system libraries, but other applications can’t install their crap system-wide (which was possible with the GAC).


That's just framework-dependent upgrades. .NET Core and beyond allows for side-by-side installations with different assembly versions, whereas in .NET Framework they shared a GAC.


Yeah, I didn't think it had, but I've not done too much with Core/5+ yet. I want to use it but I end up just using TypeScript/Node by default. I do need to get into the habit of investigating the CLR when a bit more performance or lower latency is required, instead of jumping past it to the systems level with C++/Rust/Go.


I heard that parts of Microsoft Edge (the pre-Chromium version) were written in a GC-free subset of C#. I don’t know if it used System C#.


I really wish that Apple would provide an alternative C API to its system frameworks, even if it would be inconvenient to use (similar to Microsoft's COM APIs - terrible to use directly as a C API, but at least it's possible, and most importantly, the COM APIs provide a stable ABI and allow proper interface versioning - so old and new interfaces can live side by side).

Language bindings could then go directly through the C API instead of having to go through an ObjC or Swift shim, or calling directly into the brittle ObjC runtime functions (is there even something similar to the ObjC runtime for Swift, e.g. a C API to create Swift objects and call methods on them?).


They do! See https://github.com/apple/swift/blob/main/docs/LibraryEvoluti...

You can also see an example of what a .NET integration with Swift ABI looks like here: https://github.com/dotnet/designs/blob/main/proposed/swift-i...
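
For context, "library evolution" (resilience) is what lets Apple evolve Swift frameworks behind a stable ABI. A minimal sketch of the attributes involved, assuming a framework built with -enable-library-evolution (the types here are made up):

    // Resilient by default: the stored layout may change in a future version of the
    // library, so clients access it indirectly and keep working across OS updates.
    public struct Temperature {
        public var celsius: Double
        public init(celsius: Double) { self.celsius = celsius }
    }

    // @frozen promises the layout will never change, so clients may access the
    // fields directly (faster, but the promise can never be taken back).
    @frozen public struct Point {
        public var x, y: Double
        public init(x: Double, y: Double) { self.x = x; self.y = y }
    }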


While it's good to have core technology as part of OS, it also makes it non-updateable separately from the OS itself, which slows the spread of developments.

That's my main gripe with Apple's approach to Swift and SwiftUI. Yes, the tech is getting better every day, but unless you target the latest OS version, you can't use the new fresh stuff until it's on enough devices around you. And that pretty much guarantees that no matter what Apple adds, you still have to wait a year or two until you can safely start using it.

In modern Android, Kotlin (and Compose) are also part of the system, yet apps do not rely on the system libs but rather bundle the latest available runtime with each app. It takes more space, but it allows developers to target the latest available stack, no matter what core OS the app is being run on.


I'm surprised that Apple didn't opt for a hybrid approach.

They control the OS. They also control Swift. So why not embed the Swift version that the app was built against, and then download that Swift version to the user's device when they download an app that uses it? Then:

- The app runs against the exact Swift version it was built against

- The app size is smaller because you're no longer shipping the Swift libraries with the app

- The impact on the user's device space is minimized since it only downloads each version of Swift once (to be shared by all apps that use that version), and only on demand.

- If a new version of Swift comes out before an OS upgrade, simply add it to the list. It gets downloaded the same as the rest.

They could even add some predictive Swift downloading for the most popular versions of Swift to avoid unnecessary delays downloading it.


I think you’re forgetting about Apple’s frameworks, and the fact that _they_ want to use Swift. Their code runs inside your address space, and I think you’ll need a more complicated scheme to solve those problems.


I guess the problem then is that you end up with one copy of Swift installed per Swift version that's ever released more or less, which doesn't seem ideal from a space perspective.


That's what MS does with .NET and DirectX, for example.

Apple only needs to do this because they are charging a pornographic premium for storage.


MS does this because they aren’t in the same business as Apple.

Apple is in the device business. They make most of their revenue by selling you a new device.

A 10 year old Mac is basically a brick unless you want to mess with OpenCore Legacy.

The solution to this problem is to target a newer OS like Apple wants you to. Their users are going to buy a new system anyway, they’re the most affluent segment of the PC market.


> Their users are going to buy a new system anyway, they’re the most affluent segment of the PC market.

I thought Apple users were more likely to buy/own used devices than PC users were? I'm not fully caught-up on the statistics, but I'd assume that's still true.


Meanwhile, PC users plug a new disk into their desktop, or replace the hard disk in their laptop (plenty of options are still available where disk, memory, and battery can be exchanged).


Still doing weekly live DJ sets with my MacBook from 2013, editing in Logic, researching music online, listening to Spotify,… Except for the battery, there is nothing wrong with this more than 10-year-old device.


So you’ve already lost feature updates and most security updates, being 3 major releases behind.

I’ve got a 2012 Mac mini and it’s limping along with the OpenCore Legacy patcher. I got the kernel panics to stop but they came back with a recent update. I’m gonna sell the thing and probably switch those duties to a Linux server.


Wouldn't you be able to only store the diff of every version to some base version instead?


That should in theory be possible, yeah, though I can't imagine a great way of doing it. Do you want to add a bunch of complexity to the system's dynamic linker to make it understand "base + binary patch" dynamic libraries?

In any case, maybe you can add heaps of complexity to core OS things and save some disk space; but you still need the full patched dynamic library in memory when the process is running, so at the very least you'll end up with bloat from lots of versions of the dynamic libraries loaded in memory when processes with different versions are running...

Maybe you could tackle both of the problems by storing a base version of the dylibs and then have other dylibs which provide replacements for only the symbols which have changed between versions... but this would severely limit the kind of thing you can do without just basically having to override all symbols. And automating this process would be hard, since compiler upgrades could cause small code gen changes to a bunch of symbols whose behavior haven't changed and you wouldn't want to ship overrides for those.

In the end, while I'm sure there are things you could do to make this work (Apple has some talented engineers), I also understand why they wouldn't.


I think this tight coupling between the language and the platform compromised a very promising language. Swift is one of few if not the only modern language that, at the same time, has excellent performance (due to AOT compilation and optimization, deterministic garbage collection via ARC etc.), has modern security features (algebraic nil as opposed to NULL, bounds checking etc.), and is relatively easy to learn and become productive in, perhaps on par with Python/JavaScript for the core language.
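
A minimal sketch of those two safety points in plain Swift (no frameworks assumed):

    // Optionals make absence part of the type: you must unwrap before use.
    let port: Int? = Int("8o80")      // parsing fails, so this is nil rather than garbage
    if let port = port {
        print("listening on \(port)")
    } else {
        print("invalid port string")
    }

    // Array subscripts are bounds-checked: an out-of-range index traps deterministically
    // instead of silently reading or writing neighbouring memory.
    let xs = [1, 2, 3]
    // let y = xs[10]                 // would trap at runtime, not corrupt memory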

I don't think there's something else in the "general purpose languages with substantial real-world use" camp that touches on those 3 points quite like Swift does.

On the other hand, non-Apple developers have good reason to avoid Apple, due to the extreme anti-competitive behavior. It's the C# story all over again.


Its performance is unfortunately very far from "near Rust". I don't know if that's a result of one virtual dispatch too many or upfront overhead of its ARC implementation, but Swift almost always underperforms on microbenchmarks, despite expectations.

If there's a deep dive on this, I'd love to read it. Could one of the possible reasons be targeting few-core systems and providing as deterministic memory usage as possible to fit into RAM on iOS devices without putting the burden on the programmer?


I generally feel that for a language like Java/C#, which Swift is, you really need a JIT and a tracing, moving GC to get optimal performance. Apple has pushed the COM-like model of static code generation and non-moving reference-counted GC about as far as it can go at this point (impressively far--the compiler heroics in Swift are incredible), and it still can't quite make it to Java/C#, which end up having simpler implementations than Swift's. The fact is that the ability to dynamically observe the behavior of the program and recompile with optimizations on the fly is just too powerful to give up.

Perhaps aggressive PGO could help to close some of the gap. The problem is that PGO requires effort on the part of developers to write comprehensive test cases and it's not clear how to scale that workflow. Large companies can write representative test cases and scale PGO on their performance-sensitive services, but your average iOS app developer won't be willing to do that.


Initially I was kind of disappointed by how AOT evolved on Android versus Windows Phone; then I came to realise Google was actually right.

Whereas Windows Phone would use Windows Store to AOT compile the application, Android would AOT on device.

Thus initially it felt like using the tiny phones for that would be a bad decision, and it was, as the JIT was reintroduced two versions later (Android 7).

However, it became a mix and match of all modes: an interpreter hand-written in assembly for quick startup, a JIT with PGO data gathering, an AOT compiler with a feedback loop from the PGO data, and later on, sharing of PGO data across devices via Play Store services.

This mix of JIT/AOT with PGO shared across everyone brings close to the optimal execution flow that a given application will ever get, allows reflection and dynamic loading to still be supported, and lets the AOT compiler toolchain take all the time in the world to compile in the background.


It's most likely just reference counting and the way abstractions work in Swift (dynamic dispatch?). In particular, it still loses to C# even if you use AOT for the latter, especially in multi-threaded scenarios.
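
As a rough illustration of the abstraction cost being discussed (a toy example, not a benchmark): a call through an existential goes through a witness table at runtime and may box its value, while the generic version can be specialized and inlined by the compiler.

    protocol Shape { func area() -> Double }

    struct Circle: Shape {
        var r: Double
        func area() -> Double { Double.pi * r * r }
    }

    // Dynamic dispatch per element (existential `any Shape`).
    func totalAreaDynamic(_ shapes: [any Shape]) -> Double {
        shapes.reduce(0) { $0 + $1.area() }
    }

    // Can be monomorphized for Circle and inlined.
    func totalAreaGeneric<S: Shape>(_ shapes: [S]) -> Double {
        shapes.reduce(0) { $0 + $1.area() }
    }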

HotSpot C2 and .NET Dynamic PGO-optimized compilations first and foremost help to devirtualize heavy abstractions and inline methods that are unprofitable to inline unconditionally under JIT constraints, with C2 probably doing more heavy lifting because JVM defaults to virtual calls and .NET defaults to non-virtual.

With that said, I am not aware of any comprehensive benchmarking suites that would explore in-depth differences between these languages/platforms for writing a sample yet complex application and my feedback stems mostly from microbenchmark-ish workloads e.g. [0][1].

Performance aside, I do want to compliment Swift for being a pleasant language to program in if you have C# and Rust experience.

[0] https://github.com/ixy-languages/ixy-languages (2019)

[1] https://github.com/jinyus/related_post_gen (2023)


In the case of Java, it isn't only HotSpot; there are several other options, and in the case of OpenJ9 and Azul, cloud JIT also plays a role for cloud workloads.

I also like Swift, if anything it helped to bring back the pressure that AOT compilation also matters.


The performance difference is a small constant factor, perhaps 1.05x, perhaps 3x depending on the workload. If you are writing kernels, high performance graphics, signal processing, numeric analysis - sure, that's significant.

For the typical application though, it's fast enough; you get the same order of magnitude performance as C++ or Rust with an almost Python-like mental load. As the success of other dog-slow languages shows, this is a major selling point.


I can confirm that one big reason why Apple went with reference counting in Swift is because they like it when garbage is freed right away. This lets them get away with smaller heaps than they'd need to get comparable performance with tracing garbage collection. This does slow down execution somewhat; the overhead of updating all those reference counts isn't terrible, but it is significant. It's just a price they consider to be well worth paying.
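
A toy example of that determinism: the memory comes back the instant the last reference goes away, not at some later collection pause.

    final class Buffer {
        let bytes = [UInt8](repeating: 0, count: 1_000_000)
        deinit { print("buffer freed") }
    }

    var buf: Buffer? = Buffer()
    buf = nil   // last strong reference dropped: deinit runs right here and the
                // million bytes are returned immediately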


> It's the C# story all over again.

At least C# has had a real cross-platform story for a while now.


Partially; you need to rely on the community for GUI stuff (Avalonia and Uno), and old Microsoft still pushes VS/Windows as the best experience, so anyone else who wants a VS-like experience has to buy Rider.

Yes, there is VS Code, which, besides being Electron-based, Microsoft is quite open will never achieve feature parity with VS.


With regards to UI there is Microsoft’s MAUI, which I personally prefer over Avalonia. I love the single-project approach of MAUI. I think Avalonia also relies on MAUI controls to some extent (I seem to recall a <UseMaui /> project setting in Avalonia projects).


MAUI doesn't count if "supports GNU/Linux" is part of being considered FOSS proper, and on macOS they took the shortcut of using Mac Catalyst instead of macOS UI APIs.


RC is slower than a modern garbage collector; ARC (if the A means it requires an atomic increment/decrement) is significantly so.

I’m not saying that it is a bad choice, it is probably a good one in case of battery-powered machines with small RAMs, but I think tracing GCs get a bad look for no good reason.


Not sure why this gets downvoted when a quick search for RC overhead in Swift reveals that it is quite high[1]

[1] Figure 3 in https://iacoma.cs.uiuc.edu/iacoma-papers/pact18.pdf

---

Additionally, people comparing ARC to Objective-C's conservative GC in the replies don't seem to understand that (1) refcounting is a form of GC, often times inefficient compared to a mark-and-sweep GC, and (2) conservative GCs are quite limited and Apple's implementation was pretty bad compared to other implementations.

Objective-C objects are basically all a struct objc_class* under the hood, and conservative GCs in general cannot distinguish whether a given word is a pointer. Even worse, for a conservative GC to determine whether a word points into a heap-allocated block, it has to perform a lengthy, expensive scan of the entire heap. It also doesn't help that Apple decided to kickstart the GC if your messages began with "re" (the prefix for "retain" and "release" messages, which were used all the time before ARC came around). So at one point in time, you were able to marginally boost performance of a garbage collected Objective-C application by avoiding messages beginning with "re"!


The A in Arc stands for "automatic."

But you are right about memory usage and battery ... this is why iOS devices require less memory than Android devices for comparable performance (or better performance in some cases).


The confusion might arise from the fact that for a Rust Arc, the A does mean atomic.


Well, automatic doesn’t add much to the picture; it’s more of a marketing name. But you are right, I used Rust’s terminology here.


Indeed, in microbenchmarks the only time you see Swift faster than Java, C#, or Go is when the bounds checking is turned off. It is not a very performant language. I do like the syntax and semantics though.


What's the point of opining on which is faster when you don't even know what ARC stands for, and thus presumably don't know the first thing about it?

Apple's frameworks used GC before, and they switched to ARC.


What Apple calls "ARC", the rest of the world simply calls "RC". Unlike in Objective-C, the vast majority of RC implementations do not need the developer to modify refcounts by hand. It was already "automatic", so to speak.

Moreover, ARC itself indeed modifies refcounts atomically. If it didn't, you would not be able to reliably determine the liveness of an object shared between threads. Now ask yourself whether atomically updating dozens of integers is faster than flipping a bit across a table of pointers.


Basically, Apple took Objective-C GC's failure to deal with C semantics, picked COM's approach to smart pointers, and turned it into the ARC marketing message, for people that never dealt with this kind of stuff before.

Like in many things where "Apple did it first".


ARC can stand for multiple things and is more of a marketing name here than anything. The relevant garbage collection algorithm is called reference counting - and depending on whether it is single-threaded or has to work across multiple threads, it can have quite a big overhead. Also, ObjC was ref counted before as well, AFAIK.


Yes, everyone who points that out usually has no idea that Objective-C GC failed due to C's semantics making it quite prone to crashes, and that automating Cocoa's retain/release calls was a much easier and safer approach than making C code work in a sensible way beyond what a conservative tracing GC will ever be able to offer, while dealing with C pointers all over the place.


As others have pointed out, there’s some tradeoffs here. One of them I’m not seeing mentioned is better forward compatibility. As a user, when Apple adds new features like built-in photo OCR, updated UI elements, better navigation patterns, etc., and I update iOS, every SwiftUI app I have gets those features without the developer doing anything.


> every SwiftUI app I have gets those features without the developer doing anything.

I'm not sure that's right. The developer does a ton -- it just feels automatic to you. All your SwiftUI apps are not going to use Metal shaders automatically now just because they're now so neatly supported in iOS 17. And every update to Swift/SwiftUI comes with deprecations, and those need to be addressed (sooner rather than later), including adding #available checks to cover all the bases.

The real benefit is that developers can add new functionality or write simpler code than they could before. I could be wrong, but the stuff you get "automatically" is minimal IMHO. It's just that developers start targeting the newest iOS version during beta so that by the time users upgrade, apps can be ready.
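
For concreteness, a minimal sketch of that kind of availability gating (assumes an iOS target; `shinyBackground()` is a made-up stand-in for a newer-OS-only API, declared here so the example is self-contained):

    import SwiftUI

    @available(iOS 17, *)
    extension View {
        // Stand-in for an API that only exists in the newer SDK/OS.
        func shinyBackground() -> some View {
            background(.ultraThinMaterial)
        }
    }

    struct Greeting: View {
        var body: some View {
            if #available(iOS 17, *) {
                Text("Hello").shinyBackground()   // newer path on iOS 17+
            } else {
                Text("Hello")                     // fallback on older deployments
            }
        }
    }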


It’s a little bit of column A and a little bit of column B.

Sure, as a dev, there’s plenty to do and test and update, but there’s also heaps I get for free, especially when following Apple’s “best practices.”

A simple example was a year ago when Apple tweaked some UI elements to look sleeker and changed default styles (e.g., lists, pickers, etc.).

My SwiftUI app and other SwiftUI apps compiled against the then-current iOS SDK immediately showed the new UI elements when run on the latest iOS beta. Didn’t even require me to recompile.

Stuff like that is elegant and can only be done when the libraries are included in the OS.

The flip side is, of course, that if you, for some reason, have a particular thing in mind and you don’t take precautions to lock it down in your code, it’ll stop looking that way when changes are made in the next iOS.

But I guess that’s what beta testing is for, and I’ve yet to come across a freebie that I didn’t like, though I’ll concede that it’ll vary from dev to dev.


I think I'd rather be sure my code behaves like it did when I compiled it, over having unexpected side effects at each OS release. If all it takes to get the new features is to recompile and deploy a new version of my code, I'm fine with it.

The only exception is security.


On the other hand, from a user perspective, who knows when that recompile will happen in the case of devs who are less attentive, too busy, or only doing this app thing as a side hobby.


From the user point of view, the new functionality is usually helpful.


Sure, but the only case where I see it being really useful is for unmaintained apps. At which point you can be sure the app is going to break no matter what in the not-too-distant future.


One of the major benefits of software is that you do _not_ need to re-create it if it already exists and solves your problem.

An unmaintained app that solves a specific problem is never going to break by itself “in a not distant future”. Only if its environment changes so much that it cannot be run anymore (including security fixes missing in the app) does this happen.

I recall a firm making good money while using their internally custom-built software for MS DOS with no way to change it whatsoever -- the software firm that wrote it was probably already long out of business, source code was not available -- and that was in 2019 which was already long past the days of MS DOS.

I think it is worthwhile for OSes and other core software infrastructure to support running even unmaintained apps as much as possible, because it reduces the need to rewrite (or overhaul) programs only due to a lack of maintenance of the existing one.

Not caring about this is IMHO accepting to waste a huge portion of the advantages that software gives us compared to other technology (software by itself never breaks from physical defects or continuous use, does not stain, etc.).


It’s also an insidious way to make perfectly good hardware obsolete. As Apple upgrades the OS, they slowly drop support for older hardware. This is usually justifiable; that’s just how OSes work, for a variety of reasons.

However it also means that if all apps have to target the latest OS just to get some UI features, they will quickly end up dropping support for older hardware as well.

If you want a good example of this, check out the download page for Calibre, a simple app that helps to manage and convert eBooks.


Apple provides longer support windows than most android manufacturers. If you don’t need the latest features then there’s nothing forcing you to throw away a working device.


But there is pressure forcing you to abandon functioning hardware; that’s my whole point. Obviously nobody should expect OS updates forever, but third-party software stops supporting old hardware that it would run fine on because of the way the runtime is tightly coupled with the OS.


The tradeoff is that the app runs faster, looks better, works better -- in quite the indirect way.

Now that the developers of the core part don't need to spend time on compatibility -- or just don't want to have to make the base choice of being a runtime dependency -- they can spend time on other things instead.

This seems like a net negative at a glance; on the surface it means the apps are less compatible, so the burden is pushed onto the older iterations. In practice, since each iteration has to worry about a lot less, the older iterations are _also_ a lot better instead.

It is no surprise to me that these Apple or Apple-like systems tend to be better overall, as opposed to the other philosophy, Android's.

It leaks into all the levels. In a Java app, it is usual to see a deprecation warning on something that keeps working and is maintained, and someone pays for that. The negative side is that there's no reason to get rid of said dependency, either.

My point is that lowering the maintenance cost of _any_ app, or of systems in general, leaves room for improvement in all the other areas, as long as you don't fall behind -- if you are allowed to fall behind, you can afford to; if not, the end result is better given enough time.


> It is of no surprise to me these Apple or Apple-like systems tend to be better overall

This might have been true in the past, but it's been getting worse over the last decade.

For instance: the new parts of macOS that are written in Swift seem to be mostly inferior to the parts they replaced (see for instance the new settings window written in SwiftUI, which UX-wise is a joke compared to the old one, even though the old settings window wasn't all that great either - case in point: try adding two DNS servers; searching for 'DNS server' only allows adding one item, then the DNS server panel closes and cannot be opened again without repeating the entire search; no idea how this mess made it through QA).

If Swift is so much better than ObjC, then we should start seeing improvements as users, but that doesn't seem to happen, instead things are getting worse in new OS versions. Why is that?


> The new parts in macOS that are written in Swift seem to be mostly inferior to the parts they replaced (see for instance the new settings windows written in SwiftUI, which UX wise is a joke compared to the old one

Swift is a programming language. SwiftUI is a UI framework. The programming language doesn’t dictate UX. The new Settings application doesn’t have worse UX because of its programming language.

> If Swift is so much better than ObjC, then we should start seeing improvements as users, but that doesn't seem to happen, instead things are getting worse in new OS versions. Why is that?

Because Apple are institutionally incapable of writing software at a sustainable pace and things gradually get worse and worse until somebody high up enough at Apple gets fed up and halts development to catch up with all the quality issues. This isn’t anything new to Swift; they took two years off from feature development to release Snow Leopard with “zero new features” because things had gotten too bad, which happened years before Swift. They are just far enough along in the current cycle that these problems are mounting up again.


> they took two years off from feature development to release Snow Leopard with “zero new features” because things had gotten too bad

This is not an accurate characterization of Snow Leopard. See "The myth and reality of Mac OS X Snow Leopard": https://lapcatsoftware.com/articles/2023/11/5.html See also: https://en.wikipedia.org/wiki/Mac_OS_X_Snow_Leopard

There were many significant changes to the underlying technologies of Snow Leopard. Moreover, Snow Leopard was not, despite the common misconception, a "bug fix release". Mac OS X 10.6.0 was vastly buggier than Mac OS X 10.5.8.


I was also (somewhat indirectly) responding to the claim in the parent.

> The tradeoff is that the app runs faster, looks better, works better.

I haven't noticed any of that so far in new macOS versions, and it is indeed not something where the programming language should matter at all.


> Swift is a programming language. SwiftUI is a UI framework. The programming language doesn’t dictate UX. The new Settings application doesn’t have worse UX because of its programming language.

Swift the language strongly informed SwiftUI, which in turn strongly informed the applications written in it. The path of least resistance defines the most likely implementation. If I have to go the extra mile to do something, I probably will not, so worse UX (by some metric) is a direct consequence of that constraint.


There’s not really anything wrong with the SwiftUI API. The implementation is just terrible, especially on macOS.

Jetpack Compose is a similar API on Android, except the implementation is good, so apps using it are good.


And vice versa... features were added to Swift the language to make certain SwiftUI syntax possible.


I really hope that the new version of the settings app causes enough backlash that Apple starts fixing SwiftUI on the Mac...


The weirdest thing about System Settings is that SwiftUI already supports much more Mac-like idioms. They deliberately chose to use the odd-looking iOS-style switches, bizarre label alignment, and weird unique controls. While also keeping the annoying limitations of the old System Preferences app, such as not being able to resize the window.


> no idea how this mess made it through QA

I assume that Apple does not have traditional QA that tries to break flows that aren’t the new hotness. The amount of random UX breakage of boring old OS features is quite large. Or maybe Apple doesn’t have a team empowered to fix the bugs?

To be somewhat fair to Apple, at least they try to keep settings unified. Windows is an utter mess.


That settings window... while I don't really think SwiftUI has anything to do with it... it's just so awful, lifted right out of iOS, where it is also just awful. As an Android user, I don't understand how people put up with that app.


The issue has nothing to do with programming languages or UI toolkits; it's just that before there were more people with more attention to detail, or now the management is so broken that there is no QA and no time to fix things.


A good operating system UI framework should enforce the operating system's UX standards, though, and make it hard to create a UI which doesn't conform to the rules.

But yeah, in the end, software quality needs to be tackled on the organizational level.


Really?

Just at the moment that Swift and, later, SwiftUI get introduced, and entirely coincidentally, management breaks?


It's been a gradual process; look at the Music app and how it has continuously got worse and buggier over the years, even without Swift and SwiftUI. You can't blame SwiftUI for that.


I see the whole thing a bit more holistically. Swift and SwiftUI are both symptoms of the more general malaise, and then contribute back to it.

We had this in hardware, with machines getting worse and worse and Apple getting more and more arrogant about how perfect they were. In hardware, they had their "Come to Jesus" moment, got rid of Jonathan Ive (who had done great things for Apple, but seemed to be getting high on his own fumes), pragmatically fixed what was wrong and did the ARM transition.

With software, they are still high on their own fumes. The software is getting worse and worse at every level, and they keep telling us and apparently themselves how much better it is getting all the time. Completely delusional.


> it also makes it non-updateable separately from the OS itself, which slows the spread of developments.

Hasn't it always been this way with native desktop applications using the OS vendor's UI toolkit? What I hear some folks describe as "the old days" of Windows, Mac, etc.


On the Mac, it was non-updateable separately from the hardware for a while (large parts of the UI toolkit shipped in ROM) :-)

I think there also always was a new OS version for the new hardware, though, with the new hardware not running at all on older versions.

The high speed at which computer tech improved at the time made that a bit of a smaller issue.


> On the Mac, it was non-updateable separately from the hardware for a while (large parts of the UI toolkit shipped in ROM) :-)

Apple had a mechanism (ROvr resources) to allow the system software on disk to override components from ROM.


I don’t know about tech getting better every day. If I look at what Apple itself is able to do with SwiftUI, especially compared to Cocoa, for example with the Settings app or Journal (which I assume is SwiftUI, I don’t know though)… it’s kinda pathetic.


> While it's good to have core technology as part of OS, it also makes it non-updateable separately from the OS itself, which slows the spread of developments.

Like JavaScript in the browser...


I think the Log4j debacle makes it painfully clear that we simply can’t afford to trust application developers to keep bundled libraries up-to-date.

Good idea, doesn’t work.


I think this is a great way to slow down unconditional hype-based adoption of new technologies, IMO; it gives time to make training materials. It’s probably why Apple doesn’t feel the need to make good tutorials, because there is an artificial delay between new features and their actual practical use.


I recently built a Swift app for the first time, and I was so pleasantly surprised. It's really easy to pick up; I also did zero styling or performance optimization, and it's pretty and fast.

My only regret was not using SwiftData (since it was only released with iOS 17); CoreData is not that nice.


CoreData is indeed not a Swifty experience, if only because of all of the boilerplate necessary.

Luckily SwiftData is really easy to pick up, and because it’s built on top of CoreData it’s pretty easy to convert over to SwiftData.
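
For comparison with the Core Data boilerplate, a minimal SwiftData sketch (iOS 17+ / aligned SDKs; the model here is illustrative):

    import SwiftData

    @Model
    final class Note {
        var title: String
        var created: Date

        init(title: String, created: Date = .now) {
            self.title = title
            self.created = created
        }
    }

    // At app startup (simplified):
    // let container = try ModelContainer(for: Note.self)
    // let context = ModelContext(container)
    // context.insert(Note(title: "Hello"))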


You can use both at the same time as well. But not all the Core Data features are supported yet. I use Core Data abstractions in my app and can’t port that part over to SwiftData yet.


"Because this was the very premise of Apple’s OS-based library distribution model: apps compiled for Swift 5 would work with an OS built on Swift 6; apps compiled with Swift 6 would still be able to “backwards-deploy” to an OS built on Swift 5. Without this, Apple couldn’t use Swift in its own public APIs."

How is an app built with Swift 6 runnable on an OS with just a Swift 5 runtime? I would have thought developers have to target the minimal Swift (and iOS) version at build time and live with the features available then.


Yes, we pick a deployment target (a lowest supported version of iOS/macOS/etc.) at build time, and we're then allowed to use language, standard library, and SDK features that are supported by that deployment target. We can also conditionally use newer features when the app runs on a newer target, using @available and #available syntax.

Swift 6 will not change the ABI in an incompatible way. If Swift 6 introduces features that require runtime support, then code that uses those features will not back-deploy, but other code will. We have seen this before. For example, Swift 5.7 implemented SE-0309 (“Unlock existentials for all protocols”). Some of the new features of SE-0309 required runtime support, and if you wrote code that used those features, the program would not back-deploy. But (I think) the compiler emits an error if your deployment target doesn't ship with the Swift runtime that supports those features.

The big changes in Swift 6 will be new semantic restrictions related to concurrency, and possibly a small amount of breaking changes to syntax. These changes will prevent some existing Swift 5 code from compiling in Swift 6 mode. That is the reason the major version number will change.



The comment about code signing perhaps being a blocker for third-party SDK/library deduplication doesn't seem to consider the FairPlay DRM on iOS apps. As far as I know, the code (.text) segment of all iOS App Store binaries is encrypted. I don't know the details of the encryption scheme (I would assume it is perhaps uniquely encrypted, keyed on the user's Apple ID?), but it sounds like this would be a larger problem than just code signing signatures varying between publishers (and I would assume all App Store apps are re-signed by a single Apple App Store authority, although enterprise/ad-hoc apps probably aren't).


Go's creator still uses and promotes Go. Python's creator still uses Python. C++'s creator still uses and promotes C++. Java's creator still uses and promotes Java. But Swift's creator no longer uses Swift AFAICT.

Why?


Did he ever use it much? It’s not self-hosting yet, so he’d be more a user of C++ than he was of Swift.


He is developing Mojo now, so I guess he has no reason to use it anymore. Also, perhaps developing a new language is an admission that S4TF was a failed effort.


And this is one area where Linux distros with their package managers really shine and totally beat all the competition.


In the time since this article was published, a new feature was introduced in Swift 5.8 that addresses the issue: @backDeployed [1] allows Apple to include a default implementation for a new API method that will be used when running on an older version of macOS.

It only works for functions/methods, but it will allow devs to use new APIs sooner!

[1]: https://www.hackingwithswift.com/swift/5.8/function-back-dep...
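
A minimal sketch of what that looks like, assuming a framework built for distribution (not Apple's actual code; the function is made up). The body is emitted into the client app and used as a fallback when the installed OS copy of the library predates the API:

    @available(macOS 12, iOS 15, *)
    @backDeployed(before: macOS 14, iOS 17)
    public func greetingBanner() -> String {
        // Fallback copy compiled into clients; on newer OSes the library's own
        // implementation is called instead.
        "Hello from the back-deployed fallback"
    }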


Seems similar to polyfills in web development. Still a game changer given the vagaries of browsers.


The more time passes, the more I feel like dynamic linking of anything but plain C interfaces is just a bad idea.


Would you consider using runtime things like Microsoft’s Component Object Model (COM) to be a form of dynamic linking?


The real question is probably "Is COM a bad idea?"

And I don't think it is. But COM itself was such a pain to deal with. Notably all of my time with it was spent from within .NET so maybe it was less painful in C++-- but .NET was at least partially designed to work well with COM. Insert crying emoji.

(Of course I think Microsoft intended us to stop using COM and instead use SOAP, but that didn't happen)


Since Vista, COM has been the main delivery mechanism for new Windows APIs.


SOAP is an object access protocol. COM is a component object model. How is SOAP providing a component model?


The only really important and surviving part of COM isn't the "component model idea", but that COM enables versioned interfaces and dynamic linking with a stable ABI for high level languages that usually don't have a stable ABI (like C++) by defining a standard of how high level language concepts can be tunneled through a regular C API.


What is even more overlooked is how similar ideas have been adopted in Apple and Google platforms (XPC, IO/Driver Kit, AIDL/Binder, FIDL/Zircon), less so in GNU/Linux, as not all distributions rely on D-Bus for similar ideas.


COM (a) requires dynamic linking underneath and (b) uses C interfaces underneath, or at least used to.


Only in-proc COM requires dynamic linking; COM servers rely on separate processes.

DCOM, nowadays out of fashion, also uses processes, and in the case of exposing in-proc COM to the network, a wrapper server process is required.



