As information, the current administration is making similar demands of foreign universities, trying to impose on the rest of the world the point of view of a president we didn't vote for.
> The reason I believe C is and always will be important is that it stands in a class of its own as a mostly portable assembler language, offering similar levels of freedom.
When your computer is a PDP-11; otherwise it is a high-level systems language like any other.
Less controversially, when you write C, you write for a virtual machine described by the C spec, not your actual hardware.
Your C optimizer is emulating that VM when performing symbolic execution, and the compiler backend is cross-compiling from it. It's an abstract machine that doesn't have signed overflow, has a hidden extra bit for every byte of memory that says whether it's initialized or not, etc.
Assembly-level languages let you write your own calling conventions, arrange the stack how you want, and don't make padding bytes in structs cursed.
These are all such nonsensical misinterpretations of what people mean when they say C is "low level". You absolutely don't write C for the C abstract machine, because the C spec says nothing about performance, whereas performance is one of the primary reasons people write C.
The existence of undefined behaviour isn't proof that there is a C "virtual machine" that code is being run on. Undefined behaviour is a relaxation of requirements on the compiler. It's not that the C abstract machine has no signed overflow; rather, the spec allows the compiler to do whatever it likes when signed overflow is encountered. This was originally a concession to portability, since the common saying is not that C is close to assembly, but rather that it is "portable" assembler. It is kept around because it benefits performance, which is again one of the primary reasons people write C.
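To make that concrete, here is a minimal sketch of the kind of latitude signed-overflow UB gives an optimiser; the folding described in the comments is what mainstream compilers typically do, not something the spec mandates:

```c
/* Because signed overflow is undefined, a compiler may assume that
 * `i + 1 > i` always holds and fold this whole function to `return 1;`. */
int step_increases(int i) {
    return i + 1 > i;
}

/* With unsigned arithmetic, wraparound is defined (UINT_MAX + 1 == 0),
 * so the same folding is not allowed and a real comparison remains. */
int step_increases_unsigned(unsigned i) {
    return i + 1 > i;
}
```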
C's performance exists thanks to UB and the value optimising compilers extract from it; back in the days of 8- and 16-bit home computers, any average Assembly developer could write better code than the C compilers were able to spit out.
> Less controversially, when you write C, you write for a virtual machine described by the C spec, not your actual hardware.
Isn't this true for most higher level languages as well? C++ for instance builds on top of C and many languages call into and out of C based libraries. Go might be slightly different as it is interacting with slightly less C code (especially if you avoid CGO).
That's a curious remark, although I guess it doesn't look high level from the eyes of someone looking at programming languages today.
C has always been classed as a high-level language since its inception. That term's meaning has shifted, though. When C was created, it was high level because it wasn't assembly (middle level) or directly writing CPU opcodes in binary/hex (low level).
> Describing C as "high-level" seems like deliberate abuse of the term
Honestly it doesn't really matter. High level and low level are relative to each other (and to machine language), and nothing changes based on what label you use.
While C was adapted to the PDP-11, that adaptation amounted to adding byte-level memory access. Otherwise I do not think there is anything in C specific to the PDP-11, or what would that be?
What makes C low-level is that it can work directly with the representation of objects in memory. This has nothing to do with CPU features, but with direct interoperability with other components of a system. And this is what C can do better than any other language: solve problems by being a part of a more complex system.
The post-increment and post-decrement operators mapped directly onto PDP-11 CPU addressing modes.
The integral promotion rules come directly from the PDP-11 CPU instruction set.
If I recall correctly, so do the float->double promotions.
CPUs started adapting to C semantics around the mid-80s. CPU designers would profile C-generated code and change their designs to run it more efficiently.
Thanks. I guess the integral promotion is related to byte addressing. If you have bytes but cannot directly do arithmetic on them, promoting them to word size seems natural.
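A small standard-C illustration of those promotion rules, for anyone who hasn't run into them:

```c
#include <stdio.h>

int main(void) {
    unsigned char a = 200, b = 100;

    /* Both operands are promoted to int before the addition,
     * so the intermediate result 300 does not wrap at 8 bits. */
    int sum = a + b;
    printf("%d\n", sum);        /* 300 */

    /* Only converting the result back to unsigned char truncates it. */
    unsigned char narrowed = a + b;
    printf("%d\n", narrowed);   /* 44, i.e. 300 mod 256 */
    return 0;
}
```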
Can you elaborate? C constructs generally map to one or a few assembly instructions at most. You can easily look at C and predict the generated assembly. This is in contrast to other compiled languages, like Go, that inject instructions for garbage collection and other runtime features.
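For illustration, a rough sketch of that mapping; the instructions in the comments are typical optimised x86-64 output, and the exact result of course depends on compiler, flags, and target:

```c
/* Simple arithmetic: usually a single ALU instruction plus a return.
 * Typically something like: lea rax, [rdi+rsi] ; ret */
long add(long a, long b) {
    return a + b;
}

/* Indexed load: usually a single scaled-index addressing-mode load.
 * Typically something like: mov rax, [rdi+rsi*8] ; ret */
long load(const long *p, long i) {
    return p[i];
}
```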
Yeah, people keep repeating that like a broken record lately, it smells like Rust to me.
No one is claiming it was built for today's processors, just that it puts fewer obstacles between you and the hardware than almost any other language, Assembler and Forth being the two exceptions I'm familiar with.
If a language is unpopular, people won't want to work for you and you'll run into poor support. Rewriting a library may take months of dev time, whereas C has an infinite number of libraries to work with and examples to look at.
Being old doesn't mean anyone knows the language. I mean, if the language predates C significantly and nobody uses it, then there's probably a really good reason for it. The goalposts aren't moving; they're just missing the shot.
C++ for one - it has atomics with well-defined memory barriers, and guarantees for what happens around them.
The real answer is obviously Assembly - pick a random instruction from any random modern CPU and I'd wager there's a 95% chance it's something you can't express in C at all. If the goal is to model hardware (it's not), it's doing a terrible job.
C lacks sympathy with nearly all additions to hardware capabilities since the late 80s. And it's only with the addition of atomics that it earns the qualification of "nearly". The only thing that makes it appear as lower level than other languages is the lack of high-level abstraction capabilities, not any special affinity for the hardware.
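For reference, the atomics in question arrived in C11 as <stdatomic.h>; a minimal release/acquire publish pattern looks roughly like this (names are illustrative):

```c
#include <stdatomic.h>
#include <stdbool.h>

static atomic_int  data;
static atomic_bool ready;

/* Writer: store the payload, then set the flag with release ordering
 * so the store to `data` is visible before `ready` reads as true. */
void publish(int value) {
    atomic_store_explicit(&data, value, memory_order_relaxed);
    atomic_store_explicit(&ready, true, memory_order_release);
}

/* Reader: the acquire load pairs with the release store above. */
bool try_consume(int *out) {
    if (atomic_load_explicit(&ready, memory_order_acquire)) {
        *out = atomic_load_explicit(&data, memory_order_relaxed);
        return true;
    }
    return false;
}
```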
For one, I would expect that a low-level language wouldn't be so completely worthless at bit twiddling. Another thing: if C is so low level, why can't I define a new calling convention optimized for my use case? Why doesn't C have a rich library for working with the SIMD types that have been ubiquitous in processors for 25 years?
It puts fewer obstacles in the way of dealing with hardware than almost any other language, for sure.
What's standardized was never as important in C land, at least traditionally, which I guess partly explains why it's trailing so far behind. But the stability of the language is also one of its features.
SIMD doesn't make much sense as a standard feature/library for a general-purpose language.
If you're doing SIMD, it's because you're doing something particular for a particular machine and you want to leverage platform-specific instructions, so that's why intrinsics (or hell, even externally linked blobs written in asm) are the way to go, and C supports that just fine (see the sketch below).
But sure, if all you're doing is dot products I guess you can write a standard function that will work on most SIMD platforms, but who cares, use a linalg library instead.
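For what it's worth, the intrinsics route looks roughly like this, assuming x86 with SSE (other targets have their own intrinsic headers, and the function name is just illustrative):

```c
#include <immintrin.h>  /* x86 SSE intrinsics; deliberately not portable */
#include <stddef.h>

/* Dot product of two float arrays whose length is a multiple of 4. */
float dot(const float *a, const float *b, size_t n) {
    __m128 acc = _mm_setzero_ps();
    for (size_t i = 0; i < n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);
        __m128 vb = _mm_loadu_ps(b + i);
        acc = _mm_add_ps(acc, _mm_mul_ps(va, vb));
    }
    /* Horizontal sum of the four lanes. */
    float lanes[4];
    _mm_storeu_ps(lanes, acc);
    return lanes[0] + lanes[1] + lanes[2] + lanes[3];
}
```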
Like, say I have a data structure that is four bits wide (consisting of a couple of flags or something) and I want to make an array of them and access them randomly. What help do I get from C to do this? C says "fuck you".
Pick an appropriate base type (uintN_t) for a bitset, make an array of those (each word holds N/4 four-bit values, so you need 4K/N of them, rounded up, for K values), and write a couple of inline functions or macros to set and clear those bits.
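A minimal sketch of that approach, assuming uint32_t as the base type (8 nibbles per word; the names are illustrative):

```c
#include <stdint.h>
#include <stddef.h>

#define NIBBLES_PER_WORD 8  /* 32 bits / 4 bits */

/* Read the i-th 4-bit value from a packed array of uint32_t. */
static inline unsigned nibble_get(const uint32_t *arr, size_t i) {
    unsigned shift = 4 * (i % NIBBLES_PER_WORD);
    return (arr[i / NIBBLES_PER_WORD] >> shift) & 0xFu;
}

/* Overwrite the i-th 4-bit value with v (only the low 4 bits of v are kept). */
static inline void nibble_set(uint32_t *arr, size_t i, unsigned v) {
    size_t   word  = i / NIBBLES_PER_WORD;
    unsigned shift = 4 * (i % NIBBLES_PER_WORD);
    arr[word] = (arr[word] & ~((uint32_t)0xF << shift))
              | ((uint32_t)(v & 0xFu) << shift);
}
```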
However, there is no official roadmap regarding C23 support, and now with the whole safety discussion going on and the Secure Future Initiative, it probably will never happen.
Additionally, clang is a blessed compiler at Microsoft; it is included in Visual Studio, so whatever MSVC doesn't support can be done in clang as an alternative.
They have added one feature (typeof) from C23, so maybe they will add the rest when they release C++26. Or maybe they won't. Microsoft is an expert in inflicting the cruelty of providing just enough hope.
C++26? They are having issues delivering C++23, given the whole change in security focus (Rust, Go, C#, Java first; C and C++ for existing codebases), which is most likely one of the reasons Herb Sutter is no longer at Microsoft.
Oh wow, I don't write C++, so I didn't know how bad the situation was. My recollection that MSVC always implemented C++ standards posthaste is clearly outdated.
Yup, we are never getting C23. Good thing C11 is decent enough, I guess.
Kotlin's fame is mostly due to Kotlin Virtual Machine, aka Android.
So as long as Android stays around, Kotlin will keep its relevancy.
Same with Swift and iDevices, especially since Metal was the only thing that Apple still bothered to implement in Objective-C first, with Swift bindings.
Now Ruby, well, outside Rails I hardly see any demand for it in the circles I move in.
Maybe the fame, but I've only recently taken up Kotlin and... I can't say anything bad about it. Yes, Java is catching up, to a degree, but Kotlin is really nice to use if you're on the JVM anyway. (non-mobile btw)
The JVM is built around Java, as is the whole ecosystem; it also doesn't help that JetBrains seems to only care about the JVM to the extent that it helps bootstrap Kotlin's own ecosystem.
Except for Android, there are no JVM implementations that take Kotlin into consideration.
The beauty of Kotlin is they don't need to.
I use Kotlin on the backend (never did Android development). It's amazing. A better version of Java than Java. Use any and all Java libraries transparently (there's nothing to change or do); heck, you can even have Java and Kotlin in the same codebase (something I haven't really done, but I tried it for fun).
Java and Kotlin both compile down to .class Java bytecode.
Suspend functions appear as regular JVM methods with an extra Continuation parameter. Tools like runBlocking make them straightforward to call from Java.
My main point was that you can use Java code as is without modifications from Kotlin, and that if you wanted you could have Java code in your project and it will work just fine. Of course if you wanna start calling co-routines from the Java part of your project to the Kotlin part, hey that's up to you.
I've had a few hits from recruiters lately that mention Ruby on Rails. My experience in the Salt Lake City valley was that several successful businesses started with that stack and now it's just sort of "entrenched". Some have been working to move to Elixir, but they still have to turn wrenches on the old (money-making) stack. It's kind of created this microcosmic COBOL dev-like niche. We still need folks who know this stuff. With all that said, I do think there are new startups that still choose Ruby on Rails for its quick time-to-market ability. So demand definitely varies by location.
There is, but it lacks all the nice tooling Objective-C and Swift enjoy for Metal development; it is only there for game developers to integrate Metal into their C++ engines without dealing with Objective-C++.
The only place C++ really gets first-class treatment in Metal is the shading language.
I definitely think that there's a difference in kind about what Google has done and what Microsoft attempted to do.
Microsoft wanted everyone on MSN instead of the Internet. They bullied their browser into being the only one, and then kept it crippled. They tried to own the scripting language (VBScript) of the Internet.
I'm going to try right now: Oh, looks like I can visit any website I want with Chrome. Or Chromium. Or now IE also using Chromium.
Google is of course still driven by capitalism, not altruism. But when you look at their history they've in the vast majority of cases done the right thing arguably for the wrong reason.
And that's because Google's incentives have been aligned differently. Microsoft earns money from Windows and Windows related services (very broad here, where I include Office). Until Bing, every time someone used the Internet instead of native apps, Microsoft basically lost money. Definitely lost power.
Every time someone spends more time on the Internet, Google earns more money, statistically. So Google, in a complete opposite to Microsoft, has been incentivised to help people get onto the open web.
Yes, after Microsoft's surrender the Chromium market share is too big. And it's a problem. But at least thanks to Apple you cannot make a website that only works on Chrome. Especially since Chrome on iPhone uses WebKit, not Chromium, because of Apple app rules.
Another problem with Google is that some important open-source projects have a large share of their maintainers who are Google employees. But the alternative is that they… not contribute? Didn't we say that big companies should give back to open source? But of course they'll work on what they need. Though there will be a large overlap.
It's kind of a first world problem that the open source (apache license) Kubernetes has "too many google employees" as contributors.
During Microsoft's domination, this was not the problem. This was not the problem at all.
What can you not do, or need to special case, with Safari+AWS?
Android, OK there Google asserts control. Not total control (see any Samsung phone), but a lot.
Apple platforms only had a command line after the NeXT reverse acquisition; it isn't as if A/UX was a huge success, so it is kind of ironic to see that mentioned.
It was especially clear in the early days of MS-DOS versus Mac OS.
As a grey dog around the prairie: had Microsoft actually been serious about the POSIX subsystem on Windows NT/2000, instead of treating it as marketing material and a low-effort checkbox, GNU/Linux adoption would never have taken off, at least not at a level that would have mattered.
With OS X on one side and the POSIX subsystem on Windows NT/2000 on the other, everyone would be doing their UNIX-like workflows without once thinking to try out GNU/Linux.
At my university we only cared about Linux in its early days, the Slackware 2.0 days, because naturally we couldn't have DG/UX at home, and that POSIX support was really unusable beyond toy examples.
Here is an article about the Trump administration's demands on our universities.
https://www-publico-pt.translate.goog/2025/04/11/ciencia/not...