Seems to be an older convention in linguistics. Romanizations of Japanese also switched from circumflexes (Tôkyô) to macrons (Tōkyō) at some point fairly long ago: I think the English-language journal on Japanese that I saw using the circumflex convention systematically was from the late 1950s, and its recent issues definitely don’t use it.
Perhaps a circumflex was easier to typeset, like with logicians switching from Ā to ¬A and the Chomskyan school in linguistics switching from X-bar and X-double-bar to X' and XP?
> Or was it the gradual phasing out of mainframe-class hardware in favour of PC-compatible servers
Proprietary Unix is still around. Solaris, HP-UX and AIX still make money for their owners and there are lots of places running those on brand-new metal. You are right, however, that Linux displaced most of the proprietary Unixes, as well as Windows and whatever was left of the minicomputer business that wasn't first killed by the unixes. I'm not sure when exactly people started talking about "Enterprise Linux".
Back then I went with Debian, but I agree - the early scale-out crowd went mostly with Red Hat. At the time there were also a lot of companies still doing scale-up on more exotic hardware running OSs like AIX and Solaris.
I first remember the term when Oracle ported themselves to Linux, began submitting patches, then began pushing Oracle on Linux to enterprises.
Oracle's big reason for doing so was that they could charge more for Oracle on Linux and still get to a lower total cost of ownership than Oracle on Solaris.
Oracle began this in 1998. By 2006 they had their own Oracle Enterprise Linux distribution.
Well, IBM very publicly invested 1 billion USD into supporting Linux on all their hardware with all their software: DB2 on Linux on S/390, likewise WebSphere, etc. It gave customers the promise of one runtime environment on anything. And SUSE, and shortly later Red Hat, provided truly source-compatible environments for software vendors: "code once, run anywhere", for real. And then IBM, Oracle, and co. forced SUSE and Red Hat to become binary compatible at the kernel/libc and basic-system-library level, so Oracle and all the rest could provide one binary under /opt on any Linux...
And that pulled all the other vendors along: HP, Dell, Fujitsu, and likewise for software...
And it all started with IBM officially supporting and pushing the hobbyist student project Linux on the holy grail of enterprise compute (as of 1999/2000): the S/390.
4.3BSD was basically designed without any bespoke hardware platform of its own. They commandeered DEC’s big iron, for the most part, to “dual boot” before dual-booting was cool. 386BSD began to enable PC-compatible hardware and really cost-effective “server farms” even before Linus was a twinkle in Finland’s eye.
Moreover, various BSD flavors were empowering admins to breathe new life into legacy hardware, sidestepping the proprietary software channels entirely. Linux, on the gripping hand, remained stuck on x86 for a while after BSD (and XFree86) was running everywhere.
Personally I ran Minix-286 at home; at university we enjoyed a “recreational” VAX-11 running not VMS but 4.3BSD.
Flash forward to 1998: from the arid but air-conditioned Sonoran Desert I received a gently-loved, matched pair of Apollo 425t systems with memory upgrades; I installed OpenBSD on both, as well as on my 486DX100! It was a homogeneous OS environment on heterogeneous hardware... and the 486, with Adaptec SCSI & a VLB #9GXE64 Pro, could boot into Windows 98 and run the Cygwin X server, or DOOM or Quake.
There was a golden age when a dude could walk into any surplus yard, grab Big Iron Unix Boxes, take them home and bootstrap NetBSD. On anything. Bonus: BSD originated in USA/Canada, for a trustworthy chain of trust. (Oh Lord, the encryption export technicalities...)
That’s normally what this means, yes, with a few more intermediate steps. There’s only one bootstrap chain like this that I know of[1,2,3], maintained by Jeremiah Orians and the Guix project; judging from the reference to 180 bytes, that’s what the distro the GP describes is using as well.
> This is a set of manually created hex programs in a Cthulhu Path to madness fashion. Which only have the goal of creating a bootstrapping path to a C compiler capable of compiling GCC, with only the explicit requirement of a single 1 KByte binary or less.
London is somewhat unusual in that its streets are actually rather wide for a European city, due to urban planning regulations enacted after the Great Fire and (until recently) the willingness of its inhabitants to demolish historical buildings. Paris, Vienna, or Prague, for example, are generally much denser, not to mention genuinely medieval cities like Girona.
> Even dns resolution on glibc implies dynamic linking due to nsswitch.
Because, as far as I’ve heard, it borrowed that wholesale from Sun, who desperately needed an application to show off their new dynamic linking toy. There’s no reason they couldn’t’ve done a godsdamned daemon (that potentially dynamically loaded plugins) instead, and in fact making some sort of NSS compatibility shim that does work that way (either by linking the daemon with Glibc, or more ambitiously by reimplementing the NSS module APIs on top of a different libc) has been on my potential project list for years. (Long enough that Musl apparently did a different, less-powerful NSS shim in the meantime?)
The same applies to PAM word for word.
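For concreteness, here is a rough sketch of what the client half of such a daemon could look like; everything in it (the lookupd name, the socket path, the line-based protocol) is invented for illustration rather than an existing interface, and a real NSS replacement would need a richer data model than one reply line:

    /* Hypothetical client side of a lookup daemon. The daemon, its socket
     * path, and its protocol are made up; the point is that the host
     * process only needs read()/write() on a socket, instead of dlopen()ing
     * NSS modules into its own address space. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    #define LOOKUPD_SOCKET "/run/lookupd.sock"   /* invented path */

    /* Send "database key\n", read back one response line. */
    static int lookupd_query(const char *db, const char *key,
                             char *reply, size_t replylen)
    {
        struct sockaddr_un sa = { .sun_family = AF_UNIX };
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        strncpy(sa.sun_path, LOOKUPD_SOCKET, sizeof sa.sun_path - 1);
        if (connect(fd, (struct sockaddr *)&sa, sizeof sa) < 0) {
            close(fd);
            return -1;
        }

        char req[256];
        int n = snprintf(req, sizeof req, "%s %s\n", db, key);
        if (n < 0 || (size_t)n >= sizeof req || write(fd, req, (size_t)n) != n) {
            close(fd);
            return -1;
        }

        ssize_t got = read(fd, reply, replylen - 1);
        close(fd);
        if (got <= 0)
            return -1;
        reply[got] = '\0';
        return 0;
    }

    int main(void)
    {
        char line[512];
        /* e.g. the daemon might answer with a passwd(5)-style line */
        if (lookupd_query("passwd", "alice", line, sizeof line) == 0)
            fputs(line, stdout);
        return 0;
    }

The statically linked client stays libc-agnostic; whatever plugin loading happens lives in the daemon's own address space.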
> Mixing static linking and dlopen doesn't make much sense, as said [in an oft-cited thread on the musl mailing list].
It’s a meh argument, I think.
It’s true that there’s something of a problem where two copies of a libc can’t coexist in a process, and that entails the problem of pulling in the whole libc that’s mentioned in the thread, but that to me seems more due to a poorly drawn abstraction boundary than anything else. Witness Windows, which has little to no problem with multiple libcs in a process; you may say that’s because most of the difficult-to-share stuff is in KERNEL32 instead, and I’d say that was exactly my point.
The host app would need to pull in a full copy of the dynamic loader? Well duh, but also (again) meh. The dynamic loader is not a trivial program, but it isn’t a huge program, either, especially if we cut down SysV/GNU’s (terrible) dynamic-linking ABI a bit and also only support dlopen()ing ELFs (elves?) that have no DT_NEEDED deps (having presumably been “statically” linked themselves).
So that thread, to me, feels like it has the same fundamental problem as Drepper’s standard rant[1] against static linking in general: it mixes up the problems arising from one libc’s particular implementation with problems inherent to the task of being a libc. (Drepper’s has much more of an attitude problem, of course.)
As for why you’d actually want to dlopen from a static executable, there’s one killer app: exokernels, loading (parts of) system-provided drivers into your process for speed. You might think this an academic fever dream, except that is how talking to the GPU works. Because of that, there’s basically no way to make a statically linked Linux GUI app that makes adequate use of a modern computer’s resources. (Even on a laptop with integrated graphics, using the CPU to shuttle pixels around is patently stupid and wasteful—by which I don’t mean you should never do it, just that there should be an alternative to doing it.)
Stretching the definitions a little, the in-proc part of a GPU driver is a very very smart RPC shim, and that’s not the only useful kind: medium-smart RPC shims like KERNEL32 and dumb ones like COM proxy DLLs and the Linux kernel’s VDSO are useful to dynamically load too.
And then there are plugins for stuff that doesn’t really want to pass through a bytestream interface (at all or efficiently), like media format support plugins (avoided by ffmpeg through linking in every media format ever), audio processing plugins, and so on.
Note that all of these intentionally have a very narrow waist[2] of an interface, and when done right they don’t even require both sides to share a malloc implementation. (Not a problem on Windows where there’s malloc at home^W^W^W a shared malloc in KERNEL32; the flip side is the malloc in KERNEL32 sucks ass and they’re stuck with it.) Hell, some of them hardly require wiring together arbitrary symbols and would be OK receiving and returning well-known structs of function pointers in an init function called after dlopen.
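To make that last option concrete, here is a minimal sketch of such an interface; the names (plugin_init, audio_plugin_v1, plugin.so) are invented, the “narrow waist” is one struct of function pointers, and the plugin owns its own state so neither side has to share the other's malloc:

    /* ---- shared header, agreed on by host and plugin ---- */
    #include <stddef.h>

    struct audio_plugin_v1 {
        /* the plugin allocates and frees its own state */
        void *(*create)(int sample_rate);
        void  (*process)(void *state, float *samples, size_t n);
        void  (*destroy)(void *state);
    };

    /* the only symbol the host ever looks up after dlopen() */
    typedef const struct audio_plugin_v1 *(*plugin_init_fn)(void);

    /* ---- host side ---- */
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        void *h = dlopen("./plugin.so", RTLD_NOW | RTLD_LOCAL);
        if (!h) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
            return 1;
        }

        plugin_init_fn init = (plugin_init_fn)dlsym(h, "plugin_init");
        if (!init) {
            fprintf(stderr, "dlsym: %s\n", dlerror());
            return 1;
        }

        const struct audio_plugin_v1 *api = init();
        void *state = api->create(48000);
        float buf[64] = {0};
        api->process(state, buf, 64);
        api->destroy(state);   /* freed by the same side that allocated it */

        dlclose(h);
        return 0;
    }

(Something like cc host.c -ldl builds the host half; the plugin gets built separately as a shared object exporting plugin_init.)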
> Witness Windows, which has little to no problem with multiple libcs in a process
Only so long as you don't pass data structures from one to the other. The same caveats wrt malloc/free or fopen/fclose across libc boundaries still apply.
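To spell the caveat out (other.dll, make_name, and free_name are invented stand-ins):

    /* Memory must be released by the same runtime that allocated it.
     * Pretend these two functions live in other.dll, built against a
     * different, statically linked CRT than the host program. */
    #include <stdlib.h>
    #include <string.h>

    char *make_name(void)       /* allocated with the "other" CRT's malloc */
    {
        char *s = malloc(6);
        if (s)
            strcpy(s, "hello");
        return s;
    }

    void free_name(char *s)     /* released by the CRT that allocated it */
    {
        free(s);
    }

    int main(void)
    {
        char *s = make_name();
        /* free(s);  <- heap mismatch if this side uses a different CRT:
         *              the pointer didn't come from "our" malloc        */
        free_name(s);   /* fine: handed back across the same boundary */
        return 0;
    }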
Well, not anymore, but only because libc is a system DLL on Windows now with a stable ABI, so for new apps they all share the same copy.
Yes, but in a culture where this kind of thing is normal (and statically linking the libc was popular for a while), that is mostly understood, CPython’s particular brand of awfulness notwithstanding. It is in any case a much milder problem than two libcs fighting over who should set the thread pointer (the FS segment base), allocate TLS, etc., which is what you get in a standard Linux userspace.
That's one of the reasons that OpenBSD is rather compelling. BSDAuth doesn't open arbitrary libraries to execute code, it forks and execs binaries so it doesn't pollute your program's namespace in unpredictable ways.
> It's true that there's something of a problem where two copies of a libc can't coexist in a process...
That's the meat of this article. It goes beyond complaining about a relatable issue and talks about the work and research they've done to see how it can be mitigated. I think it's a neat exercise to wonder how you could restructure a libc to allow multi-libc compatibility, but I question why anyone would even want to statically link to libc in a program that dlopen()s other libraries. If you're worried about a stable ABI with your libc, but acknowledge that other libraries you use link to a potentially different and incompatible libc, thus making the problem even more complicated, you should probably go the BSDAuth route instead of introducing both additional complexity and incompatibility with existing systems.
I think almost everything should be suitable for static linking, and that Drepper's clarification is much more interesting than the rant. Polluting the global lib directory with a bunch of your private dependencies should be frowned upon; it also hides the real scale of applications. Installing an application shouldn't make the rest of your system harder to understand, especially when it doesn't do any special integration. When you have to dynamically link anyway:
> As for why you’d actually want to dlopen from a static executable, there’s one killer app: exokernels, loading (parts of) system-provided drivers into your process for speed.
If you're dealing with system resources like GPU drivers, those should be opaque implementations loaded by intermediaries like libglvnd. [1] This comes to mind as even more reason why dynamic dependencies of even static binaries are terrible. The resolution works, but it would be better if no zlib symbols leaked from mesa at all (using --exclude-libs and linking statically), so that a compiled dependency cannot break the program that depends on it. So yes, I agree that dynamic dependencies of static binaries should be static themselves (though enforcing that is questionable), but I don't agree that the libc should be considered part of that problem and statically linked as well. That leads us to:
> ... when done right they don't even require both sides to share a malloc implementation
Better API design for libraries can eliminate a lot of these issues, but enforcing that is a much harder problem in the current landscape, where both sides are casually expected to share a malloc implementation -- hence the complication described in the article. "How can we force everything that exists into a better paradigm" is a lot less practical a question than "what are the fewest changes we'd need to ensure this would work with just a recompile". I agree with the idea of a "narrow waist of an interface", but it's not useful in practice until people agree on where the boundary should be and you can get everyone to abide by it.
That is more glib than insightful, I think: the programming equivalent of “as fast as you can” in this metaphor would likely be measured in lines of code, not CPU-seconds.
FYI, you seem shadowbanned[1] for some reason: all of your comments after the first one (and, now that I’ve vouched for it, this one) are marked “dead” as though downvoted to oblivion, though I find nothing that objectionable in any of them. I suggest you look at your comments page[2] from an incognito window (and perhaps contact 'dang for clarification?).
These kinds of things almost always give me an uncanny-valley feeling. Here I'm looking at the screenshot and can’t help noticing that the taskbar buttons are too close to the taskbar’s edge, the window titles are too narrow, the folders are too yellow, and so on and so forth. (To its credit, Wine is the one exception that is not susceptible to this, even when configured to use a higher DPI value so the proportions aren’t actually the ones I’m used to.) I’m not so much criticizing the theme’s authors as wondering why this is so universal across the many replicas.
Computing is largely a cargo cult thing these days.
The problem is that the interfaces these bootleg skins draw "inspiration" from were designed on the back of millions of pre-inflationary dollars' worth of R&D from only the best at Golden-Age IBM, Microsoft, Apple, etc. BeOS, OS/2, and Windows 95-2000 do not look the way they do because it looks good; they look the way they do because it works good, and countless man-hours went into ensuring that. Simply designing an interface that looks similar is not going to bring back the engineering prowess of those Old Masters.
I’m less inclined to attribute it to “these days”, as I remember the contemporary copycat themes in e.g. KDE and Tk looking off as well. Even Swing with the native look-and-feel didn’t quite look or feel right, IIRC.
As a (weak) counterpoint to supplicating ourselves to the old UI masters, I submit Raymond Chen’s observations from 2004[1] that the flat/3D/flat cycle is largely fashion, e.g. how the toolbars in Office 97 (and subsequent “coolbars”) had buttons that did not look like buttons until you hovered over them, in defiance of the Windows 95 UI standard. (Despite Chen’s characteristic confident tone, he doesn’t at all acknowledge the influence of the limited palettes of baseline graphics adapters on the pre-Win95 “flat” origins of that cycle.)
Also worth noting are the scathing critiques of some Windows 95 designs[2,3] in the Interface Hall of Shame (2000). I don’t necessarily agree with all of them (having spent the earlier part of my childhood with Norton Commander, the separate folder/file selectors in Windows 3.x felt contrived to me even at the time), but it helps clear up some of the fog of “it has always been this way” and remember some things that fit badly at first and never felt quite right (e.g. the faux clipboard in file management). And yes, it didn’t fail to mention the Office 97 UI, either[4,5]. (Did you realize that, among them, Access, VB, Word, and IE used something like three or four different forks of the same UI toolkit, “Forms3”, a toolkit that looked mostly native but was in fact unavailable outside of Microsoft?..)
None of that is meant to disagree with the point that submitting to the idea of UI as branding is where it all went wrong. (I’ll never get tired of mentioning that the futuristic UI of the in-game computers of the original Deus Ex, from 2000, supported not only Tab to go between controls and Enter and Esc to submit and dismiss, but also Alt accelerators, complete with underlined letters in the labels.)
> Despite Chen’s characteristic confident tone, he doesn’t at all acknowledge the influence of the limited palettes of baseline graphics adapters on the pre-Win95 “flat” origins of that cycle.
It's right in the second sentence: "...Windows 1.0, which looked very flat because... color depth was practically non-existent."
> I’ll never get tired of mentioning that the futuristic UI of the in-game computers of the original Deus Ex, from 2000, supported not only Tab to go between controls and Enter and Esc to submit and dismiss, but also Alt accelerators, complete with underlined letters in the labels
I think that's because they used the stock UI toolkit of the original Unreal Engine, which also had all these things. If you recall, UT'99 actually had a UI more like a desktop app at the time, complete with a menu bar and tabbed dialogs:
In modern times, telemetry can show how well new designs work. The industry never forgot how to measure and do user research for UI changes. We've only gotten better at it.
I've had an alternate theory for a while. Prior to verbose metrics, UIs could only be designed by experts and via small samples of feedback sessions. And UIs used to be much, much better. I suspect two things have happened:
- With a full set of metrics, we're now designing toward the bottom half of the bell curve, i.e., toward the users who struggle the most. Rather than building UIs which are very good but must be learned, we're now building UIs which must suit the weakest users. This might seem like a good thing, but it's really not. It's a race to the bottom, and it robs those novice users of ever having the chance to become experts.
- Worse, the fact that UIs must always serve the interests of the bottom of the bell curve is actually why we have constant UI churn. What's worse than a bad UI? 1,000 bad UIs which each change every 1-6 months. No one can really learn the UIs if they're always churning, and the metrics and the novice users falsely encourage teams to constantly churn their UIs.
I strongly believe that you'd see better UIs either with far fewer metrics, or with products that have smaller, expert-level user bases.
I don’t believe either is the primary driver of modern UI design. Cynical as it may be, I think the only things that get any level of thought are:
1. Which design is most effective at steering the most users to the most lucrative actions
2. What looks good in screenshots, presentations, and marketing
The rest is tertiary or an afterthought at best. A lot of modern UI is actually pretty awful for those aforementioned bottom-of-the-bell-curve users, and not much better for anybody else in terms of being easy to use or serving the user’s needs.
Proper use of analytics might be of assistance here, but those are also primarily used to figure out the most profitable usage patterns, not what makes a program more pleasant or easy to use. They’re also often twisted or misused to justify whatever course of action the PM in question wants to take, which is often to degrade the user experience in some way.
There's a much simpler explanation. At some point, the UI becomes about as good as it can be. It can't really be improved any further without changing the whole paradigm, and just needs to be maintained.
But product managers inside the large corporations can't get promoted for merely maintaining the status quo. So they push for "reimagining" projects, like Google's "Material Screw You" UI.
And we get a constant treadmill of UI updates that don't really make anything better.
Just because they're measuring doesn't mean they're measuring the same things as before.
The goal in 1995 might be "The user can launch the text editor, add three lines to a file, and save it from a fresh booted desktop within 2 minutes".
The goal in 2015 might be "we can get them from a bare desktop to signing up for a value-add service within 2 minutes"
I'd actually be interested if there's a lot of "regression testing" for usability-- if they re-run old tests on new user cohorts or if they assume "we solved XYZ UI problem in 1999" and don't revisit it in spite of changes around the problem.
Telemetry may tell you the "what" but, at best, it will only allow you to infer the "why". It may provide insights into how people do things, yet it will say nothing about how they feel about it. Most of all, telemetry will only answer the questions it is designed to answer. The only surprises will be in the answers (sometimes). There is no opportunity to be surprised by how the end user responds.
It can look better. This is basically a distro with Chicago95 out of the box and not well configured. If you take the time it can look more like 95. The Chicago95 screenshots IMO look better:
Ouch. That screenshot is uncomfortable to look at. The window title bars are painfully narrow, the frame borders have inconsistent thicknesses, the Start menu overlaps the taskbar, the vertical centering of text is wrong.
The answer to your question is that these replicas are of low quality. This one looks like the whole thing was made by someone (or a committee of people) lacking attention to detail.
True to a first approximation. A good second one is that a carefully enunciated [ɑ] isn’t correct, either; a schwa [ə] will sound better, so the vowels and the overall rhythm will be similar to the English word bazaar in a non-rhotic accent. Finally, the hard reality[1] is that all of this is heavily accent-dependent: in Vologda (500km from Moscow) you will hear [o] for the first vowel; in Ryazan (200km from Moscow) you can hear [a] ~ [æ]; even in Moscow itself, a radio announcer will say a fairly careful [ɐ] while someone who grew up in the poorer suburbs will have an almost-inaudible [ə] (this is a strong class marker).
The English word Moscow, meanwhile, is itself very interesting: it’s not actually a derivative of the Russian Москва, but rather a cognate, as both of them are derived[2] from different cases (accusative vs. locative or genitive) of the original Old East Slavic (aka Old Russian, aka Old Ukrainian, etc.) name.