At what point in the past were our programs stable, robust and just worked? Perhaps it was before my time, but DOS and Windows (3.11, 95) would crash constantly. Blue screens of death, infinite loops that would just freeze my computer, memory leaks that would cause my computer to stop working after a day.
I now expect my computer to stay on for months without issues. I expect to be able to put it to sleep and open it in the same state it was in. I expect that if a website or a program errors, my OS simply shrugs and closes it. I expect my OS to be robust enough that if I insert a USB drive or download a file, I'm not playing Russian roulette with a virus that could destroy my computer.
In the past I would shut down my computer at the end of every day because otherwise it would simply crash some time in the night. I would run defragmentation at least once a month. Memory errors and disk errors were common, and the OS had no idea how to recover from them. Crashes were so common, you just shrugged and learned to save often.
> At what point in the past were our programs stable, robust and just worked?
DOS was rock solid, at least around the era of DR-DOS. DESQView 386 was absolutely stable too. The BBS software I ran on them in those days was a wobbly piece of shit though.
I also recall Borland's Turbo Pascal compiler and the accompanying text-mode IDE being ultra reliable.
After DOS I've used OS/2 which was also extremely stable, although suffered from limited hard- and software availability.
Mac OS X used to be rock solid too in the heyday of the PowerBooks and earlier Intel MacBooks. Every now and then there were hardware design flaws though, and now the quality of both soft- and hardware seems to have taken a tragic turn for the worse.
You still do play Russian roulette whenever you plug a USB device into your computer; see "USB Rubber Ducky".
DOS was rock solid because it did nothing. Many programs, particularly games and anything that did networking, didn't even use DOS interfaces—they bypassed them entirely and worked with either the BIOS or hardware directly. There was no memory protection, no multitasking, and, on a higher level, no permissions nor sandboxing. So while maybe it "just worked", I wouldn't call it robust.
You're complaining that DOS lacked features and protections. But zero-cost abstractions like DOS can still be robust (in the sense of being reliable and solid). Yes, it requires more trust, but there is no guarantee that modern OSes can run arbitrary zero-trust binaries either.
If you run a buggy program on a modern OS, it won't crash the system or impact other processes. If you run a buggy program on DOS, it will write to random physical addresses, probably clobbering the state of other processes and of DOS.
Modern OSes can't safely run arbitrary binaries, but they can pretty much run arbitrary non-adversarial binaries - problematic binaries have to be intentionally written to exploit the system (as opposed to DOS, where non-problematic binaries had to be intentionally written to not break the system).
DOS was a configuration nightmare; you could run games that required up to about 600 KB of conventional memory, but only with a ludicrous amount of hacking that was harder to figure out in the pre-internet days.
The "rubber ducky" attack is, of course, also possible with PS/2, XT, and even ADB keyboards, because none of them were authenticated.
Boy, that's not how I remember DOS. I remember playing with all kinds of variants of driver load order in config.sys and passing obscure arguments into himem.sys to avoid odd hardware conflicts and crashes.
I always wondered what the world would be like if Microsoft had just made a 32 bit DOS instead of going down the WinNT/95 route. Most of the headache in config.sys and friends was because you were working around the 16 bit address space. However, there was something really nice about owning your entire machine and only needing command.com for the "operating system". Compare this to full operating systems which consume gigabytes of disk and memory.
This hypothetical 32 bit DOS could've had memory protection and multitasking too. Obviously device drivers would add complexity, but it doesn't need to be as complex as it's become.
DOS isn't even an operating system in the modern sense. Once you add preemptive multitasking and memory protection, you're simply going to end up with a normal modern operating system kernel again.
On the other hand, the stuff that takes "gigabytes of disk and memory" isn't even part of the operating system kernel, so there's no need to start from DOS to get rid of that stuff. It's possible to run linux from a few megabytes of ram.
> Once you add preemptive multitasking and memory protection, you're simply going to end up with a normal modern operating system kernel again.
You're missing the point. There is no single-file operating system for desktop users (maybe VxWorks or some other embedded OS falls into that category, but those aren't really for desktops). Modern operating systems sprawl all over the disk. Memory protection and multitasking are not large features, and CS undergrads all over the world routinely implement them in less than a semester.
> It's possible to run linux from a few megabytes of ram.
A few megs of ram and a directory in /etc filled with startup stuff and config files. Clearly you don't appreciate it, but there was something really nice about being in the root directory and seeing only command.com and config.sys. The entire rest of the machine was yours to setup however you liked. The things most people hated about DOS really had more to do with the 16 bit address space and segmented architecture.
Sometime in the early or mid 90s I had a FreeBSD box on 2.something - installed because I disliked unreliable, flaky Windows so much - that passed a year of uptime. It was my daily driver during that time, often doing stuff while I was out at work or sleeping. Cutting CDs, which had been simply bulletproof on the Amiga, became so incredibly delicate and flaky on Windows that it was one of the pushes to go BSD instead. I mostly kept using the Amiga as the most reliable option for that.
The early-90s Suns and SGIs didn't crash much either - though in a dev shop, sure, we could push them to panic from time to time. The bigger iron just ran indefinitely, often until an OS upgrade. :)
Now obviously this talk is game related, but even my previous Amigas were more reliable for uptime than DOS and Windows if you stayed within Workbench - often passing into months. The mostly undeserved reputation of the Amiga for constant crashing came from games hitting the hardware directly, and from those Guru Meditation messages instead of the silent freeze or pretty random colours that other platforms gave.
All were online, though not much web yet - mainly ftp, newsgroups and dial up BBS's.
In my experience, DOS and every Windows from 1.2 to 95 never crashed on me. For me it started with Windows 98 (heavily pirated by people), and the horror story was Windows Me. My wife told me to do something about it, and as copies of Windows 2000 were given away free with magazines, I used one. Windows 2000 was such a relief after Windows Me! But there was no USB and other niceties in W2K.
The nightmare started again with Windows XP, then I switched to Ubuntu which was reminiscent of Windows 2000.
A funny thing, and proof of the solid interfaces in Windows 3.1/3.11, is that people were making their own versions by removing/adding components and sometimes even changing their contents with hex editors.
There is still a fandom for old Windows versions out there.
And you could catch viruses literally by hand by looking in kernel files, checking their size, and checking what was loaded in memory.
Windows 95 itself was relatively stable, but the drivers generally were not. Your experience varied depending on the hardware you were running and the stability of the associated drivers.
Well said! Windows plus the associated app software, drivers and viruses used to be the biggest source of problems back then. Today I have none of that with Mac, iOS or Android. Going without a reset for days or months is normal and expected.
The problems you experienced were not with DOS, but with Windows 3.11 / 95. DOS itself was one of the most stable platforms I've ever worked with. I personally worked on a NetWare server running on DOS that had an uptime of over 20 years. DOS's stability was not an outlier. Many of the UNIX machines I worked on that predated DOS had uptimes that all measured in months and years.
The only reason Windows was so buggy for you is that you were using the home editions. At the same time you were experiencing blue screens in Win 9x, my NT workstation was rock solid, without any of the issues you described.
I hear this claim occasionally but it doesn't match my experience. The very first time I used Windows NT 4 (probably 1997 or 1998), I couldn't figure out how to log out, so I chose Start -> Help to look it up. Bluescreen.
In the subsequent months/years with NT 4, the situation did not improve. It was a sad day when they replaced the HP/UX section of the lab with more NT machines. They were faster but they crashed a lot. It really took until Vista before NT was reliable.
Windows NT was originally a microkernel architecture. NT4 moved a bunch of code back into the kernel space for performance reasons.
Most notably: graphics and printer drivers, which are not typically written to the highest standard.
Big iron vendors don't really have that problem, since they typically control their hardware as well. Microsoft had to rely on component vendors to provide driver software and couldn't plausibly test all permutations under all conditions (even though they test very, very many).
I still have the habit of constantly hitting cmd-s everywhere, it’s a reflex I’ll probably never unlearn. I also cringe when I see people working on a bunch of files which have not been saved for a while or, god forbid, not at all. Completely irrational but it’s what I’ve been programmed to do for years ;)
This is exactly how I felt watching the video. It seems just like the same old 'wasn't everything better in the old days' nonsense. Not to mention the fact that in those days, a computer was something that you had in one room in your house, and wasn't often connected to other computers. Nowadays, computers are everywhere. I personally would argue that the rate of increase in safety in code hasn't kept pace with the rate of code being put in things, but that's a whole different kettle of fish.
As a JavaScript developer I strongly resonate with the quote at 14:50 into the video. In summary, all of the silicon industry's chips at the time were full of defects, and often the same defects across various vendors. The industry was completely aware of this. The problem is that the original generation of chips was designed by old guys who figured it out. The current generation of chips (at that time) was designed by youngsters working in the shadow of the prior generation and not asking the right questions because they were not aware of what those questions were.
A decade ago JavaScript developers had little or no trouble working cross-browser, writing small applications, and churning out results that worked reasonably well very quickly. It isn't that cross-browser compatibility had been solved, far from it, but that you simply worked to the problem directly and this was part of regular testing.
That older generation did not have the benefit of helpful abstractions like jQuery or React. They had to know what the APIs were and they worked to them directly. The biggest problem with this is that there weren't many people who could do this work well. Then shortly after the helpful abstractions appeared and suddenly there was an explosion of competent enough developers, but many of these good enough developers did not and cannot work without their favorite abstractions. These abstractions impose a performance penalty, increase product size, impose additional maintenance concerns, and complicate requirements.
The ability to work below the abstractions is quickly becoming lost knowledge. Many commercial websites load slower now than they did 20 years ago despite radical increases in connection speeds. To the point of the video this loss of knowledge is not static and results in degrading quality over time that is acceptable to later generations of developers who don't know the proper questions to ask.
Actually, I don't think that abstractions are the problem. I mean, the whole OSI model is made of abstractions. Abstractions are at the core of software development.
And I also don't think that jQuery is the problem. jQuery just made JS worth learning. Before it, you had to spend an insane amount of time just working out implementation specifics that changed every few months.
However, the point where I do agree with you is that we have a performance issue with JS. And I am not talking about slow JS engines. I am talking about developers who are not aware of how costly some operations are (e.g. loading a bunch of libraries). Yes, that is an issue that naturally arises with abstractions, but to conclude that abstractions themselves are the problem is wrong.
I think the problem is more about being aware of what happens in the background. You don't have to know every step for every browser and API, but loading 500KB of dependencies before even starting your own scripts is not going to be fast in any browser.
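To make that concrete, here's a minimal sketch of one mitigation: defer the heavy dependency with a dynamic import() so it stops blocking startup. The element id, module path and generatePdf function are hypothetical; the point is only that the heavy code is fetched on first use instead of up front.

    // Wire up the page immediately; fetch the heavy library only on first use.
    const button = document.querySelector('#export-pdf');   // hypothetical element

    button.addEventListener('click', async () => {
      // import() downloads and parses the module on demand,
      // so the initial page load doesn't pay for it.
      const { generatePdf } = await import('./pdf-library.js');   // hypothetical module
      generatePdf(document.querySelector('#report'));
    });

The same idea applies to any dependency that only a fraction of visits actually exercise.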
The abstractions aren't making this problem; the developers without a willingness to work under the abstractions are.
I wrote the following tool in less than 90 minutes five years ago because I had some time left over before presentations at a company hack-a-thon. I updated it recently with about another 90 minutes of work.
That tool was trivial to write and maintain. It has real utility value as an accessibility tool. I could write that tool because I am familiar with the DOM, the layer underneath. Many developers are not even aware of the problems (SEO and accessibility) this tool identifies much less how to write it. jQuery won't get you the necessary functionality.
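To give a flavor, here is a minimal sketch of the kind of DOM-level checks I mean (not the actual tool; the two checks are just illustrative):

    // Flag two common accessibility/SEO problems visible directly in the DOM:
    // images with no alt text, and links with no accessible name.
    const imagesMissingAlt = document.querySelectorAll('img:not([alt])');
    const unlabeledLinks = [...document.querySelectorAll('a')]
      .filter(a => !a.textContent.trim() && !a.getAttribute('aria-label'));

    console.log(imagesMissingAlt.length + ' images without alt text');
    console.log(unlabeledLinks.length + ' links with no accessible name');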
In the jQuery era (the Firefox releases and IE6-10ish) they didn't change every few months; in fact they didn't change at all for about a decade, which was part of the problem.
Working with the DOM in JS wasn't hard, it was just time consuming and required lots of boilerplate code.
jQuery made it quick, easy and cross-browser, even if FF was still a small percentage of users.
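From memory, the before/after looked roughly like this (a sketch, not exact period code):

    // Pre-jQuery: attach a click handler in a way that works in both old IE
    // and standards browsers, then hide every element with class "box".
    function addClick(el, handler) {
      if (el.addEventListener) {
        el.addEventListener('click', handler, false);   // standards browsers
      } else if (el.attachEvent) {
        el.attachEvent('onclick', handler);              // IE before version 9
      }
    }

    var divs = document.getElementsByTagName('div');
    for (var i = 0; i < divs.length; i++) {
      if ((' ' + divs[i].className + ' ').indexOf(' box ') !== -1) {
        divs[i].style.display = 'none';
      }
    }

    // With jQuery, the same two ideas:
    //   $('#button').click(handler);
    //   $('.box').hide();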
JavaScript itself didn't change AT ALL for years before and after jQuery came out, so I have no idea what you are talking about. Plus, HTML5 came years after jQuery, and HTML 4 was released in the 90s; you've got the wrong recollection of history.
The lack of change was part of what made jQuery so ubiquitous.
I wonder a bit why you write 'IE6-10ish', as IE7 was already a big change (not talking about IE8 or IE9, or what the other vendors did during that period). So when jQuery was released, we already had the split between standards-compliant and IE(6)-compliant implementations, and browser development as a whole started to get traction again.
So yes, the standards didn't really change during that period, but the real-world implementations did. And jQuery gave you a way to learn just one thing and not care about what all those browser vendors were doing.
When IE7 was released, absolutely no changes to JavaScript or HTML happened; IE just became slightly more standards-compliant.
AFAIK the big thing in IE7 was that it had tabbed browsing, like FF. And slightly improved JS performance, which V8 put to shame a couple of years later.
The actual split between IE6/7 and FF was mainly in the Ajax syntax, not the DOM, etc.
My impression from what you're saying is that you didn't program js in the 2000s, did you? I did.
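For anyone who wasn't around then, that Ajax split looked roughly like this (a from-memory sketch of the usual feature test):

    // Creating the request object was the main cross-browser fork:
    function createXhr() {
      if (window.XMLHttpRequest) {
        return new XMLHttpRequest();                  // Firefox, Safari, IE7+
      }
      return new ActiveXObject('Microsoft.XMLHTTP');  // IE6
    }

    var xhr = createXhr();
    xhr.open('GET', '/some/url', true);
    xhr.onreadystatechange = function () {
      if (xhr.readyState === 4 && xhr.status === 200) {
        // use xhr.responseText here
      }
    };
    xhr.send(null);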
> However, the point where I do agree with you is that we have a performance issue with JS. And I am not talking about slow JS engines. I am talking about developers who are not aware of how costly some operations are (e.g. loading a bunch of libraries). Yes, that is an issue that naturally arises with abstractions, but to conclude that abstractions themselves are the problem is wrong.
I've seen the map/reduce way defended as being "more readable and maintainable", with plenty of agreement. When I contested it, mental gymnastics ensued and did not let up. Nobody dislikes performance, not really, but I think some don't like reflecting on how they arrived at their opinions. That's the bit they're really invested in.
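For the record, the kind of code in question is something like this toy comparison (real cases involve much larger arrays and more stages, which is where the intermediate allocations start to matter):

    const orders = [{ total: 120, refunded: false }, { total: 80, refunded: true }];

    // Chained version: each stage allocates an intermediate array.
    const refundedTotal = orders
      .filter(o => o.refunded)
      .map(o => o.total)
      .reduce((sum, t) => sum + t, 0);

    // Single loop: same result, no intermediate arrays.
    let total = 0;
    for (const o of orders) {
      if (o.refunded) total += o.total;
    }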
And in general, "what does this abstraction stand for?" is a very dangerous question: if you ask it about computer stuff, you might also ask it about other things, and there are more groups that don't like that than there are people in the world. Not to make this too political, but I think the pressure against thinking for yourself is way, way bigger than the demand for performance. Just think of Ignaz Semmelweis.
Well, I came from the pre-jQuery era, but the whole industry is obsessed with React. There are tons of tools available and web components are becoming standard. Ultimately, using native APIs is far more performant, and it's also more standard and consistent; it's easier than in the old days to write pure vanilla JS.
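As an illustration of how little vanilla code a reusable component needs these days, here is a minimal custom element (naming is arbitrary, just a sketch):

    // Usable in markup as: <copy-text value="some text" label="Copy"></copy-text>
    class CopyText extends HTMLElement {
      connectedCallback() {
        const button = document.createElement('button');
        button.textContent = this.getAttribute('label') || 'Copy';
        button.addEventListener('click', () => {
          navigator.clipboard.writeText(this.getAttribute('value') || '');
        });
        this.appendChild(button);
      }
    }
    customElements.define('copy-text', CopyText);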
I have heard people say this in justification of their framework, library, abstraction, whatever. The reality is that, from my experience, I can easily replace maybe 8 other JavaScript developers.
I am not saying that out of arrogance or some uninformed guess at my radical superiority. I am saying it out of experience. It isn't because I am smart or a strong programmer. It has proven true for me because, when writing vanilla JS, I am the bottleneck and the delay. I am not waiting on tools, performance delays, code complexity, or anything else. If there is bad code, it's because I put it there and I am at fault, so I have nothing to blame but myself. Knowing this, and I mean as a non-cognitive emotional certainty, means I can solve the problem immediately with the shortest effort necessary, or it isn't getting solved ever, and that changes my priority of effort. It also means not tolerating a slow, shitty product, since you are fully in control (at fault).
When people go through project management training I tell them there are only two kinds of problems: natural disasters and bad human decisions. When you can remove blame from the equation the distance between you and the problem/solution becomes immensely shorter.
> not asking the right questions because they were not aware of what those questions were.
To me, this sounds like NIH syndrome, and like those who are tasked with creating new stuff either lack a comprehensive education, or the "old guard" is not transmitting the knowledge in a more permanent form (like a book).
> many of these good enough developers did not and cannot work without their favorite abstractions
I would argue that those were not "good enough" but "barely know enough". My mantra for using a library is: if you could've written the library yourself, then use it. Otherwise, you don't know enough, and using it as a black box is certainly going to lead to disaster in the future (for you or some other poor soul).
> My mantra for using a library is: if you could've written the library yourself, then use it. Otherwise, you don't know enough, and using it as a black box is certainly going to lead to disaster in the future (for you or some other poor soul).
That is largely my attitude to using any package/library: if the entire dev team gets hit by a bus tomorrow, can I maintain this (i.e. keep it working reliably in production)? If the answer is no, I nearly always avoid it, and if I can't avoid it, I wrap it in my own layer so that I can replace it later.
I've been an enterprise developer for a long time so my worldview is shaped by "this will likely stick around twice as long as anyone expects minimum" though.
I have a friend with that attitude. It's led him to implementing his own encryption libraries, UI libraries and so forth, and...
Well, I have to admit he's smart, but the software he writes outside of work is some of the worst I've ever used. Furthermore, I was able to crack his RSA implementation with a straightforward timing attack.
Some things shouldn't be reimplemented.
Perhaps he could have implemented them, then used something else? True, but that's a hard sell.
Surely you can at least estimate if you could have created/could maintain a library without actually doing a reimplementation, e.g. by diving into a bug or two?
There is too much copy pasting in web-development. I mean, we still use the MVC pattern to run almost all our web-apps because the MVVM pattern would require us to hire additional developers to do the same thing.
Between Ajax and jQuery we can build things that are perfectly reactive, with the added benefit of being extremely easy to debug. They can't run offline, though, and if we need to build something that does, then we'll turn to something like Angular/React/Vue (typically Vue, because we actually use Vue components in our MVC apps from time to time, but which one is beside the point).
When we interview junior developers about this, they often think we’re crazy for not having access to NPM, but we’re the public sector, we need to know what every piece of our application does. That means it’s typically a lot easier to write our own things rather than to rely on third party packages.
I loved this talk, and in particular the point about programmers being forced to learn trivia instead of deep knowledge. I just started a new job at a big tech company, and I've spent a whole week so far trying to figure out how to use the build tool. The frustrating part is that most of the software modules my team is working on aren't very complicated. The complexity comes from pulling in all sorts of 3rd party libraries and managing their transitive dependencies.
Great talk. I agree that software is on the decline. You can see it in your OS, on the web, everywhere. Robust products are replaced with "modern" crappy redesigns. We are surprised if the thing still works after 5 years.
I don't agree with his conclusions. The real source of the problem is that now we have maybe 100,000x more software than we had in the 70s. That's that many more programmers, so not just the 1% smartest greybeards as before. We need more abstractions, and yes, they will run slower and have their issues.
Also, not everybody is sitting at the top of the hierarchy of abstractions. Some roll up their sleeves and work on JIT runtimes and breakthrough DB algorithms.
All those blocks of software need to communicate with the platform and between themselves. IMO the way out is open source. Open platforms, open standards, open policies. Every time I found a good piece of code in a company's huge codebase, it was an open source library. Every time. You have to open up to the external world to produce a well-engineered piece of software. The lack of financial models for open source is the obstacle. We should work on making simple and robust software profitable.
> The real source of the problem is that now we have maybe 100,000x more software than we had in the 70s. That's that many more programmers, so not just the 1% smartest greybeards as before.
The greybeards from the 70s weren't much smarter than today's programmers. They were the same curious hackers from today, with the advantage of being born in the right place at the right time, when the technology was still developing, so they were forced to build their own tools and operating systems.
> We need more abstractions, and yes, they will run slower and have their issues.
I disagree, and side with Jon Blow on this: abstractions (if done well) create the illusion of simplicity and more often than not hide the apparent complexity of lower levels. Sometimes this complexity is indeed too difficult to work with, but often it's the problem itself that needs to be simplified instead of creating an abstraction layer on top.
I think as an industry we've failed to make meaningful abstractions while educating new programmers on the lower level functionality. A lot of today's programmers learned on Python, PHP, Ruby, JavaScript, etc., which are incredibly complex tools by themselves. And only a minority of those will end up going back and really learning the fundamentals in the same way hackers in the 60s and 70s did.
> IMO the way out is open source. Open platforms, open standards, open policies.
Agreed. But education and simplification are also crucial.
I'm consistently confused how we manage to run so many more lines of code, yet our software doesn't really do anything it didn't do in the late 90s. Back then I chatted over IRC, browsed the web and played video games. Now I chat over WhatsApp, browse the web and play indie video games. In 2009 Chrome had 1.3M SLOC. Today it has 25M. And that number has been going up linearly - since 2016, not including comments, Chrome has added 2.1M SLOC per year. That's another entire 2009 Google Chrome web browser in code added to Chrome every 8 months. Can you name a single feature added to Chrome in the last 8 months? I can't. As Blow says, productivity (measured in features per LOC) has been trending toward 0 for a long time. What a tragic waste of Google's fantastic engineering talent.
I pick on Chrome because the data is available. And because I regretfully have about 8 copies of that code on my computer. But I bet we'd see the same curve with lots of modern software. The LOC numbers for Microsoft Windows have become so large that I can't really comprehend how so many programmatic structures can do so little.
I once heard this architecture pattern referred to as a pile of rocks. Piles of rocks are really simple and elegant - you can always add features to your pile of rocks. Just add rocks on top until it's tall enough! Piles of rocks are really easy to debug too. Just shake the pile (unit tests), and when anything collapses, add rocks until the hole is filled in (= patch that specific issue). Then rinse and repeat. You don't need to bother with modelling or proofs or any of that stuff when working on a pile of rocks.
Look at those Haskell programmers over there, building aqueducts using archways. Peh, they should get jobs writing real programs.
Lack of inter-generational knowledge transfer doesn't cut it. Most of the people who rolled this stuff are still alive. And as for the whipper-snappers: people don't get very far writing programming languages/video games/operating systems without knowing their stuff.
The real boogeyman is feature combinatorics. When making a tightly-integrated product (which people tend to expect these days), adding "just" one new feature (when you already have 100 of them) means touching several (if not all 100) things.
Take OpenBSD for example: When you have a volunteer project by nerds for nerds, prioritizing getting it right (over having the fastest benchmark or feature-parity with X) is still manageable.
Bring that into a market scenario (where buyers have a vague to non-existent understanding of what they're even buying), and we get what we get. Software companies live and die by benchmark and feature parity, and as long as it crashes and frustrates less than the other guy's product, the cash will keep coming in.
> When making a tightly-integrated product (which people tend to expect these days)
Do they? It was my impression that the recent evolution of user-facing software (i.e. the web, mostly) was about less integration, due to the reduced scope and capabilities of any single piece of software.
> adding "just" one new feature (when you already have 100 of them) means touching several (if not all 100) things.
This sounds true on first impression, but I'm not sure how true it really is. Consider that I could start rewriting this as "adding 'just' one new program when you already have 100 of them installed on your computer"... and it doesn't make sense anymore. A feature to a program is like a program to OS, and yet most software doesn't involve extensive use, or changes, of the operating system.
The most complex and feature-packed software I've seen (e.g. 3D modelling tools, Emacs, or hell, Windows or Linux) doesn't trigger combinatorial explosion; every new feature is developed almost in isolation from all others, and yet tight integration is achieved.
> Consider that I could start rewriting this as "adding 'just' one new program when you already have 100 of them installed on your computer"... and it doesn't make sense anymore.
Turns out when you change the words of a statement, it changes the meaning of that statement.
And this is actually more the rule than the exception - once you have more than 2 plugins, plugins colliding or blocking future updates of the main software is more or less the norm.
Not an issue if there are no third party plugins. It's however hard to resist allowing third party plugins when you already have the architecture. It's also hard to resist feature bloat when adding new features seems free.
If there are no third party plugins, then you don't have plugins - it's an internal architectural decision, not relevant for the end users.
Having plugins means anybody should be able to create one.
I remember Vagrant has support for all historic plugin versions no matter the current API version. This is rare goodness, but it prevents only one type of problem - the inability to update the core.
Or, just think about your phone. If I put my head to the speaker, a sensor detects that, and the OS turns off the screen to save power. If I'm playing music to my Bluetooth speaker, and a call comes in, it pauses the song. When the call ends, the song automatically resumes.
KT's UNIX 0.1 didn't do audio or power management or high-level events notification.
> Lack of inter-generational knowledge transfer doesn't cut it. Most of the people who rolled this stuff are still alive.
I think he means generations in terms of the workplace/politics where you can have a generation change every few years. Meaning that most old guys go and new guys come. Technically you could ask the old guys because most are still alive but it doesn't happen for a lot of different reasons.
I tend to agree that OpenBSD is hitting the spot a lot better, but the problem I have is that there's not enough momentum that it keeps up with hardware releases. They had been maintaining kernel drivers for AMD GPUs for a while, but it seems they stopped updating regularly. I now own no hardware from the last decade that OpenBSD can get accelerated graphics on, and I need accelerated graphics to power the displays that allow me to be productive (by showing me enough information at once that I can understand what I'm doing).
I was having a conversation with somebody the other day about a privacy concern they were addressing, where a company was offering to monitor cell signals for some retail analytics purpose; and it was genuinely surprising to them that mobile phones broadcast and otherwise leak information that can be used to fingerprint the device. I think it's rather shocking the amount of ignorance people allow themselves to have when it comes to things like this. Furthermore, the way she was talking about it, it seems she thought it was the responsibility of basically anyone but the owners of these devices to consider things like this, or even ask the questions that would tell you something like this exists.
It seems to me that there is a cultural problem that deep expertise is not valued, because it is difficult to understand and to get somebody "flexible" is easier.
I was just at a workshop about https://en.wikipedia.org/wiki/Design_thinking. The whole premise was that you don't actually need to hire an (expensive, inflexible) expert, who understands how something is done, but rather what you need to do is to "observe" an expert.
But imagine what happens when everybody does that! Everybody gets rid of their experts, assuming that the client (who they are supposed to provide the service for) has the actual expertise. And they are assuming the same about their clients and so on. The end result is complete disregard for expertise.
So expertise is a positive externality, in the economic sense. Nobody is incentivized to keep more of it than necessary. This leads to losses over time.
This is very common in industries with boom/bust cycles. Lots of experts during the boom; they leave when the bust comes, and then when the next boom comes there are lots of problems expanding said processes quickly because of the lack of expertise.
The risk is that things 'just work' for extended periods of time and the maintainers are optimised out of the system because they aren't needed in the short term.
My personal guess at why civilisations can collapse so slowly (100s of years for the Romans, for example) is that the people who maintain the political systems do too good a job, and so the safeguards are forgotten.
For example, after WWII the Europeans learned some really scary lessons about privacy. The Americans enjoyed greater peace and stability, so the people with privacy concerns are given less air time in places like Silicon Valley or Washington. The two-step process at work here is that when things are working, standards slip and the proper responses to problems are forgotten. Then when things don't work, people don't know what to do and the system degrades.
Basically there are norms and unwritten understandings, deeply held ideas about what is not acceptable. Rulers don't push their power to its full extent. Then someone comes along and starts to push, and gradually what is acceptable changes.
I had a few cocktails and thought about a few points made in the video.
It seems to me that there are two different notions here that are being conflated:
1. A rotting of knowledge over time.
2. A variant of Moore's Law - in this case, the idea that the value of technology in a particular area decreases on the margin.
It's kind of like the notions you see in cliodynamics, that there are a few interacting sine waves (or some other function) in mass human behavior.
I suppose that the main concept of importance is how it all might mess with your own personal situation. Personally, I think that the West is in decline, but that doesn't have a whole lot to do with the quality of software on internet websites.
He obviously uses Windows for his anecdotal examples, but I don’t think you can point the finger at Windows specifically. I think it’s consistent throughout OSes. I see regressions in iOS since the earlier versions, as well as in Linux and the applications I use.
My IDE stopped providing menus. It's open source, so I just shrug and track the issue on GitHub.
Portable Apps on Windows are a hedge against some of the angst he describes. (E.g. the part about updates changing a lot of things around or causing failure-to-launch problems).
E.g., I still use WinAmp to play MP3s. It's a portable version that doesn't need installation (so I can use it on my locked-down work computer). The UI hasn't changed in 20 years. Newer file formats can be played after adding plug-ins.
I've put together a whole bunch of Portable Apps, and nowadays I first try to find a portable version of an app I need before a non-portable version.
100% concur re: portable apps. It’s a better way to live from an end-user standpoint in my experience. Portableapps.com and librekey both are excellent
He briefly mentions the Boeing 737 MAX issue, but he understates the problem. Sure, there was a software problem, but the underlying issue was the whole notion that everything can be "fixed" (worked around) by software - that it's fine to change the plane's aerodynamics and compensate in software so that it would seemingly act like the previous model.
The repeal of Moore's law will be a blessing in disguise, I think. Not only will programmers need to get clever in a traditional sense but, also, a new era of specialized hardware will require a more intimate understanding of the bits and pieces. I'm optimistic.
I’m actually looking forward to the end of Moore’s Law. Instead of keeping up with a rapidly shifting landscape of new and creative ways of wasting CPU cycles, we can maybe build things that have a chance of lasting a hundred years.
At my workplace when I ask about why some process parameters are this way it usually leads to a dead end where the people who know are long gone and those who should know don't know the essentials. Everything is kind of interconnected and errors show up months later so you can't really change anything on any machine until the machine as a whole breaks and needs to be replaced. Then you try to get it to work somehow and those parameters are then set forever.
Reducing all this complexity is partly why I'm hoping the Red Language project can succeed where Rebol failed.
Of course you can't do everything, but a good full-stack language could cover perhaps 80% of software needs using well-written DSLs. The simple fact that we have so many languages targeting the same thing is a waste and a duplication of effort: (Java, C#, Kotlin, Scala, Clojure, F#, etc.) for business apps, (Python, Matlab, Julia, R, and Fortran) for data science and scientific programming, and also systems languages like (C, C++, Ada, Rust).
On one hand it is good to have purpose-built languages, but on the other it puts up a big barrier to entry.
Note that I'm advocating for abstractions, but far fewer languages. Yes, abstractions add complexity, but actually make the code more readable. I shudder to think of humanity having to maintain and support ever increasing levels of software.
I absolutely agree that we actually need fewer languages. The languages we have today really are good enough for the vast majority of programming work. To the extent that they fall short, the solution is to either improve the language or to build good libraries for it.
The general-purposeness of computers is the reason for the complexity. We use the same systems to do highly secure and critical business transactions as well as high-performance simulation, playing and fun. The convenience of not having to switch systems when doing different tasks adds a lot of complexity. Special-purpose hardware can, by its nature of not being general, be much simpler and omit a lot of the security and complexity. But it's much less convenient and much less flexible.
I think it's because companies always try to commoditize software developers but it doesn't work. You can't replace a good software developer with 10 mediocre ones plus thousands of unit tests. The only way to become a good software developer is with experience.
Now, what would be amazing would be if we had found the Antikythera mechanism so intact that it could be reconstructed perfectly. And then we'd check everything it could do, and what kinds of drawbacks or errors the construction had!
People don't care about five 9s anymore? It's not as important as it was (I assume—I wasn't really around at the time), but cloud providers definitely advertise their number of 9s.
I challenge his assertion around 32:50 that something is lost. I've done assembly programming. C programming. I might venture to say I'm pretty good at it. I even dabbled a bit in bare-metal programming and was going to make my own OS, but lost interest. Wanna know why? Take a look at this[1] article. Yep. If, on x86, you want to know what memory you're allowed to access; how much of it and where it is, there is literally no good, or standard way to do that. "Well," you (or jon blow) might say, "just use grub (or another multiboot bootloader), it'll give you the memory map." But wait, wasn't that what we were trying to avoid? If you do this, you'll say "I'm smart, I'm sparing myself the effort," but really there is a loss of capability: you don't really know where these BIOS calls are going, or what the inner workings of this bootloader are, and something is lost there.
This is a bit of a contrived and exaggerated example, but it serves to prove my point, which is that these things really do scale linearly: you give up the same amount you get back by going up a layer of abstraction (in understanding/productivity; I'm not talking about performance yet). Low-level programming languages aren't more productive than high-level programming languages. Low-level programmers are more productive than high-level ones because it takes more discipline to get good at low-level programming, so the ones that make it in low-level programming are likely to be more skilled or, at least, to have acquired more skill. Think about the story of Mel[2]. Does anyone honestly think, with any kind of conviction, that Mel would have been less productive had he programmed in Python and not thought about how machine instructions would be loaded?
As I've mentioned, I have done, and gotten reasonably good at, low-level programming, and yet my current favourite language is Perl 6. A language that is about as far from the CPU as it gets, on a par with JavaScript or Haskell. Why? Because nothing is lost. Nothing is lost, and quite a lot is gained. There are things I can do with Perl 6 that I cannot do with C - but, of course, the reverse is also true. And I think that Jon Blow's perspective is rather coloured by his profession - game development - where performance is important and it really does pay, sometimes, to think about how your variables are laid out in memory. He has had, I'm sure, negative interactions with proponents of dynamic languages, because he sees their arguments as (maybe that's what their arguments are, I don't know) "C is useless, JavaScript is good enough for everything." Maybe the people who truly think that have lost something, but I do not think that Mel, or Jon Blow, or I, would lose much by using Perl 6 instead of C where Perl 6 is sufficient.
About the first one: that's also another problem. The BIOS doesn't have a standard protocol; if it did, there would be one standardized way to detect the memory layout.
About the second one: performance should be crucial everywhere. If some application eats all the resources, then I can't have other applications working in the background doing their stuff. That's the problem with e.g. "modern" communication apps (I'm talking about you, Slack), where my four-core CPU is on its knees when doing simple things like switching the team or even the channel, not to mention starting the app itself. Another one is Google Meet: when I'm on a chat, my browser eats 80% of the CPU and I can't do anything reliably in that time; running anything makes the chat lose audio, lag a lot, etc.
Going back some years, I was able to run Skype, AQQ, an IDE, the Chrome browser and Winamp at the same time on an archaic (by today's standards) i3-350M with 4 GiB of RAM.
> That's the problem with e.g. "modern" communication apps (I'm talking about you, Slack), where my four-core CPU is on its knees when doing simple things like switching the team or even the channel, not to mention starting the app itself. Another one is Google Meet: when I'm on a chat, my browser eats 80% of the CPU and I can't do anything reliably in that time; running anything makes the chat lose audio, lag a lot, etc.
Again, this is a problem with program design, not programming language. It is very possible to make good, performant programs in fancy dynamic languages, and awful, leaky, slow ones in "high-performance" compiled languages. The impact of the language itself is really not as high as it's made out to be. Yes, Python is 100x slower than C at multiplying numbers, but so what? Your program doesn't spend most of its time multiplying numbers. If you design a Python program in a non-stupid way, for an application like a chat app, the performance hit compared to C is negligible.
> Take a look at this[1] article. Yep. If, on x86, you want to know what memory you're allowed to access; how much of it and where it is, there is literally no good, or standard way to do that. "Well," you (or jon blow) might say, "just use grub (or another multiboot bootloader), it'll give you the memory map." But wait, wasn't that what we were trying to avoid?
Yeah, I think that's part of what he was trying to say.
Backwards compatibility and overengineered solutions like SMM or ACPI, and then UEFI, with a confusing standard and an even more confusing landscape where most manufacturers will just write whatever makes Windows XP boot and ship it.
Not to pick on Microsoft specifically too much, but I remember seeing the hello world program for windows 3.1 for the first time and thinking, “this is not looking good.” And I was right.
Wow he has a rosy picture of the past. I don't see where he gets the five nines from. He doesn't even quote anybody on it. Most of the examples he gives would have been zero nines back in the day because they were not available at all!
Wikipedia, for example, has one nine of availability in my life, because when I sleep my phone is still on.