The thing to keep in mind about the significance of the C language in this particular time frame is that personal computers were only just becoming capable of hosting a C compiler. Today we don't think of the ability to host a C compiler as a constraint, but with early microcomputers it was. For instance, the Atari 8-bit computers could not easily host a C compiler; there was a version called "Deep Blue C," but it implemented only a subset of the language. In fact, compilers in general were rare. Interpreters were more common because they required fewer resources.
Running alternative operating systems on early microcomputers, such as CP/M on an Apple II, provided more capabilities and the ability to run slightly more powerful software.
The IBM PCs and clones of that time frame certainly had the power to host compilers; Turbo Pascal was one early example.
Also, back then, with some exceptions, most commercial products on microcomputers were written in assembly language rather than a high-level language, chiefly for performance.
However, C was generally regarded as producing code fast enough that programmers could use it instead of coding directly in assembly. Certainly, with the new crop of computers, the faster IBM PCs (ATs), the Macintosh, the Atari ST, and the Amiga all had sufficiently sophisticated compilers that high-level languages became standard issue: Pascal, and then C.
So around 1983, C was becoming something people were starting to take a much closer look at, because hardware and operating systems that let them use it were finally widely available. It is not that C was unavailable previously, but it had been used by a different class of developers: the ones with S-100 bus machines running CP/M of some form, machines that often required you to re-target the BIOS from source when you added more RAM.
C also became the default language for Windows. On the Mac it was a form of Object Pascal at first, which then moved to C++ with MacApp.
This is an example of C being powerful enough, and being in the right place at the right time, to become the lingua franca of system software.
> ... most commercial products on microcomputers were written in assembly language rather than a high-level language, chiefly for performance.
The exceptions were interesting. A common approach was a byte code interpreter/machine. This had the advantage of being easier to port, as you only had to port that piece rather than the whole app. Hardware used to be a lot more interesting back then with all sorts of variations on memory addressing modes, size of units of information (bits & bytes), overlaying (you couldn't fit all your code and data in memory at once), file access, and even something as simple as drawing text or graphics on the screen.
UCSD Pascal, and later large parts of Microsoft Excel (plus Word I believe) also used the approach, especially in the latter case because it needed less space - https://en.wikipedia.org/wiki/P-code_machine
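To make the byte-code idea concrete, here's a minimal sketch of the kind of dispatch loop such interpreters are built around, in C. The opcodes and their encoding are invented for illustration (real p-code machines had much richer instruction sets), but the portability argument is visible: only run() has to be rewritten for each machine, while the byte-code program itself stays the same everywhere.

    #include <stdio.h>

    /* Invented opcodes, for illustration only. */
    enum { OP_PUSH, OP_ADD, OP_MUL, OP_PRINT, OP_HALT };

    /* The only machine-specific piece: a simple stack-machine
       dispatch loop. (Using ints rather than actual bytes to
       keep the sketch short.) */
    void run(const int *code)
    {
        int stack[64];
        int sp = 0;   /* stack pointer */
        int pc = 0;   /* program counter */

        for (;;) {
            switch (code[pc++]) {
            case OP_PUSH:  stack[sp++] = code[pc++];       break;
            case OP_ADD:   sp--; stack[sp-1] += stack[sp]; break;
            case OP_MUL:   sp--; stack[sp-1] *= stack[sp]; break;
            case OP_PRINT: printf("%d\n", stack[--sp]);    break;
            case OP_HALT:  return;
            }
        }
    }

    int main(void)
    {
        /* (2 + 3) * 4 */
        int program[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD,
                          OP_PUSH, 4, OP_MUL, OP_PRINT, OP_HALT };
        run(program);
        return 0;
    }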
Programming back then was very different from today. Virtually everything was constrained by CPU, memory, and storage. Writing products involved working out how you could do things despite those constraints, and debugging was not fun. Just like using 9-inch monitors, it seemed fine at the time, but now I really like having almost unbounded everything, and several freaking huge monitors!
I'm pretty sure it was Multiplan and not Excel that used the p-code approach. Also, when Lotus 1-2-3 shipped, it was written entirely in assembly, was blazing fast, and crushed Multiplan in the marketplace.
The Excel team's ruggedly independent mentality also meant that they always shipped on time, their code was of uniformly high quality, and they had a compiler which, back in the 1980s, generated pcode and could therefore run unmodified on Macintosh's 68000 chip as well as Intel PCs. The pcode also made the executable file about half the size that Intel binaries would have been, which loaded faster from floppy disks and required less RAM.
Thanks. Looks like both Multiplan[1] and Excel were p-code. However, Multiplan was wiped out by 1-2-3, and Excel ... adapted (it certainly isn't p-code today or anytime in recent history).
"The inability to fit the larger code size of compiled C into lower-powered machines forced the company to split its spreadsheet offerings, with 1-2-3 release 3 only for higher-end machines, and a new version 2.2, based on the 2.01 assembler code base, available for PCs without extended memory. By the time these versions were released in 1989, Microsoft was well on its way to breaking through Lotus' market share"
Magazines like BYTE were the primary communication channel in those days. Among computing professionals, C might not have been big news by 1983; for hobbyists and many other computer users it probably was. Most people's knowledge of programming languages in those days would have been limited to BASIC and assembly.
In the summer of 1982, I learned of Pascal's existence at the local university... but my classes were in BASIC. I first encountered C in the late 1980s, in the context of the Amiga and its ROM Kernel Manuals.
Two decades later, HN is where I heard of Python, Ruby, and Clojure [and Rust, Go, Haskell, J, Julia, etc.]. The medium had changed, but for anyone not tuned to the right channel it all passes unnoticed. Even today, most people use Windows, and installing a C compiler is outside normal operating parameters.
None of which prevents me from noting how I miss the physicality of BYTE and other magazines from that era, too.
Flipping the pages of the magazine is awesome - the ads on every other page are just like time travel.
An ad for an oscilloscope in a computer magazine! Heaven!
My mom subscribed to Byte during that time period, and I remember counting the days until the next issue would arrive. There were so many cool articles about fundamental things, many of which were over my head as a high school kid. My favorite part was Ciarcia's Circuit Cellar.
In my view all of the computer magazines went downhill (for me) when personal computers began to mature, and the articles stopped being about nuts and bolts.
Man, those adverts sure bring back the memories. Looking at them now (I am in my mid-40s), I am really surprised at how much more "hardware"-focused the adverts were. Many of them have pictures of ICs, add-on cards for display extensions, etc. There was even one from Tektronix about their 60 MHz CRO.
And the Apple II clones and the PC clones...:-)
Back then, I was just going through high school with an Atari 800.
I really like this gem from the Texas Instruments ad on page 145:
"The function key advantage: We give you 12 function keys that you can easily program to make your work simpler and easier. The best the competition can do is 10 or fewer keys"
It's like going back to the black monolith at the Dawn of Man. Most of the advertisers have been gone for 25 years, I bet. I did spot an interesting ad on page 258 teasing the release of Excel as the reason for buying a computer.
> With Microbuffer, you don't have to wait for your printer to finish before you resume using your computer. Data is received and stored at fast speeds, then released from Microbuffer's memory to your printer. This is called buffering. The more you print, the more productive it makes your workflow. Depending on the version of Microbuffer, these buffering capacities range from a useful 8K of random access memory - big enough for 8,000 characters of storage - up to a very large 256K - enough for 256,000 characters of storage.
Love the line "The more you print, the more productive it makes your workflow". I need to get that dot matrix printer connected up to my computer and start printing straight away, I can feel my productivity increasing.
Hardware and business-focused. It's interesting how you never really see ads targeting business uses now that personal computers are ubiquitous.
Notably, the primary market for high-end desktop components is now gamers, not businesses. I was quite amused at a recent temp job when I was helping build a computing array for integrated circuit design, and most of the components we were handling had names like Destroyer and Razor Ripjaw.
C compilers for microcomputers were still a new thing in 1983. They were common (for obvious reasons) on Unix machines and larger minicomputers. The only 8-bit C compiler I touched was Aztec C for the Apple II, and it was unbearably slow (even though the II was quite zippy compared to its Z-80-based counterparts). Borland made an excellent Pascal IDE for CP/M computers, but I can't remember a decent C for those.
Fun thing is that Aztec C came with a bunch of tools, including a csh-like shell and vi (no, not vim) text editor. Very slow (felt like a 300 bps terminal line), but one could pretend to be accessing a much larger computer somewhere else.
> the ads on every other page are just like time travel
Indeed, quite interesting. It occurred to me that having these magazine scans in searchable form could help fight patent trolls. For example, I saw an ad in there for "Gina -- the interactive assistant..."
It definitely predates the 2001 patent issued by the US patent office:
http://www.google.com/patents/US8086500
$4,600 for a 46 MB hard drive. That is $10,000 adjusted for inflation. A DVD, which holds approximately 4,700 MB, is probably a few pennies today. How far we have come; absolutely incredible. I guess this means we should reach the singularity very soon. Perhaps when Windows 10 is released. :)
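Rough arithmetic, assuming a 25-cent blank DVD:

    $4,600 / 46 MB    = $100 per MB (1983 dollars)
    $10,000 / 46 MB   = $217 per MB (inflation-adjusted)
    $0.25 / 4,700 MB  = $0.00005 per MB

That's a factor of roughly four million per megabyte, before you even get to speed and reliability.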
How about that "Rent software before you buy" ad. I remember those places as operating in somewhat questionable legal territory, but here is a full page ad on page 117.
I remember as a child, being bitterly disappointed with how bad Ferrari Formula 1 was on the Amiga, and I ended up blaming the slow speed on C. That colored my attitude toward the language for years, though once I started working, I ended up writing C++ mainly.
Nowadays, I've reverted to coding in C99, simply because I can easily find libraries for 'hard things' like generating JPEG files, because cleanly written C performs like almost nothing else, and because console programs have a refreshing simplicity about them, especially if you put all the cool stuff in a library and then just hack up a text-based front-end.
There's a heck of a lot you can do in 1,000 lines of C, and it's fun to write too, without the hassle and slow performance of an IDE like VS or Xcode.
Byte Magazine had a history of forward-looking articles on microprocessor and software technology. There were entire issues devoted to LISP (1979), Smalltalk (1981), and the 68000 (1986). It wasn't that C was new (it wasn't), but in 1983 it was certainly new to Byte's target audience: the microcomputer industry.
While it's true that C compilers were available for MS-DOS and CP/M in '83, these were by no means mainstream, and they were prohibitively expensive. Turbo Pascal was still taking the world by storm, and it would be another four years before Borland introduced an 'ANSI-compatible' C compiler and IDE: the much-anticipated (by me, anyway) Turbo C, in 1987.
It's not too clear, but if you click on the link in the upper left corner, Byte Magazine Volume 08 Number 08 - The C Language (https://archive.org/details/byte-magazine-1983-08), there are a number of different options for downloading the entire issue.
OT, but on page 70 there's an ad for the Franklin Ace 1000, which was ultimately found to be infringing on Apple's copyrights (they directly copied portions of the ROM from the Apple II). It was a landmark case for computers: Apple Computer, Inc. v. Franklin Computer Corp.
p. 52: "With the freedom implicit in C's use of pointers come certain risks. Much of C's growth over the last decade has been in ways of detecting erroneous uses of pointers without restricting the ability to write efficient code when necessary."
Obvious in retrospect, but I never realized that ++ and -- were created specifically to map to native instructions and not just as a convenience for "_ = _ + 1".
Not really true (this is about C's predecessor B):
> Thompson went a step further by inventing the ++ and -- operators, which increment or decrement; their prefix or postfix position determines whether the alteration occurs before or after noting the value of the operand. They were not in the earliest versions of B, but appeared along the way. People often guess that they were created to use the auto-increment and auto-decrement address modes provided by the DEC PDP-11 on which C and Unix first became popular. This is historically impossible, since there was no PDP-11 when B was developed. The PDP-7, however, did have a few `auto-increment' memory cells, with the property that an indirect memory reference through them incremented the cell. This feature probably suggested such operators to Thompson; the generalization to make them both prefix and postfix was his own. Indeed, the auto-increment cells were not used directly in implementation of the operators, and a stronger motivation for the innovation was probably his observation that the translation of ++x was smaller than that of x=x+1.
Note that one of the claimed benefits is "expressions that eliminate most temporary variables". That was a huge advantage because memory to store source code was scarce, and even more so because compilers were dumb. If you used a temporary, chances were the compiler would use a store for every write and a load for every read.
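A made-up but era-typical illustration: with a compiler that naive, each reference to a named temporary could mean a real load or store, while the idiomatic expression form keeps everything in registers (and ++ maps straight onto auto-increment addressing where the hardware has it):

    /* Each use of 'tmp' may cost a memory access with a
       naive compiler: */
    void copy_slow(char *dst, const char *src)
    {
        char tmp;
        for (;;) {
            tmp = *src;      /* load *src, store to tmp  */
            *dst = tmp;      /* load tmp, store to *dst  */
            if (tmp == '\0')
                break;
            src = src + 1;
            dst = dst + 1;
        }
    }

    /* No named temporaries at all: */
    void copy_fast(char *dst, const char *src)
    {
        while ((*dst++ = *src++) != '\0')
            ;
    }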
Compilers were dumb because they had to run at reasonable speed in scarce memory. The time of the 63-pass Fortran compiler for the IBM 1401 (http://ibm-1401.info/1401-FORTRAN-Illustrated.html) was gone, but for microprocessors, not by much.
I worked with a lot of K&R C in the early 1990s, getting software to run on newer compilers and Solaris. I actually learned C++ before I learned C. It still looks strange to me. I believe the parameter declaration style comes from ALGOL (please correct me).
I quite like K&R C --- it has an elegant minimalism to it that was lost in ANSI C:
add(a, b, c) { return a+b+c; }
But the ad-hoc parameter types only really worked if all your types were the same size, so it doesn't really get on with 64-bit machines. (I have just fixed some ancient code which was doing this:
    msg(s, a1, a2, a3, a4)
    char* s;
    {
        printf("info: ");
        printf(s, a1, a2, a3, a4);
    }
...later...
    {
        msg("an int is %d", anInt);
        msg("a string is %s", aString);
    }
Yeah, no.)
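The fix was roughly this (a sketch of the stdarg approach rather than my exact code): declare msg() as properly variadic and let vprintf forward the arguments, instead of guessing at four word-sized slots:

    #include <stdarg.h>
    #include <stdio.h>

    void msg(const char *fmt, ...)
    {
        va_list ap;

        printf("info: ");
        va_start(ap, fmt);
        vprintf(fmt, ap);   /* forwards however many args there are */
        va_end(ap);
    }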
I don't know classic Algol, but I've dabbled in Algol-68 (a most underrated language), and there the parameter-passing syntax is unrelated. Classic Algol 60, though (judging by the canonical Absmax example), did declare parameter types between the procedure heading and the body, much like K&R C:
    procedure Absmax(a) Size:(n, m) Result:(y) Subscripts:(i, k);
        value n, m; array a; integer n, m, i, k; real y;
        comment The absolute greatest element of the matrix a, of size n by m,
            is transferred to y, and the subscripts of this element to i and k;
    begin
        integer p, q;
        y := 0; i := k := 1;
        for p := 1 step 1 until n do
            for q := 1 step 1 until m do
                if abs(a[p, q]) > y then
                    begin y := abs(a[p, q]);
                        i := p; k := q
                    end
    end Absmax