Transplanting the Mac’s Central Processor: Gary Davidian’s 68000 Emulator (2020) (computerhistory.org)
65 points by tambourine_man on Oct 19, 2021 | 21 comments



As noted in the article, the rather extraordinary thing is that Davidian didn't just write one 68K emulator - he wrote (at least?) three, targeting the AMD 29K, Motorola 88K, and PowerPC.

The hardware prototypes like the one in the picture are remarkable as well. Imagine four "Macintosh" systems that look just like regular 68K Macs of that time, booting regular 68K Mac OS and application software, but they all have different CPU architectures (68K, 29K, 88K, PPC.)

(Presumably these architectures all had direct support for the 68K's big-endian byte order, unlike, say, x86, though Apple also famously built an x86/PC port of classic Mac OS.)


Apple's gotten good at this architectural switching game for the Mac:

68K -> PPC -> x86 -> ARM

The remarkable thing is that the new machines in emulation mode are usually comparable to or faster than the older machines.


> The remarkable thing is that the new machines in emulation mode are usually comparable to or faster than the older machines.

The central reason is that most programmers make no use of the immense capabilities that modern processors provide. For example, using AVX2 or AVX-512, you can do magic in accelerating some algorithms on x86. Software that makes use of these will likely be quite slow on (Apple's) ARM processors.
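
For instance, a minimal sketch using AVX2 intrinsics (assumes the length is a multiple of 8; a real version needs a scalar tail loop):

    #include <immintrin.h>
    #include <stdint.h>
    #include <stddef.h>

    /* Sum 32-bit ints eight lanes at a time with AVX2. */
    int32_t sum_avx2(const int32_t *a, size_t n) {
        __m256i acc = _mm256_setzero_si256();
        for (size_t i = 0; i < n; i += 8)
            acc = _mm256_add_epi32(acc,
                      _mm256_loadu_si256((const __m256i *)(a + i)));
        /* Horizontal reduction of the eight lanes. */
        int32_t lanes[8];
        _mm256_storeu_si256((__m256i *)lanes, acc);
        int32_t s = 0;
        for (int i = 0; i < 8; i++) s += lanes[i];
        return s;
    }

Run instruction by instruction under emulation, a loop like this loses exactly the eight-wide parallelism it was written for.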


Generally, Mac software that takes advantage of those instructions does so indirectly, via system libraries like the Accelerate framework. If your x86 application is using a system library then on an M1 Mac those instructions won’t be emulated at all, they’ll run natively.
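
For example, a typical app never writes AVX itself; it calls something like vDSP_vadd from Accelerate, and the SIMD dispatch lives inside the library:

    #include <Accelerate/Accelerate.h>

    /* c[i] = a[i] + b[i] for n floats. The app contains no vector
       instructions of its own; Accelerate picks the best SIMD path
       for the hardware inside the library. */
    void add_arrays(const float *a, const float *b, float *c,
                    vDSP_Length n) {
        vDSP_vadd(a, 1, b, 1, c, 1, n);
    }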

This is why Apple’s emulator is so fast: it doesn’t emulate any of the system libraries, it uses the native ones instead.


No, neither Rosetta emulator does that; they translate everything. The original 68k emulation on PPC did do that, using the Mixed Mode Manager:

https://developer.apple.com/library/archive/documentation/ma...

but it was very hard to get right. You have to know the right calling convention for everything, and there are some problems with floating point precision and other things I forget the details of.


Well, that wasn't really true with 68K on PPC. It aspired to be as fast as the fastest 68K Macs, but that was based on averaging over some system calls running as PPC code and some not. Speaking as someone who used one of the early machines @ 66 MHz: the speed was unimpressive enough that there was a third-party product that claimed to accelerate 68K code, probably through some JIT techniques; I don't remember.

But it seems weird in any case to me to give Apple a lot of credit for this, because haven't we gotten to the point where VMs have eaten the (software) world? Everything runs on an advanced emulator; it's just that the processor never even existed.


Sounds more like "comparable to" than "as fast as the fastest."

Presumably Mac OS on PowerPC got faster as more of it was ported to native code; it's always nice when OS updates make your machine faster rather than slower.

But your point is interesting; as you note, Connectix's Speed Doubler apparently included a dynamic/just-in-time binary translator - I wonder what Davidian thinks about it? (Of course Rosetta 2 for M1 does on-demand static translation.) I'd guess that Davidian's emulator may have used an interpretation approach for consistent instruction timing, which may have been required for timing-sensitive device drivers to work properly, and also for running out of a static ROM with zero RAM overhead. Dynamic translation can eat a lot of RAM, especially if you have to keep the original code around (e.g. for code page memory accesses.)
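
(Purely a hypothetical sketch of the classic interpreter shape, not Davidian's actual design:)

    #include <stdint.h>

    /* Hypothetical 68K interpreter core: fetch an opcode word,
       dispatch through a handler table, repeat. It can run straight
       out of ROM with essentially zero RAM overhead, unlike a
       dynamic translator's code cache. */
    typedef struct CPU {
        uint32_t pc;
        uint32_t d[8], a[8];
        const uint16_t *mem;   /* simplified flat memory */
    } CPU;

    typedef void (*OpHandler)(CPU *, uint16_t);

    /* 64K-entry table, one handler per opcode word (populated elsewhere). */
    extern const OpHandler dispatch[65536];

    void run(CPU *cpu) {
        for (;;) {
            uint16_t op = cpu->mem[cpu->pc >> 1];
            cpu->pc += 2;
            dispatch[op](cpu, op);
        }
    }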

Connectix famously continued its emulation game and ultimately won a Pyrrhic victory after being sued by Sony, resulting in PlayStation/game console emulation being declared legal (hooray!) but also killing the company and its PS1 emulator product (which may have resurfaced as Sony's own PS1 emulator for the PSP/PSVita/PSTV/PS3, though the PS Classic uses PCSX.)


I wish there were more technical information available about the 68000 emulator. This oral history is interesting, but it's not the sort of crunchy details I was hoping for.


Yeah, same here. I don't have any intimate knowledge of it personally, but I do know it shipped in the ROM in the 601 era. Given its size, I wonder how much of it is an emulator and how much a translator between the 68k and PPC Toolbox calls. The early Macs are famous for doing tricks with memory to fit everything in the extremely limited work RAM, so I'd bet there's some MMU trickery in there as well.


"Mixed Mode Magic" was a key feature of the 68k emulator. 68k code could call PowerPC routines through "universal procedure pointers", and vice versa. This included Toolbox routines as well as third-party code, and it extended to the point that 68k CDEFs (system extensions) which patched system traps could still be used on PowerPC systems.

Some further details are at: https://orangejuiceliberationfront.com/universal-procedure-p...
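
From memory of the classic MixedMode.h API (treat this as a sketch and check the link; MyCallback and its ProcInfo constant are just for illustration), wrapping a routine so the other ISA could call it looked roughly like:

    #include <MixedMode.h>

    /* Hypothetical callback; the ProcInfo constant encodes its
       calling convention so the Mixed Mode Manager can marshal
       arguments across the 68k/PPC boundary. ("pascal" is the
       classic Mac compiler keyword for Pascal calling convention.) */
    static pascal short MyCallback(long refCon);

    enum {
        uppMyCallbackProcInfo = kPascalStackBased
            | RESULT_SIZE(SIZE_CODE(sizeof(short)))
            | STACK_ROUTINE_PARAMETER(1, SIZE_CODE(sizeof(long)))
    };

    /* The routine descriptor is the "universal procedure pointer":
       callers of either ISA invoke it and the right mode switch
       happens automatically. */
    static UniversalProcPtr gMyCallbackUPP;

    void InstallCallback(void) {
        gMyCallbackUPP = NewRoutineDescriptor((ProcPtr)MyCallback,
                                              uppMyCallbackProcInfo,
                                              GetCurrentISA());
    }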


How much of the PPC Toolbox actually existed on that first PowerPC Mac?

My understanding is that the ROM is mostly Open Firmware (which is Forth bytecode), the nanokernel (a.k.a. the 68k emulator), and a mostly unmodified 68k Toolbox.


It'd be nice to see the other side of the Smurf board. There's a dearth of passives on the top side.


There's a bunch of SMT passives visible on the top side. Insufficient decoupling caps, though, so those are likely on the back.


I wonder if, instead of translating the code on the fly with an emulator, it wouldn't be possible to translate the code ahead of time. Just make a compiler which takes machine code for architecture X and produces LLVM IR.


Static binary translation is relatively straightforward until you start doing indirect jumps (function pointers, C++ vtables, etc), then all hell breaks loose.

At that point, the translator has to know precisely where the jump might go, and it becomes a total mess. Function pointers get encoded as data all the time, and you'd need to properly translate every address. Simple programs might not run into it, but telling users "sorry, we only support translating programs too simple for you to care about running" is not a good experience.

On the other hand, dynamic recompilation lets you easily hook indirect function calls to translate whatever new function is accessed on demand, and lets you reach really high compatibility.
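
For example, to a static translator the table below is just words in the data segment; only hooking the indirect call at run time reveals them as code addresses:

    #include <stdio.h>

    static void op_add(void) { puts("add"); }
    static void op_sub(void) { puts("sub"); }

    /* Looks like plain data; nothing marks these words as code
       addresses that a translated binary would have to rewrite. */
    static void (*const ops[])(void) = { op_add, op_sub };

    void dispatch(int n) {
        ops[n & 1]();   /* indirect call: target known only at run time */
    }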


> Static binary translation is relatively straightforward until you start doing indirect jumps (function pointers, C++ vtables, etc), then all hell breaks loose.

It's not even possible on x86 with only static jumps, because you can jump into the middle of an instruction and turn it into a different one.
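
Concretely (standard x86 encodings):

    /* The same five bytes decode differently depending on where
       you enter:
         at +0:  B8 01 C3 90 90   mov eax, 0x9090C301
         at +2:  C3               ret (hidden inside the immediate)
       A static translator that only decoded from +0 would never
       emit anything for the ret at +2. */
    static const unsigned char code[] =
        { 0xB8, 0x01, 0xC3, 0x90, 0x90 };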


How does Rosetta handle it? Has anyone tested jumping into the middle of an instruction on an M1 Mac?


It probably falls back to JIT, just like if you started doing self-modifying code. This isn't a common case, it's more for code golf and malware.


You handle that by creating a new basic block.


Thanks, that explains well why it isn't done.


As I understand it, Rosetta 2 performs on-demand static translation the first time you launch an app.

"If an executable contains only Intel instructions, macOS automatically launches Rosetta and begins the translation process. When translation finishes, the system launches the translated executable in place of the original."

https://developer.apple.com/documentation/apple-silicon/abou...

Interestingly enough, you can't mix different flavors of code, for example plugins written for different architectures, the way you could in some previous emulators.



