When Intel Edison came out in September 2014, it caught my eye not only because of my unhealthy obsession with robotics, but also because it seemed like an interesting platform for security enthusiasts to perform hobby fuzzing work. For those of you not familiar with the term, fuzz testing is a method of examining the security properties of various Internet-facing programs by throwing quasi-random data at them - and seeing if that causes them to misbehave in interesting ways. If you do it fast enough and creatively enough, this brute-force approach can yield remarkable results and help security researchers squash hundreds of serious bugs.
All right, but what's up with Edison? If you haven't seen it yet, Edison is a sub-$50, stamp-sized (3.5 x 2.5 x 0.4 cm), and essentially self-contained dual-core x86 system with surprisingly decent specs. It comes with built-in wifi and Bluetooth, 1 GB of RAM, 4 GB of non-volatile storage, and can boot to a pretty standard distribution of Linux. It's really tiny - not really a surprise in the age of devices such as Apple Watch or Google Glass, but certainly remarkable for a well-rounded, general-purpose, plug-and-play computer at this price point:
(There's a bunch of other somewhat comparable, hobbyist-friendly ARM-based SoC boards, such as BeagleBone Black or Odroid-U3 - but they tend to be several times as big and come with a different feature set.)
I figured that the device is unlikely to offer the most CPU performance for your buck, but that its seemingly unbeatable form factor could allow DIYers to run long-term fuzzing jobs on, say, a hundred logical cores - without turning their bedroom into a datacenter. In theory, you could construct such a rig for less than $1,300, have it consume less than 40 watts, and take up about as much space as two books.
Of course, that's just theory. To try it out in practice, I decided to order several units and take them for a spin. Somewhat surprisingly, the ordering process itself turned out to be a bit of a hassle; I'm used to buying bare microcontrollers and navigating datasheets for integrated circuits, but Intel managed to make me scratch my head at least three times in a row:
There are several variants of the device, including the "regular" EDI1.SPON.AL.S and "wearable" EDI1.LPON.AL.S. I'll be damned if the differences are explained on Intel's product landing page or in their platform brief - but the first model does just fine.
The documentation for Edison seems remarkably sparse, findable only through web searches, and often mixed up with seemingly irrelevant content for their older and bulkier platform, Galileo. You will not find anything resembling the sort of in-depth documentation you can see for Atmel MCUs, but with a bit of luck, you may be able to bump into a getting started doc, which makes it clear that the device needs to be configured via USB-over-serial to get it on the network - and implies that you will need an expensive EDI1ARDUIN.AL.K interface board that is supposedly designed for the users of Arduino (gross!).
There is not a single peep as to whether the much cheaper BB.AL.B breakout board offers the same capabilities for configuring or re-imaging Edison. There is a hardware guide for the cheaper board, but it doesn't help: if you are lucky, you may notice a mention of "UART - USB FTDI" on one of the pictures, but that's about it.
Ideally, Edison should be operated in "production" without the $20 breakout board or the $85 "Arduino" thing. Alas, the only documented connector on the device is an incredibly dense and uncommon 70-pin Hirose DF40 thing (with 0.4 mm contact pitch!). The mating sockets are inexpensive (under $1), but will require serious skill to solder. There is also a bunch of exposed solder pads on the PCB, but they are not documented or marked in any way, so it's not clear if they can be used to supply power to the SoC.
Now, there's always some experimentation and trial and error involved in DIY hardware projects, so I wasn't particularly put off. When the devices arrived, I tried to figure out how to power them without any extra accessories. To get there, I first had to solder the DF40 connector - I'm pretty sure that's what it feels like to be a brain surgeon:
With this in place, I tried to confirm my hunch that some of the exposed pads may offer an easier way to power the whole thing - and I was right. Here's what worked for my boards:
Connecting to that +4.5V pad is going to be somewhat challenging to novices, mostly because there are two components nearby - but if you have a decent, small soldering iron and some practice with SMD, it shouldn't be a big deal.
Of course, power is just half the problem, so to figure out what's in the box, I mounted one device on top of the breakout board and hooked up an external antenna (the internal one works, but isn't great - and my wireless network is sort of crummy). This is how it all looked when booted up:
Neat, huh? The software setup proved to be relatively simple - although again, it came with a couple of minor and arguably unnecessary hurdles along the way:
There were two USB ports on the breakout board, but the difference between them was not really explained in the docs. Both ports offered USB-over-serial, but only one of them was being used as a serial console by the bootloader and the OS. I tried the wrong one first, losing a fair amount of time figuring out why it wasn't working properly :-)
The configuration script mentioned in Intel's getting started guide had different calling semantics than specified, and more importantly, failed to ensure that wpa_supplicant and udhcpc would run on next boot after setting up wifi - so every reboot would make you lose network connectivity and necessitate a connection over USB.
(Newer versions of the system image address both problems, but there is no mention of the need to update anywhere in the docs I stumbled upon.)
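On the older image, a manual workaround along these lines did the trick for me - note that the unit name, interface, and paths below are assumptions based on my particular boards, not anything blessed by Intel:

```shell
# Make wpa_supplicant start on every boot, rather than only after
# running the configuration script (unit name as on my image):
systemctl enable wpa_supplicant

# Add a minimal unit so that udhcpc grabs a DHCP lease on wlan0
# at boot, too:
cat > /etc/systemd/system/udhcpc-wlan0.service <<'EOF'
[Unit]
Description=DHCP client for wlan0
After=wpa_supplicant.service

[Service]
ExecStart=/sbin/udhcpc -i wlan0 -f
Restart=always

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable udhcpc-wlan0
```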
For network access, listening on port 22 is for some reason delegated to systemd, which then calls OpenSSH in inetd mode on demand. While it may save about half a meg of physical RAM (which Intel then promptly wastes on other things discussed later on), it also has the nasty side effect of systemd hard-killing the entire process group created within your SSH session when you exit - and that includes screen and all its children.
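If you want screen sessions to survive disconnects, switching sshd from socket activation to a plain standalone daemon works; the unit names below are the ones I recall from the stock image, so treat them as assumptions and check with systemctl list-units first:

```shell
# Stop systemd from owning port 22 and spawning sshd per connection:
systemctl disable sshd.socket
systemctl stop sshd.socket

# Run sshd as a regular long-lived daemon instead:
systemctl enable sshd.service
systemctl start sshd.service
```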
The journald service provided by systemd was running with no maximum journal size defined, eventually filling up the entire root partition. Oops.
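The fix is a one-liner in /etc/systemd/journald.conf; the 16M cap below is just a value that worked for me, not an official recommendation:

```shell
# Cap journald's disk usage so it can't fill the root partition:
sed -i 's/^#\?SystemMaxUse=.*/SystemMaxUse=16M/' /etc/systemd/journald.conf
systemctl restart systemd-journald

# Reclaim space already eaten by old logs (--vacuum-size needs a
# newer systemd; on older builds, just delete the files by hand):
journalctl --vacuum-size=16M 2>/dev/null || rm -rf /var/log/journal/*
```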
Modern hyperthreading can be helpful in fuzzing work. Most Atom CPUs are capable of HT and this one reportedly has two "threads" per core, so I naturally assumed that it comes with HT - but as it turns out, there appear to be no extra logical cores available on the system after boot.
Upon closer investigation, it looks like Atom SoCs in the Silvermont family have done away with explicit HT, and the threads may just be used internally for out-of-order execution - but it's another thing that isn't really clear in the marketing docs.
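A quick way to verify this for yourself, using nothing but /proc/cpuinfo - the Edison numbers quoted in the comment are simply what I saw on my boards:

```shell
# Compare logical CPUs to physical cores; if the two match,
# no hyperthreading is exposed to the OS.
logical=$(grep -c '^processor' /proc/cpuinfo)
cores=$(grep -m1 'cpu cores' /proc/cpuinfo | awk '{print $NF}')
cores=${cores:-$logical}   # some kernels omit the "cpu cores" line
echo "logical=$logical physical=$cores"
# On Edison, both report 2 - i.e., no extra HT threads.
```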
(On the flip side, the SoC reportedly also includes a separate Quark CPU - but it is currently locked and there is essentially no information about its specs or future utility.)
Looking beyond the occasional quibble or two, Edison turned out to be pretty impressive. The 32-bit Linux distro is based on Yocto Linux (a framework for building customized embedded images) and comes with very few compromises: you even get a working, hosted GCC, so compiling and installing programs such as screen is a breeze.
(Of course, the platform is a regular x86 system - so even without hosted GCC, you wouldn't really have to cross-compile.)
After booting to the Intel-supplied image, you get around 900 MB of free RAM (although you can reclaim a bit more killing a couple of services) and over 2 GB of disk space. Not enough to go crazy with Microsoft Office or Firefox, but plenty of leg room for fuzzing libraries, network services, and so on.
Speaking of memory usage, I would generally recommend doing systemctl disable and systemctl stop on edison_config, xdk-daemon, rsmb, and mdns - both to free up resources and to minimize the network-exposed attack surface. It is also safe to get rid of clloader, sketch_reset, systemd-resolved, and a bunch of other things.
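In shell terms, the trimming boils down to something like this - service names are the ones found on my image, so double-check with systemctl list-units before nuking anything:

```shell
# Disable and stop the services that aren't needed for fuzzing work,
# both to free RAM and to shrink the network-facing attack surface:
for svc in edison_config xdk-daemon rsmb mdns clloader sketch_reset; do
  systemctl disable "$svc"
  systemctl stop "$svc"
done
```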
The system needs around 20-30 mW when idle, and peaks in the vicinity of 800 mW when running at full power - not bad.
Basic usability and power consumption aside, the big question is performance.
The machine uses a dual-core Atom Z34XX CPU ("Merrifield") nominally running at 500 MHz, coupled with two-channel 800 MT/sec RAM and 4 GB of flash storage. To figure out the trade-offs beyond what can be obviously inferred from the CPU clock speed, I ran a single-core instrumented fuzzing job with afl, targeting libpng:
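For the record, the setup looked roughly like this - the paths, the seed corpus, and the readpng harness name are specific to my environment, so adjust accordingly:

```shell
# Build an instrumented copy of libpng (run inside its source tree):
CC=afl-gcc ./configure --disable-shared
make

# Fuzz a small PNG parser (linked against that build) that reads
# its input from stdin; afl-fuzz mutates the seed corpus for us:
afl-fuzz -i testcases/ -o findings/ ./readpng
```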
This is a rough comparison with the same target on a single core of my Xeon X3440 ("Lynnfield") server running at 2.5 GHz:
(Xeon X3440 is a bit dated at this point, but still provides a reasonable reference point; in particular, its performance is still noticeably better than a contemporary 4-core virtual server available from a major cloud service provider. Core-for-core, it is also only around 35% slower in such a test than the more modern 6-core 3.2 GHz workstation I'm writing this on.)
Of course, it is a limited benchmark that emphasizes context switching speed, simple integer arithmetic, and memory access - but it's a good proxy for what I'm after, and probably a semi-decent approximation of system performance in general.
As you can see on the screenshots, the measured speed of the fuzzing process - as expressed by the number of execs per second - is lower on Edison by a factor of four or so. The result is actually a bit more positive than implied by the comparison of clock speeds alone, suggesting that there are relatively few hidden bottlenecks for running such loads on Edison.
This brings us to a back-of-a-napkin calculation of cost-efficiency. Compared to my reference server, there is a nominal 4x slowdown per core, and on top of that, Edison has just two cores, rather than four. For highly parallelizable and computationally intensive tasks such as fuzzing, we can probably safely say that Edison will be about 8x less powerful.
As it happens, the SoC is also roughly 5-8x cheaper than a desktop machine or a server with specs comparable to the X3440 box - so while it is not exactly a steal, the economics make some sense. The modest premium may be well worth the small footprint and the quiet operation of an Edison-based rig.
Of course, the other alternative for scalable, long-term fuzzing work is, ahem, cloud computing. One CPU core with a performance comparable to my reference server is usually billed at around $0.05 per hour. One $50 Edison board would pay for 1,000 hours - just over one month - of processor time. If you do the equivalent amount of work on Edison, you will break even.
The virtual core you are getting will likely be around 4x faster than a single core on Edison, but on the SoC, you'd have two cores to work with; so, let's conservatively say that you would need to run computationally expensive tasks on Intel's contraption for three months straight to get to the point where it truly pays off. This is not exactly crazy: some individual fuzzing jobs run for weeks or months at a time, and the device itself can be reused for any number of different fuzzing projects later on.
But if we're talking about such timescales, it also makes sense to look at single, private virtual servers, which come without all the cloud storage and IPC bells and whistles, but can be paid for with a single, monthly fee. The rate for a single-core VPS is usually below $10, or some $0.01 per hour. By that token, you would need to crunch numbers on Edison for more than a year to recoup its worth - a less convincing deal.
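Spelled out, the back-of-the-envelope arithmetic behind these estimates goes like this (all rates are the ballpark figures quoted above):

```shell
# One $50 board buys this much metered cloud time at ~$0.05/core-hour:
echo $(( 50 * 100 / 5 ))   # 1000 core-hours
# Edison's two slow cores deliver roughly half of one such core's
# throughput, so the real break-even point is twice that:
echo $(( 1000 * 2 ))       # 2000 hours, i.e. around three months
# The same math against a budget VPS at ~$0.01/core-hour:
echo $(( 50 * 100 / 1 ))   # 5000 core-hours
echo $(( 5000 * 2 ))       # 10000 hours - well over a year
```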
On the flip side, virtualization can be more capricious, with substantial fluctuations in latency and CPU power; on budget VPSes, you can also get throttled or kicked out if you're hogging all the available CPU power for days or weeks at a time. And, once you spend your $50 in the cloud, you don't get to keep a cool, tiny computer to play with :-)
Physical, "bare metal" servers at an ISP may actually offer the best value: they often start somewhere around $80 per month and come with very few compromises. On the other hand, ISPs will almost certainly get cranky if you end up ordering them by the dozen and then walk away from the contract after a month or two.
The bottom line is, the CPU power economics for Edison are OK, but nowhere near being exceptional; the form factor, however, remains unrivaled. I think I'm going to stick with this platform for a while; here's my current matchbox-sized 10-core wifi cluster:
The whole thing consists just of SoCs, a couple of wires, a random 4.5 V power supply from my recycling box, and a bunch of miniature nuts and machine screws that double as the ground rail. This is about as close as I could pack the boards and not have them overheat when cooled passively. If the airflow is obstructed or you want to pack them tighter, a heat sink or a small and quiet fan should do the trick.
Well, that's (mostly) it! Some time ago, I also promised to compare Edison to Odroid-U3, a 1.7 GHz four-core ARM system that, volume-wise, takes up nearly 25 times as much space as Edison, but is still very compact - and roughly comparable in terms of price. Using the afl-fuzz benchmark as a reference point, a single core on Odroid is almost precisely 2x faster than Edison; there are four cores total, compared to two on Edison, so in theory, the overall gain is 4x. Alas, I noticed that the multi-core fuzzing performance degraded quite a bit less gracefully on this Exynos 4412 SoC than on x86 - perhaps due to differences in cache size or memory bandwidth. Realistically, a gain somewhere near 3x may be a safer bet.
Another challenge is that when running at full throttle with four instances of afl-fuzz, even on a cool day and even with its formidable heatsink, Odroid quickly gets to over 90° C (194° F) - so especially when trying to put together a cluster, you may need a decent fan:
Under similar full-load circumstances, Edison runs much cooler:
Without a fan, the stability of Odroid under heavy load seems questionable; over the course of several weeks of fuzzing, I've seen multiple inexplicable crashes, including "illegal instruction" faults in stock screen.
Price-wise, the Odroid board costs $65, although there are some additional costs. Perhaps most notably, the system comes with no non-volatile storage, so you need an eMMC module ($25); alternatively, a micro-SD card (sub-$10) will work if you don't care about the file system being sluggish every now and then. There is also no wifi, so you need a wireless module ($8) or a wired hub and some cables to go with it. All in all, the total will probably be closer to $90 or so.
All this still makes Odroid-U3 a pretty good deal if board dimensions are not your primary concern, but it also really highlights one important strength of Edison:
(The ARM board also needs a $15 serial module or a keyboard and a micro-HDMI cable to connect it to a monitor, but that's not really different from the breakout board for Edison.)
You can reach the author of this page at <lcamtuf@coredump.cx>.