Primarily, the YubiKey is there to lock away the private key while making it available to the running CA. Certificate signing happens inside the YubiKey, and the CA private key is not exportable.
This uses the YubiKey PIV application, not FIDO.
As an aside, step-ca supports several approaches for key protection, but the YubiKey is relatively inexpensive.
Another fun approach is to use systemd-creds to encrypt the CA's private key password with a TPM 2.0 module and tie it to PCR values, similar to what LUKS or BitLocker can do for automatic disk unlocking based on system integrity. The Raspberry Pi doesn't have a TPM 2.0, but there are HATs available.
As for the "hours" max interval, this is the result of a deliberate design decision in Go's time duration library, which sidesteps the quirks of our calendaring system.
It's because units up to hours are fixed-size, but a day is not: in regions that observe daylight saving time, roughly 363 of 365 days are 24 hours long, one is 23, and one is 25.
(This is ignoring leap seconds, since the trend is to smear those rather than surface them to userspace.)
Hi, I'm the author of the post. Thanks for your questions here.
> -Complete overkill requiring the use of a YubiKey for key storage and external RNG source - what problems does this solve? For a Yubikey to act as a poor man's HSM you have to store the PIN in plaintext on the disk. So if the device is compromised, they can just issue their own certs. If it's to protect against physical theft of the keys, they'll just put the entire Raspberry Pi in their pocket.
Yep, it's overkill. Homelabs are learning environments. People want tutorials when trying new things. It's a poor man's HSM because not many people will buy an HSM for their homelab, but almost everyone already has a YubiKey they can play with.
The project solves the problem of people wanting to learn and play with new technology.
And it's a way to kickstart a decently solid local PKI, if that's something you're interested in.
The RNG is completely unnecessary flair that just adds to the fun.
> -Creates a two-tier PKI... on the same device. This completely defeats the purpose so you can't revoke anything in case of key compromise.
> -They're generating the private key on disk then importing into the YubiKey. Which defeats having an external key storage device because you have left traces of the key on disk.
The tutorial shows how to generate and store the private key offline on a USB stick, not on the device or the YubiKey. The key material never touches the disk of the Raspberry Pi.
Why store a copy of the CA keys offline? Because YubiKeys don't have the key-wrapped backup and restore feature of HSMs. So, if the YubiKey ever fails, you need a way to restore your CA. Storing the root on a USB stick is the backup. Put the USB stick in a safe.
If you want active revocation, you can set it up so that the intermediate is revocable—in case physical theft of the key is important to you. (We have instructions to do that in our docs.)
> -All this digital duct taping the windows and doors yet the article instructs you to download and run random binaries off GitHub with no verification whatsoever.
It's open source software downloaded from GitHub. The only non-smallstep code is the RNG driver (GitHub is the distribution point for that project). Was there a kind of verification that you expected to see?
> -Why do you need ACME in a homelab and can't just hand issue long lived certificates?
> -OpenSC and the crypto libraries are notoriously difficult to set up and get working properly. A tiny CA this is not.
Most people don't need ACME in their homelab, they just want to learn stuff. That said, we have homelabbers in our community issuing certs to dozens of endpoints in their homelab.
Whether you issue long-lived or short-lived certs is a philosophical question. If a short-lived cert is compromised, it's simply less valuable to the attacker. Short-lived certs encourage automation; long-lived certs can be easier to manage, since you can just renew them manually. But unplanned expiry of long-lived certs has caused plenty of multi-million-dollar outages.
Despite the critical feedback you've received above, I found the article interesting, and since I have a homelab with several spare Pis, it's got me considering setting up a CA. Thank you.
How should a company figure out what to charge for something in the first place?
Especially a startup that doesn't have much market data to go on, and may be making something entirely new that no one quite knows the value of.
When this is the case, one option is to do price discovery.
And the way to do that is to remove prices from the website, take calls, learn about customers and their needs, and experiment.
> and may be making something entirely new that no one quite knows the value of.
How many such companies even exist at any given point in time? In software in particular, that's going to be almost none, and those few that are, won't be that for long. For everyone else, there are already competitors doing the same thing, and even more competitors solving the same problem in a different way[0], giving you data points for roughly what prices make sense. Between that and your costs being the lower bound, you almost certainly have something to work with.
--
[0] - There's no "someone has to be the first" bootstrap paradox here. Even if you're lucky enough to genuinely be the first to market with something substantially new, it still is just an increment on some existing solution, and solves a variant of some existing problem, so there is data to go on.
If a client pays for a link that's part of a chain, doesn't want the chain broken, and still turns a profit, then the client can afford to pay more: that link is worth more.
In my opinion the question isn’t so much “if” but rather “when”.
When will AI research and hardware capabilities reach a point where it's practical to embed something like that into a regular document?
We’ve already seen proof of concept LLMs embedded into OpenType fonts.
I guess the other question is then "what capabilities would these AI agents have?" You'd hope they'd only have permission to present content within that document. But that depends entirely on what unpatched vulnerabilities are lurking (such as the Microsoft ANSI RCE also featured on the HN front page).
> // Use interpreted JS only to avoid RWX pages in our address space. Also, --jitless implies --no-expose-wasm, which reduce exposure since no PDF should contain web assembly.
The first widespread AI malware will be a historic moment in this century. It will adapt to its host like a real biological virus, and we have no cure for that.
I learned C by running a MUD — a DikuMUD derivative. I was in high school, in the 90s, and I didn't know any programmers in my town who could teach me how to really code. My high school computer science teacher didn't know.
What I loved about the MUD as a learning environment was the players. On a busy night we'd have over a hundred people playing. So, I got to cut my teeth on a real, live production system with actual users. That motivated me. There were mild consequences if I broke things. And, if I made things better for the players, it felt good.
For me, this environment was so much better than doing programming problem sets by myself, writing code that no one would ever use.
+1 — cut my teeth on learning C in middle school by hacking up a DikuMUD derivative. So many great memories of that period.
And not just C but Linux (Slackware!), sockets, even kludging the single-player DOS port to be two-player by playing over a serial cable to another PC. And annoying my future Dropbox teammates by including an extra space after/before parens in function calls (and if/for/switch statements), putting { on its own line, etc as was the convention in that code base IIRC.
I knew a little C, but I learned about sockets and files (and later databases) and various other things by hacking on DikuMUD, CircleMUD, and SocketMUD, and later by writing my own MUDs.
Over 100 simultaneous users is quite the success for a MUD back then, and especially today. I also learned C by forking DikuMUD; it was so accessible and easy to tweak.
It was indeed quite the success, but there were a number of MUDs in the 90s and 00s that would hit 100+ regularly. The MUD I mostly played at that time (WoTMUD) was hitting 200+ regularly for a while.
I also learned C this way (with ROM 2.4 in my case), but what I really should have learned is social skills. Instead, once I got good enough at C to make the playerbase my mostly unwilling playthings, all pretense of being anything other than the most insufferable insane dictator in human history went right out the window, and I was so drunk on my own "power" that I was entirely blind to it until it was far, far too late.
I tried learning to make sourdough bread by reading the Tartine Bread book.
The problem is, baking bread is such a sensual activity.
You need to understand what it feels like when the texture of the dough is right.
You need to learn how to fold and stretch the dough and shape it in ways that are very difficult to describe.
None of this translates well into English, no matter how good a writer you are. And photos are of limited utility.
Learning in person from a knowledgeable teacher is ideal. Just as with a board game.
But, since we are talking about media here, what helped me the most with bread baking was Instagram.
I watched videos of bakers doing each stage of the process and talking me through it.
I saw the texture of the dough they were using, and how they worked it.
I learned by example.
And I wonder if board games are similar to bread.
Would I rather read a 70-page rule book, or watch someone play the game for a while or teach it to me in a video?
I'd prefer the video content, and then I'd want the rulebook as a reference guide rather than a tutorial.
I suspect it’s a per-person thing. I’ve taught myself how to bake sourdough with a book. I’ve also taught myself how to knit by reading, which is likewise very tactile.
When reading a good rule book/instruction manual I get little moments where the respective explanations click.
But I assume everyone has a preferred method that works for them and has a similar experience when learning.
Credential files are a good, simple, portable option. Files have permissions already. They don't depend on an external service or a proprietary API.
And, if your program accepts a credential file, it will be compatible with systemd credentials. systemd credentials offer more security than an unencrypted credential file. They are encrypted and can be TPM-bound, but they don't require the software using the credential to have native TPM support.