
Easy multi-accounting is something that I hope we already have (`foks key switch` is pretty smooth). It's a feature I use a lot (I have a personal account on @foks.app and our company account is on @ne43.foks.cloud).

This is a great point, and one I thought a lot about. It's the sort of thing that can be changed later if it turns out to be a good idea, but I concluded that having non-local admins would mean more server-to-server communication and more server-to-server trust, and I was trying to avoid both.

Imagine alice@foo is an admin of bluejays@bar. One thing alice@foo will need to do is make signed changes to bluejays@bar, say when adding or removing members. Right now, the server at bar checks the validity of those signatures, i.e., that they were made with alice@foo's latest key. In other words, there would have to be some way for bar to authenticate to foo so that bar could read alice's sigchain and determine her latest key.
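
To make that concrete, here's a rough Go sketch of the check bar would have to perform on a change signed by alice@foo. The types and field names are invented for illustration; this is not FOKS's actual wire format or API.

    package main

    import (
        "crypto/ed25519"
        "errors"
        "fmt"
    )

    // Hypothetical view of a user's sigchain: the only fact bar needs
    // from foo is "what is alice@foo's latest signing key?"
    type sigchainHead struct {
        latestKey ed25519.PublicKey
    }

    // verifyTeamChange checks that a signed change to bluejays@bar was
    // made with the signer's *latest* key, as reported by the signer's
    // home server. Fetching that key is exactly the cross-server lookup
    // described above.
    func verifyTeamChange(signer sigchainHead, change, sig []byte) error {
        if !ed25519.Verify(signer.latestKey, change, sig) {
            return errors.New("signature not valid under signer's latest key")
        }
        return nil
    }

    func main() {
        pub, priv, _ := ed25519.GenerateKey(nil)
        change := []byte(`{"team":"bluejays@bar","op":"add","member":"carol@bar"}`)
        sig := ed25519.Sign(priv, change)

        // In a federated-admin design, bar would first have to authenticate
        // to foo and fetch alice@foo's sigchain head before it could verify.
        alice := sigchainHead{latestKey: pub}
        fmt.Println("change accepted:", verifyTeamChange(alice, change, sig) == nil)
    }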

I was thinking that keeping foo and bar separated was a good idea both in terms of privilege separation and keeping the network simpler (which would in turn be good for uptime and would simplify software upgrades).


I'm not familiar with Radicle, but I'll check it out. For (1), consider the case of that server being hosted on AWS. Even though only members are authorized to SSH into it, the plaintext is still known to the cloud hardware, and can be exfiltrated that way. In FOKS, the server sees encrypted data only, so that attack is greatly mitigated. I would say that if the SSH server was hosted on one of the workstations of one of the team members, then the security advantages of FOKS would be much less.

The KV-Store and Git server are implemented as "applications" on top of the FOKS infrastructure, so they aren't coupled. They see a sequence of Per-Team-Keys (PTKs); they use the older ones for decryption and the newest one for encryption. I'd really love to see all sorts of other applications built on top of FOKS, but we might need to do some work to nail down the right plugin architecture.
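
As a rough illustration of that pattern, here's a minimal Go sketch. The types and method names are made up and are not the actual FOKS API; it just shows the "encrypt with the newest PTK, decrypt with whichever generation the ciphertext names" bookkeeping.

    package main

    import "fmt"

    // Hypothetical per-team key bookkeeping, indexed by generation.
    type ptk struct {
        gen int
        key [32]byte // symmetric key material for this generation
    }

    type teamKeys struct {
        byGen map[int]ptk
        max   int
    }

    func (t *teamKeys) add(p ptk) {
        t.byGen[p.gen] = p
        if p.gen > t.max {
            t.max = p.gen
        }
    }

    // current returns the newest generation, used for all new encryptions.
    func (t *teamKeys) current() ptk { return t.byGen[t.max] }

    // forGen returns the (possibly rotated-out) key named in a ciphertext
    // header, so old data stays readable after a rotation.
    func (t *teamKeys) forGen(gen int) (ptk, bool) {
        p, ok := t.byGen[gen]
        return p, ok
    }

    func main() {
        t := &teamKeys{byGen: map[int]ptk{}}
        t.add(ptk{gen: 1})
        t.add(ptk{gen: 2}) // rotation, e.g. after a member is removed

        fmt.Println("encrypt with generation:", t.current().gen) // 2
        _, ok := t.forGen(1)                                     // still usable for old ciphertexts
        fmt.Println("can still decrypt gen-1 data:", ok)
    }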


Correct! Remote members of the team get access to shared team keys, and the team's data, even though they don't have accounts on that server. Knowledge of the team key suffices to allow a remote user to authenticate and transfer (encrypted) data to and from the server.
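
As a toy sketch of that idea (this is not FOKS's actual authentication protocol, just the general shape of proving knowledge of a shared team key to a server):

    package main

    import (
        "crypto/hmac"
        "crypto/rand"
        "crypto/sha256"
        "fmt"
    )

    // prove computes a MAC over a server-issued challenge, demonstrating
    // knowledge of the team key without revealing it.
    func prove(teamKey, challenge []byte) []byte {
        m := hmac.New(sha256.New, teamKey)
        m.Write(challenge)
        return m.Sum(nil)
    }

    func main() {
        teamKey := make([]byte, 32)
        rand.Read(teamKey)

        // Server side: issue a fresh random challenge.
        challenge := make([]byte, 16)
        rand.Read(challenge)

        // Remote member: no account on this server, but holds the team key.
        proof := prove(teamKey, challenge)

        // Server side: recompute and compare in constant time.
        ok := hmac.Equal(proof, prove(teamKey, challenge))
        fmt.Println("remote member authenticated:", ok)
    }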

There is very little server-to-server communication, which simplifies the design and software upgrades.


Max here, author of FOKS. I find it interesting how much glue is required to perform basic cryptographic operations, even in 2025. Take a very simple idea like encrypting a secret with a YubiKey. If it's an important secret that you really don't want to lose, you now need a second YubiKey as a backup, in case the primary is lost or breaks. But how do you encrypt to both, and how do you rotate the primary out if needed? To the best of my understanding, there aren't great solutions short of a system like FOKS. If not FOKS, I really believe a system like it ought to exist, and it ought to be entirely open, so that arbitrary applications can be built on top of it without paying rent.
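
To sketch the shape of the problem in Go: encrypt the real secret under a random data key (DEK), then wrap that DEK separately to each hardware token, so losing the primary doesn't lose the secret, and rotating the primary out just means re-wrapping. The RSA keypairs below merely stand in for keys held on the two YubiKeys; real flows go through PIV, OpenPGP, or FIDO2.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/sha256"
        "fmt"
    )

    func main() {
        primary, _ := rsa.GenerateKey(rand.Reader, 2048) // stand-in for YubiKey #1
        backup, _ := rsa.GenerateKey(rand.Reader, 2048)  // stand-in for YubiKey #2

        dek := make([]byte, 32) // the key that actually encrypts your secret
        rand.Read(dek)

        // Wrap the DEK to both tokens.
        wrapPrimary, _ := rsa.EncryptOAEP(sha256.New(), rand.Reader, &primary.PublicKey, dek, nil)
        wrapBackup, _ := rsa.EncryptOAEP(sha256.New(), rand.Reader, &backup.PublicKey, dek, nil)

        // Primary is lost or broken: the backup token can still recover the DEK.
        recovered, _ := rsa.DecryptOAEP(sha256.New(), rand.Reader, backup, wrapBackup, nil)
        fmt.Println("backup recovers DEK:", string(recovered) == string(dek))

        _ = wrapPrimary // discarded when the primary is rotated out and re-wrapped to a new token
    }

The glue is everything around that: tracking which wraps exist, rotating them when a token is retired, and doing the same for teams rather than a single person.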

Max! I'm so happy that you're doing this! I was a huge fan of Keybase, and I've spent the last few years praying for (and sometimes brainstorming funding for) a decentralized, open-source version of it. Looking forward to digging into the details of FOKS, but I just wanted to say thank you to you and the Keybase team for all you've done -- including keeping Keybase going after the Zoom purchase.

Thanks Danny! The Keybase team (not including me) deserves all the credit, I've been gone for over six months. It's a great team and I miss working with them.

If you haven't seen KERI ("Key Event Receipt Infrastructure"), it's worth a read; I found out about it at an Internet Identity Workshop. It has all those quality-of-life features for public keys: revocation, rotation, recovery. It relies on "witnesses", which I'm not sure I love, but their presentation impressed me.

https://keri.one/


Max, this looks interesting and I'd like to follow the blog. Would you please add an Atom feed to the blog?

FOKS is a cool project; what kind of projects do you foresee getting spun off from this?

I'm actually working on a cryptography-based project inspired by Keybase's use of Merkle trees and identity proofing, but with an added dash of privacy through pseudonyms and chain hashing. Thanks for putting time into this.


Thanks! I would love to see a file-sync app, or an MLS-based chat (where the encryption key is essentially a combination of the keys output from MLS and the PTK from FOKS). Password managers. I think there's potential for something like a HashiCorp-Vault-style server-side manager for secret key material, though many details are left to the reader. Maybe a Skiff-style Google Docs clone? There are a lot of potential directions to go in.
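
For the MLS-based chat idea, here's a minimal Go sketch of what "a combination of the keys" could look like; purely illustrative, not a spec, and the label string is made up.

    package main

    import (
        "crypto/sha256"
        "fmt"
        "io"

        "golang.org/x/crypto/hkdf"
    )

    // combinedKey derives a chat key that requires both the MLS-produced
    // secret and the team's PTK, so neither alone is sufficient.
    func combinedKey(mlsExporterSecret, teamPTK []byte) ([]byte, error) {
        ikm := append(append([]byte{}, mlsExporterSecret...), teamPTK...)
        r := hkdf.New(sha256.New, ikm, nil, []byte("example: mls+ptk chat key"))
        out := make([]byte, 32)
        _, err := io.ReadFull(r, out)
        return out, err
    }

    func main() {
        key, _ := combinedKey([]byte("mls-exporter-secret"), []byte("team-ptk"))
        fmt.Printf("derived chat key: %x\n", key)
    }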

Something like pa should be easy enough to port to it as a first pass: https://github.com/biox/pa

IMO Vault is really nice, but something as simple as possible is better for managing secrets, especially when the storage layer handles permissions and sane encryption for you.


> TL;DR: FOKS is like Keybase, but fully open-source and federated

What features from a user perspective does it currently have in common with Keybase?

For example, I remember Keybase mostly for secure messaging tied to public identities (HN, Reddit, etc.) and for sharing data/files.


E2E-encrypted git. Keybase has KBFS; FOKS has a poor man's equivalent, which is an E2E-encrypted key-value store.

Thanks! Sorry for being lazy, but I was wondering how you share something using the E2E-encrypted KV store (it wasn't obvious on the website)? In KBFS, I remember it was as easy as putting the file in a path of comma-separated usernames.

It's not as seamless. You need to first make a team, then invite (or add) that user to the team, and then use `foks kv put --team <your-team>`. One key difference is that in Keybase, all users' profiles were essentially world-readable. FOKS aims for more privacy by default, so in order to add Bob to your team, Bob first has to allow you to view his sigchain, so you can learn his public keys.

The add vs invite distinction referred to above is because servers can choose different visibility policies. You can set up a server at foks.yourdomain.cc, and set it to "open-viewership", which means that any user can see any other user by default. If you and Bob are both on that host, you can add him to your team without his permission. But other hosts, like foks.app, do not work this way, and Bob has to authorize you to view him.


- Sorry about the outage yesterday. It lasted about an hour before service was restored.

- As Zoom employees, our primary objective is now to bring the technology at play in Keybase to Zoom products.

- We are still making regular updates to Keybase. These updates consist primarily of bugfixes, security fixes, performance improvements, and patches and updates to third-party libraries. For instance, the app is currently undergoing a major rewrite to be compatible with recent versions of React and React Native. We are not currently pursuing any major new features. The paper trail for these updates is visible for all to see here: https://github.com/keybase/client/tags

- The current plan -- which can be inferred from our ongoing maintenance to the product -- is to keep Keybase running in a performant, usable, free, E2EE, high-security form for the foreseeable future. Should this plan change, we promise to give as much advance notice as possible.


Thank you! I still use Keybase as my main messenger and would be sad if it went away.


Note that the implementation of EdDSA that the authors investigated (libgcrypt) is not a constant-time implementation. Better implementations are more likely to be safe.

See: https://news.ycombinator.com/item?id=21352821


Congratulations, and thank you for posting this inspirational story! I became interested in programming 26 years ago after taking the same class you did. So thanks to David Malan and his predecessor Margo Seltzer for CS50.


Keybase CEO here. Let me tell a quick story. January 2019: I was loading the car to leave for a short family ski vacation when I got a truly horrifying email, saying my Slack account had been accessed from a distant land (one I hadn't been to).

There goes my weekend!

When we first started Keybase, we used Slack as other teams did, but we were gradually moving all Slack-based workflows over to Keybase. As such, we didn't use it for anything beyond communicating when Keybase was down. But I was very worried. I knew I used a good, random, one-time password for Slack, so it couldn't have been stolen from some other site's breach. Had my computer been rooted? Had my side-hustle password manager (oneshallpass.com) been compromised? I immediately contacted Slack security and asked them if the issue was on their side, and they neglected to point me to the relevant blog article from 2015 (which, we now know, didn't detail the extent of the compromise). They just said they take security very seriously and hinted I was at fault.

In the subsequent few weeks, I reset all of my passwords, threw away all my computers, bought new ones, factory-reset my phone, rotated all of my Keybase devices, and reestablished everything from the ground up. It cost me a lot of time, money, and stress. In the end, I was pretty sure, but not 100% convinced, that even if I had been rooted, the attackers couldn't follow me to my new setup. But with these things, you can never really know for sure.

I got the email today that my account might have been compromised in the attack. I would say it definitely was compromised, and I can breathe a big sigh of relief: that was the explanation I wanted to hear all along.

It was great to know throughout this ordeal that the product we're building --- Keybase --- solves this problem in a fundamental way, rather than by adding further workarounds (2FA, while better than a password alone, reminds me a bit of the 3-digit verification code on the back of your credit card; and if Slack's credential database is compromised again, 2FA won't help at all). With Keybase, all of your data is E2E encrypted, and your encryption keys never leave your device. A would-be attacker who compromised our database would have no ability to access any important user data.

Summary: estimated cost to me:

   - $5000 worth of hardware
   - 60 hours of labor
   - 25 hours of lost sleep
   - 10 additional hours of team effort
Fortunately:

   * Keybase does not communicate sensitive information in slack such as cap tables, financials, employment decisions or compensation discussions, team reviews, company devops secrets, stupid memes that could be taken out of context, or private DM'ing.  Basically we just use a `#breaking` channel in Slack, for when we break Keybase.
   * Keybase itself is immune from this kind of break-in.
Edits: wordsmithing and improvements

Also: Import your Slack team to Keybase: https://keybase.io/slack-importer


Note that Slack has only been sending the email about new account access since 2018:

>Additional security features: As of January 2018, we began sending an email every time your account is accessed from a new device; this is a simpler and more immediate way for you to be aware of new logins to your account than periodically reviewing your access logs.

(Quote from the "Slack password reset" email they recently sent out to affected users.)


I think if this happened to me, I would just assume the company was hacked due to poor security practices. That seems much more likely than my password being stolen from one of my devices, considering that a very significant percentage of the companies I have accounts with seem to have had breaches at some point. But maybe I'm just naive.


That was my 90%+ likelihood explanation too, but I figured Slack had its act together, and the risk of being wrong was too high.


I think it’s different depending on who you are and what systems you have access to. The Keybase CEO is much more vulnerable to targeted attacks where 0-days would be potentially worth the cost.


That's true of course; I'm not a valuable target, so it's both less likely that I would be targeted and less important if I were successfully hit.


Why throw away your computers? Why not remove/disconnect the batteries (if portable) and just store them somewhere in case you eventually no longer suspect a hardware hack, as in this case?



Further: Keybase is a security product and it wasn't deemed worth the risk for the CEO. And while Keybase isn't made of money, the $5k was roughly irrelevant compared to the other costs mentioned here and the _magnitude of the risk_.

If you haven't been through this kind of thing, it's hard to understand how scary it is to have a break-in of unknown origin. If you use strong, unique passwords as Max did, then you're almost certain it's a server break-in (and again, this is why Slack is scary for sensitive info)... but being 99% certain isn't enough. Removing that computer permanently from the team gave peace of mind.


tl;dr: UEFI rootkits can survive operating system reinstallation and even a hard disk replacement.

That's why he needed a new physical computer.


Great point. The fact that street parking is roughly free in NYC both encourages people to drive and causes unnecessary bottlenecks on congested major thoroughfares. Take, for instance, 86th Street. One lane taken up by parked cars, plus just one double-parked truck, means most crosstown traffic along that latitude and in that direction (including packed buses) has to squeeze into a single lane. That the city chooses to keep commuters stuck in traffic to accommodate disused cars is a pessimal allocation of precious street area.


Keybase affiliate here.

Correct: forward secrecy isn't on by default. We think there's a trade-off here. With forward secrecy, your old messages wouldn't be visible on a new device, and users want them to be, since Slack (and others) make that seem natural. However, you can opt in to forward secrecy on a per-message or per-conversation basis.

The report says "device and server compromise." Decryption keys never leave the user's client. What they mean is that if (1) the server's stored data is compromised, (2) your phone is also compromised, and (3) the messages weren't marked ephemeral, then the attacker might be able to read past messages, even if the user tried to delete them (i.e., did Keybase really delete the ciphertexts?). This line of reasoning is correct, and it's one of the primary motivations for key ratchets. I don't think the report is claiming that users need to trust Keybase's server in general. They do need to trust Keybase to delete messages that are marked deleted, which would mitigate the attack above if conditions 1 through 3 are met.
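
For anyone unfamiliar with ratchets, here's a minimal hash-ratchet sketch in Go (in the spirit of Signal-style ratchets, not Keybase's actual scheme) showing why deleting old chain keys limits what a later compromise can recover:

    package main

    import (
        "crypto/hmac"
        "crypto/sha256"
        "fmt"
    )

    // step derives a one-off message key and the next chain key from the
    // current chain key. Once the old chain key is erased, past message
    // keys cannot be recomputed from whatever a later compromise reveals.
    func step(chainKey []byte) (nextChain, messageKey []byte) {
        h := hmac.New(sha256.New, chainKey)
        h.Write([]byte{0x01})
        nextChain = h.Sum(nil)

        h = hmac.New(sha256.New, chainKey)
        h.Write([]byte{0x02})
        messageKey = h.Sum(nil)
        return
    }

    func main() {
        chain := []byte("initial shared secret")
        for i := 0; i < 3; i++ {
            var msgKey []byte
            chain, msgKey = step(chain) // the previous chain value is dropped here
            fmt.Printf("message %d key: %x...\n", i, msgKey[:4])
        }
    }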


My issue with Keybase's exploding messages is that they're time-based. I wish there were an option for forward-secret messages where the message stays visible indefinitely to current devices but is not visible to future devices.


Thank you for clarifying that, especially the second point. Looks like I was misunderstanding the report then.


Thank you for taking the time to read the report!

