KiwiFlare (soon: Tartarus Feedback)

  • 🔧 The new disks are in and syncing. There may be attachment-related issues until this is complete. (4.9T / 22.1T as of 8:15pm EST)
The recent attacks have either been saturating our entire bandwidth at once (which I've limited) or attacking the HTTP stack itself. The most recent ones were using exploits in HTTP/2.
Thanks for taking the time to explain this stuff!

There’s only so much that can be done against bandwidth-based attacks without just physically scaling up the hardware/network stuff, right? Since you would need to receive the packet before deciding whether it should be dropped.
Exploits in the HTTP/2 protocol itself, or in your firewall's implementation of it?

Currently fighting intermittent DDoS attacks myself, and while we are able to use Cloudflare, there are still some bottlenecks. At this point I feel like there is nothing that can be done if the attacker simply has far more resources (:_(
 
dumb question here so as to not explicitly bug Dear Feeder

if I'm with Jimmy Jonny's Wonder Widgets, and we want to get KiwiFlare because Cloudflare is fake and gay (but still use our other web host), is that a thing that operates independently of 1776 Hosting and the general Farms orbit? Or if Gone Dong or some other no-good-nik decides to blow another check on DDoS, would that then impact Wonder Widgets?
 
A little while ago, I saw in GitHub traffic insights that someone had been referred to one of my GitHub repos from chatgpt.com. This got me curious as to how much AI knows about software I wrote, so I figured I'd put my checkmark to use and ask Grok some general questions. I was surprised by the quality of its responses and the lack of overt moralfagging.
grok.webp
Based on the answer it gave, it seems like it's pulling info from discussions on this site. It's nice to think stuff written on here may inform its answers, including answers about certain people and consent accidents they want to cover up, giving the site's often legitimately useful content much wider reach.

It also gave me a surprisingly insightful (though slightly outdated) overview of the solver library I wrote. It recognizes and acknowledges that the site has ongoing attacks and networking fuckery against it:
implementation.webp implementation2.webp implementation3.webp
I like the note it gave at the end about computational complexity vs secrecy/obscurity too. What I like most is that it gives reasonably informed answers about the site and treats us with a basic level of legitimacy instead of calling us a hivemind of cyber-terrorists who are worse than Hitler. Cool stuff.
 
The UK geoblock has been playing up for me. I browse clearnet-through-Tor, and I sometimes get the Ofcom page with non-UK exits. Just got another through a Russian one, I think the IP was 185.40.4.92.
 
Can't edit, doubleposting instead. Just had it happen again, this time under 185.40.4.132 / 2a0e:4005:1002:ffff:185:40:4:132, seems to be the same host though. The geolocation of that network seems to be fucked in general, Tor displays a Russian flag, whois claims Russia as well, but MaxMind claims Seychelles, and ipinfo claims Norway. At least the "New Tor circuit" button fixes it.
 
MaxMind GeoIP locates that IPv6 address to the United Kingdom (here), which would explain it if you connected to the site via IPv6. From my own experience I'd put it down to MaxMind taking their time updating specific ranges to a country-accurate degree.
 
The challenge page tends to loop when the site is being hit with a heavy DDoS, and this arguably adds another layer of service denial, since legitimate users can't even get past it. I assume the issue is related to the server having to manage a shitload of challenges at once, bottlenecking, and perhaps purging old records (invalidating sssg_clearance tokens shortly after they're issued).

Did this KiwiFlare script ever get released to the plebs?
AFAIK, the answer is no. But I'm slowly working on a shitty Go clone of it in my free time bc Anubis needs a less faggy competitor. I'm calling it MangoBlaze.
I know I'm sort of the site's resident Go shill at this point, but I think the language is well suited for handling a lot of server requests at once.
 
I recently wrote a simple implementation of John Tromp's cuckoo cycle hashing for a POW antispam on one of my sites. So far it's been working pretty smoothly, though admittedly I'm not dealing with anything like the level of bullshit that KF must. Might be worth checking out as a replacement for sha256 POW here, assuming a better POW would actually help. Cuckoo cycle hashing is memory-latency bound and so is a bit more resistant to speedups from GPUs/ASICs than sha256, but is similarly fast to verify.
 
There is no point making the hash harder. If you make the hash harder, it also makes the checks harder for the server, which is the opposite of what's intended. I can increase the sha256 challenge arbitrarily by requesting more zero bits.
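To illustrate the asymmetry: checking a submission costs the server a single hash no matter how many zero bits are demanded, while the expected number of solver attempts roughly doubles with each extra bit (about 2^difficulty tries on average). A minimal sketch of that check; the function names here are mine, not KiwiFlare's:

```python
import hashlib

def leading_zero_bits(digest: bytes) -> int:
    # count leading zero bits of the digest
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        # add the zero bits at the top of the first nonzero byte
        bits += 8 - byte.bit_length()
        break
    return bits

def verify(salt: bytes, attempt: str, difficulty: int) -> bool:
    # the server computes exactly one hash, regardless of difficulty
    digest = hashlib.sha256(salt + attempt.encode()).digest()
    return leading_zero_bits(digest) >= difficulty
```

Raising `difficulty` by one bit doubles the average client-side search without changing the server's cost at all.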
 
minor recommendation if you're ever making a breaking change anyway: swapping the order of the challenge salt and attempt would be good, like this:
JavaScript:
// worker.js
const digest = t.SHA256(`${salt}${attempt}`); // prefer `${attempt}${salt}`
// Math.clz32 counts the leading zero bits of the digest's first 32-bit word
const isSolved = Math.clz32(digest.words[0]) >= difficulty;
self.postMessage({
    attempt: attempt,
    solution: isSolved ? digest.toString(t.enc.Hex) : null
});

sha256 processes its input one 512-bit chunk at a time, running a compression function to merge each chunk into the current hash state. That state can be saved between chunks and reused to avoid re-hashing a common prefix. (The same property underlies length-extension attacks, typically used against hash-verified files: an attacker can append content and compute the extended hash without knowing the preceding plaintext.)
Python:
>>> hashlib.sha256(b"1234567890123456789012345678901234567890123456789012345678901234").hexdigest() # one 64B chunk
'676491965ed3ec50cb7a63ee96315480a95c54426b0b72bca8a0d4ad1285ad55'
>>> hashlib.sha256(b"1234567890123456789012345678901234567890123456789012345678901234extended").hexdigest()
'a89f712b248692767460cb010caae119645fa49141e42b6b2e85f75eaf49fb63'
>>> tmp = hashlib.sha256(b"1234567890123456789012345678901234567890123456789012345678901234").copy(); tmp.update(b"extended"); tmp.hexdigest()
'a89f712b248692767460cb010caae119645fa49141e42b6b2e85f75eaf49fb63'

For KiwiFlare, this common prefix is currently the challenge salt, which happens to be exactly one chunk long. That makes it easy to hash the salt just once and extend the saved state for each attempt, reducing each attempt's workload to a single chunk. Since solutions almost always fit in a single chunk the way they're currently computed, a full attempt normally costs only 2 chunks anyway, so this isn't a big deal: it merely halves an attacker's work. Putting the salt at the end prevents the shortcut entirely. I wrote a simple solver showing this off:
Python:
import hashlib
import math

def kiwiflare_solve(salt: bytes, difficulty: int) -> str:
    # round up difficulty to a nybble multiple (lazy coding)
    solution_prefix: str = math.ceil(difficulty / 4.0) * "0"
    salt_hash = hashlib.sha256(salt)  # hash the one-chunk salt exactly once
    attempt = 0
    while True:
        state = salt_hash.copy()  # reuse the cached midstate
        state.update(str(attempt).encode())  # length extension
        if state.hexdigest().startswith(solution_prefix):
            return str(attempt)
        attempt += 1
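For contrast, here is a sketch of a solver against the suggested `${attempt}${salt}` order, assuming everything else stays the same: with the varying attempt first, there is no fixed prefix whose midstate can be cached, so every attempt has to hash both chunks from scratch.

```python
import hashlib

def kiwiflare_solve_salt_last(salt: bytes, difficulty: int) -> str:
    # same nybble-rounding shortcut as the solver above
    solution_prefix = (difficulty + 3) // 4 * "0"
    attempt = 0
    while True:
        # the attempt comes first, so there is no common prefix across
        # attempts: each iteration must run the full hash from scratch
        digest = hashlib.sha256(str(attempt).encode() + salt).hexdigest()
        if digest.startswith(solution_prefix):
            return str(attempt)
        attempt += 1
```

The per-attempt cost goes back up to the full 2 chunks, which is exactly the shortcut the reordering is meant to close.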
 
https://kiwifarms(.)jp
(all Tor users are also involuntarily subjected to it)

This is the test domain. It will be fully decommissioned after testing and it will not redirect to the Kiwi Farms. In case you're wondering, Marcan (Byuu's friend) parked this domain during #DKF but I got it back (lol).

You can post feedback in this thread.

.JP runs Tartarus (name?), which will be a flagship product of the United States Internet Preservation Society. It's the lessons learned from KiwiFlare plus my long, long wishlist of features. It is in very heavy development and I change it every day. If you experience random downtime, please just wait a few minutes before complaining; it is probably me restarting the server ungracefully. The main rundown of why I am developing this:
  • Many-master setup. Right now, KiwiFlare is a bunch of proxies that send traffic to a single server that does all the KiwiFlare work. This system allows many servers to share the work.
  • Mycelium structure. The frontends are a creeping network that I can expand at will. Even if trannies start working overtime to take down our frontends, spinning them up takes no time.
  • Distributed throttling. Tartarus's system permits rate limiting across multiple nodes without relying on a single source of truth, letting me add IPs to blacklists at the firmware level (XDP packet filtering), which allows even single-core servers to discard millions of packets a second.
  • Direct connections. Because the frontends no longer rely on a bottleneck server to process requests, the round-trip latency of the middle node is bypassed entirely and connections are faster.
  • Zero-trust configuration. Many improvements above were impossible before because I could not trust certain providers implicitly. Tartarus loads sensitive SSL information directly into memory, reducing liability.
  • Automatic SSL and DNS management. Almost all of our downtime since #DKF stopped was due to either me forgetting to update our 3-month SSL certificates or because a node went offline from me forgetting to pay a bill with crypto. The new system automatically manages SSL for me and automatically pulls DNS records for dead nodes and re-inserts them when they come back online.
  • No-JS compatible. Our most retarded and annoying users can now manually solve Proof of Work challenges with command line copypaste.
  • Graceful reboots and hot config swaps. The server can be fully rebuilt and redeployed without hard downtime (most of the time, unless I fuck something up).
  • Detailed statistics. Tartarus features Prometheus endpoints I can monitor.

1770061258884.png

It is not fully ready yet; in particular, there are some management changes that need to be made to keep a bunch of different servers in sync.

If you'd like to sponsor my work, you can either donate to USIPS on GitHub with traditional payments or to the forum directly.
1. https://github.com/sponsors/usips
2. https://kiwifarms.jp/threads/supporting-the-kiwi-farms.27022/

Thanks
 
I love the name, it fits the slobbermutt avatar perfectly.
Thank you for your continued hard work and for harnessing your autism for free speech.
 
Screenshot_20260202_134954_Brave.jpg
Not getting to any screen, just this error. On ProtonVPN connected to the US with NetShield turned off.

Edit: for shits and giggles, I changed my VPN location to Japan and it worked.
 
Awesome stuff. I have tested it with my setup (NordVPN over NordLynx with Post Quantum Encryption, Hardware Firewall, further software filtering on a per-system level, and a browser that has several overlapping layers of protection) and it works like a charm. No lag, no connection issues, no dropouts, etc... That's more than I can say for some commercial sites that shit the bed on a regular basis with my setup.
 