KiwiFlare (soon: Tartarus Feedback)

  • 🔧 Disk replacement complete. Report tech issues here.
    🔥 There's an issue with .st, so we're going to see if we can switch over to the new frontend today.
I think if you sit on the same VPN server for hours eventually Tartarus will throw a 421 for some reason. Changing the server (even within the same country) solves the problem. Well, for me, at least. Brave mobile, Proton VPN (US).
Edit: That is apparently not exactly the case, as I sat on the new server for an hour at most and still got a 421, and had to switch to another country (although I didn't check whether reconnecting to the same country put me back on the same server — it probably did).
 
On https://usips.org/products/tartarus/
Tartarus is under active development. If you're a developer interested in helping protect the Lower Internet, check out the source code on GitHub. If you would like to financially support our development, please sponsor us on GitHub Sponsors!

I cannot find the source code or licensing information for Tartarus at https://github.com/usips/tartarus-rs. It says that the page is not found. Where can we find the source code?
 
I had this happen this morning while accessing mati.live:
  • Navigate to mati.live
  • Sit through the Tard screen
  • Get redirected to madattheinternet.com
  • Sit through another Tard screen
I'm thinking mati.live shouldn't have a Tard screen. In fact, I would think that mati.live should just have a CNAME record to madattheinternet.com to avoid this convoluted nonsense altogether.
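For illustration, the delegation could look like the zone fragment below. One caveat worth noting: per RFC 1034, a CNAME can't coexist with other records at the zone apex (where SOA/NS live), so the bare mati.live would need provider-level ALIAS/ANAME flattening rather than a literal CNAME.

```
; hypothetical fragment of the mati.live zone, for illustration only
www.mati.live.   3600  IN  CNAME  madattheinternet.com.
; the apex (mati.live.) can't carry a CNAME alongside its SOA/NS records,
; so it would need ALIAS/ANAME flattening at the DNS provider instead
```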
 
Hello, I noticed a few bugs, and I'd like to report them.

1) Request repeatability issue:
Replaying requests with browser dev tools and cURL does not behave consistently. Chrome gets a normal response, but the same request sent via cURL is redirected back to the Tartarus page.
Steps to reproduce: (chrome or firefox)
1) open a new tab in a private/incognito window, and open dev tools with F12, make sure "preserve log" is on
2) go to kiwifarms.jp, wait for main page to load
3) copy the GET of `/` as curl (see screencap)
4) paste the curl into the terminal that's on the same host
5) even though the clearance is present and valid, the cURL request gets redirected to the Tartarus page. This does not happen on the .st domain with the sssg clearance cookie.
Comparing the two requests via https://tls.peet.ws/api/all, this can happen even when the JA3 fingerprints match.
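For what it's worth, the replay can also be scripted without copying curl commands. This is a rough sketch of the same check (the cookie name `tartarus_clearance` is a guess for illustration, not taken from the actual challenge page) that surfaces the redirect instead of silently following it:

```python
# Sketch of replaying a cleared request outside the browser, without
# following redirects, so a bounce back to the challenge page shows up
# as a 3xx status. The cookie name "tartarus_clearance" is an assumed
# placeholder; substitute whatever clearance cookie dev tools shows.
import urllib.request
import urllib.error

class NoRedirect(urllib.request.HTTPRedirectHandler):
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None  # surface the redirect instead of following it

def replay(url: str, clearance: str, user_agent: str) -> int:
    """GET the URL with the browser's clearance cookie and UA; return
    the HTTP status (200 = clearance honored, 3xx = bounced)."""
    opener = urllib.request.build_opener(NoRedirect)
    req = urllib.request.Request(url, headers={
        "Cookie": f"tartarus_clearance={clearance}",
        "User-Agent": user_agent,
    })
    try:
        return opener.open(req).status
    except urllib.error.HTTPError as e:
        return e.code  # 3xx lands here when redirects aren't followed
```

Running `replay("https://kiwifarms.jp/", ...)` with the values copied out of dev tools makes it easy to diff behavior between domains.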

2) On the Tartarus load page, favicon.ico sometimes renders as a broken-image icon. I'm sorry to say I don't have more info on how to reproduce this reliably. My suspicion is that it happens when the clearance expires and the token needs renewing.
 
Elliot has updated his kiwifarms tool.

Github.com/endharassment/tor-fetcher/commit/929266e4156961b923c77837715659223b3f2122

He even included this in the comments urlscan.io/api/v1/result/019c307d-9f9d-72ac-a600-a6319d5708d7/

Might be worth keeping an eye on it, considering it might give someone else the ability to abuse it.


What it likely does NOT handle well:
❌ Multi-stage challenges - No evidence of handling multiple sequential puzzles
❌ Session/cookie management - Appears to be a simple fetch tool, not a full browser session manager
❌ Captcha solving - haproxy-protection can require BOTH PoW AND captcha (hCaptcha/reCAPTCHA). The tool only handles PoW.
❌ JavaScript fingerprinting - Sites could detect it's not a real browser
❌ Adaptive difficulty - No indication it handles servers that ramp up difficulty for repeated requests
❌ Timing attack prevention - Sites might detect it solves puzzles "too efficiently"
❌ Tor circuit manipulation - haproxy-protection can communicate with Tor's control port to close circuits; no evidence this tool handles that.
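For context on the one thing the tool does handle: the PoW in these challenge pages is generally a hash-preimage search. A minimal sketch of the common SHA-256 prefix scheme (the exact salt/difficulty encoding here is an assumption for illustration, not taken from Tartarus's actual challenge):

```python
# Minimal sketch of a SHA-256 prefix proof-of-work, the general scheme
# such challenge pages use. The salt/difficulty format is assumed for
# illustration and is not Tartarus's actual wire format.
import hashlib

def solve_pow(salt: str, difficulty: int) -> int:
    """Find the smallest nonce where sha256(salt + nonce) starts with
    `difficulty` hex zeros. Expected cost grows ~16x per extra zero."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{salt}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

def verify_pow(salt: str, difficulty: int, nonce: int) -> bool:
    """Server-side check: one hash, cheap regardless of difficulty."""
    digest = hashlib.sha256(f"{salt}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)
```

The asymmetry (expensive to solve, one hash to verify) is the whole point, and it's why a native solver like this tool beats in-page JavaScript on speed.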

Anti-automation measures:

Timing checks: Sites could detect if puzzles are being solved "too quickly" (faster than JavaScript typically would) and block those attempts as bot behavior.
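Server-side, that check can be as simple as comparing solve time against a floor. A sketch of the idea (the 500 ms floor is an invented illustration value, not anything Tartarus is known to use):

```python
# Sketch of a "solved too fast" heuristic. The 0.5 s floor is an
# invented illustration value, not a real Tartarus parameter.
import time

class ChallengeTimer:
    """Records when each challenge was issued and flags suspiciously
    fast answers, which native-code solvers produce but in-page
    JavaScript typically cannot."""
    def __init__(self, min_solve_seconds: float = 0.5):
        self.min_solve = min_solve_seconds
        self.issued: dict[str, float] = {}  # challenge id -> issue time

    def issue(self, challenge_id: str) -> None:
        self.issued[challenge_id] = time.monotonic()

    def is_suspicious(self, challenge_id: str) -> bool:
        started = self.issued.get(challenge_id)
        if started is None:
            return True  # answer to a challenge we never issued
        return (time.monotonic() - started) < self.min_solve
```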

Browser fingerprinting: Even with the User-Agent spoofing, sites might check for other browser characteristics (JavaScript execution environment, WebGL capabilities, etc.) that this command-line tool can't replicate.

Session/cookie requirements: Sites might require maintaining session state, handling multiple redirects, or storing cookies in specific ways that a simple fetch tool might not handle.

Adaptive difficulty: Sites could implement systems that increase difficulty for IPs/circuits that make repeated requests, specifically to thwart automated scraping.
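The adaptive part amounts to ramping the PoW cost with the request count per client, so scraping gets exponentially more expensive. A sketch with invented numbers:

```python
# Sketch of per-client adaptive PoW difficulty: each block of repeat
# requests from the same IP/circuit raises the hash-prefix requirement.
# base/step/cap values are invented for illustration.
from collections import defaultdict

class AdaptiveDifficulty:
    def __init__(self, base: int = 4, step_every: int = 10, cap: int = 8):
        self.base = base              # hex-zero prefix for a fresh client
        self.step_every = step_every  # requests per +1 difficulty
        self.cap = cap                # hard ceiling
        self.counts: defaultdict[str, int] = defaultdict(int)

    def difficulty_for(self, client: str) -> int:
        """Count this request and return the difficulty to demand."""
        self.counts[client] += 1
        bump = (self.counts[client] - 1) // self.step_every
        return min(self.base + bump, self.cap)
```

Since each extra hex zero multiplies expected solve cost by ~16, even a one-step bump makes bulk fetching dramatically slower while a first-time visitor never notices.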

Browser fingerprinting
Even with User-Agent spoofing, Tartarus could check for:
JavaScript execution environment
WebGL capabilities
Canvas fingerprinting
HTTP header patterns that don't match real browsers
TLS fingerprinting
tor-fetcher is a command-line HTTP client - it lacks the full browser fingerprint.

Session/cookie complexity
Tartarus could require:
Multiple redirects with state
Complex cookie handling
Proof that you executed JavaScript
Time-based tokens that expire quickly.

At the end of the day it's up to Null to decide how he wants his tool to work, but making erriot angy is funny.
 