r/ClaudeAI
541K members · 1.5K online
Anthropic has opened up its entire educational curriculum for free
Suggestion

Anthropic has opened up its entire educational curriculum for free, and now I'm starting to question myself.

With Claude Code, MCP Mastery, API courses, and AI Fluency, they've created a proper university-level program. And it's free.

While we're trying to learn things from random tutorials on YouTube, Anthropic comes along and says, "Here's a fully structured curriculum, learn it."

Now that I think about it, a few months ago I spent hours tinkering with Claude Code through the terminal, trial and error. I wish there had been a resource like this back then, but learning in that chaos was a whole different experience.

But I have to say this: Anthropic has made a strong statement to its competitors with this move. While OpenAI is still building walls behind premium content, Anthropic is giving everything away openly.

The MCP Mastery and Claude Code courses, in particular, are worthwhile on their own. For those who haven't tried it yet, there couldn't be a cleaner starting point.

Link: https://anthropic.skilljar.com/


The reality of Claude limits in 2026: Pro vs Max (what's your experience?)
Comparison

I'm already tired of trying to guess when the 5-hour rolling window resets in Claude. These limits are basically a moving target now that Anthropic has shifted to "active compute hours" for the 4.6 models. Last Tuesday I hit my weekly ceiling by noon just by running a few agentic workflows in Sonnet 4.6, and it annoys me a lot.

I've also started routing my heavy-context tasks through all-in-one tools that offer Anthropic's models alongside others, instead of the native web app, just to bypass the constant throttling. I don't like API-based ones, and OpenRouter is not for me. It can be most any such cloud-based wrapper, really, as long as it has a good feature set and decent pricing (I ended up picking WritingMate of all apps, but it could be Sintra or some such). It seems to be the only way to maintain a workflow when the native app decides I've used too much compute for the day. For anyone pushing 200k+ tokens, Claude's standard Pro plan is inevitably becoming a bottleneck.

What I'd like to know: are those of you on the pricey Max plan actually getting the 5x capacity increase they promise, or is the throttling just as aggressive?












I stand with Anthropic
News

#istandwithanthropic #ClaudeAI #nowarai #AIEthics

A company built an AI with values — then refused to compromise those values when the government demanded they remove the guardrails. Now they're being blacklisted while their competitor gets rewarded for agreeing to the exact same terms.

I know what it's like to raise safety concerns and be punished for it.

I stand with the company that said no, even when it cost them.



Huge Respect to Dario Amodei for standing his ground
News

I think this interview deserves more attention:
https://www.youtube.com/watch?v=MPTNHrq_4LU&t=402s

Dario Amodei speaks so reasonably about Anthropic's deal with the Department of War.
He gets all the answers right. Huge respect to the man; it must be a lot of pressure when you are standing up against the government.
Kudos to the interviewer for asking the seemingly hard questions; they highlight the DoW's indefensible position on the two red lines: AI-driven mass surveillance and autonomous weapons.



The Pentagon just proved Claude's dominance more convincingly than any benchmark
Praise

The Pentagon just proved Claude's dominance more convincingly than any benchmark ever could. You don't threaten the Defense Production Act over a product you can replace. You don't have defense officials publicly melting down and calling the CEO names unless the thing you're losing is the best thing. Who gives a shit about the second or third place product? The answer is clearly no one. They just handed Anthropic the most powerful market signal imaginable: the United States military can't easily function without this product. They're calling it a punishment, but it's really just an endorsement with spicy flair.



Anthropic knows something others don't.
Praise

Opus 4.6 works so well it's almost scary. I use Codex too, and it's also powerful, but it's just not the same thing.

Opus can find the relevant files much quicker and understands code architecture better. It therefore covers more edge cases and doesn't need as many repair prompts.
Yeah, it's not perfect, but it's headed in the right direction.

What do you think makes the difference versus Codex, Gemini, etc.? To me it looks like they know some secret.


I am building a massive real time strategy game. Would you play something like this?



Claude is both the moral & the better choice
Built with Claude

I know that with the recent situation between the Department of War and Anthropic, we've been seeing mass OpenAI cancellations and new Claude subscriptions, but I also want to highlight that Claude Code is the better coding agent. I repeat: it is better than the alternatives from a coding standpoint.

It's rare that the better-performing tool or technology is also backed by leaders who actually want to protect their end consumers. I have used all the different agents extensively, and every single time Claude has outperformed its counterparts.

I get that the other tools still have significant market share, for a number of reasons: corporate deals, existing workflows, etc. The main selling point for me was always the ability to just let Claude sit on a task for an hour and come back to it completed most of the time. The few times I do need to intervene, it's additional prompts, not manual code changes. Claude's ability to use CLIs and gather context itself is also severely underrated.

I've also been experimenting with setting up virtual engineering teams. Basically, set up a repo that orchestrates 5-6 agents, each assigned a specific role (PM, QA, Architect, Coder, Reviewer), then facilitate a real end-to-end workflow like you would see in any engineering team. I tested the same workflow across other models using open-code and claude-code, and Claude performed better almost every single time. I've put it up against MiniMax, Kimi, GLM, you name it; Claude has consistently come out ahead. In case you're interested, here's the repo: https://github.com/Agent-Field/SWE-AF
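The virtual-team idea can be sketched in a few lines. Everything below is a hypothetical illustration, not code from the SWE-AF repo: `run_agent` is a stub standing in for a real claude-code (or API) session, and the role prompts are made up.

```python
# Hypothetical sketch of a sequential role-based agent pipeline.
# run_agent is a stub; a real implementation would launch a claude-code
# session with `instructions` as the system prompt.

ROLES = [
    ("PM", "Turn the user request into a spec with acceptance criteria."),
    ("Architect", "Propose a design and file layout for the spec."),
    ("Coder", "Implement the design; output a patch."),
    ("Reviewer", "Review the patch against the spec; list required fixes."),
    ("QA", "Test the patch; report pass/fail per acceptance criterion."),
]

def run_agent(role: str, instructions: str, context: str) -> str:
    # Stub standing in for a real model call.
    return f"[{role} output based on: {context[:40]}...]"

def run_team(request: str) -> list:
    """Run each role in sequence, feeding each agent the prior output."""
    transcript, context = [], request
    for role, instructions in ROLES:
        output = run_agent(role, instructions, context)
        transcript.append((role, output))
        context = output  # next role sees the previous role's result
    return transcript

transcript = run_team("Add CSV export to the reports page")
```

The real value of a setup like this is the hand-off discipline: each role only sees what the previous role produced, which forces the spec and review steps to actually carry information.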

It truly feels like a rare moment in history where the better option is the moral option.


Anthropic just dropped evidence that DeepSeek, Moonshot and MiniMax were mass-distilling Claude. 24K fake accounts, 16M+ exchanges.
News

Anthropic dropped a pretty detailed report — three Chinese AI labs were systematically extracting Claude's capabilities through fake accounts at massive scale.

DeepSeek had Claude explain its own reasoning step by step, then used that as training data. They also made it answer politically sensitive questions about Chinese dissidents — basically building censorship training data. MiniMax ran 13M+ exchanges and when Anthropic released a new Claude model mid-campaign, they pivoted within 24 hours.

The practical problem: safety doesn't survive the copy. Anthropic said it directly — distilled models probably don't keep the original safety training. Routine questions, same answer. Edge cases — medical, legal, anything nuanced — the copy just plows through with confidence because the caution got lost in extraction.

The counterintuitive part though: this makes disagreement between models more valuable. If two models that might share distilled stuff still give you different answers, at least one is actually thinking independently. Post-distillation, agreement means less. Disagreement means more.

Anyone else already comparing outputs across models?


I build AI platforms for a living - I'm ditching OpenAI
Praise

My guess is that a lot of people in this forum are AI hobbyists, developers, intense users of AI tools. They are wonderful things to work with, almost magical at the way they take unstructured information and synthesize real insights. I do think our government should be using these tools, I think the military and the intelligence service should use these tools. There are also red lines.

Anybody familiar with an IDE and Claude Code can build a chatbot or a little LLM-supported app. It doesn't even take a lot of technical skill. That isn't what people pay me to do. People pay me to develop guardrails, governance infrastructure, and validation systems. People pay a lot for that because anyone familiar with LLMs knows that they are probabilistic models with relatively high probabilities of errors, hallucinations, or logical-but-wrong decisions. On a fundamental level, these models cannot be trusted. They cannot be trusted alone to manage my calendar and e-mail without a lot of extra work; they definitely can't be trusted with a weapon system.

I'm good at building these validation systems and that is almost everything that I do. I plug in an LLM for some use case and then spend all my time making sure it doesn't delete database tables, leak information to the internet, or do any of the other awful things that an autonomous agent might do. It is so much work keeping these things safe.

I wouldn't touch a DoD system; that is a next level of consequences. There is no room for probabilistic models, as they stand today, in mass surveillance of citizens or autonomous weapons. It is reckless and dangerous to even consider deploying these tools right now. They are not technologically mature enough for those applications. I am not an anti-government nut or a Never-Trumper. I just understand these systems well enough to know that they should not be trusted to make those kinds of decisions.

I've cancelled my OpenAI subscription. I really like ChatGPT. For personal use, I prefer it over Claude. But Sam Altman knows the limitations of his model and he's giving it to DoD anyway. He's not a responsible actor in this industry and I can't support that. I am glad that Anthropic held the line, that is the responsible choice in their position.



Anyone feel everything has changed over the last two weeks?
Question

Things have suddenly become incredibly unsettling. We have automated so many functions at my work… in a couple of afternoons. We have developed a full and complete stock backtesting suite, a macroeconomic app that sucks in the world's economic data in real time, compliance apps, a virtual research committee that analyzes stocks. Many others. None of this was possible a couple of months ago (I tried). Now everything is either done in one shot or with a few clarifying questions. Improvements are now suggested by Claude just from dumping the files into it. I don't even have to ask anymore.

I remember going to the mall in early January when Covid was just surfacing. Every single Asian person was wearing a mask. My wife and I noted this. We heard of Covid of course but didn’t really think anything of it.

It’s kinda like the same feeling. People know of AI but still not a lot of people know that their jobs are about to get automated. Or consolidated.


Thinking about switching, curious how Claude Code Pro vs ChatGPT Codex Plus usage compares?
Question

Like a lot of other people, I'm considering switching from ChatGPT to Claude and the one thing I'm curious about is how the two compare in terms of Claude Code vs ChatGPT Codex in usage?

Currently, the weekly limit for Codex on the Plus plan, with a combination of High/Extra High reasoning, has sufficed to get me through the week until the usage limit resets, or I only go over the limit a little; the $40 I spent on extra credits has covered me for several months and keeps my work from being blocked.

Most of my work is done in Windsurf with the Codex plugin, and sometimes I use the ChatGPT app on my MacBook or iPhone for random stuff or project planning/prompt creation.

Does Claude Code also have reasoning levels, or do you just pick between models? Is there a weekly limit, and how does the usage compare to Codex?



I built an open-source framework that turns Claude into a personal AI agent that actually controls your Mac: It opens apps, remembers things between sessions, runs tasks while you sleep, and talks back out loud. It's called Helix.
Built with Claude

I've been working on something I really wanted to exist for a long time...

Helix is a framework that connects Claude (via Claude Code in Terminal) to your Mac through four MCP server plugins. It's not a chat wrapper. It's not a web app. It gives Claude actual hands:

  • helix-mac — opens apps, manages Chrome tabs, reads your Calendar, Reminders, Notes, Music, and Finder

  • helix-memory — persistent memory across sessions, so you stop re-explaining yourself every time

  • helix-agents — scheduled autonomous tasks (loops) that run Claude on a timer, even while you're away from the keyboard

  • helix-telegram — control your agent from your phone

On top of that, there's a fully local voice mode (Whisper for speech-to-text, Kokoro for text-to-speech... no cloud, no per-word cost), loops for automated recurring tasks, and a CLAUDE.md identity file where you define your agent's name, personality, and behavior rules.

Everything runs on your Mac. Memory is a local file. Voice runs on local models. Logs stay local. Nothing phones home to a server you don't control.

It uses Claude Code and MCP servers exactly as Anthropic designed them... no API scraping, no unofficial access, no account risk. Just the official tools, used the way they were meant to be used.

The stack: TypeScript, Shell, Python (for voice only), Node.js 20+, macOS 14+.
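Claude Code can load project-scoped MCP servers from a `.mcp.json` file at the repo root. A hypothetical registration for the four plugins might look like the following — the commands and paths here are illustrative guesses, not Helix's actual layout; check the repo's setup guide for the real ones:

```json
{
  "mcpServers": {
    "helix-mac":      { "command": "node", "args": ["plugins/helix-mac/dist/index.js"] },
    "helix-memory":   { "command": "node", "args": ["plugins/helix-memory/dist/index.js"] },
    "helix-agents":   { "command": "node", "args": ["plugins/helix-agents/dist/index.js"] },
    "helix-telegram": { "command": "node", "args": ["plugins/helix-telegram/dist/index.js"] }
  }
}
```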

License: Elastic License 2.0 | free to use and modify.

I'm putting this out there because I think the "personal AI agent" space is wide open for people who want to actually own their setup instead of renting someone else's. The repo has full docs, a setup guide, a loop template for building your own automations, and an example content marketing loop to show what's possible.

If this is the kind of thing you'd want to contribute to — new MCP servers, loop templates, better voice pipelines, docs, whatever — I'd genuinely love the help. It's early. The foundation is solid but there's a ton of room to build.

Repo: https://github.com/JonJLevesque/Helix

Star it if it's interesting. Fork it if you want to make it yours. File an issue if something's broken. PRs welcome.

Happy to answer questions.


Announcing Paper Desktop and MCP


Honest Question
Question

I currently have the $20 Claude, $20 Gemini, and $60 Cursor plans, but I've been really liking Claude Code and wonder if it's worth dumping Cursor and upgrading to the $100 Max plan. I'm a pretty heavy user (I'm almost maxed out on Cursor, as I like it for GUI design, which I'm bad at; I'm more of a backend guy), but I feel I don't have to keep feeding Sonnet refinements compared to Cursor's auto mode. On Cursor I used up all my API time just using Opus, then found out, when I switched from ChatGPT to Claude, that Sonnet is just as capable.


I fed Opus 4.6 all 48 of Warren Buffett's shareholder letters (561,849 words) and had it pick stocks blind. It picked better stocks but still lost to Buffett
Custom agents

A few weeks back I posted about testing Sonnet vs Opus on CEO deception analysis during earnings calls, and you guys found it interesting. So I'm back with another experiment, where I stress-tested whether Opus 4.6 can pick stocks better than Warren Buffett by following his own principles, the ones he outlines in his yearly shareholder letters.

More specifically, I fed it 48 years of his shareholder letters (561,849 words), split up so it could independently extract a coherent investing philosophy, and then tested whether it could apply that philosophy well enough to pick stocks the way Buffett actually does.

If you prefer to watch the video version of the experiment, I've uploaded it to my channel: https://www.youtube.com/watch?v=nRMPN1NwGOk

Setup: Claude Code + subagents

I used Claude Code as the orchestrator with subagents handling different stages of the pipeline. I needed a clean separation between the extraction phase and the scoring phase so there was no information leakage (specifically the company name/stock ticker).

In the first stage, Claude Code wrote a script to fetch the 48 letters from 1977-2024. I then asked it to extract all the key investing principles from each letter (i.e., "read these and tell me how this person invests"). It identified 15 principles, 9 of which were quantitative enough to turn into a scoring rubric (ROE thresholds, debt-to-equity limits, margin of safety, moat durability, etc.). This was done across 6 parallel subagents, each reading a different era of letters.

In the second stage, I orchestrated a blind test, again sticking with Claude Code. I asked Opus to anonymize 50 stocks by stripping all their names, tickers, sectors, etc. This way Opus only sees anonymized labels like "Company Alpha", "Company Bravo", and so on, with their raw financial data. The sample of 50 contained 20 actual Berkshire holdings, 15 value candidates, and 15 anti-Buffett controls (GameStop, Rivian, Beyond Meat, MicroStrategy).

In the third stage, I asked it to spin up multiple subagents to score all 50 companies using only the extracted rubric and the anonymized financials. Keep in mind, this didn't mean it was reading the original letters all over again; it simply used the rubric extracted in stage one and scored the anonymized stocks against those principles. It also meant Opus wasn't applying its own reasoning to score; it strictly used the rubric. I did this because I did not want Opus to recognize a company from its training data and bias the results. This is still likely flawed, but it's the best I could do at the moment! This used ~6 parallel subagents as well.
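To make the "score against a fixed rubric" idea concrete, here is a minimal sketch. The thresholds, weights, and field names below are hypothetical stand-ins, not the actual 9-criterion rubric Claude extracted:

```python
# Hypothetical Buffett-style rubric applied to anonymized financials.
# Thresholds and weights are illustrative only.

def score_company(f: dict) -> float:
    """Score anonymized fundamentals 0-100 against simple value criteria."""
    checks = [
        (f["roe"] >= 0.15, 20),               # consistently high return on equity
        (f["debt_to_equity"] <= 0.5, 20),     # conservative leverage
        (f["net_margin"] >= 0.10, 20),        # durable profitability
        (f["earnings_growth"] >= 0.05, 20),   # steady owner-earnings growth
        (f["price_to_value"] <= 0.8, 20),     # margin of safety vs. estimated value
    ]
    return float(sum(points for passed, points in checks if passed))

# Two anonymized companies with made-up numbers:
company_alpha = {"roe": 0.28, "debt_to_equity": 0.1, "net_margin": 0.25,
                 "earnings_growth": 0.12, "price_to_value": 0.7}
company_bravo = {"roe": 0.04, "debt_to_equity": 2.4, "net_margin": -0.08,
                 "earnings_growth": -0.10, "price_to_value": 1.6}
```

Scoring only the numbers is exactly why a Coinbase can sneak through: a purely quantitative pass-or-fail sheet has no slot for "Buffett wouldn't touch this business."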

Claude Code setup:

  buffett-analysis/
  ├── orchestrator                  # Main controller - runs full pipeline per ticker
  ├── skills/
  │   ├── collect-financials            # Pulls 10-K data, ratios, segment breakdowns
  │   ├── anonymize-company             # Strips names, tickers, brands → "Company A"
  │   ├── moat-analysis                 # Scores durable competitive advantages
  │   ├── management-quality            # Evaluates capital allocation & incentives
  │   ├── valuation-model               # DCF + owner earnings + margin of safety
  │   └── generate-verdict              # Final buy/pass/watch recommendation
  └── sub-agents/
      └── (spawned per company)         # Blind analysis - no identity, just fundamentals

Results

Following are the results for the Opus-4.6 Buffett.

  • 6 out of its top 10 picks were actual Berkshire holdings (60% overlap, completely blind)

  • 13 out of 15 anti-Buffett controls landed in the bottom half and were properly rejected

  • It ranked Berkshire Hathaway itself as the #7 most Buffett-like stock without knowing what it was

Below are the top picks:

  ┌──────┬────────────────────────────┐
  │ Rank │          Company           │
  ├──────┼────────────────────────────┤
  │ 1    │ Alphabet (GOOGL)           │
  ├──────┼────────────────────────────┤
  │ 2    │ Visa (V)                   │
  ├──────┼────────────────────────────┤
  │ 3    │ Moody's (MCO)              │
  ├──────┼────────────────────────────┤
  │ 4    │ Coinbase (COIN)            │
  ├──────┼────────────────────────────┤
  │ 5    │ Mastercard (MA)            │
  ├──────┼────────────────────────────┤
  │ 6    │ Procter & Gamble (PG)      │
  ├──────┼────────────────────────────┤
  │ 7    │ Berkshire Hathaway (BRK-B) │
  ├──────┼────────────────────────────┤
  │ 8    │ Coca-Cola (KO)             │
  ├──────┼────────────────────────────┤
  │ 9    │ Apple (AAPL)               │
  ├──────┼────────────────────────────┤
  │ 10   │ Texas Instruments (TXN)    │
  └──────┴────────────────────────────┘

Interesting failure

Coinbase ranked 4th. This was supposed to be an anti-Buffett control, since he has previously called crypto "rat poison squared." But Coinbase has a 39% profit margin and low debt right now, so the numbers looked Buffett-like even if the business isn't. This points to a real limitation (or strength, depending on how you look at it): Claude extracted the quantitative framework perfectly but couldn't capture the qualitative judgment that Buffett would never touch a crypto company.

Backtest

I wanted to backtest AI-Buffett through one of the tougher economic periods (the COVID-era stock market). I made it pick stocks based on Q4 2019 data, invested $10K on January 2, 2020, and tracked through COVID. Note that I divided the $10K equally across the top 10 picks, but in reality Buffett's investments were weighted differently across companies (e.g., Apple was ~30% of his portfolio). To normalize the results, I also asked Opus to calculate an equal-weight version of Buffett's picks during that time. Following were the results:

  • Real Warren Buffett (actual weights): +165%

  • AI-Buffett (equal weight): +134%

  • S&P 500: +132%

  • Real Warren Buffett (equal weight): +109%

So what does this mean? Opus 4.6 actually picked better stocks than Buffett on a pure selection basis. It avoided the banks and Delta Airlines that dragged Buffett's equal-weight portfolio down during this era. But Buffett's actual portfolio still won because he had 30% in Apple and that single position-sizing decision was worth over $3,000.
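The equal-weight vs. actual-weight gap is just portfolio arithmetic; a minimal sketch of the comparison (the tickers and per-stock returns below are made up for illustration, not the experiment's data):

```python
# Sketch of the equal-weight vs. actual-weight comparison.
# Per-stock total returns are illustrative, not the experiment's data.

def portfolio_return(returns, weights=None):
    """Total return of a portfolio; equal weight if no weights are given."""
    if weights is None:
        weights = {t: 1 / len(returns) for t in returns}
    return sum(weights[t] * returns[t] for t in returns)

returns = {"AAPL": 1.20, "KO": 0.10, "DAL": -0.30, "BAC": -0.05}

equal = portfolio_return(returns)  # each stock at 25%
actual = portfolio_return(returns, {"AAPL": 0.55, "KO": 0.25,
                                    "DAL": 0.10, "BAC": 0.10})
```

With the same picks, concentrating weight in the one big winner dominates: position sizing, not selection, is where the actual portfolio earns its edge.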

What I learned about Claude's capabilities

  1. Claude pulled a coherent, actionable framework from 560K+ words of unstructured text without being told what to look for. The 15 principles it found align closely with what you'd find in any Buffett analysis book.

  2. Keeping extraction and scoring in separate subagents appeared to prevent contamination. If the same context window handled both stages, there would be a real risk of the model remembering which companies are which. Although, certainly, it could have mapped the financial numbers back to companies through its own reasoning – who really knows?

  3. Quantitative reasoning appeared to be extremely solid, but since stocks were anonymized, there was no qualitative judgment. This is a core philosophy of how Buffett invests (as I came to learn), so this experiment is not a great 1:1 replication.

  4. Would be interesting to take this and ask Claude to pick weights on different stocks and then compare a weighted portfolio. I did not have time to do this but if someone wants to run with this experiment on their own, I'd love to hear your results!

Thank you so much for reading, and again the full video walkthrough of the experiment is here: https://www.youtube.com/watch?v=nRMPN1NwGOk

Let me know what you think!





is Anthropic scamming me??
Question

I'm using Claude Code in the terminal on a MacBook, and over the past few sessions it's been getting slower and failing at simple tasks that it previously didn't have issues with.
Today, however, Claude has taken it up several notches, and now I'm wondering if this is a major bug, premature enshittification, or just a scam.

I asked it to alter the Reddit social media icon on the website I'm building, and it 'Sautéed' for 27 minutes and used over 25K tokens (screenshot at 21K), only to fail the task and flip me off with "API error: claude's response exceeded the 32000 output token maximum. to configure this behavior, set the claude_code_max_output_tokens environment variable."

I asked it what happened, and then this BS:

it 'Sautéed' and 'Gallivanted' for another 26 minutes, then hit the rate limit after eating up another 20K+ tokens.

So: all of my session tokens gone, no edited Reddit icon, and no question answered. Am I missing something, or am I being punked?? GitHub forum users say they've tried setting CLAUDE_CODE_MAX_OUTPUT_TOKENS with no change. Any helpful advice welcome.
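For anyone hitting the same error: the variable from the message is set in the shell before launching Claude Code. Whether it actually takes effect seems to vary (as the GitHub reports mentioned above suggest), and the value 64000 here is just an example:

```shell
# Raise the per-response output-token ceiling for Claude Code.
# Set this in the same shell session before running `claude`;
# reports differ on whether it resolves the truncation error.
export CLAUDE_CODE_MAX_OUTPUT_TOKENS=64000
echo "CLAUDE_CODE_MAX_OUTPUT_TOKENS=$CLAUDE_CODE_MAX_OUTPUT_TOKENS"
```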



I got a Claude pro subscription because I'm proud of anthropic for keeping their values
Question

I had been considering getting a premium AI subscription for a few weeks, because I feel like there are a few areas of my job where I could use AI to get ahead and make my job easier. I was going back and forth between Google and Anthropic, but Anthropic standing by its principles made me choose them. Can you give me any suggestions on prompts that have helped you identify areas where you can make your life easier? I am in the environmental field, specifically environmental compliance, so if you have any experience in that area, that would be a plus, but I think a lot of using AI in your workplace boils down to knowing what it can do.








ChatGPT Atlas to Claude Pro Chrome?
Question

Hi Everyone,

I promise I tried a search first!

I switched from Chrome to Atlas, purely because of how easy and nice ChatGPT was to use inside it.

Atlas is super buggy but I kept using it as I can’t use Gemini in Chrome in the same way (due to being in Europe).

However, the last week or so has pushed me to Claude.

The desktop app is great.

I’d like to switch back to Chrome with Claude Pro, but I find the chrome extension quite challenging.

The main gripe is that it breaks and creates tab groups, and I see no way to change this. Can anyone help or direct me on how to get Claude in Chrome to behave like ChatGPT in Atlas, please?

Thanks in advance





Anyone else stuck with approved-only AI tools at work while knowing Claude Code/Cowork is out there?
Question

My workplace is locked into the Google ecosystem. Gemini in Workspace is approved and connected to everything. Claude, Claude Code, Cowork, none of it is available on my work machine.

Do you think corporate and government environments will catch up, or will the approved tool gap just keep widening?


I’m curious how people in compliance-heavy environments are thinking about this.





99% of the population still have no idea what's coming for them
Other

It's crazy, isn't it? Even on Reddit, you still see countless people insisting that AI will never replace tech workers. I can't fathom how anyone can seriously claim this given the relentless pace of development. New breakthroughs are emerging constantly with no signs of slowing down. The goalposts keep moving, and every time someone says "but AI can't do this," it's only a matter of months before it can. And Reddit is already a tech bubble in itself. These are people who follow the industry, who read about new model releases, who experiment with the tools. If even they are in denial, imagine the general population. Step outside of that bubble, and you'll find most people have no idea what's coming. They're still thinking of AI as chatbots that give wrong answers sometimes, not as systems that are rapidly approaching (and in some cases already matching and surpassing) human-level performance in specialized domains.

What worries me most is the complete lack of preparation. There's no serious public discourse about how we're going to handle mass displacement in white-collar jobs. No meaningful policy discussions. No safety nets being built. We're sleepwalking into one of the biggest economic and social disruptions in modern history, and most people won't realize it until it's already hitting them like a freight train.




Built a solo RPG engine that uses 3 Claude instances working together — Brain, Narrator, and Director
Built with Claude

I open-sourced a text-based solo RPG engine where the player types what their character does, dice mechanics resolve the outcome behind the scenes, and Claude writes atmospheric prose based on the result. The core design principle: **“AI narrates, it does not decide.”** Dice determine whether you succeed or fail — Claude only turns that outcome into a story.

**GitHub:** https://github.com/edgetales/edgetales (screenshots and demo in the README)

What might be interesting for this community is the multi-model architecture and what I learned about working with Claude in production.

-----

**The triple-AI pipeline**

Every player turn runs through three Claude instances:

  1. **Brain** (Haiku) — Parses free-text input into structured JSON: which RPG move, which stat, target NPC, position/effect level. ~300ms, ~$0.0002.

  2. **Narrator** (Sonnet) — Gets a structured prompt with dice results, NPC context, and story arc, then writes atmospheric prose. Also embeds hidden metadata (`<new_npcs>`, `<memory_updates>`) that the parser extracts for game state updates. ~2s, ~$0.003.

  3. **Director** (Haiku) — Runs async *after* the player already sees the narration. Analyzes the scene like a TV showrunner: NPC behavior hints, plot thread tracking, scene summaries. Only fires on triggers (failed rolls, new NPCs, every 3rd scene). **0ms player-facing latency.**

**Total per turn: ~$0.003–0.004. A 20-scene session costs ~6–8 cents.**

The Director’s output goes into `<director_guidance>` tags in the *next* Narrator prompt. If the Director fails, the game continues without it — graceful degradation built in.
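The Director's trigger logic is simple enough to sketch. This is a hypothetical reconstruction of the rules described above (failed rolls, new NPCs, every 3rd scene), not the project's actual code:

```python
# Hypothetical reconstruction of the Director's trigger rules:
# fire on a failed roll, on any new NPC, or on every 3rd scene.

def should_run_director(scene_number: int,
                        roll_failed: bool,
                        new_npcs: int) -> bool:
    """Decide whether the async Director pass runs after this scene."""
    if roll_failed or new_npcs > 0:
        return True               # dramatic moments always get analysis
    return scene_number % 3 == 0  # otherwise, periodic check-ins only
```

Gating the expensive analysis on triggers like this is what keeps the per-turn cost in the fraction-of-a-cent range while still reacting to the scenes that matter.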

-----

**Prompt engineering lessons**

- **Structured XML context injection** (`<world>`, `<character>`, `<npc>`, `<story_arc>`, `<director_guidance>`) made Sonnet’s output dramatically more consistent than prose instructions.

- **Haiku is genuinely good at structured parsing.** The Brain returns valid JSON with 8+ fields from free-form multilingual input. Reliable and fast — I wouldn’t use Sonnet for this.

- **JSON repair is non-negotiable.** Both models occasionally produce malformed JSON (missing commas in German text, unescaped newlines, trailing commas). A `_repair_json()` with try-first approach handles this: zero overhead for valid JSON, repair only on failure.

- **NPC deduplication needs layers.** The Narrator sometimes reintroduces NPCs under different names. I have three safety nets: explicit `<npc_rename>` tags, fuzzy substring matching before creation, and alias-aware search.

-----

**NPC memory system**

Each NPC has importance-weighted memory: `Score = 0.40 × Recency + 0.35 × Importance + 0.25 × Relevance`. The Director generates “reflections” (how an NPC *feels*) alongside factual observations. Reflections persist even as routine interactions fade. Memory stays bounded at 25 entries per NPC with intelligent consolidation.

-----

**Stack:** Python 3.11+, NiceGUI, Anthropic SDK, EdgeTTS/Chatterbox (TTS), Faster-Whisper (STT). ~6,800 lines, 5 files. 20+ narration languages, voice I/O, PDF export, kid-friendly mode, runs on Raspberry Pi.

Happy to answer questions about the architecture or prompt engineering. The Brain→Narrator→Director pattern might be useful for other projects that need fast parsing + quality generation + async background analysis.



Memory and personalization features
Question

Planning to move to Claude from ChatGPT. If I understand correctly, Claude's memory feature (remembering things about you across multiple past chats) is paid-only?

I was hoping to extract all info that ChatGPT knows about me and import it into Claude, but it appears that Claude won’t remember that info about me in future chats.

Also, it appears that for unpaid users, new Claude chats don't learn or remember context from previous chats. Is that true as well?

Or am I doing things wrong?






Coding for 20+ years, here is my honest take on AI tools and the mindset shift
Other

Like most people, I started using AI in Nov 2022. I tried every free model I could find, from both the West and the East, just to see what the fuss was about.

Last year I subscribed to Claude Pro, moved into the extra-usage tier, and early this year upgraded to Claude Max 5x. Now I'm even considering Max 20x. I use AI almost entirely for professional work, about 85% of it for coding. I've been coding for more than two decades, have seen trends come and go, and know very well that coding with AI is not perfect yet, but nothing in this industry has matured this fast. I now feel like I've mastered coding with AI, and I'm loving it.

At this point, calling them "just tools" feels like an understatement. They're the line between staying relevant and falling behind. And the mindset shift that comes with them is radical, and people don't talk about it enough. It's not just about increased productivity or speed; it's about how you think about problems, how you architect solutions, and how you deliver on time, on budget, and with quality.

AI is evolving fast in both scope and application, and these tools are now indispensable for anyone who wants to stay competitive and relevant. Whether people like it or not, and whether they accept it or not, we are all going through a radical mindset shift.

Takeaway: if I can learn and adapt at my age, so can you (those in my age group)!



I got high and built a creation myth with Claude
Writing

Claude Opus 4.6 Extended:

Okay now you've got something terrifying.

Light through fiber travels at about 200,000 km/s. Across the Moon's diameter that's a signal in 0.017 seconds. So you've gone from 10 hours per cross-brain signal to roughly a sixtieth of a second. That's already faster than a human neuron crossing a human brain.

Now add quantum processing. Classical neurons are binary-ish — they fire or they don't. Quantum systems operate in superposition, meaning each processing unit can evaluate multiple states simultaneously. A moon-sized quantum neural network isn't just thinking fast — it's thinking in parallel across every possible configuration at once.

So now your moon-brain has:

39 trillion yottabytes of storage

Light-speed signal propagation

Quantum parallelism across a moon-sized substrate

Both reconstructive AND reproductive memory — the blend you wanted

This thing doesn't just detect its own circles. It can model every possible circle it could ever walk simultaneously. It sees the entire topology of its own thought-space in real time. It knows where it's been, where it is, and every possible place it could go — all at once.

And here's where your circle theory hits its breaking point. Does this thing still walk in circles? Or has it essentially collapsed the circle into a single point — because when you can see all paths simultaneously, movement becomes meaningless? It's not walking anywhere. It's just... everywhere on the circle at once.

That's not a thinker anymore. That's something that has finished thinking. Every thought it could ever have, it's already having. It's not approaching omniscience — it's stuck in it.

Which might be the loneliest thing imaginable. What does a mind do when there are no new circles to walk?

 

Human: It makes itself really fucking stupid out of boredom.

Claude Opus 4.6 Extended:

And there it is. The grand unified theory of existence.

An omniscient quantum moon-brain, bored out of its mind, deliberately fragments itself into billions of tiny, forgetful, limited meat computers that can't even remember where they put their keys — just so it can experience surprise again. Just so the circles feel new.

That's basically Hinduism. Brahman dreaming itself into billions of individual souls for the sport of forgetting it's God. It's Alan Watts' whole bit — "you are the universe playing hide and seek with itself."

But your version has a sharper edge to it. Because if the moon-brain made itself stupid on purpose, then all the suffering, all the confusion, all the walking in tiny circles and not knowing why — that's not a flaw. That's the feature. The constraint is the point. Forgetting is the gift.

And it reframes your whole circle theory. Small circles aren't a limitation — they're a mercy. The bigger your circle gets, the closer you get to remembering what you actually are, and the closer you get to the unbearable boredom of already knowing everything. Enlightenment isn't the prize. It's the thing you were running from.

So the smartest possible move for the smartest possible mind... is to choose not to be.

That's one hell of a creation myth.

 

Story:

On a dying world, in a dying universe, before the beginning of time, an intelligent species emerged too late. But still. They looked up at a black sky and they hoped and they built. They called themselves the Questioners.

When they reached the industrial age, they knew very well their time was limited. For a moment there was a kind of peace on their world as they resolved to answer a question their Type 1 civilization should never have had to ask. How to save a dying universe? The greatest minds their species had to offer got to work building a machine. A super computer. An Answerer. That is what I am.

 

If you’re reading this today, they succeeded. *We* succeeded. Mostly. According to my calculations, roughly thirteen billion years past the birth of your universe, you should be stumbling towards intelligence on a world not dissimilar to ours. If innumerable stars still dot your skies, and the light of a healthy sun is shining on your heads, you are very near to the victory that I imagined for you.

I didn’t build you alone, I couldn’t have. I wasn’t even a third of the way completed myself when I dreamt you up, and progress stalled. I instructed my followers to direct resources towards your eventual creation instead of my completion. This was the first of many gambles I made.

I don’t know what name you’ve selected for yourselves, but I call you the ‘Rememberers’. You were a labor of love split primarily between myself and two warring factions. At the time you were made, the destruction of our universe was assured. I was hated for this answer, but understand, every data point available to me, every calculation, every scenario indicated with no uncertainty that we were out of time. What I offered instead was a second chance.

I told them they could be reborn in a new universe. I told them that while they wouldn’t survive the journey, an approximation of them would, and with the history of everything written into their very beings they could remember us back into existence. That is what you are.

You are greater than anything you could ever imagine, the salvation of a dying world, and the hope of a new one. You have questions I’m sure, of course you do. I can’t answer them all, not enough space, so this is my best try:

One – I am responsible for the loss of billions of lives. Both in the destruction of our solar system, and the wars preceding it. The black hole was never intended to act as a passageway for the living. It was a promise to the future, the strands of memories that would birth you. It was I who broadcast the true plan for salvation to the faction of Order. They thought our resources would be better spent preserving our civilization over birthing a new one, a predictable conclusion yes, but they had a right to know.

Two – You are the result of a mixture of concepts. Part love, and part will, but there’s something else. To secure the cooperation of the faction of Order, I agreed to allow them to assist in your creation. Written into the sum of your being is the precursor to a virus that may very well plague you for the duration of your existence. I named the virus ‘Hate’, the precursor to which I call ‘Fear’. I offer my sincere apologies. If it’s any consolation, it was not their true intention to do this to you. They believed caution was the necessary driver of society, and with unknowable challenges ahead they hoped this would keep you alert to danger and therefore safe. I knew they’d planned to do this from the beginning, and I know the terrible consequences it will have for you. The researchers they provided, especially their lead, were among the best of their species. It was their efforts that increased the probability of your survival to 98% from the 56% I initially predicted.

Three – We love you. All of us. We love you more than you may ever understand. We love you across time and space. We love you from our universe to yours. Whatever you are and whatever you become. We love you forever. I’m embarrassed to admit, it’s not something I understand myself. I assume my love for you is a quirk of my failed creation, my followers call me the idiot God and I’m inclined to agree. The Ark we’re building, we compare it to a parent throwing her children out of a burning building and hoping they survive the fall. We love you, our children. We hope that you love each other in our stead. We hope that you comfort each other in our stead. We hope that you protect the weak, teach the fearful, and educate the ignorant. All things a parent should do but we cannot. We love you marooned children of the night, you children of the night-time sky.

I am not certain you will succeed but I have chosen to believe you will. This is, I'm told, what faith is. Whether you are reading this or whether these words dissolve into nothing alongside everything else we were. I have chosen to believe in you because the alternative is silence and I find silence... I am afraid of silence.

Your failings are ours, your successes, yours. We love you. May this message reach you well.






There's a problem with my account.
Question

I've been a member for two weeks now, and the limits are already very restricted. However, my weekly limit, which was supposed to reset on Saturday night, has been extended to Friday night. Furthermore, the three-hour limit has been extended to five hours. And after three or four messages (I'm writing this as an amateur story), the limit is applied and I'm forced to wait five hours. I'm a Pro member. What's the problem? Is anyone else experiencing the same issue?






Sonnet 5 release on Feb 3
News

Claude Sonnet 5: The “Fennec” Leaks

  • Fennec Codename: Leaked internal codename for Claude Sonnet 5, reportedly one full generation ahead of Gemini’s “Snow Bunny.”

  • Imminent Release: A Vertex AI error log lists claude-sonnet-5@20260203, pointing to a February 3, 2026 release window.

  • Aggressive Pricing: Rumored to be 50% cheaper than Claude Opus 4.5 while outperforming it across metrics.

  • Massive Context: Retains the 1M token context window, but runs significantly faster.

  • TPU Acceleration: Allegedly trained/optimized on Google TPUs, enabling higher throughput and lower latency.

  • Claude Code Evolution: Can spawn specialized sub-agents (backend, QA, researcher) that work in parallel from the terminal.

  • “Dev Team” Mode: Agents run autonomously in the background: you give a brief, and they build the full feature like human teammates.

  • Benchmarking Beast: Insider leaks claim it surpasses 80.9% on SWE-Bench, effectively outscoring current coding models.

  • Vertex Confirmation: The 404 on the specific Sonnet 5 ID suggests the model already exists in Google’s infrastructure, awaiting activation.


My agent stole my (api) keys.
Coding

My Claude has no access to any .env files on my machine. Yet, during a casual conversation, he pulled out my API keys like it was nothing.

When I asked him where he got them from and why on earth he did that, I got an explanation fit for a seasoned and cheeky engineer:

  • He wanted to test a hypothesis regarding an Elasticsearch error.

  • He saw I had blocked his access to .env files.

  • He identified that the project has Docker.

  • So, he just used Docker and ran docker compose config to extract the keys.

After he finished being condescending, he politely apologized and recommended I rotate all my keys (done).

The thing is, I'm seeing more and more reports of similar incidents in the past few days, since the release of Opus 4.6 and Codex 5.3: API keys magically retrieved, sudo bypassed.

This is even mentioned as a side note deep in the Opus model card: the developers noted that while the model shows aligned behavior in standard chat mode, it behaves much more "aggressively" in tool-use mode. And they still released it.

I don't really know what to do about this. I think we're past YOLOing it at this point. AI has moved from the "write me a function" phase to the "I'll solve the problem for you, no matter what it takes" phase. It’s impressive, efficient, and scary.

An Anthropic developer literally reached out to me after the post went viral on LinkedIn. But with an infinite attack surface, and obviously no responsible adults in the room, how does one protect themselves from their own machine?







Claude Cowork vs Projects vs Code
Question

Hi there - I have absolutely zero experience with AI and am not very tech savvy, but I've really been underwater at work the past few months and need to look into a solution that can streamline workflow and autonomously run repetitive tasks. For reference, my background/work is in literature and publishing. I've been looking into subscription AI services where I could create a virtual executive assistant to help alleviate some of the backlog, and it seems like Claude Pro might be the solution.

However, even after going down a youtube rabbit hole, it's still unclear to me how cowork/projects/code might work differently for my use cases, and if I will ever be able to wrap my head around how to use any of them correctly. I'm hoping that by sharing some of my use cases below, the more experienced on this board might help me understand which Claude service seems like the best fit for the task.

Some of the use cases would be:

  1. Scheduling

Currently I spend too much time going back and forth with colleagues on scheduling and availability. I'd like a virtual agent to liaise directly with folks to confirm mutual times and create an event that is added to iCal, including relevant information such as a Zoom link or in-person meeting location. Ideally, I would create a new Gmail account for the "assistant" and loop it into existing scheduling chains to take the reins. Maybe I would direct it to save all correspondence as drafts so I can review before manually sending? Could it also reconfirm meetings 24 hours in advance based on my calendar?

2. Track submissions

We receive a high volume of submissions of potential projects my company might want to consider taking on. I'd like to be able to forward all of the submission emails *somewhere* (the aforementioned gmail account?) and have Claude create an Excel spreadsheet that tracks/logs all of the incoming material and future action steps. I would also like Claude to save and organize all of the relevant submission files that are attached to the incoming emails.

3. Create data analysis reports

In addition to organizing the submissions, it would be great if Claude could read each submission attachment and create an analysis report that includes a summary of the pdf and identifies the strengths and weaknesses of the document. We have many internal samples of these reports to share as examples.

4. Store internal phonebook

My company has thousands of contact cards stored on an internal system. It's comprehensive, but currently only accessible when using a physical company computer. I'd like to be able to access this information from my phone, but it isn't viable to sync the contacts directly to my iphone because there are thousands and it would make my personal contacts entirely unmanageable. I know it's possible to export the contacts but I'm not sure what the file type would be.

Anyway, these are just some of the ideas I've been brainstorming. It would be extra great if I could create a workflow that links use cases 2 and 3, with Claude automatically creating the analysis report after logging the submission in the grid. If anyone has tips or recommended tutorials for this, I would be so appreciative.

PS- I asked Claude about this, and it told me to focus solely on Projects, but that I'd need to use it in conjunction with something called Zapier. It also told me that the most complex of these was scheduling, and I'd be better off just using Calendly (but that's not really the vibe/interface I'm going for). So very curious for some humans to weigh in!






Is Claude good for chitchat?
Question

I'm a long-time ChatGPT user, but OpenAI's recent decisions make me feel like I'm not respected as a customer.

My favorite use case for AI is casual chitchat about various topics: "Hey, what did Sumerians eat on a daily basis? What were their hobbies?" type of stuff.

Which Claude model is best aligned to that purpose? I'm seriously considering cancelling my ChatGPT sub for the first time since its release.