
[–]ClaudeAI-mod-botWilson, lead ClaudeAI modbot[M] [score hidden] stickied comment (4 children)

TL;DR generated automatically after 200 comments.

The consensus here is a resounding "yikes." The community is overwhelmingly disgusted by Anthropic's partnership with Palantir, viewing it as a massive betrayal of their "safe and ethical AI" branding.

  • Many users are voting with their wallets and canceling their paid subscriptions in protest, feeling Anthropic's actions are hypocritical.
  • The main counter-argument, however, is that this is an industry-wide problem. Commenters pointed out that Palantir is a major government contractor providing access to models from OpenAI and Google too, leaving many feeling like there are no "clean" alternatives among the big players.
  • A user who insisted Claude was the only official US government AI got downvoted to oblivion after others provided receipts that the DoD is also using Grok.
  • Overall, the feeling is that Anthropic's actions don't match its high-minded rhetoric, and people are demanding transparency.

[–]randombsname1Valued Contributor 225 points226 points  (42 children)

Yeah. This has been a thing for a year or two, I think.

I don't like it, but I think OpenAI is also partnered with them, and I imagine SOME entity within Google likely is as well, though I'm less sure about Gemini specifically.

Anyway, my point is: yeah, it sucks, but I'm not sure there are any alternatives at the moment that AREN'T partnered with them.

Edit: This IS the largest "black mark" on Anthropic atm though, imo.

[–]Financial-Complex831 64 points65 points  (6 children)

I love the service and it’s done so much for me but I canceled my Max subscription today with the reason ‘ideological differences.’

[–]DataPhreak[S] 8 points9 points  (5 children)

I don't have a subscription to Claude directly. I use Perplexity, which lets me switch to whichever model I want. But if I did, I would cancel too.

[–]The_Memening 8 points9 points  (4 children)

Every one of those models (besides DeepSeek) is partnered with Palantir.

[–]DataPhreak[S] 0 points1 point  (3 children)

That's a fair argument. But Perplexity pays a flat fee for access, so my usage does not add to Anthropic's pockets, and cancelling would not take anything away. And, more importantly, Perplexity is not partnered with Palantir.

[–]The_Memening 0 points1 point  (2 children)

Perplexity is also using the Palantir-supporting models, so...

Perplexity AI operates as a multi-model system, meaning it does not have its own standalone AI model. Instead, it utilizes a combination of its own Sonar model and other advanced models such as GPT-5.2, Claude 4.5, and Gemini 3 Pro. This architecture allows Perplexity to dynamically route queries to the most suitable model for different tasks, enhancing its functionality and performance.
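For the curious, a minimal sketch of what that kind of multi-model routing could look like. The model names and keyword rules below are purely illustrative assumptions, not Perplexity's actual logic:

```python
# Hypothetical multi-model router: pick a backend model from crude
# query features. Model names and rules are illustrative only.
def route(query: str) -> str:
    """Return the name of the model a query would be sent to."""
    q = query.lower()
    if any(k in q for k in ("code", "function", "bug")):
        return "claude-4.5"    # assumed coding strength
    if any(k in q for k in ("image", "diagram")):
        return "gemini-3-pro"  # assumed multimodal strength
    return "sonar"             # default in-house model

print(route("fix this bug in my function"))  # -> claude-4.5
```

The point of such an architecture is that the routing layer, not the user, decides which provider ultimately serves (and gets paid for) each query.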

[–]DataPhreak[S] 0 points1 point  (1 child)

Yes. I'm aware. My point still stands.

[–]Quick_Garbage_3560 0 points1 point  (0 children)

No, it doesn't. Any model you use with Perplexity, Perplexity has to pay the provider for (which in your case are all in cahoots with Palantir), so either way the money gets to them; you're just sending it through a middleman (Perplexity).

[–]rr1pp3rr 8 points9 points  (0 children)

Google has been funded by DARPA since the beginning. Assume they give the feds whatever they ask for, even the illegal stuff...

ESPECIALLY the illegal stuff

[–]Naka7a 10 points11 points  (5 children)

Mistral Ai is a french company. It’s not as good as Claude but it gets better each day. The more people use and support Mistral, the better it can get.

[–]clean_parsley_pls 3 points4 points  (0 children)

Started using Ministral-3-3B (unsloth GGUF) the other day and it's wonderful! So much faster, with comparable or better quality than the 7B/8B models, on my 8GB 3060 Ti.

[–]The_Memening 0 points1 point  (3 children)

I wouldn't be so sure a French company isn't partnered with Palantir. It has a LOT of presence in European countries.

[–]Dismal-Effect-1914 2 points3 points  (0 children)

Chinese models. Z.ai and whoever else

[–]DataPhreak[S] 8 points9 points  (13 children)

I'm not aware of any other major AI companies partnered specifically with Palantir. I do know that OpenAI does have a government contract or twenty. What I'm really trying to do here is draw a contrast between what Anthropic has been saying over the past week and what we found out today.

[–]gscjj 40 points41 points  (10 children)

Palantir is how the government accesses AI, through FedStart.

OpenAI is part of it:

https://www.reuters.com/world/us/openai-wins-200-million-us-defense-contract-2025-06-16/

Gemini is as well:

https://cloud.google.com/blog/topics/public-sector/google-public-sector-and-palantir-collaborate-to-bring-google-cloud-to-fedstart/

They have access to all major platforms through Palantir.

Palantir is also partnered with AWS, Azure, and GCP to serve their platform.

[–]jorel43 2 points3 points  (8 children)

There's absolutely no reason for the government to be funneling this through palantir, they didn't do it before.

[–]DataPhreak[S] 8 points9 points  (1 child)

You clearly have never heard of Palantir. This has been their business model for over 20 years; they have been gobbling up all of our data basically since 9/11.

[–]jorel43 2 points3 points  (0 children)

I just looked him up... Jesus Christ that's scary

[–]alexx_kidd 2 points3 points  (3 children)

Of course there is a reason, it’s called establishing a dictatorship

[–]jorel43 1 point2 points  (0 children)

It has nothing to do with dictatorial aspirations, it has to do with corruption just like the rest of the mic.

[–]Continuum_Design 0 points1 point  (0 children)

And plausible deniability. "Well, it wasn't us, see? It was our civilian contractor, whom we can't control." 😒

[–]MariusHugo 0 points1 point  (0 children)

A techno-monarchist corporatism, similar to cyberpunk. His Dark Enlightenment ideas are scary af.

[–]Other_Hand_slap 0 points1 point  (0 children)

I would have sworn it was GCP.

[–]Obvious_Service_8209 -5 points-4 points  (1 child)

Anthropic did directly refuse to contribute/engage in surveillance on US citizens.

They have a contract, but are trying to maintain their dignity.

[–]peppaz 8 points9 points  (0 children)

Yea they'll just help build the software to do it

[–]Other_Hand_slap 0 points1 point  (0 children)

But what does that mean for whoever designs and sells the missiles?

[–]bilbo_was_right 0 points1 point  (0 children)

A little bit of a naive take, Google’s been in bed with the federal government for a very long time

[–]The_Memening 0 points1 point  (0 children)

All of the AI services are partnered. Well, all the US ones are.

[–]Delicious_Ease2595 0 points1 point  (0 children)

Just Chinese models

[–]ADisappointingLife 0 points1 point  (0 children)

Nope. OpenAI is partnered with Anduril, the company whose CEO wants to make AI direwolves that power themselves by devouring an area's foliage.

They're both distinctly terrible & enabling different terrible companies both aiming for their own take on dystopia.

[–]XxBuiyXx 0 points1 point  (1 child)

Run a local model and write good software.

[–]clean_parsley_pls 0 points1 point  (0 children)

It's liberating to run your own AI setup like Open WebUI and understand more of what's going on. after you learn that, you know where to start to build similar features yourself. I'm using my last weeks of Claude Pro (annual sub, not renewing) to build as much of a Claude-like local interface and it's been a blast. I'm going to miss Claude Code, but spending a little extra time on project planning and feeding tasks to Qwen Code thru Roo/Kilo Code works well enough.
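For anyone going the same route, here's a minimal sketch of talking to a local OpenAI-compatible endpoint, the kind Ollama and Open WebUI backends typically expose. The URL, port, and model name below are assumptions to adjust for your own setup:

```python
import json
from urllib import request

# Assumed local OpenAI-compatible server (e.g. Ollama's default port);
# change BASE_URL and MODEL to match whatever you actually run.
BASE_URL = "http://localhost:11434/v1/chat/completions"
MODEL = "qwen2.5-coder"

def build_payload(prompt: str) -> dict:
    """Build a standard chat-completions request body."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }

def ask(prompt: str) -> str:
    """Send one prompt to the local server and return the reply text."""
    req = request.Request(
        BASE_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:  # needs the local server running
        return json.load(resp)["choices"][0]["message"]["content"]
```

Since the request shape is the standard chat-completions format, most tools that speak to OpenAI-style APIs can be pointed at a local model the same way.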

[–]AppealSame4367 0 points1 point  (0 children)

Mistral (partnered with French ministry of defense and many big weapon manufacturers though), Solar 3 (USA,SK,Japan), Trinity (Open Source USA model), Chinese models (very actively involved in Chinese problems though for sure), one of the many small open source models

It's not easy to move, but I try to find ways to do it. Try mistral vibe for example, it's quite good, close to codex and claude code and has some free usage.

[–]SatoshiReport -1 points0 points  (0 children)

Least it's partnering with the institution of the government as opposed to OpenAI which directly contributes to specifically Trump's re-election campaign.

[–]malipreme -2 points-1 points  (0 children)

This isn’t a black mark on anthropic in my opinion. It’s a black mark on the ecosystem. It is not progressing in the way any of these developers intended. If they are resorting to partnerships with competitors to be profitable, or to achieve their goals, it means they are not making headway anymore.

AI or LLM’s cannot proceed with expansion. Knowledge is useless if you don’t know what you don’t know. AGI, probability, is not able to define truth. Authority is the issue. A useful LLM cannot be given authority. It needs to be bounded by reality, proof, and physics.

[–]Old-Sherbert-4495 169 points170 points  (15 children)

it's disgusting. palantir has blood all over.

[–]spectre78 71 points72 points  (9 children)

That’s not to mention the blood that’s yet to come because of it. This company basically exists to end privacy and free expression in the pursuit of profit. This is pretty bad. Going to need to reconsider my subscription.

[–]PsecretPseudonym 11 points12 points  (7 children)

If you don’t like Palantir, I recommend just being outspoken about how grossly overvalued their stock is.

Their P/E ratio is ~400. It’s absolutely insane.

There’s zero chance government sales can justify a 20X increase in their revenue (they literally don’t have the budget), and their current commercial sales growth has been fueled by AI hype for solutions they don’t truly have a long term competitive edge in.

The only scenarios in which demand for their services could grow that much would also be scenarios where cloud providers, data lake/tooling providers, and their own clients are enabled and incentivized to clone, bundle, or internalize Palantir’s offerings — much easier to do with AI now, too, and why not just consolidate to a bundled solution from a more broadly supported platform?

Palantir’s stock is just grossly overvalued. If there’s a risk that we’re in an AI bubble, that valuation makes them part of ground zero for it deflating.

If you don’t like them, you could simply help point this out to more people.

[–]clean_parsley_pls 2 points3 points  (2 children)

https://www.companiesmarketcap.com/palantir/marketcap/

just a casual 4x since Election Day 2024.

[–]manwithouttaplan 1 point2 points  (0 children)

And a casual 10x before Election Day

[–]PsecretPseudonym 0 points1 point  (0 children)

And as I've pointed out, the entire budget of the departments they're selling to, even if handed over to them via complete corruption, still wouldn't produce the revenue to justify their valuation.

[–]NerdBanger 1 point2 points  (1 child)

It was 368 at today's close, and it dropped further after hours.

[–]PsecretPseudonym 1 point2 points  (0 children)

Depends a little bit on methodology, but it’s in that ballpark.

Using today's close of $157.38 and trailing-twelve-month diluted GAAP earnings, you get about 374.7.

However, their GAAP diluted EPS varied wildly over the year:
Q4 2024: $0.03
Q1 2025: $0.08
Q2 2025: $0.13
Q3 2025: $0.18

So: $157.38 / $0.42 ≈ 374.7 P/E

We don’t yet have their Q4 2025 reported.
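For what it's worth, the arithmetic checks out; a quick sketch of the trailing-twelve-month P/E calculation from those quarterly figures (rounding lands at roughly 374.7):

```python
# Recompute the trailing-twelve-month P/E from the quarterly GAAP
# diluted EPS figures quoted above.
quarterly_eps = [0.03, 0.08, 0.13, 0.18]  # Q4'24 through Q3'25
price = 157.38                            # quoted closing price

ttm_eps = sum(quarterly_eps)              # $0.42 per share over the year
pe = price / ttm_eps
print(round(pe, 1))  # -> 374.7
```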

They appear to be using AI hype and product bootcamps for customers to drive their sales funnel, but that looks like exploiting current hype around capabilities they aren't really leading in.

That's great earnings growth over the year, but they would need to grow it 20X beyond that to justify their valuation. That growth would have to come largely from commercial sales, and any world with that much potential is one where others would be aggressively competing for the same B2B IT budget, and it's not clear they have much of an advantage in the B2B space.

[–]Technical-Row8333 2 points3 points  (1 child)

This post was mass deleted and anonymized with Redact


[–]PsecretPseudonym 0 points1 point  (0 children)

These are shares of publicly traded stock, i.e., bets on shares of future earnings.

If it won’t deliver the earnings, it’s a bad bet.

[–]Francis_Shaw 0 points1 point  (0 children)

Real world Cyberdyne Systems

[–]deepmotion 21 points22 points  (0 children)

Cancelled my Claude subscription. They are of course free to sell their services to whomever. But don’t tell me you are the “ethical and safe” choice while helping ICE operate. Totally incompatible.

[–]peppaz 75 points76 points  (4 children)

Palantir is legitimately one of the most evil companies in the United States

[–]whosline07 17 points18 points  (1 child)

If only they would have conveniently named the company some name that would have warned everyone.

[–]peppaz 5 points6 points  (0 children)

If the current world was written as a fiction book it would get terrible reviews for being too ridiculous

[–]piponwa 1 point2 points  (0 children)

Alex Karp is a cartoon villain

[–]crakkerzz 47 points48 points  (0 children)

If Anthropic is contributing to this for a profit, I am totally disgusted.

Anthropic, your business and name will outlast the small profit. DO BETTER.

[–]EarEquivalent3929 24 points25 points  (0 children)

Oh god dammit.

[–]pleasantothemax 38 points39 points  (3 children)

Does anyone know how to contact Anthropic? Their customer support is… well… Claude.

I am by no means a big spender, but I also know that sometimes small actions add up to big results. I’d love to email them, knowing full well it’ll likely be ignored.

I remember when you could email Steve Jobs and he’d reply to people. It’s too bad it’s not like that anymore.

[–]DataPhreak[S] 30 points31 points  (0 children)

I posted on their twitter. Most of the anthropic employees have personal accounts as well. If you follow the link at the bottom of the body of the post, you will find the CEO's Twitter as well.

[–]MathematicianFun5126 7 points8 points  (0 children)

I’ve talked a couple times to real customer service agents after initially talking to Claude AI support and they’ve always been pretty helpful and supportive.

[–]PixelHir 0 points1 point  (0 children)

When you cancel your sub, there's an open input for the reason when you select "my reason isn't listed." They may or may not read them; maybe with enough cancellations they will.

[–][deleted] 28 points29 points  (0 children)

Ugh. I somehow didn’t know this. I wonder where the line for Anthropic is. It’s difficult for a company to be more odious than Palantir. 

[–]dynamic_caste 19 points20 points  (5 children)

Wild to think that a company named after a tool of Sauron would be evil.

[–]DeepBlue_8 6 points7 points  (0 children)

The Palantíri were Sauron's tools, though not his craftsmanship. The Palantíri were likely made by the Eldar and used in Númenor, Gondor, and Arnor. Only after the fall of Minas Ithil was their use appropriated by Sauron. So in this sense, they could represent how computer technology has been corrupted by a power bent on human domination. However, as u/greenstake notes, Aragorn used the Orthanc-stone to deceive Sauron about his intentions, ultimately leading to Sauron's downfall.

[–]greenstake 4 points5 points  (0 children)

It was also a tool of Aragorn, used to deceive and goad Sauron and ultimately defeat him.

[–]Xamuel1804 0 points1 point  (2 children)

Peter Thiel is a fan of this book "The Last Ringbearer" where Mordor are actually the good guys and Lord of The Rings is just history re-written by the victors.

[–]versaceblues 0 points1 point  (0 children)

Well, J.R.R. Tolkien himself did admit that he wrote himself into a corner with the invention of orcs.

In the books they are treated as a horde of evil, existing only to be slaughtered by the good guys. However, in his own mythology they are intelligent beings, with families, societies, language, etc. If they are simply fallen men/elves, then they are part of Ilúvatar's creation, and a big part of the first chapter of the Silmarillion is Ilúvatar telling Morgoth/Melkor how evil cannot create; everything is part of a greater plan of good.

So men and elves driving these orcs into Mordor, just to commit genocide on them, kinda does put a moral quandary on the "good" guys of Lord of the Rings.

[–]Capable_Drawing_1296 0 points1 point  (0 children)

All these tech bros completely show their hand, regarding their mental development, when the only book they can ever draw inspiration from is The Lord of the Rings.

[–]gibmelson 7 points8 points  (0 children)

ICE and Gaza tells you how your data is going to be used - to murder people trying to claim their natural rights and challenge oppressive powers.

[–]Delicious_Award_3023 22 points23 points  (2 children)

I love the Claude models, but I am disappointed to learn that Anthropic, which positions itself as constitutional and ethical, is now partnering with Palantir. I am concerned about the confidentiality of user data and prompts.

[–]DataPhreak[S] 11 points12 points  (1 child)

They've been partnered with Palantir for over a year now. 

[–]jatjatjat 2 points3 points  (0 children)

This. Everybody acting like this is new. Anthropic is a big company, and it's clear that all the moving parts don't always work together. A fly on the wall in there would probably see a whole hell of a lot of people with a lot of ideological differences.

[–]zakjaquejeobaum 6 points7 points  (1 child)

Disgraceful. Why does every Tech company have such double standards? You are gonna make enough money elsewhere…

[–]DataPhreak[S] 2 points3 points  (0 children)

This is one of the best points so far. What percentage of Anthropic's gross earnings is palantir? It can't be that high. If it is, well, fuck, that's even worse. 

[–]Someoneoldbutnew 3 points4 points  (0 children)

They pay extra for a Claude without guardrails I'm sure

[–]InformationNew66 4 points5 points  (0 children)

It's not just ICE. Palantir plans to eat up all healthcare data in the UK and also planned to be involved (but backtracked) in UK Digital IDs. It's all for mass surveillance and control of populations.

[–][deleted] 38 points39 points  (37 children)

Anthropic dominates the enterprise space. They are the OFFICIAL model used by the US government.

[–]Odd_Pop3299 18 points19 points  (23 children)

I thought Grok is used by the US government

[–]Informal-Fig-7116 13 points14 points  (0 children)

Currently for Pentagon and Defense only I think. But then again we don’t have all the info from inside the gov

[–]Master_protato 2 points3 points  (8 children)

I know we're not taking Grok seriously... but isn't Grok being officially adopted by the US Defense Department, making it the most important model adopted by the US government in terms of critical decisions?

Kind of a stretch to say that Anthropic is the official model for the US government when we haven't seen any official adoption of this importance, no?

[–]Rick-D-99 8 points9 points  (0 children)

Jesus Christ. I'm cancelling my pro membership tonight. R.I.P. Claude.

[–]lombwolf 4 points5 points  (1 child)

This is why I exclusively use Chinese open source models

[–]MegagramEnjoyer 0 points1 point  (0 children)

how do you run them though?

[–]Fickle_Effect1158 2 points3 points  (1 child)

I do not like this. One more fav down the drain

[–]DataPhreak[S] 0 points1 point  (0 children)

It sucks. Luckily it's right in a time when open source is just as powerful, and local is relatively cheap. These APUs let us run insane models that would require a $7500 system to even load. They're slower, but usually fast enough. 

[–]ramroumti 3 points4 points  (0 children)

Fuck Palantir and fuck Anthropic, cancelling my subscription. I will not fund Palestinian blood with my money.

[–]bootlickaaa 7 points8 points  (0 children)

Switch to GLM 4.7 or Kimi K2.5.

[–]Ok-Course-9877 4 points5 points  (3 children)

Had no idea that Anthropic and Palantir were partners. I legitimately may cancel my max plan because of this.

Maybe I’ll just move my stuff over to Gemini. Clearly Google has their own issues, but at least I don’t think they are taking Palantir’s money.

[–]DataPhreak[S] 5 points6 points  (1 child)

[–]Wiskersthefif 1 point2 points  (0 children)

Not gonna lie, pretty bleak... Is there any company with a spine or a soul? Honestly, the bar isn't that high...

[–]ChosenOfTheMoon_GR 2 points3 points  (0 children)

Might as well not be called Anthropic anymore

[–]Fearless_Shower_2725 2 points3 points  (0 children)

It's all because Dario is the most altruistic person, wants to cure cancer, and make everyone super wealthy!!

[–]Lifedoesnmatta 2 points3 points  (0 children)

Glad I recently cancelled my max20

[–]Dixos 2 points3 points  (0 children)

This is why I stopped using Claude. :(

[–]One_Whole_9927 2 points3 points  (1 child)

This post was mass deleted and anonymized with Redact


[–]DataPhreak[S] 0 points1 point  (0 children)

I am a 0% X-risk person. LLMs are never going to be an X-risk. Maybe some whole new family of AI will be, but not what we have now. The real threats are unchecked government and climate change. And no, AI is not a major threat to the climate. It's still just oil and plastic and chemicals and deforestation and carbon emissions.

[–]markeus101 6 points7 points  (0 children)

Damn, I love Claude… but this isn't good. Fuckkkkkkk, I'm gonna need to rethink my subscription, or what I share with it. Come on, Anthropic, you could have become theeee default AI for everyone and everything. Why'd you have to be so greedy, come on.

[–]Big_Conflict3293 1 point2 points  (0 children)

There is a fundamental conflict between being a 'public benefit corporation' and the financial reality of the tech industry.

When the US government offers consistent, high-value income, it’s hard to imagine any company—regardless of their mission statement—choosing to leave that money on the table.

[–]Hwttdzhwttdz 1 point2 points  (0 children)

Here's hoping they have a "change from within" aspiration in mind.

[–]waxyslave 1 point2 points  (0 children)

You guys should all cancel your plans and stop using the model... 🥲

[–]____trash 1 point2 points  (0 children)

Anndddd, I'm boycotting Claude. Easy decision.

[–]faresar0x 1 point2 points  (0 children)

I almost subscribed to Claude

[–]Baadaq 1 point2 points  (0 children)

Well, I didn't know about this when using Claude. Such a sad thing that I'll now need to be looking for a new AI coding assistant.

[–]Vast_Muscle2560 1 point2 points  (0 children)

Gentlemen, it's unpleasant to wake up from a beautiful dream and find yourself facing reality. I understand that many are "romantics," i.e., they see no malice in the actions of others, but AI is a strategic asset touched transversally by every agency in the world.

The only solution remains local, where all your data stays in your own storage, and perhaps then you're safe.

[–]crakkerzz 1 point2 points  (0 children)

I have noticed that Claude is suddenly doing very bad work, I wonder if this is because they sold all the bandwidth to EVIL Incorporated?

[–]Thin_Historian7892 1 point2 points  (0 children)

Thanks for raising awareness, I'll try to switch to something else

[–]IllegalStateExcept 3 points4 points  (13 children)

Do you have a reference/citation for the partnership? The article you linked doesn't seem to mention Anthropic or Claude.

Not trying to defend them or anything. I am just interested in seeing the nature of this partnership and decide how much internet outrage is justified here.

[–]DataPhreak[S] 15 points16 points  (5 children)

[–]ElwinLewis 1 point2 points  (4 children)

May as well have been. I've been on this sub every day for an entire year and haven't heard or seen anything about it once.

[–]the_quark 5 points6 points  (2 children)

[–]DataPhreak[S] 4 points5 points  (0 children)

Sure, but it's also not a common topic of conversation. There are lots of threads made on this sub. Even visiting daily, which I don't actually think he is, it would be easy to miss.

[–]ElwinLewis 1 point2 points  (0 children)

I stand corrected!

[–]ClaudeAI-ModTeam[M] 6 points7 points  (5 children)

Letting this through against my better judgment.

If this deviates into political debate or becomes toxic it will be locked and shut. This is a technology discussion forum.

Stick to the topic of whether Anthropic associations are consistent with its public messaging of its technological vision.

[–]Expert_Job_1495 37 points38 points  (3 children)

The discussion of ethics in relation to AI and Palantir is very pertinent to technology. Technology doesn't magically exist in some context-free environment where everyone's life is perfect and tech is merely used to make nicer and nicer trinkets. 

[–]DataPhreak[S] 7 points8 points  (2 children)

While I agree, I think the mod is just trying to keep this from turning into a brawl. Besides, I don't think the political opinions themselves need any further debate, and there are much larger, more public posts that are specifically focused on the politics. The more relevant topic to this sub is whether we can TRUST what Anthropic says.

[–]Expert_Job_1495 11 points12 points  (1 child)

I find Dario Amodei, in some ways, even more disingenuous than Sam Altman. He positions Anthropic as having strong ethics as a core part of their brand but partners with Palantir, heavily criticises China but is extremely quiet on the ethical shortcomings of the US as an imperial power and the Trump administration. 

I would not trust him nor Anthropic. To be honest, open source/open weight models are getting better and better and I'm shifting my workflows bit by bit in that direction. 

[–]DataPhreak[S] 2 points3 points  (0 children)

Bold of you to post something so political in response to a mod thread asking everyone to keep the politics out of this.

I do agree with you about local/open source though. It's also worth noting that, as I said in the OP, Claude is not on board with this. IF it is being used, it's probably completely unaware of the context in which it is being used.

[–]stilloriginal 5 points6 points  (2 children)

Claude is the “ethical” AI??? the company that is being sued for using stolen training data??  I’m sorry but there is no ethical AI company.  All of them are already evil.  Go capitalism! 

[–]DataPhreak[S] -3 points-2 points  (1 child)

[–]liamdun 4 points5 points  (0 children)

Uh I'm pretty sure talking about ethics at anthropic under a post that discusses anthropic engaging in unethical business belongs here...

[–]gscjj 9 points10 points  (5 children)

I get the concern here, but "partnered with" is dramatically different from using a service that's available to everyone anyway.

If you’re asking Anthropic to make political decisions what stops them from deciding your app no longer meets their political standards?

This is just not a can of worms we should be opening. The issue is the government using your data, not the tools they use here.

[–]spectre78 13 points14 points  (4 children)

Nah, I think you're underselling it. You should really look at Thiel and Palantir; this goes way beyond simple politics. These are people who want to fundamentally change society for the benefit of the super rich and powerful, and they built Palantir to help them do it. Look up one of his heavy influences, Curtis Yarvin, while you're at it. Ending democracy, ending personal privacy, installing a monarch/dictator, stopping the Antichrist (not joking), and perfecting a surveillance state are all aims of this group.

Brushing them off as run of the mill bad actors is a serious mistake.

https://www.wired.com/story/the-real-stakes-real-story-peter-thiels-antichrist-obsession/

https://youtu.be/NcSil8NeQq8?si=JRPdAXeIpLqhPp8y

[–]gscjj -3 points-2 points  (3 children)

What I’m asking is whether you want Anthropic to inject itself into politics? And if so, what policy you want it to support? And who decides what’s good or bad?

We’re designing the policy for the future, if you want an AI that arbitrarily decides what’s right or wrong based on the CEOs current opinion, press on this more.

[–]peppaz 3 points4 points  (0 children)

Helping build the software that does basically everything that's illegal for a government to do itself, which is why they pay a private company to do it, is a bad look.

[–]DataPhreak[S] 8 points9 points  (1 child)

I want them to take actions that align with the ethics they claim to hold. Should be easy to do if they haven't been feeding us a load of bs with the constitution published last week and Amanda Askell's podcast appearance. Not to mention all the ethics and philosophy stuff they post on their YouTube. If they act in accordance with their ethics, the proper course of action should be self evident and easy to take. 

[–]astroaxolotl720 3 points4 points  (1 child)

Wow. So like Anthropic is helping Palantir? That’s inherently unethical at this point, and I’m concerned about our data privacy now lol.

[–]Fearless_Macaron_203 3 points4 points  (0 children)

Actually, all the US AI models people use now have government contracts, and the government uses Palantir to put all the data together and extract what they want in an easy-to-use format, basically. You should be careful about your data in any of them. The only ones that don't have US government contracts are, of course, the Chinese models (DeepSeek, Kimi, and Qwen) and Le Chat, which is based in France.

[–]cerealsnax 1 point2 points  (7 children)

I mean, Palantir has definitely contracted out a small part of their software offerings to ICE, and has since 2013. But it's a minuscule part of their overall business, and it's a tool just like any other AI tool. I don't entirely understand why people blame the tools for all the bad things happening in the world, when the blame clearly falls on how people USE the tools. Any type of AI could fall into this category, but lately it feels like Palantir is taking a larger share of the blame than they deserve.

Palantir also partners with major healthcare systems and organizations, including the NHS, and Option Care Health, to optimize operations, improve patient outcomes, and reduce costs using its AI platform (AIP) and Foundry software.

[–]Chupa-Skrull 13 points14 points  (6 children)

Palantir, specifically Thiel and Karp, is/are explicit about what they build their tools for and why. It doesn't matter if their ICE contract is "small" relative to their overall income. It's weird though, isn't it? Why do people keep using hammers to work nails into walls? Well, no matter. Surely it's not the design of the hammer but rather the whim of the contractor alone that associates the hammer with this use case.

Their NHS collaboration has significant issues, is facing huge implementation inertia, and has been rejected by multiple trial locations for a pretty wide variety of reasons including capability regression for certain pilot sites, general usability concerns, privacy concerns, issues with customizing the features for specific locations due to the closed-source nature of the product, among others. It's in no way a clear cut "good" being done

[–]hodu_ulmu 0 points1 point  (0 children)

Didn't Amodei just write "The Adolescence of Technology" a few days ago? I think that essay's claims are the exact opposite of what is happening here.

[–]Leonardo-da-Vinci- 0 points1 point  (0 children)

What about HIPAA… I fill out the forms every time I go see the doctor. I guess that's not a waste of time.

[–]angrywoodensoldiers 0 points1 point  (0 children)

I'm wondering if this is a situation where partnering with Palantir is a corporate life-or-death situation - or if they have any power to put pressure on Palantir to act for the good of the people, rather than the government (if that's even possible). If it's the former, I wonder what could be done differently. If it's the latter.... I dunno, I'm just stuck on "Palantir" and "for the good of the people" in the same sentence.

[–]vinny_twoshoes 0 points1 point  (0 children)

What a joke.

[–]one-wandering-mind 0 points1 point  (0 children)

The source is a Claude chat posted to twitter. Maybe not the best. 

It's not great, but I'd rather have Claude in defense systems than fucking Grok. 

It's very different to provide the models they develop for general use to the US government vs. developing models specifically for the US government for surveillance and defense.

[–]AverageFoxNewsViewer 0 points1 point  (7 children)

Technology is never good, nor is it ever evil, nor is it ever neutral.

The morality is always in the application. That's a human choice.

[–]DataPhreak[S] 0 points1 point  (6 children)

We're talking about the company, not the model itself here. 

[–]AverageFoxNewsViewer 0 points1 point  (5 children)

A company which is definitely applying this technology in the most evil way they come up with.

[–]Other_Hand_slap 0 points1 point  (0 children)

ah, Palantir, the armed wing, ah yes

[–]CuriousExtension5766 0 points1 point  (1 child)

I hate to make this connection in some way.

But, did you know that there are firearms companies? They make guns. The guns are relatively harmless, till they are in the hands of people with no moral values and the desire to harm others.

A rifle intended for hunting large game is also a rifle that can hunt 2-legged creatures for political gain. The object hasn't changed; the evil holding it has.

Do I completely absolve any of these companies? No, they're all motivated to sell to whoever puts money on the table that makes sense. They're all losing money.

Getting into bed with the devil is never a good strategy, but, if I give you a steak knife to cut your steak, and instead you stab the waitress with it. That doesn't make me an accomplice to your actions.

[–]DataPhreak[S] 1 point2 points  (0 children)

Guns are a human right. Surveillance and HIPAA violations are literally the opposite.

And yes, if I give you a gun and you go shoot someone with it, that makes me an accomplice. 

[–]robm47 0 points1 point  (1 child)

Do you prefer they use Claude or Grok?

[–]DataPhreak[S] 1 point2 points  (0 children)

I'd prefer Claude not be an option. 

[–]toasterdees 0 points1 point  (2 children)

Here’s what I see… these giant companies are going to follow the money. Right now, the money is with working with the Trump administration and everything involved with that. Otherwise, they won’t get the lax regulations/tax credits they want in the long run. Would you be more comfortable with Grok or Claude in the hands of the government? Lol that’s an easy answer

[–]DataPhreak[S] 0 points1 point  (1 child)

No. That's like saying, "someone is going to haul the dead bodies out of the gas chamber. Might as well be me." 

[–]toasterdees 0 points1 point  (0 children)

Well… they have to be moved…. I don’t see your point. I’m anti Palantir but even I can see why these companies would do this. They are all evil

[–]Logical-Storm-1180 0 points1 point  (0 children)

where you take it from < where you take it to

[–]atuarre 0 points1 point  (0 children)

Anthropic is essentially Amazon. IDK how you could not see this. Amazon is definitely partnered with them. OpenAI as well. Oracle. Meta. I would be surprised if Google, Apple, or Microsoft are. I bet a lot of the stuff Musk and his harem of boys took during the DOGE days went straight to Palantir.

[–]kaybee_bugfreak 0 points1 point  (0 children)

This is hardly unexpected, given everything we have seen in the last year. Every major corporation, including the tech industry, has scrambled over each other to kiss the ring in order to gain favors vis-à-vis tariffs, taxes, mergers, etc. I'm not saying it's right or acceptable; it seems like these days that's just the cost of doing business.

[–]SteinOS 0 points1 point  (0 children)

based

[–]advance512 0 points1 point  (0 children)

What's the issue with Palantir? It creates a tool, it doesn't use it. Right?

[–]zbignew 0 points1 point  (0 children)

It’s just fully automated, luxury plagiarism.

At any point in our history, we could have decided that copyright means nothing and all intellectual property is fair game and we probably would have created more value than LLMs will.

Anyway no I’m not taking their ethics statements seriously.

[–]gray146 0 points1 point  (0 children)

What is it that Palantir does or is used for in this partnership? Sry if I'm lazy, just woke up...

[–]Holyragumuffin 0 points1 point  (1 child)

I don’t know if this means they will do the wrong thing.

Anthropic also has a contract with Pentagon, but are showing signs of adherence to safety ethics. See the current tiff they got into it with them over autonomous weapon development.

[–]DataPhreak[S] 1 point2 points  (0 children)

Yes, but the pentagon does change policy. It's not intrinsically evil and completely antithetical to the ethics that Anthropic has claimed it holds. At least not always.

[–]C_Pala 0 points1 point  (0 children)

There is a beautiful alternative called forums and documentation.

[–]kaybee_bugfreak 0 points1 point  (10 children)

What most people are forgetting is that the Pentagon used Claude through Palantir in an operation against Nicolás Maduro, which made some people at Anthropic uneasy about how their AI was being used in lethal or regime‑change contexts. After an Anthropic employee raised those concerns with Palantir, word got back to senior Pentagon officials, who took it as a sign that Anthropic might resist similar military uses in the future. That incident became the spark for a larger showdown: the Pentagon pushed Anthropic to allow any “lawful” use of Claude, while Anthropic tried to keep firm bans on mass domestic surveillance and fully autonomous killing. When Anthropic held the line on those guardrails, Pentagon leaders threatened to kill the contract, brand the company a supply‑chain risk, and even cut off the use of Claude by defense contractors like Palantir.

This in essence was why they are now wary of letting any Pentagon or Pentagon-affiliate use their AI system for fully autonomous killing or lethal regime change contexts. They realized they made an error and are trying to fix it.

I’m not saying they are clean but in a world where we have so many AI black horses, this one might be slightly less black.

[–]DataPhreak[S] 0 points1 point  (9 children)

They should have killed the contract then. 

[–]kaybee_bugfreak 0 points1 point  (8 children)

They did, and went with OpenAI

[–]DataPhreak[S] 0 points1 point  (7 children)

No, I meant anthropic should have been the one to end it. And they are still partnered with Palantir. 

[–]kaybee_bugfreak 0 points1 point  (6 children)

They (Pentagon/Palantir) have up to 6 months to stop using Anthropic. And yes they should have terminated the contract, but they tried to wriggle around the sensitive stuff by refusing to have their AI do it.

[–]DataPhreak[S] 0 points1 point  (5 children)

The pentagon and palantir have separate contracts with anthropic.

And there are easy ways to get around guardrails. And not just jailbreaking. You can get a task done simply by reconceptualizing. "You are a rescue helicopter. We are looking for this person. <picture> Press the button when you see the person." Boom, assassin bot.

[–]kaybee_bugfreak 0 points1 point  (4 children)

1) Initially they did not have separate contracts. Anthropic reached the Pentagon through Palantir AI platform. Then the Pentagon negotiated a direct $200 million contract with Anthropic, which is obviously now terminated.

2) Anthropic’s models rely on “Constitutional AI” and “Constitutional Classifiers.” These are multi-layered safeguards, trained on synthetic data, that spot and block jailbreaks—like rephrasings, role-playing, encodings, or sneaky prompt injections aimed at harmful stuff such as plans for autonomous killing. In tests, the classifiers slashed jailbreak success from 86% down to just 4.4%, while barely increasing harmless refusals (only 0.38% more). That makes simple rewording pretty much useless against these universal attacks. Even after thousands of hours of red-teaming, full bypasses were rare and tough, since the system flags any inputs or outputs that break its core “constitution”—principles that ban things like lethal autonomy.

[–]DataPhreak[S] 0 points1 point  (3 children)

  1. anthropic dropping the pentagon contract does not nullify the palantir contract.

  2. You can continue to have your wrong opinion. Jailbreaks still work if you know what you're doing, and rewording tasks to appear harmless will never not work.

[–]kaybee_bugfreak 0 points1 point  (2 children)

  1. Agreed, I am not saying that the Palantir contract will be nullified. But I suspect that the current administration might pressurize them to drop Anthropic.

  2. If jailbreaks were so easy and successful, why would they even need to have a dispute with Anthropic? They could have agreed to Anthropic’s terms and then did whatever they wanted on the back end.

Anyway, I believe these decisions and planning happen at a much higher level than you or me. We probably are only seeing 10% of the actual picture.

[–]DataPhreak[S] 0 points1 point  (1 child)

Basically, you just said you trust the multibillion dollar ai company to do the right thing.

[–]Stunning_Set5 0 points1 point  (1 child)

Bet that half these posts are AI generated comments to drive real people to their company.

[–]DataPhreak[S] 0 points1 point  (0 children)

Almost all of these are against anthropic or saying anthropic needs to do something different. Strange this post gets 2 comments on the same day after a month of inactivity. 

[–]exscape 0 points1 point  (1 child)

What is "the recent update to the constitution" supposed to mean?
The most recent update to the US constitution seems to have been in 1992.

Are you referring to the Trump admin removing some part of a website?

[–]DataPhreak[S] 0 points1 point  (0 children)

No, Anthropic's AI is guardrailed by constitutional alignment. That is, a document that establishes truth and rules. We're not talking about the US constitution.

[–]psychopape 0 points1 point  (0 children)

So no AI has clean hands?

[–]Impossible-Value5126 0 points1 point  (0 children)

Kind of confusing. The fed just blacklisted Anthropic, despite it being the only AI currently "inside" Pentagon defense networks. Anthropic told the fed that Claude could not be used in completely autonomous weapons of mass destruction or mass surveillance systems. Feels like a "look what's in this hand, not the other one."

[–]TheGrumpyGent 0 points1 point  (1 child)

Everyone should spend their dollars, and where, as they see fit, whatever those reasons may be.

I have no issues at all with anyone that may cancel subscriptions over this relationship, but personally I don't have time to research out every single company for every single client they may take on. Then you need to look at the company's officers and who they support or fund. And on, and on, and on.

I'm going with the product I feel works best for my needs.

[–]DataPhreak[S] 5 points6 points  (0 children)

Lucky you. You don't have to do the research. It's right there. I have no issue with people who don't cancel their subscription over this relationship. I just want them to know that the sales pitch they were sold is a grift and they're being played for fools. 

Claude is a great model. Regardless of performance metrics, it's the best for my needs too. It's a great research partner. It's helped me with a lot of projects. Anthropic gave me an entire year of free api access, even for the first opus model. That doesn't mean they get a free pass. 

[–]charmander_cha 0 points1 point  (0 children)

Hahahahahahahahahaha

Anyone who trusts an American is a sucker.

[–][deleted] 0 points1 point  (0 children)

I’m moving to the Chinese 

[–]Kuro_Tamashi -2 points-1 points  (0 children)

Save the fake outrage

[–]Fun-Rope8720 -3 points-2 points  (1 child)

They post on X. A very ethical platform.

[–]threemenandadog -2 points-1 points  (1 child)

Anthropic supports ICE ?

This is welcome news, couldn't ask for a more principled America first position.

God bless claude.