bilekas's comments

We really are only seeing the beginning of the creativity attackers will bring to this absolutely unmanageable surface area.

I'm hearing again and again from colleagues that our jobs are gone, and some are definitely going to go. Thankfully I'm in a position to not be too concerned with that aspect, but seeing all of this agentic AI and automated deployment, and the trust that seems to be building in these generative models, from a bird's-eye view it is terrifying.

Let alone the potential attack vector of GPU firmware itself, given the exponential usage they're seeing. If I were a well-funded state actor, I would be going there. Nobody seems to consider it though, so I have to sit back down at parties and be quiet.


I think it depends on where you work. I do quite a lot of work with agentic AI, but it's not much of a risk factor when the agents have access to nothing. Which they won't have, because we haven't even let humans have access to any form of secrets for decades. I'm not sure why people think it's a good idea, or necessary, to let agents run their pipelines, especially if you're storing secrets in environment files... I mean, one of the attacks in this article is getting the agent to ignore .gitignore, but what sort of git repository lets you ever push a .env file to begin with? Don't get me wrong, the next attack vector would be renaming the .env file to 2600.md or something, but still.
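To be concrete about that last point, this is roughly the baseline ignore file I'd expect in any repo that handles secrets (the patterns are just illustrative, not taken from the article):

    # keep env files and common secret material out of commits
    .env
    .env.*
    *.pem
    *.key

And as noted, a rename to 2600.md walks straight past patterns like these, which is why secrets really shouldn't be sitting in the agent's working tree at all.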

That being said, I think you should actually upscale your party doomsaying. Since the Russian invasion kicked the EU into action, we've slowly been replacing all the OT we have with known firmware/hardware vulnerabilities (very quickly for a select few). I fully expect that these will be used in conjunction with whatever funsies are being built into various AI models, as well as all the other attack vectors.



You know you're risky when AIG isn't willing to back you. I'm old enough to remember the housing bubble, and they were not exactly strict with their coverage.

Interesting product and best of luck with it.

> but I’m going to start by connecting GPT-4o, Claude Sonnet 4, and Qwen to provide my team with a secure way to use them

I did get a little giggle out of that, because I've never heard anyone say that hooking up 3rd-party LLMs to anything was in any way secure.


Thanks for the kind words!

The key point there is that many would do it through Azure / Bedrock + locally host the open-source models. Also, all chats / indexed data lives on-prem, and there are better guarantees around retention when using the APIs directly.


Ah I see... That makes a bit more sense and definitely adds a value multiplier for enterprises, I would imagine! I'll try out the open-source one and see how it works out!

Is running your LLM through Azure insecure? I mean more so than running anything on the cloud? My understanding was that Azure GPT instances were completely independent, with the same security protocols as databases, VMs, etc.

Azure wouldn't be if you have your company AD/OAuth. I'm GUESSING running local models with data transfer might expose that communication if your local machine, or someone else's, is compromised; that's potentially multiple points of leakage, and companies generally like to limit that risk. This is all an assumption btw.

Edit : grammar


As I see it, it has whitelisting and enterprise integrations... as for the open-source version, maybe you need to roll your own. This is a usual monetization method though.

I hope you mean "parity" no?

"The AI has a complete understanding of your question, prove me wrong"

> It’s not where I obtained this PR but how.

The fact that this was said as what seems to be a boast is concerning. As if, by the magic of my words, the solution appeared on paper, instead of acknowledging that the bulk of the code submitted was taken from someone else.


> It did matter to Mr. Turley and the product team. The rate of people returning to the chatbot daily or weekly had become an important measuring stick by April 2025

And there it is. As soon as someone greedy enough is involved, people and their information will always be monetized. Imagine what we could have learnt without tuning the AI to promote further user engagement.

Now it's already polluted with an agenda to keep the user hooked.


Now let's charge them per word they send and receive.

Wow... They have some nerve pushing this into the educational world, given the amount of hallucination that is still prevalent.

Or they're just getting teachers to replace future generations of teachers. Which is a real dystopia.


There's something in here for sure; switch over to TS with strict typing and you've got generics to help you out more, at least for validation.
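Roughly what I mean, as a quick sketch (all the names here, Validator, isUser, parseJson, are made up for illustration):

    // A generic runtime validator that narrows unknown input to a typed value.
    type Validator<T> = (value: unknown) => value is T;

    interface User {
      id: number;
      name: string;
    }

    const isUser: Validator<User> = (value): value is User => {
      if (typeof value !== "object" || value === null) return false;
      const v = value as Record<string, unknown>;
      return typeof v.id === "number" && typeof v.name === "string";
    };

    function parseJson<T>(raw: string, validate: Validator<T>): T {
      const parsed: unknown = JSON.parse(raw);
      if (!validate(parsed)) throw new Error("validation failed");
      return parsed; // narrowed to T by the type guard
    }

    // usage
    const user = parseJson('{"id": 1, "name": "Ada"}', isUser);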

A deep clone isn't a bad approach, but given TS's typing I don't know if they allow a pure 'eval' by default... Still playing with this in my free time though, and it's still tricky.
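On the deep-clone side, structuredClone is the easy baseline these days (and as far as I know TS types eval as returning any, so it's lint rules rather than the compiler that usually block it). Tiny sketch:

    // structuredClone is available in modern browsers and Node 17+.
    const original = { config: { retries: 3 }, tags: ["a", "b"] };

    // Deep copy: mutations on the clone don't touch the original.
    const clone = structuredClone(original);
    clone.config.retries = 5;

    console.log(original.config.retries); // 3
    console.log(clone.config.retries);    // 5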


One thought I recently had, since using deepCopy is going to slow things down, is whether the source code for QuickJS could be changed to just make copies, then load up QuickJS as a replacement for the browser's JavaScript by invoking it as wasm.
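For the wasm half of that idea, here's a rough sketch of what it could look like, assuming the quickjs-emscripten npm package and its getQuickJS/newContext/evalCode API (double-check the exact names against the current docs):

    // Rough sketch using quickjs-emscripten; exact API may differ between versions.
    import { getQuickJS } from "quickjs-emscripten";

    async function runSandboxed(code: string): Promise<unknown> {
      const QuickJS = await getQuickJS();   // loads the wasm build of QuickJS
      const ctx = QuickJS.newContext();     // isolated JS context
      try {
        const result = ctx.evalCode(code);
        if (result.error) {
          const err = ctx.dump(result.error); // copy the error out of the sandbox
          result.error.dispose();
          throw new Error(String(err));
        }
        const value = ctx.dump(result.value); // copy the value out as plain data
        result.value.dispose();
        return value;
      } finally {
        ctx.dispose();
      }
    }

    // usage
    runSandboxed("1 + 1").then(console.log); // 2

Conveniently, the dump calls already hand back plain copies on the host side, so some of that "everything is a copy" behaviour falls out of the sandbox boundary itself.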

> Dr Garrett is alleged to have waged this campaign through the medium of IRC ‘sockpuppet’ accounts

And people say IRC is dead!

