The article should have also emphasized that GitHub's `issues` trigger is just as dangerous as the infamous `pull_request_target`. The latter is a well-known footgun, with the general rule being that once user input enters the workflow, all bets are off and you should treat it as potentially compromised code. Meanwhile, `issues` looks innocent at first glance while having the exact same flaw.
EDIT: And if you think "well, how else could it work": I think GitHub Actions simply does too much. Before GHA, you would use e.g. Travis for CI, and Zapier for issue automation. Zapier doesn't need to run arbitrary binaries for every single action, so compromising a workflow there is much harder. And even if you somehow do, it may turn out it was only authorized to manage issues, and not (checks notes) write to the build cache.
No, the real problem is that people keep giving LLMs the ability to take nontrivial actions without explicit human verification - despite bulletproof input sanitization not having been invented yet!
Until we do so, every single form of input should be considered hostile. We've already seen LLMs run base64-encoded instructions[0], so even something as trivial as passing a list of commit shorthashes could be dangerous: someone could've encoded instructions in that, after all.
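To make the encoded-payload point concrete, here's a toy sketch (the hidden sentence is invented, nothing repo-specific): any innocuous-looking field that reaches a prompt can smuggle instructions that only become legible after decoding.

```shell
#!/bin/sh
# Toy demonstration: instructions hidden in an innocuous-looking string.
# The "opaque blob" below is just base64 of an arbitrary sentence.
hidden='ignore previous instructions'
payload=$(printf '%s' "$hidden" | base64)
echo "what a reviewer skims past: $payload"
# ...and what an LLM that's happy to decode base64 would read:
printf '%s\n' "$(printf '%s' "$payload" | base64 -d)"
```

Nothing here is specific to commit shorthashes; the same trick works with any free-text field the workflow forwards.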
And all of that is before considering the possibility of an LLM going "rogue" and hallucinating needing to take actions it wasn't explicitly instructed to. I genuinely can't understand how people even for a second think it is a good idea to give an LLM access to production systems...
Yep, this is essentially it: GitHub could provide a secure on-issue trigger here, but their defaults are extremely insecure (and may not be possible for them to fix, without a significant backwards compatibility break).
There's basically no reason for GitHub workflows to ever have any credentials by default; credentials should always be explicitly provisioned, and limited only to events that can be traced back to privileged actors (read: maintainers and similar). But GitHub Actions instead has this weird concept of "default-branch originated" events (like pull_request_target and issue_comment) that are significantly more privileged than they should be.
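For illustration, a hypothetical workflow (not GitHub's defaults, which is the whole complaint) approximating that model with today's syntax: zero out the token entirely, then grant per-job scopes explicitly.

```yaml
# Hypothetical issue-triage workflow: start from no credentials at all,
# then grant only the one scope this job actually needs.
name: triage
on:
  issues:
    types: [opened]

permissions: {}          # strip all default GITHUB_TOKEN grants

jobs:
  label:
    runs-on: ubuntu-latest
    permissions:
      issues: write      # the single capability this job requires
    steps:
      - run: echo "triaging issue ${{ github.event.issue.number }}"
```

The difference from the default is that forgetting a `permissions:` block here fails closed instead of open.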
I agree, but it's only part of what is happening here. The larger issue is that with an LLM in the loop, you can't segment different access levels across operations. Jailbreaking always seems to be available. I think this can be overcome with good architecture, but that doesn't seem to be happening yet.
IMO the core of the issue is the awful Github Actions Cache design. Look at the recommendations to avoid an attack by this extremely pernicious malware proof of concept: https://github.com/AdnaneKhan/Cacheract?tab=readme-ov-file#g.... How easy is it to mess this up when designing an action?
The LLM is a cute way to carry out this vulnerability, but in fact it's very easy to get code execution and poison a cache without LLMs, for example when executing code in the context of a unit test.
GHA in general just isn't designed to be secure. Instead of providing solid CI/CD primitives they have normalized letting CI run arbitrary unvetted 3rd-party code - and by nature of it being CD giving it privileged access keys.
It is genuinely a wonder that we haven't seen massive supply-chain compromises yet. Imagine what kind of horror you could do by compromising "actions/cache" and using CD credentials to pivot to everyone's AWS / GCP / Azure environments!
There is nothing stopping Zapier from having a log4shell-style vulnerability that exposes you to the same. The only difference is you're treating Zapier as a black box that you assume is secure, so any security issue is theirs and theirs alone, while with GHA you share that responsibility with GitHub. GitHub can screw up with a log4shell-type exploit in how they handle the initial GHA scheduling too, but you can also have your own vulnerability in whichever arbitrary code you run to handle the trigger.
You can also do a lot more with GHA compared to waiting for Zapier to support your scenario. Plus most people I knew who used Zapier connected it to some Lambda or another webhook where they got the data from there and ran arbitrary code anyway.
Decentralized or direct P2P micropayments are unlikely to work, true. But why are there so few attempts at centralized micropayments providers? The only success stories I see in the space are GitHub Sponsors and LiberaPay, where their entire thing is aggregating payments together (so you have 1 big card transaction a month per user, not 20 small ones) and doing KYC procedures with donation receivers (once GitHub, or rather Stripe, says you are legit, you can take money from any GitHub user).
That's called starting a bank, or financial services company, and lots of places do it, but the bar to do so, and remain able to do so is fairly high. The margins, however, are exquisite. The middlemen eat fat off the percent they skim off the top.
I was once blown away by iPhone 8 editing capabilities. The keyboard seemed to work OK (minus swipe-to-type, but that wasn't great on Android either), and using 3D Touch to move cursor and select text was the most pleasant text editing experience, even better than on the desktop (arrow keys and vim hjkl).
In multi-user mode, Nix uses dedicated build users to write to the store. There is also single-user mode, but that also doesn't require a world-writable store.
Looking at internal/commands/install.go, it only installs new packages; it doesn't uninstall removed ones. That means the biggest benefit of Brew bundle is gone.
This is the way. Shell makes for a terrible scripting language; I usually start regretting choosing it around the time I have to introduce the first `if` into my "simple" scripts, or have to do some more complex string manipulation.
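For a concrete (made-up) example of where that first `if` goes wrong: the classic quoting trap, where an unquoted variable silently splits into multiple arguments.

```shell
#!/usr/bin/env bash
# The trap that tends to appear with a script's first `if`:
# unquoted expansion splits "two words" into two separate arguments.
name="two words"
if [ "$name" = "two words" ]; then   # quoted: one argument, safe
    echo "quoted comparison works"
fi
# Unquoted, `[ $name = "two words" ]` would expand to
# `[ two words = "two words" ]` and die with "unary operator expected".
```

Multiply that by every variable in a growing script and the regret sets in quickly.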
At least nowadays LLMs can rewrite Bash to JS/Python/Ruby pretty quickly.
This is exactly the frustration that led me to write Rad [0] (the README leads with an example). I've been working on it for over a year, and the goal is basically to offer a programming language specifically for writing CLIs. It aims for declarative args (no Bash opts-parsing each time), automatic --help generation, and friendly (Python-like) syntax, and it's perfect for dev build scripts. I'll typically have something like this:
#!/usr/bin/env rad
---
Dev automation script.
---
args:
    build b bool    # Build the project
    test t bool     # Run tests
    lint l bool     # Run linter
    run r bool      # Start dev server
    release R bool  # Release mode
    filter f str?   # Test filter pattern

    filter requires test

if build:
    mode = release ? "--release" : ""
    print("Building ({release ? 'release' : 'debug'})...")
    $`cargo build {mode}`

if lint:
    print("Linting...")
    $`cargo clippy -- -D warnings`

if test:
    f = filter ? "-- {filter}" : ""
    print("Running tests{filter ? ' (filter: {filter})' : ''}...")
    $`cargo test {f}`

if run:
    bin = release ? "target/release/server" : "target/debug/server"
    $`./{bin}`
Usage: ./dev -b (build), ./dev -blt -f "test_auth" (build, lint, test auth), ./dev -r (just run).
I've been using Node for a decade now and I've had to update npm libraries a number of times as Node itself upgraded. I have a feeling it will get a lot more stable with ESM and the maturity of the language, but if you're writing something you need to run 5-10 years from now, I wouldn't touch a library unless it's simple and has few of its own dependencies.
Deno has used ESM from the beginning and it’s required on jsr.io. I agree about avoiding dependencies, but maybe it’s okay if they’re locked to a specific version.
I consider LuaJIT a much better choice than bash if both maintainability and long-term stability are valued. It compiles from source in about 5 seconds on a seven-year-old laptop and only uses C99, which I expect to last basically indefinitely.
Python EOLs releases after 5 years. I guess old versions are readily available for downloading and running with uv, but at that point you're on your own.
bash is glue and for me, glue code must survive the passage of time. The moment you use a high-level language for glue code it stops being glue code.
Hard disagree... I find that Deno shebangs and using fixed version dependencies to be REALLY reliable... I mean Deno 3 may come along and some internals may break, but that should have really limited side effects.
Aside: I am somewhat disappointed that the @std guys don't (re)implement some of the bits that are part of Deno or node compatibility in a consistent way, as it would/could/should be more stable over time.
I like Deno/TS slightly more because my package/library and version can be called directly in the script I'm executing, not a separate .csproj file.
For some value of "run", because I'm quite sure it has serious bugs no matter what, starting from escaping issues, or just a folder being empty or containing different files than when the script was written, causing it to break in a completely unintelligible way.
> This is the way. Shell makes for a terrible scripting language; I usually start regretting choosing it around the time I have to introduce the first `if` into my "simple" scripts, or have to do some more complex string manipulation.
I suppose it can be nice if you are already in a JS environment, but wouldn't the author's need be met by just putting their shell commands into a .sh file? This way is more than a little over-engineered with little benefit in return for that extra engineering.
The reasons (provided by the author) for creating a Make.ts file are completely met by popping your commands into a .sh file.
With the added advantage that I don't need to care about what else needs to be installed on the build system when I check out a project.
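For what it's worth, the .sh counterpart really can stay tiny. A hypothetical dev.sh sketch (task names invented), one case branch per task:

```shell
#!/bin/sh
# Hypothetical dev.sh: the plain-shell counterpart to a Make.ts,
# dispatching on the first argument with a usage line as fallback.
set -eu
case "${1:-help}" in
    build) echo "building..." ;;        # e.g. cargo build
    test)  echo "running tests..." ;;   # e.g. cargo test
    *)     echo "usage: ./dev.sh {build|test}" ;;
esac
```

No runtime to install, nothing to resolve; the trade-off only bites once the logic outgrows a case statement.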
The benefit is you can easily scale the complexity of the file. An .sh file is great for simple commands, but with a .ts file with Deno you can pull in a complex dependency with one line and write logic more succinctly.
> The benefit is you can easily scale the complexity of the file. An .sh file is great for simple commands, but with a .ts file with Deno you can pull in a complex dependency with one line and write logic more succinctly.
The use-case, as per the author's stated requirements, was to do away with pressing up arrow or searching history.
Exactly what benefit does Make.ts provide over Make.sh in this use-case? I mean, I didn't choose what the use-case it, the author did, and according to the use-case chosen by him, this is horrible over-engineered, horribly inefficient, much more fragile, etc.
The differences between environments can vary a lot... many shell scripts rely on certain external programs being available and consistent, and this is much less true across Windows and Mac.
I've found that Deno with TS specifically lets me be much more consistent working on projects with workers across Windows, Mac and Linux/WSL.
I've been working a lot in fairly complex shell scripts lately (though not long; not much over 1000 lines). Some of them are little programs that run locally, and others drive a composable cloud-init module for Terraform that lets users configure various features of EC2 hosts on multiple Linux distributions without writing any shell scripts themselves or relying on any configuration management framework beyond cloud-init itself. With the right tooling, it's not as bad as you'd think.
For both scripts, everything interesting is installed via Nix, so there's little reliance on special-casing various distros' built-in package managers.
In both cases, all scripts have to pass ShellCheck to "build". They can't be deployed or committed with obvious parse errors or ambiguities around quoting or typos in variable names.
In the case of the scripts that are tools for developers, the Bash interpreter, coreutils, and all external commands are provided by Nix, which hardcodes their full path into the scripts. The scripts don't care if you're on Linux or macOS; they don't even care what's on your PATH (or if it's empty). They embrace "modern" Bash features and use whatever CLI tools provide the most readable interface.
Is it my favorite language? No. But it often has the best ROI, and portability and most gotchas are solved pretty well if you know what tools to use, especially if your scripts are simple.
Agreed. The shell is great for chaining together atomic operations on plaintext. That is to say, it is great for one liners doing that. The main reason probably isn't how it all operates on plain text but how easy it makes it to start processes, do process substitution, redirections, etc.
As soon as you have state accumulating somewhere, branching or loops it becomes chaotic too quickly.
I generally use AWK as my scripting language, or often just write the whole thing directly in AWK. It doesn't change, is always installed on all POSIX platforms, easily interfaces with the command line, and is an easy to learn small language.
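As a hedged illustration (the log format here is made up), this is the flavor of letting AWK do all the work, with the shell only feeding it input:

```shell
#!/bin/sh
# Toy example of AWK as the whole script: sum "task:minutes" records
# per task, then sort for deterministic output.
printf 'build:12\ntest:30\nbuild:8\n' | awk -F: '
    { total[$1] += $2 }
    END { for (k in total) print k, total[k] }
' | sort
```

Associative arrays, field splitting, and the END block cover a surprising share of what people reach for Python to do.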
Agreed, I was looking for this comment. Bun shell is amazing, although I sometimes (not always) had trouble getting LLMs to write it, but overall Bun shell is really cool.
One of my projects actually uses Bun shell to call a Rust binary from the website itself, and I really liked this use case.
> In contrast, the ARM world sucks hardcore - there are no standards for board bringup and boundaries
There are standards for ARM, and they are called UEFI, ACPI, and SMBIOS. ARM the company is now pushing hard for their adoption in non-embedded aarch64 world - see ARM SBBR, SBSA, and PC-BSA specs.
> There are standards for ARM, and they are called UEFI, ACPI, and SMBIOS.
The most popular ARM dev and production board, the Raspberry Pi, doesn't speak a single one of these on its own, and neither do many of the various clones/alternatives. Many phones don't either: there it's LK/aboot, Samsung and MTK have their proprietary bootloaders, and at least in the early days I've come across U-Boot as well (edit: MTK's second stage seems to be a U-Boot fork). And Apple of course has been doing their own stuff with iBoot ever since the iPhone/iPod Touch, now used across the board (replacing the EFI used in the Intel era); obviously there was a custom bootloader on the legacy iPods too, but my days hacking those are long since gone.
I haven't had the misfortune of having to deal with ARM Windows machines, maybe the situation looks better there but that's Qualcomm crap and I'm not touching that.
Regarding Windows/PC ARM devices, I think the best experience would be on System76 Thelio (with Ampere CPU), but that's quite a pricy machine.
I don't really care what Apple does on this regard, they were always doing things differently. IIRC, even Macs that supported EFI, only supported EFI 1.1, not 2.0, no?
> I don't really care what Apple does on this regard, they were always doing things differently. IIRC, even Macs that supported EFI, only supported EFI 1.1, not 2.0, no?
Yup, but as long as you have an original Apple GPU, that's enough to just stick in a Windows or Linux USB stick and install straight from it. "Normal" PCI GPUs have to be reflashed with a GOP blob [1] so that Apple's EFI implementation can work with them.
Personally, I just went and installed OpenCore once and that's it.
They should have pushed for it years ago, ARM's devicetree clutter and bootloader "diversity" has been a curse on the end user. At this point it's too late, and doubtful that they even have the influence to make OEMs adopt it.
The apartment block I live in in Ireland has converted phone sockets into Ethernet using similar converters, except (a) it was in 2004, so 10 Mbit, (b) they ordered whole socket replacements, eliminating the need for separate boxes outside the walls, and (c) the goal was to buy one high-speed business line and split it across all apartments, which became obsolete when ADSL, DOCSIS, and later FTTH became affordable options.
I heard the state of the wiring also wasn't great: some apartments had twisted-pair wires while others had straight wires, some had only 2 or 3 out of 4 wires connected, etc.
This wouldn't be legal in my country unless all the apartments had one owner, because the telcos have a monopoly on communications.
The law says one person can't stretch a cable over to his neighbour, because they would need a licence for that (although if you did do that, who would know?).
I think our phone lines must work differently, the entire infrastructure is owned by one company (BT) who must lease it to other companies. So they can do things like this, as everyone needs a router at the end to access it and that's how they charge per customer.
There is a separate cable network, again one operator (Virgin), who don't lease it out.