I agree with the author that we need better primitives. If you need the functionality now, though:
Major tools that exist today for partial structure traversal and focused manipulation:
- Optics (Lenses, Prisms, Traversals)
Elegant, composable ways to zoom into, modify, and rebuild structures.
Examples: Haskell's `lens`, Scala's Monocle, Clojure's Specter.
Think of these as programmable accessors and updaters (see the sketch just after this list).
- Zippers
Data structures with a "focused cursor" that allow local edits without manually traversing the whole structure (sketched below, after the list).
Examples: Huet’s original Zipper (1997), Haskell’s `Data.Tree.Zipper`, Clojure’s built-in zippers.
- Query Languages (for semantic traversal and deep search)
When paths aren't enough and you need semantic conditionals:
- SPARQL (semantic web graph querying)
- Datalog (logic programming and query over facts)
- Cypher (graph traversal in Neo4j)
- Prolog (pure logic exploration)
These approaches let you declaratively state what you want instead of manually specifying traversal steps.
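To make the optics item concrete, here's a minimal sketch using Haskell's `lens` library (the `Person`/`Address` record types are made up for illustration):

    {-# LANGUAGE TemplateHaskell #-}
    import Control.Lens

    data Address = Address { _house :: Int } deriving Show
    data Person  = Person  { _address :: Address } deriving Show
    makeLenses ''Address  -- generates the `house` lens
    makeLenses ''Person   -- generates the `address` lens

    main :: IO ()
    main = do
      let p = Person (Address 20)
      print (p ^. address . house)      -- read through the composed path: 20
      print (p & address . house +~ 6)  -- update at the focus, rebuilding: house becomes 26

The point is that `address . house` is a first-class value you can pass around and reuse for both reads and writes.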
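And for the zipper item, a hedged from-scratch sketch of a list zipper (the names and types are mine, not from a library; tree zippers have the same shape with more bookkeeping):

    -- Elements before the cursor (stored reversed), the focus, elements after.
    data Zipper a = Zipper [a] a [a] deriving Show

    left, right :: Zipper a -> Maybe (Zipper a)
    left  (Zipper (l:ls) x rs) = Just (Zipper ls l (x:rs))
    left  _                    = Nothing
    right (Zipper ls x (r:rs)) = Just (Zipper (x:ls) r rs)
    right _                    = Nothing

    -- Edit at the cursor; everything else is structurally shared, untouched.
    modify :: (a -> a) -> Zipper a -> Zipper a
    modify f (Zipper ls x rs) = Zipper ls (f x) rs

Moves and edits are O(1) at the cursor, which is the whole appeal.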
Traversable and lenses are very closely linked. If you read the original paper that led to Traversable [1], it feels basically identical to reading the parts of the lens library that lay down the core abstractions and the laws implementations must follow if you want to be able to blindly manipulate them. In fact, the `traverse` function is a Traversal, and so fits trivially into the lens ecosystem.
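Concretely (a small sketch), `traverse` slots straight into lens combinators with no adapter:

    import Control.Lens

    bumped :: [Int]
    bumped = over traverse (+1) [1, 2, 3]                 -- [2,3,4]

    firsts :: [Int]
    firsts = toListOf (traverse . _1) [(1,'a'), (2,'b')]  -- [1,2]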
These are what I think the author is looking for. But it shouldn't be a "primitive" in the sense of code automatically generated by the compiler, but rather an interface or typeclass like your examples (in a language advanced enough to have them).
The problem is that 'lens', 'monocle', etc. are famously abstract and difficult for people to apply to their actual problems. IMO, the solution would be for standard libraries to specify interfaces called 'BreadthFirstTraverse', 'DepthFirstTraverse', etc.
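A hedged sketch of what such interfaces might look like (hypothetical classes and names; no standard library defines these today):

    data Tree a = Node a [Tree a]

    class DepthFirstTraverse t where
      dfs :: t a -> [a]

    class BreadthFirstTraverse t where
      bfs :: t a -> [a]

    instance DepthFirstTraverse Tree where
      dfs (Node x kids) = x : concatMap dfs kids

    instance BreadthFirstTraverse Tree where
      bfs t = go [t]
        where
          go []    = []
          go level = map root level ++ go (concatMap kids level)
          root (Node x _)  = x
          kids (Node _ ks) = ks

The win would be writing an algorithm against the interface once and getting it for every conforming structure.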
I definitely agree for traversals, but lenses need some sort of primitive support: even in Haskell they're mostly generated with Template Haskell, and the language developers have spent a long time trying to make the `record.field` accessor syntax overloadable enough to work with lenses [1][2]. Hopefully someday we'll be free from having to memorize all the lens operators.
Optics are famously abstract in implementation, but I don't think people have trouble applying them: people seem to like jQuery/CSS selectors, and insist on `object.field` syntax. It's kind of wild that no mainstream language has a first-class way to pass around the description of a location in an arbitrary data structure.
Optics let you concisely describe a location but defer the dereferencing. So you could definitely approximate optics: not by passing around pointers you compute with `offsetof`, but by passing around functions that use `offsetof` to return memory locations to read from/write to. You could certainly write a composition operator for `*(*T) => List<*R>`... Some people have done something like it [1][2]:
    Account acc = getAccount();
    QVERIFY(acc.person.address.house == 20);
    // Compose individual lenses into one that focuses the nested field:
    auto houseLens = personL() to addressL() to houseL();
    // `over` applies a function at the focus and rebuilds the whole value:
    std::function<int(int)> modifier = [](int old) { return old + 6; };
    Account newAcc = over(houseLens, acc, modifier);
These also lean on templates, though, and the result still feels a little less ergonomic than it could be.
> These are what I think the author is looking for. But it shouldn't be a "primitive" in terms of code automatically generated by the compiler
I think people are often too enamored of general-purpose languages that can express such abstractions natively. I don't see an issue with a language that provides this as a primitive without being able to express it itself; constraints can be useful for other properties. Once you can traverse trees, most programming problems can be tackled even in such constrained languages, e.g. SQL with recursive CTEs.
To point out a Prolog thing that also applies to other languages with good pattern matching: the break/return/prune examples are all ergonomic to implement as recursion, in a way that breaks down with C++-style type-based dispatch.
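For example (a sketch in Haskell, which has the relevant pattern matching; the type and predicates are made up): break, return, and prune all fall out of plain recursion:

    import Control.Applicative ((<|>))

    data Tree a = Leaf | Node (Tree a) a (Tree a)

    -- First match wins (early return); failing `keep` prunes the whole subtree.
    findPruned :: (a -> Bool) -> (a -> Bool) -> Tree a -> Maybe a
    findPruned _    _ Leaf = Nothing
    findPruned keep p (Node l x r)
      | not (keep x) = Nothing                                  -- prune
      | p x          = Just x                                   -- return/break
      | otherwise    = findPruned keep p l <|> findPruned keep p r

Because `<|>` on `Maybe` short-circuits, the right subtree is never visited once a match is found.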
How about we also get regex-parsable streams (IO::Async in Perl has something like it, suboptimal perhaps) and regex-parsable tree structures (totally possible)? Seems like just having `=~` work on structures (or whatever the API is called in other languages, this being Perl 5)?
Indeed they call for new names, as they encompass far more than iterators.
If you read a bit more about them, I think you will be surprised to see the breadth of what these abstractions can be used for. To start, they've been used to build a new compositional foundation for game theory[1], and they can be used to model gradient-based learning in machine learning[2].
As for their simpler applications as getters and setters, they are programmable in the sense that you can think of lens/prism types as interfaces that can be implemented arbitrarily. So you can create your own lenses and combine them like legos to construct new, "bigger" getters and setters from the smaller components.
This thread is about traversing a tree. At what point do we take a step back and admit that iterating through a data structure and "building new compositional foundations for game theory" shouldn't be conflated?
When does someone give up on the pageantry of programming and just get something done by looping through data, instead of "constructing getters and setters to model gradient-based machine learning"?
It really seems like the straightforward way of doing things is to write the simple iteration and worry about the game theory after that's all figured out.
Engineers aren't shy about eviscerating each other's work when mistakes are made—sometimes too eager, frankly.
Whole courses are built around forensically dissecting every error in major systems. Entire advanced fields are written in blood.
You probably don't hear about it often because the analysis is too dense and technical to go viral.
At the same time, there's a serious cultural problem: technical expertise is often missing from positions of authority, and even the experts we do have are often too narrow to bridge the levels of complexity modern systems demand.
Why solid gold? It would be far too ductile and heavy, and it would conduct way too much heat.
Diamond might actually be better: low surface energy means a low coefficient of friction, so it would be much easier to clean. It would still suck the heat right out of your cheeks, though.
Realistically, porcelain or other ceramics are probably the ideal material.
In my experience, math majors can do some pretty incredible acrobatics (in a good way), but their documentation, systemic performance awareness, and overall design sense often lag behind. These are things they usually pick up outside of the degree, and they have to break some bad habits learned during it (e.g., single-character-variable soup).
I agree with a sibling comment that physicists often seem to make the best coders, for some reason.
My hypothesis: it's because physicists are rigorously trained to model real-world systems directly. What would be considered an "advanced" modeling problem to most would be an intro problem to a physics student.
Math is absolutely related, but I think the secret ingredient is "mathematical maturity" — the ability to fluidly jump between layers of abstraction. Mathematicians are good at this too, but physicists go a step further: they are trained to ground their abstractions in concrete physical phenomena.
Mathematicians ground systems in axioms, sure. But physicists have to tether models back to reality — to processes and measurements — which turns out to be exactly the skill set that makes for good programmers and system designers.
Huge generalization, obviously.
But personally, I've noticed my own programming ability increases the more physics I learn. Physics gives you a systematic framework to reason about complexity — and physicists get the luxury of a "relatively simple" universe compared to fields like chemistry or biology. They're working with rich systems described by just a few tightly-coupled parameters. And the kicker: a lot of those systems are 100% repeatable, every time.
That kind of structure — and the habit of respecting it — is priceless for engineering.
Maybe we should make a JavaScript UI framework generator. Let an LLM build your next hype UI framework in a matter of seconds.
It could be fun with a high score measured by number of dependencies and lines of code; the more, the better. The prompt would be limited in length, and the user's task would be to generate the most code from a single prompt.
I started with Repomix as an MCP server plus a system prompt to reduce the scope to single packages. However, it still consumed too many tokens (and polluted the context with useless information). I tried Gemini, where context size wasn't an issue, but it was too expensive. Now I just use Cursor, which has built-in indexing with embeddings (I assume).
You can absolutely etch silicon at home. Processes like wet etching (KOH, HF), reactive ion etching (RIE), laser ablation, and even electron beam lithography using repurposed CRTs are all viable at the DIY scale.
They're not used in high-volume manufacturing (you’re not replacing ASML), but they’re solid for prototyping, research, and niche builds.
Just don’t underestimate the safety aspect—some of these chemicals (like HF) are genuinely nasty, and DIY high voltage setups can bite hard.
You're not hitting nanometer nodes, but for MEMS, sensors, and basic ICs, it’s totally within reach if you know what you’re doing.