

Tree traversal primitives (clojure.walk):

    (defn walk [inner outer form]
      (cond
        ;; lists: rebuild with (apply list ...) to keep the concrete type
        (list? form) (outer (with-meta (apply list (map inner form)) (meta form)))
        ;; map entries: walk key and value separately
        (instance? clojure.lang.IMapEntry form)
        (outer (clojure.lang.MapEntry/create (inner (key form)) (inner (val form))))
        ;; other seqs: doall realizes the lazy map before handing it to outer
        (seq? form) (outer (with-meta (doall (map inner form)) (meta form)))
        ;; records: conj the walked entries back onto the record itself
        (instance? clojure.lang.IRecord form)
        (outer (reduce (fn [r x] (conj r (inner x))) form form))
        ;; any other collection: pour walked items into an empty one of the same type
        (coll? form) (outer (into (empty form) (map inner form)))
        ;; leaves
        :else (outer form)))
    
    (defn postwalk [f form]
      (walk (partial postwalk f) f form))
    
    (defn prewalk [f form]
      (walk (partial prewalk f) identity (f form)))
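
A quick sketch of the difference between the two (results in comments; `w` is just an alias for clojure.walk):

    (require '[clojure.walk :as w])

    ;; postwalk rewrites the leaves first, then the containers:
    (w/postwalk #(if (number? %) (inc %) %) {:a 1 :b [2 3]})
    ;;=> {:a 2, :b [3 4]}

    ;; prewalk rewrites a node before descending into it:
    (w/prewalk #(if (vector? %) (seq %) %) [1 [2 [3]]])
    ;;=> (1 (2 (3)))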

Another reason why this Perlisism holds:

    9. It is better to have 100 functions operate on one data structure than 10 functions on 10 data structures.
"Let's move on."

It is even better to have 100 functions work on 100 data structures. Powerful programming languages like Lisp and Haskell give you that. Generics give you most of that.

If every ounce of performance matters, e.g. in a database, you want 10000 functions, 100 for each data structure.
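
Clojure leans hard on the Perlis side via the seq abstraction: the same core functions work on any collection that can yield a seq (a quick illustration; a set's output order is unspecified):

    (map inc [1 2 3])                           ;=> (2 3 4)
    (filter odd? '(1 2 3))                      ;=> (1 3)
    (reduce + #{1 2 3})                         ;=> 6
    (map (fn [[k v]] [k (inc v)]) {:a 1 :b 2})  ;=> ([:a 2] [:b 3])
    (frequencies "abba")                        ;=> {\a 2, \b 2}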


In addition, clojure.core has the handy tree-seq function:

    (defn tree-seq
      "Returns a lazy sequence of the nodes in a tree, via a depth-first walk.
       branch? must be a fn of one arg that returns true if passed a node
       that can have children (but may not).  children must be a fn of one
       arg that returns a sequence of the children. Will only be called on
       nodes for which branch? returns true. Root is the root node of the
       tree."
      {:added "1.0"
       :static true}
      [branch? children root]
      (let [walk (fn walk [node]
                   (lazy-seq
                    (cons node
                          (when (branch? node)
                            (mapcat walk (children node))))))]
        (walk root)))

    (defn tree-seq-breadth
      "Like tree-seq, but in breadth-first order"
      [branch? children root]
      (let [walk (fn walk [node]
                   (when (branch? node)
                     (let [cs (children node)]
                       (lazy-cat cs (mapcat walk cs)))))]
        (cons root (walk root))))
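
To get a feel for the two orders on the same tree (a small sketch using vectors as branches):

    (tree-seq vector? seq [1 [2 [3 4]] 5])
    ;;=> ([1 [2 [3 4]] 5] 1 [2 [3 4]] 2 [3 4] 3 4 5)   ; depth-first

    (tree-seq-breadth vector? seq [1 [2 [3 4]] 5])
    ;;=> ([1 [2 [3 4]] 5] 1 [2 [3 4]] 5 2 [3 4] 3 4)   ; breadth-first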

Didn't Wirth (or maybe it was someone else) say that it is better to have a complex data structure and simple algorithm/code that works on it than to have simple data structures and complex code?

Complex data structures absorb a lot of the complexity of the problem and reduce the complexity of the rest of the code.



I got filtered by the Ent arc of LOTR and dropped the book.

Indeed, but the idea that this is a "cope" is interesting nonetheless.

>Your test is only testing for bias for or against [I'm adapting here] you.

I think this raises the question of what reasoning beyond Doxa entails. Can you make up for someone's injustice without putting alignment into the frying pan? "It depends" is the right answer. However, what is the shape of the boundary between the two?


I wrote an anagrammatic poem that poses an enigma, asking the reader: "who am I?" The text progressively reveals its own principle as the poem reaches its conclusion: each verse is an anagrammatic recombination of the recipient's name, and it enunciates this principle more and more literally. The last 4 lines translate to: "If no word vice slams your name here, it's via it, vanquished as such, omitted." All 4 lines are anagrams of the same person's name.
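
The constraint itself is easy to check mechanically (a minimal sketch; `letters` and `anagram-of-name?` are made-up helpers, and real verses would also need accent normalization):

    (require '[clojure.string :as str])

    ;; canonical sorted multiset of letters, ignoring case and punctuation
    (defn letters [s]
      (sort (re-seq #"\p{L}" (str/lower-case s))))

    ;; a verse qualifies iff it uses exactly the letters of the name
    (defn anagram-of-name? [verse name]
      (= (letters verse) (letters name)))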

LLMs haven't figured this out yet (although they're getting closer). They also fail to recognize that this is a cryptographic scheme respecting Kerckhoffs's Principle. The poem itself explains how to decode it: You can determine that the recipient's name is the decryption key because the encrypted form of the message (the poem) reveals its own decoding method. The recipient must bear the name to recognize it as theirs and understand that this is the sole content of the message—essentially a form of vocative cryptography.

LLMs also don't take the extra step of conceptualizing this as a covert communication method: broadcasting a secret message without prior coordination. And they miss what this implies for alignment if superintelligent AIs were to pursue this approach: manipulating trust by embedding self-referential instructions, like this poem, that only certain recipients can "hear."


That’s a complex encoding. I wonder if current models could decode it even given your explanation.


ByteBuddy is atrocious.

>In October 2015, Byte Buddy was distinguished with a Duke's Choice award by Oracle. The award appreciates Byte Buddy for its "tremendous amount of innovation in Java Technology". We feel very honored for having received this award and want to thank all users and everybody else who helped making Byte Buddy the success it has become. We really appreciate it!

Don't misread me. It's solid software, and an instance of a well-structured object-oriented code base.

But it's impossible to do anything without a deep and wide understanding of the class hierarchy (which is just as deep and wide). Out of 1475 issues on the project's GitHub page, 1058 are labelled as questions. You can't just start with a few simple bricks and gradually learn the framework. The learning curve is super steep from the get-go; all of the complexity is thrown in your face as soon as you enter the room.

This is the kind of space where LLMs would shine.


And yet prompts can be optimized.

You can optimize a prompt for a particular model, and this can be done only through experimentation. If you take your heavily optimized prompt and apply it to a different model, there is a good chance you'll need to start from scratch.

What you need to do every few weeks or months, depending on when the last model was released, is reevaluate your bag of tricks.

At some point it becomes roulette: you try this, you try that, and maybe it works or maybe it doesn't ...


Stumbled upon this in another thread:

https://ai-analytics.wharton.upenn.edu/generative-ai-labs/re...

My point still holds that it is optimizable though (https://github.com/zou-group/textgrad, https://arxiv.org/abs/2501.16673)

>Subjects develop elaborate "rain dances" in the belief that they can influence the outcome. Not unlike sports fans' superstitions.

Anybody tuning neural weights by hand would feel like this.


Truthfulness doesn't always align with honesty. The LLM should have said: "Oops, I saw the EXIF data, please pick another image."

And I don't even think it's a matter of the LLM being malicious. Humans playing games get their reward from fun, and will naturally reset the game if the conditions do not lead to it.



