
We (the Princeton SWE-bench team) built an agent in ~100 lines of code that does pretty well on SWE-bench, you might enjoy it too: https://github.com/SWE-agent/mini-swe-agent

OK that really is pretty simple, thanks for sharing.

The whole thing runs on these prompts: https://github.com/SWE-agent/mini-swe-agent/blob/7e125e5dd49...

  Your task: {{task}}. Please reply
  with a single shell command in
  triple backticks.
  
  To finish, the first line of the
  output of the shell command must be
  'COMPLETE_TASK_AND_SUBMIT_FINAL_OUTPUT'.
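
If I'm reading it right, the loop around that prompt is basically the following (a rough sketch of the idea, not the actual mini-swe-agent code; query_llm is a placeholder for whatever LLM call you use):

  # Rough sketch of the loop those prompts imply (not the real
  # mini-swe-agent code; query_llm is a placeholder for any LLM call).
  import re, subprocess

  def run_agent(task, query_llm, max_steps=50):
      messages = [
          {"role": "system", "content": "You are a helpful assistant that can do anything."},
          {"role": "user", "content": f"Your task: {task}. Please reply with a single "
                                      "shell command in triple backticks."},
      ]
      for _ in range(max_steps):
          reply = query_llm(messages)          # plain text in, plain text out: no tool-calling API
          messages.append({"role": "assistant", "content": reply})
          match = re.search(r"```(?:\w*\n)?(.*?)```", reply, re.S)
          if not match:
              messages.append({"role": "user", "content": "Reply with one command in triple backticks."})
              continue
          result = subprocess.run(match.group(1), shell=True, capture_output=True, text=True)
          output = result.stdout + result.stderr
          if output.splitlines()[:1] == ["COMPLETE_TASK_AND_SUBMIT_FINAL_OUTPUT"]:
              return output                    # the agent declared itself done
          messages.append({"role": "user", "content": f"Command output:\n{output}"})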

Pretty sure you also need about 120 lines of prompting from default.yaml

https://github.com/SWE-agent/mini-swe-agent/blob/7e125e5dd49...


  system_template: str = "You are a helpful assistant that can do anything."
anything? Sounds like an AI Safety issue ;)

You’d be surprised at the amount of time wasted because LLMs “think” they can’t do something. You’d be less surprised at how often they “think” they can’t do something and instead choose some plainly ignorant path that cannot work.

There are theoretically impossible things to do, if you buy into only the basics. If you open your mind, anything is achievable; you just need to break out of the box you’re in.

If enough people keep feeding in that we need a time machine, the revolution will play out in all the timelines. Without it, Sarah Connor is lost.


> 1. Analyze the codebase by finding and reading relevant files
> 2. Create a script to reproduce the issue
> 3. Edit the source code to resolve the issue
> 4. Verify your fix works by running your script again
> 5. Test edge cases to ensure your fix is robust

This prompt snippet from your instance template is quite useful. I use something like this for getting out of debug loops:

> Analyse the codebase and brainstorm a list of potential root causes for the issue, and rank them from most likely to least likely.

Then create scripts or add debug logging to confirm whether your hypothesis is correct. Rule out root causes from most likely to least by executing your scripts and observing the output in order of likelihood.


Does this mean it's only useful for issue fixes?

A feature is just an issue. The issue is that the feature isn't complete yet.

When a problem is entirely self-contained in a file, it's very easy to edit it with an LLM.

That's not the case with a codebase, where things are littered around in tune with the specific model of organisation the developer had in mind.



> in tune with the specific model of organisation

You wish


Nice, but sad to see lack of tools. Most of your code is about the agent framework rather than anything specific to SWE.

I've built a SWE agent too (for fun), check it out => https://github.com/myriade-ai/autocode


> sad to see lack of tools.

Lack of tools in mini-swe-agent is a feature. You can run it with any LLM no matter how big or small.


I'm trying to understand what this has got to do with LLM size. Imho, the right tools allow small models to perform better than an undirected tool like bash for doing everything. But I understand that this code is meant to show people how function calling is just a template for the LLM.

Mini-swe-agent, as an academic tool aimed at showing the power of a simple idea, can be easily tested against any LLM. You can go and test it with different LLMs. Tool calls usually don't work well with smaller LLMs. I don't see many viable alternatives under 7GB, beyond Qwen3 4B, for tool calling.

> the right tools allow small models to perform better than an undirected tool like bash for doing everything.

Interestingly enough, the newer mini-swe-agent was a refutation, for very large LLMs, of the hypothesis from the original SWE-agent paper (https://arxiv.org/pdf/2405.15793) that specialized tools work better.


Cheers, I'll add it in.

What sort of results have you had from running it on its own codebase?

[I'm one of the co-creators of SWE-bench] The team managed to improve on the already very strong o3 results on SWE-bench, but it's interesting that we're just seeing an improvement of a few percentage points. I wonder if getting to 85% from 75% on Verified is going to take as long as it took to get from 20% to 75%.


I could be completely off base, but it feels to me like benchmaxxing is going on with SWE-bench.

Look at the results from multi swe bench - https://multi-swe-bench.github.io/#/

swe polybench - https://amazon-science.github.io/SWE-PolyBench/

Kotlin bench - https://firebender.com/leaderboard


I kind of had the feeling LLMs would be better at Python vs other languages, but wow, the difference on Multi SWE is pretty crazy.


Maybe a lot of the difference we see between people's comments about how useful AI is for their coding is a function of what language they're using. Python coders may love it; Go coders, not so much.


Not sure what you mean by benchmaxxing, but we think there's still a lot of useful signal you can infer from SWE-bench-style benchmarking.

We also have SWE-bench Multimodal which adds a twist I haven't seen elsewhere: https://www.swebench.com/multimodal.html


I mean that there is the possibility that SWE-bench is being specifically targeted for training, and the results may not reflect real-world performance.


How long did it take to go from 20% to 75%?




Indeed a bitter lesson. I once enjoyed encoding human knowledge into a computer because it gave me an understanding of what was going on. Now everything is becoming a big black box that is hard to reason about. /sigh/

Also, Moore's law has become a self-fulfilling prophecy. Now more than ever, AI is putting a lot of demand on computational power, to the point that it is driving chip makers to create specialized hardware for it. It's becoming a flywheel.


I am still hoping AI progress will get to the point where the AI can eventually create AI's that are built up out of robust and provable logic which can be read and audited. Until that time, I wouldn't trust it for risky stuff. Unfortunately, it's not my choice and within a scarily short timespan, black boxes will make painfully wrong decisions about vital things that will ruin lives.


AI assisted theorem provers will go a bit in that direction. You may not know exactly how they managed to construct a proof, but you can examine that proof in detail and verify its correctness.


Yes, I have a small team (me being 1/3 of it) doing formal verification in my company, and we do this. It doesn't actually matter how the AI got there; we can mathematically say the result is correct, which is what matters. We do (and did) program synthesis and proofs, but this is all very far from doing anything serious at scale.


What kind of company needs formal verification? Real time systems?


Companies designing digital circuits use it all the time.

Say you have a module written in VHDL or Verilog and it is passing regressions and everyone is happy. But as the author, you know the code is kind of a mess and you want to refactor the logic. Yes, you can make your edits and then run a few thousand directed tests and random regressions and hope that any error you might have made will be detected. Or you can use formal verification and prove that the two versions of your source code are functionally identical. And the kicker is it often takes minutes to formally prove it, vs hundreds to thousands of CPU hours to run a regression suite.
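
(The same counterexample-hunting idea is easy to play with outside the EDA tools. Here's a toy sketch using the z3-solver Python package, with two made-up expressions standing in for the "original" and "refactored" logic:)

  # Toy equivalence check with an SMT solver (z3-solver package).
  # The two expressions stand in for original vs. refactored logic.
  from z3 import BitVec, Solver, Not, sat

  a, b = BitVec("a", 8), BitVec("b", 8)
  original   = (a + b) * 2
  refactored = (a << 1) + (b << 1)        # same function, different structure

  s = Solver()
  s.add(Not(original == refactored))      # ask for any input where they differ
  if s.check() == sat:
      print("counterexample:", s.model())
  else:
      print("proven equivalent for all 8-bit inputs")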

At some point the source code is mapped from an RTL language to gates, and later those gates get mapped to a mask set. The software to do that is complex and can have bugs. The fix is to extract the netlist from the masks and then formally verify that the extracted netlist matches the original RTL source code.

If your code has assertions (and it should), formal verification can be used to find counter examples that disprove the assertion.

But there are limitations. Often logic is too complex and the proof is bounded: it can show that from some initial state no counter example can be found in, say, 18 cycles, but there might be a bug that takes at least 20 cycles to expose. Or it might find counter examples and you find it arises only in illegal situations, so you have to manually add constraints to tell it which input sequences are legal (which often requires modeling the behavior of the module, and that itself can have bugs...).

The formal verifiers that I'm familiar with are really a collection of heuristic algorithms and a driver which tries various approaches for a certain amount of time before switching to a different algorithm to see if that one can crack the nut. Often, when a certain part of the design can be proven equivalent, it aids in making further progress, so it is an iterative thing, not a simple "try each one in turn". The frustrating thing is you can run formal on a module and it will prove there are no violations with a bounded depth of, say, 32 cycles. A week later a new release of your formal tool comes out with bug fixes and enhancements. Great! And now that module might have a proof depth of 22 cycles, even though nothing changed in the design.


Real time / embedded / etc for money handling, healthcare, aviation/transport... And 'needs' is a loaded term; the biggest $ contributors to formal verification progress are blockchain companies these days while a lot of critical systems are badly written, outsourced things that barely have tests.

My worst fear, which is coming true because it works-ish, is vague/fuzzy systems becoming the software, because they're so like humans and we don't have anything else. It's a terrible idea, but of course we are in a hurry.


> AI can eventually create AI's that are built up out of robust and provable logic

That's the approach behind Max Tegmark and Steven Omohundro's "Provably Safe AGI":

https://arxiv.org/abs/2309.01933

https://www.youtube.com/watch?v=YhMwkk6uOK8

However, there are issues. How do you even begin to formalize concepts like human well-being?


> However there are issues. How do you even begin to formalize concepts like human well-being?

Oh agreed! But with AI we might(!) have the luxury to create different types of brains: logically correct brains for space flight, building structures (or at least the calculations), taxes, accounting, physics, math, etc., and brains with feelings for many other things. Have those cooperate.

ps. thanks for the links!


The only problem is that "logical correctness" depends on the limits of the human brain too: formal logic is based on the usual pre-accepted assumptions and definitions ("axioms").

This is what I consider the limit of the human mind: we have to start with a few assumptions we can't "prove" to build even a formal logic system which we then use to build all the other provably correct systems, but we still add other axioms to make them work.

It's hard for me to even think how AI can help with that.


Quis custodiet ipsos custodes?

https://en.m.wikipedia.org/wiki/Quis_custodiet_ipsos_custode...

excerpt of the first few paragraphs, sorry about any wrong formatting, links becoming plain text, etc. just pasted it as is:

Quis custodiet ipsos custodes? is a Latin phrase found in the Satires (Satire VI, lines 347–348), a work of the 1st–2nd century Roman poet Juvenal. It may be translated as "Who will guard the guards themselves?" or "Who will watch the watchmen?".

The original context deals with the problem of ensuring marital fidelity, though the phrase is now commonly used more generally to refer to the problem of controlling the actions of persons in positions of power, an issue discussed by Plato in the Republic. It is not clear whether the phrase was written by Juvenal, or whether the passage in which it appears was interpolated into his works.

Original context:

The phrase, as it is normally quoted in Latin, comes from the Satires of Juvenal, the 1st–2nd century Roman satirist. Although in its modern usage the phrase has wide-reaching applications to concepts such as tyrannical governments, uncontrollably oppressive dictatorships, and police or judicial corruption and overreach, in context within Juvenal's poem it refers to the impossibility of enforcing moral behaviour on women when the enforcers (custodes) are corruptible (Satire 6, 346–348):

audio quid ueteres olim moneatis amici, "pone seram, cohibe." sed quis custodiet ipsos custodes? cauta est et ab illis incipit uxor.

I hear always the admonishment of my friends: "Bolt her in, constrain her!" But who will watch the watchmen? The wife plans ahead and begins with them!


Apologies for taking the phrase in a slightly farcical (& incurious ?) direction:

   Who will take custody of the custodians?


#!/usr/bin/badlatininterpreter

no comprendere tu commentum

but

apologia unneeded est


"Take custody" => infantilize, as of children => handling people with power like children => copium, wankery

Apologia not uh in the realm of consideration, marginally insightful because shitty latin marginally enjoyable


Well, take compiler optimization for example. You can allow your AI to use correctness-preserving transformations only. This will give you correct output no matter how weird the AI behaves.

The downside is that you will sometimes not get the optimizations that you want. But, this is sort of already the case, even with human made optimization algorithms.
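
Roughly the shape of it (a toy sketch of my own, not any real compiler): the model only gets to choose which rewrite to apply from a fixed, semantics-preserving set, so whatever it picks, the output still computes the same thing.

  # Toy sketch: the "AI" only chooses which correctness-preserving rewrite
  # to apply next, so the result is always semantically equal to the input.
  # Expressions are tuples: ("const", n), ("add", x, y), ("mul", x, y).
  import random

  def strength_reduce(e):                  # x * 2  ->  x + x
      if e[0] == "mul" and e[2] == ("const", 2):
          return ("add", e[1], e[1])
      return e

  def fold_constants(e):                   # const op const -> const
      if e[0] in ("add", "mul") and e[1][0] == e[2][0] == "const":
          f = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}[e[0]]
          return ("const", f(e[1][1], e[2][1]))
      return e

  RULES = [strength_reduce, fold_constants]

  def optimize(expr, steps=10):
      for _ in range(steps):
          rule = random.choice(RULES)      # stand-in for a learned policy's choice
          expr = rule(expr)                # every rule preserves meaning
      return expr

  print(optimize(("mul", ("add", ("const", 1), ("const", 2)), ("const", 2))))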


This depends a little bit on what the goal of AI research is. If it is (and it might well be) to build machines that excel at tasks previously thought to be exclusively reserved to, or needing to involve, the human mind, then these bitter lessons are indeed worthwhile.

But if you do AI research with the idea that by teaching machines how to do X, we might also be able to gain insight in how people do X, then ever more complex statistical setups will be of limited information.

Note that I'm not taking either point of view here. I just want to point out that perhaps a more nuanced approach might be called for here.


> if you do AI research with the idea that by teaching machines how to do X, we might also be able to gain insight in how people do X, then ever more complex statistical setups will be of limited information

At the very least we know consistent language and vision abilities don't require lived experience. That is huge in itself, it was unexpected.


> At the very least we know consistent language and vision abilities don't require lived experience.

I don't think that's true. A good chunk of the progress done in the last years is driven by investing thousand of man-hours asking them "Our LLM failed at answering X. How would you answer this question?". So there's definitely some "lived experience by proxy" going on.


Is that true though given e.g. the hallucinations you regularly get from LLMs?


> In computer vision, there has been a similar pattern. Early methods conceived of vision as searching for edges, or generalized cylinders, or in terms of SIFT features. But today all this is discarded. Modern deep-learning neural networks use only the notions of convolution and certain kinds of invariances, and perform much better.

I was there, at the moment when pattern matching for vision started to die. It was not completely lost, though; what we learned in that era is still useful in other places today.


I was an undergrad interning in a computer vision lab in the early 2010s. During a group meeting, someone presented a new paper that was using abstract machine-learning-like stuff to do vision. The prof was visibly perturbed and dismissive. He could not believe that this approach was even a little bit viable, when it so clearly was.

Best lesson for me - vowed never to be the person opposed to new approaches that work.


> Best lesson for me - vowed never to be the person opposed to new approaches that work.

I think you'll be surprised at how hard that will be to do. The reason many people feel that way is that (a) they've become an (often recognized) expert in the old approach, and (b) they make significant money (or derive some other benefit) from it.

At the end of the day, when a new approach greatly encroaches into your way of life -- you'll likely push back. Just think about the technology that you feel you derive the most benefit from today. And then think if tomorrow someone created something marginally better at its core task, but for which you no longer reap any of the rewards.


Of course it is difficult, for precisely the reasons you indicate. It's one of those lifetime skills that you have to continuously polish, and if you fall behind it is incredibly hard to recover. But such skills are necessary for being a resilient person.


You are acting like it was obvious that machine learning was the future, but this person was just stubborn. I don't think that was necessarily the case in the early 2010s and skepticism was warranted. If you see results and ignore them, sure that is a problem. But it wasn't until ML vision results really started dominating conferences such as CVPR that it became clear. It's all a tradeoff of exploration/exploitation.


Oof. Imagine the bitter lesson classical NLP practitioners learned. That paper is as true today as ever.


This describes Go AIs as a brute force strategy with no heuristics, which is false as far as I know. Go AIs don't search the entire sample space; they search guided by their training data of previous human games.


First there was AlphaGo, which learnt from human games and then further improved through self-play; then there was AlphaGo Zero, which taught itself from scratch purely by self-play, not using any human data at all.

Game programs like AlphaGo and AlphaZero (chess) are all brute force at core - using MCTS (Monte Carlo Tree Search) to project all potential branching game continuations many moves ahead. Where the intelligence/heuristics come into play is in pruning away unpromising branches from this expanding tree to keep the search space under control; this is done by using a board evaluation function to assess the strength of a given board position and decide whether it is worth continuing to evaluate that potential line of play.

In DeepBlue (old IBM "chess computer" that beat Kasparov) the board evaluation function was hand written using human chess expertise. In modern neural-net based engines such as AlphaGo and AlphaZero, the board evaluation function is learnt - either from human games and/or from self-play, learning what positions lead to winning outcomes.

So, not just brute force, but that (MCTS) is still the core of the algorithm.
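
For anyone who wants to see the skeleton, here's a heavily simplified sketch (a toy take-1-to-3-stones game and a random stand-in for the value network; nothing like the real engines):

  # Heavily simplified MCTS with a "value function" replacing rollouts,
  # on a toy take-1-to-3-stones game. Purely illustrative.
  import math, random

  def legal_moves(state):                  # state = stones left in the pile
      return [m for m in (1, 2, 3) if m <= state]

  def value_fn(state):                     # stand-in for a trained value net:
      return random.uniform(-1, 1)         # estimated value for the player to move

  class Node:
      def __init__(self, state):
          self.state, self.children = state, {}
          self.visits, self.value_sum = 0, 0.0

  def select_child(node, c=1.4):           # UCT: exploit strong lines, explore rare ones
      return max(node.children.values(),
                 key=lambda ch: ch.value_sum / (ch.visits + 1e-9)
                 + c * math.sqrt(math.log(node.visits + 1) / (ch.visits + 1e-9)))

  def search(root, n_sims=200):
      for _ in range(n_sims):
          node, path = root, [root]
          while node.children:                     # selection: walk down to a leaf
              node = select_child(node)
              path.append(node)
          for m in legal_moves(node.state):        # expansion
              node.children[m] = Node(node.state - m)
          value = value_fn(node.state)             # evaluation (no random rollout)
          for n in reversed(path):                 # backup, flipping sign each ply
              n.visits += 1
              n.value_sum += value
              value = -value
      return max(root.children, key=lambda m: root.children[m].visits)

  print("chosen move:", search(Node(10)))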


This is a somewhat uninteresting matter of semantics, but I think brute force generally refers to exhaustive search. MCTS is not brute force for that very reason (the vast majority of branches are never searched at all).


OK, but I think it's generally understood that exhaustive search is not feasible for games like Chess and Go, so when "brute force" is used in this context it means an emphasis on deep search and the number of positions evaluated, rather than the human approach, where orders of magnitude fewer positions are evaluated.


I think that kind of erodes the meaning of the phrase. A typical MCTS run for alphazero would evaluate what, like 1024 rollouts? Maybe less? That's a drop in the ocean compared to the number of states available in chess. If you call that brute force then basically everything is.

I've personally viewed well over a hundred thousand rollouts in my training as a chess bot =P


> Game programs like AlphaGo and AlphaZero (chess) are all brute force at core -

What do you call 2500 years of human game play if not brute force? Cultural evolution took 300K years, quite a lot of resources if you ask me.


Those 2500 years of game play are reflected in chess theory and book openings, which you might consider pre-training as opposed to test-time compute.

A human grandmaster might calculate 20-ply ahead, but only for a very limited number of lines, unlike a computer engine that may evaluate millions of positions for each move.

Pattern matching vs search (brute force) is a trade off in games like Chess and Go, and humans and MCTS-based engines are at opposite ends of the spectrum.


Either you missed an /s or I am very interested to hear you unpack this a little bit. If you are serious, it just turns "brute force" into a kind of empty signifier anyway.

What do you call the attraction of bodies if not love? What is an insect if not a little human?


> ... This describes Go AIs as a brute force strategy with no heuristics ...

no, not really, from the paper

>> Also important was the use of learning by self play to learn a value function (as it was in many other games and even in chess, although learning did not play a big role in the 1997 program that first beat a world champion). Learning by self play, and learning in general, is like search in that it enables massive computation to be brought to bear.

the important notion here is, imho, "learning by self play". the required heuristics emerge out of that; they are not programmed in.


The paragraph on Go AI looked accurate to me. Go AI research spent decades trying to incorporate human-written rules about tactics and strategy. None of that is used any more, although human knowledge is leveraged a bit in the strongest programs when choosing useful features to feed into the neural nets. (Strong) Go AIs are not trained on human games anymore. Indeed they don't search the entire sample space when they perform MCTS, but I don't see Sutton claiming that they do.


I remember the article, and remember how badly it missed the point... The goal of writing a chess program that could beat a world champion wasn't to beat the world champion... the goal was to gain insight into how anyone can play chess well. The victory in that match would've been equivalent to e.g. drugging Kasparov prior to the match, or putting a gun to his head and telling him to lose: even cheaper and more effective.


"The goal of Automated driving is not to drive automatically but to understand how anyone can drive well"...

The goal of DeepBlue was to beat the human with a machine, nothing more.

While the quest for deeper understanding motivates a lot of research, most AI (read: modern DL) research is not about understanding human intelligence, but about automating things we could not do before. (Understanding human intelligence is nowadays a different field.)


Seems like you missed the point too: I'm not talking about DeepBlue, I'm talking about using the game of chess as a "lab rat" in order to understand something more general. DeepBlue was the opposite of that desire to understand "something more general". It just found a creative way to cheat at chess, like that Japanese pole jumper (I think he was Japanese, cannot find this atm) who, instead of jumping, learned how to climb a stationary pole and in this way won a particular contest.

> most AI (read modern DL) research is not about understanding human intelligence, but automatic things we could not do before.

Yes, and that's a bad thing. I don't care if shopping site recommendations are 82% accurate rather than 78%, or w/e. We've traded an attempt at answering an immensely important question for a fidget spinner.

> Understanding human intelligence is nowadays a different field

And what would that be?


The Bitter Lesson seems to be generally accepted knowledge in the field. Wouldn't that make DeepSeek R1 even more of a breakthrough?


that was “bitter lesson” in action.

for example there are clever ways of rewarding all the steps of a reasoning process to train a network to “think”. but deepseek found these don’t work as well as much simpler yes/no feedback on examples of reasoning.


nice read and insightful


I'm one of the co-authors of SWE-bench. We just created a Javascript (+visual) SWE-bench: https://www.swebench.com/multimodal.html

We're going to release the eval suite for this soon so that people can start making submissions.


Thanks for posting this! I'm here if you have any questions.


The ALiBi paper shows that our method beats the sinusoidal PE you refer to across many benchmarks. https://arxiv.org/abs/2108.12409


(I wrote ALiBi)

Thanks for posting this! You can view a video where I explain what we did and why it's useful at: https://www.youtube.com/watch?v=Pp61ShI9VGc


Thanks a lot! I always felt weird about positional embeddings, because positions are not a set, they’re a continuum. My initial guess for why they don’t extrapolate was that the extrapolated embeddings step on the others’ turf once a few computations or layers are applied, causing the model to be confused about order, as if random concepts were inserted here and there. (Position overfit seems like it would weigh in though indeed.)

Have you experimented with nonlinear biases?


Is ALiBi still the sota for this setting, or have there been advances beyond this in the last 8 months? I know there has been a lot of interest in longer context lengths recently.


xpos is SoTA right now: https://arxiv.org/pdf/2212.10554.pdf


Thanks!


If I understand it correctly, you are only attending to preceding tokens in your paper. Can the constant bias matrix be made symmetric for unmasked tasks?


I’m curious as to whether this inductive bias wouldn’t hurt on tasks where the first sentence of a long corpus would contain the most useful information.

Nonetheless, very clever trick and congrats on the great paper!


(I wrote ALiBi) You can read the paper here https://arxiv.org/abs/2108.12409

While intuitively it does seem like ALiBi would make it hard for the model to attend to things that are far away, in many scenarios we've tested with different models trained on different datasets, ALiBi always performs better than sinusoidal, rotary, and other embedding types, even when we're not using it to extrapolate to longer sequence lengths.

These findings have been confirmed by others, including by the BLOOM open source LM project.
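
For the curious, the bias itself is tiny to write down. A simplified sketch (not the repo code) of what gets added to the pre-softmax attention scores:

  # Simplified sketch of the ALiBi bias: a per-head linear penalty on
  # pre-softmax attention scores, proportional to key-query distance.
  import torch

  def alibi_bias(n_heads, seq_len):
      # geometric slopes, e.g. 1/2, 1/4, ..., 1/256 for 8 heads
      slopes = torch.tensor([2.0 ** (-8.0 * (h + 1) / n_heads) for h in range(n_heads)])
      pos = torch.arange(seq_len)
      dist = (pos[None, :] - pos[:, None]).clamp(max=0)   # j - i, zero for future tokens
      return slopes[:, None, None] * dist                 # (heads, seq, seq), all <= 0

  heads, seq = 8, 16
  scores = torch.randn(1, heads, seq, seq)                # pre-softmax attention logits
  scores = scores + alibi_bias(heads, seq)                # no position embeddings needed
  causal = torch.triu(torch.ones(seq, seq), diagonal=1).bool()
  attn = torch.softmax(scores.masked_fill(causal, float("-inf")), dim=-1)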


Small world!

Thanks for the link (which I've now skimmed beyond the abstract). What wasn't obvious to me from the abstract is that different attention heads have different penalty strengths, so if some prediction task requires long range dependencies you might expect one of the less-penalized heads to end up specializing. I wonder what would happen if the penalty for one head is zero? (The paper suggests this might've been tried and just made things worse, but unclear)

I must admit that this is a wonderfully elegant (and interpretable) way to do this... much more intuitive (to me at least, a wannabe practitioner) than all of the trig-based embeddings.


> so if some prediction task requires long range dependencies you might expect one of the less-penalized heads to end up specializing

Exactly. You have heads that focus on content nearby and ones that focus on stuff that is far away.

> I wonder what would happen if the penalty for one head is zero? (The paper suggests this might've been tried and just made things worse, but unclear)

Yup, this is something we tried. Setting the penalty to zero for one of the heads doesn't improve or degrade performance.

> I must admit that this is a wonderfully elegant (and interpretable) way to do this... much more intuitive (to me at least, a wannabe practitioner) than all of the trig-based embeddings.

Thanks so much!!


Cool new efficient inference method that saves 2x memory and does not degrade performance for large language models!

More from the author about this at: https://twitter.com/Tim_Dettmers/status/1559892888326049792
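
If you want to try it, usage through the Hugging Face transformers integration with bitsandbytes looks roughly like this (a sketch; the model name is just an example):

  # Sketch of 8-bit inference via the bitsandbytes integration in
  # Hugging Face transformers (model name is just an example).
  from transformers import AutoModelForCausalLM, AutoTokenizer

  name = "facebook/opt-6.7b"
  tok = AutoTokenizer.from_pretrained(name)
  model = AutoModelForCausalLM.from_pretrained(
      name,
      device_map="auto",       # place layers on available GPUs
      load_in_8bit=True,       # int8 weights: roughly half the memory
  )
  inputs = tok("Hello, my name is", return_tensors="pt").to(model.device)
  out = model.generate(**inputs, max_new_tokens=20)
  print(tok.decode(out[0], skip_special_tokens=True))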


Thanks for posting our paper! If anyone has any questions, I'll stick around this thread for a bit.

There's a summary of our paper on twitter: https://twitter.com/OfirPress/status/1344387959563325442

And our code is on GitHub: https://github.com/ofirpress/shortformer


No questions. After giving it just a quick skim, this paper looks like great work. The findings are remarkable, and they're presented in clear, to-the-point language.

I confess to being a bit shocked that given the same number of parameters, training is 1.65x faster (whoa), generation is 9x faster (wait, what!?), and perplexity is better (which is a flawed measure, but still), and all by using a new form of "curriculum learning" and adding position embeddings to the queries and keys but not the values.
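
If I understand that last part right, the core of it is something like this (my own sketch, not the authors' code; causal masking omitted for brevity):

  # My own sketch of the idea (not the Shortformer code): add position
  # embeddings to the inputs of the query/key projections, but not values.
  import torch
  import torch.nn.functional as F

  def pia_attention(x, pos, wq, wk, wv):
      # x: (seq, d) token representations; pos: (seq, d) position embeddings
      q = (x + pos) @ wq                   # positions go into queries...
      k = (x + pos) @ wk                   # ...and keys,
      v = x @ wv                           # ...but values stay position-free
      scores = (q @ k.T) / k.shape[-1] ** 0.5
      return F.softmax(scores, dim=-1) @ v

  seq, d = 16, 64
  x, pos = torch.randn(seq, d), torch.randn(seq, d)
  wq, wk, wv = (torch.randn(d, d) for _ in range(3))
  out = pia_attention(x, pos, wq, wk, wv)  # (seq, d)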

And it's so nice to see new ideas and improvements that don't rely on yet more computation or yet more parameters (I'm looking at you, GPT-3).

Congratulations!


Thank you! We spent a lot of time on making this as easy to understand as possible.


What's "perplexity" a measure of? First I've heard of it.


e^loss. It's a bad name for a confusing concept: Loss. (e^loss is just another way of plotting loss, after all.)
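
Concretely, with toy numbers:

  # Perplexity is just exp(average per-token cross-entropy, in nats).
  import math
  token_nlls = [3.2, 3.0, 2.9]             # per-token negative log-likelihoods
  print(math.exp(sum(token_nlls) / len(token_nlls)))   # ~20.8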

Loss isn't the whole story -- the steepest slope during training often produces the worst quality language models. You want a nice, gentle downward slope.

SubsimulatorGPT2 (https://reddit.com/r/subsimulatorgpt2) continued to improve in terms of human evaluation even though the loss stayed flat for over a week.

