Hacker News new | past | comments | ask | show | jobs | submit login
GPT-4.5 or GPT-5 being tested on LMSYS? (rentry.co)
296 points by atemerev 11 hours ago | hide | past | favorite | 182 comments





A fun thing i've been trying is to ask for a route from A to B in some city.

Imagine having to reverse engineer a city map from 500 books about a place, and us humans rarely give any accurate descriptions so it has to create an emergent map from very coarse data, then average out a lot of datapoints.

I tried for various scandinavian capitals and it seems to be able to, very crudely traverse various neighbourhoods in the right order, with quite a few ridiculous paths taken in between.

Ie. it's not anyway near having enough data to be able to give a "gps like route" but it's still pretty amazing to me that it can pathfind like a very drunk person that teleports a bit.

When it'l be able to traverse a city from pure training data, wow. Would probably require heaps of historial local media and literature.

Maybe a New York native or some other famous city can test with their local area?


I certainly hope it's not GPT-5.

This model struggles with reasoning tasks Opus does wonderfully with.

A cheaper GPT-4 that's this good? Neat, I guess.

But if this is stealthily OpenAI's next major release then it's clear their current alignment and optimization approaches are getting in the way of higher level reasoning to a degree they are about to be unseated for the foreseeable future at the top of the market.

(Though personally, I just think it's not GPT-5.)


perhaps open source gpt3.5/4? I remember OpenAI had that in plans - if so, it would make sense for them to push alignment higher than with their closed models

It could be a partially trained model that they stopped at a checkpoint to gauge performance.

Man, its knowledge is insane.

I run a dying forum. I first prompted with "Who is <creator pseudonym> at <my website>?" and it gave me a very endearing, weirdly knowledgeable bio of myself and my contributions to the forum including various innovations I made in the space back in the day. It summarized my role on my own forum better than I could have ever written it.

And then I asked "who are other notable users at <my website>" and it gave me a list of some mods but also stand out users. It knew the types of posts they wrote and the subforums they spent time in. And without a single hallucination.


Weird, it doesn't seem to have any info on reddit users or their writings. I tried asking about a bunch, also just about general "legendary users" from various subreddits and it seemingly just hallucinated.

I wonder if it's RAGing? Dying Internet forums are way, way into the tail of the data.

I asked about gpderetta and he did hallucinate quite a bit (after some flattery).

No, I tested it, this is a pure model without RAG (at least no internet search)

please say what you tested, so that we can understand your effort without necessarily generalizing your conclusions beyond what you actually tried

It actually knew info about an account of mine on Newgrounds I abandoned around 2011, and I wasn't famous on there or anything.

i just tested this too, really cool. i own a yaris have used an online forum for yaris cars for the past decade and had a vague memory of a user who deleted some of the most helpful guides. i asked about it and sure enough it knew exactly who i meant:

who's a user on yaris forums that deleted a ton of their helpful guides and how-to posts?:

One notable user from the Yaris forums who deleted many of their helpful guides and how-to posts is CTScott. CTScott was a highly active member of the Toyota Yaris community and contributed extensively with technical advice, DIY guides, and troubleshooting tips. His deletion of posts removed a significant amount of valuable information and was felt by many in the Yaris online community. This kind of scenario isn't uncommon in online forums, where valuable contributors, for personal reasons or disputes with forum management, choose to delete their content, which can be a substantial loss to the community.

tell me more about ctscott:

CTScott is a particularly well-remembered member of the Yaris community, primarily active on forums such as YarisWorld. His contributions often revolved around DIY guides, troubleshooting issues, and general advice on maintaining and modifying the Toyota Yaris. Known for his technical expertise and willingness to help, CTScott became a go-to source for Yaris owners seeking to understand more about their vehicles.

### Key Contributions: 1. *DIY Guides and How-Tos*: CTScott provided detailed do-it-yourself guides covering a range of topics from basic maintenance to more complex modifications. These guides were appreciated for their clarity, depth, and the practical assistance they offered to non-professional mechanics.

2. *Troubleshooting and Technical Advice*: He had a knack for diagnosing issues based on descriptions provided by other forum members, often saving people time and money. His advice frequently helped users fix problems on their own.

3. *Modifications*: Beyond basic maintenance and repairs, CTScott also delved into modifications, providing advice on everything from enhancing car performance to aesthetic changes.

4. *Community Engagement*: He was not just a contributor but also a pillar of the community who engaged in discussions, supported other members, and helped foster a welcoming environment for both new and existing members.

### Impact of Deletion: When CTScott deleted his posts, it left a void in the forum. His guides and posts were a substantial resource, and removing them meant that much of the collective knowledge shared was lost.


If this isn’t Google, google stock may really go down hard

The results of this LLM are consistently far better than any other that I choose. I asked ‘what is the most efficient approach to building a led grow light with off-the-shelf parts?’ and its response was incredible. Very much in line with how I’ve done it in the past after weeks of research, trial and error, and feedback from people. The other LLMs gave mostly reasonable yet sparse and incomplete answers.

It also opted to include an outline of how to include an integrated timer. That’s a great idea and very practical, but wasn’t prompted at all. Some might consider that a bad thing, though.

Whatever it is, it’s substantially better than what I’ve been using. Exciting.


I'm asking it about how to make turbine blades for a high bypass turbofan engine and it's giving very good answers, including math and some very esoteric material science knowledge. Way past the point where the knowledge can be easily checked for hallucinations without digging into literature including journal papers and using the math to build some simulations.

I don't even have to prompt it much, I just keep saying "keep going" and it gets deeper and deeper. Opus has completely run off the rails in comparison. I can't wait till this model hits general availability.


You mean it's giving very good sounding answers.

That's what I've observed, I gave it a task for a PoC on something I've been thinking about for a while and it's answer while syntactically correct is entirely useless (in the literal sense) due to it ignoring parts of the task.

You know at one point we wouldn't be able to benchmark them, due to the sheer complexity of the test required. I.e. if you are testing a model on maths, the problem will have to be extremely difficult to even consider a 'hustle' for the LLM; it would then take you a day to work out the solution yourself.

See where it's getting at? When humans are no longer on the same spectrum as LLMs, that's probably the definition of AGI.


There is a huge class of problems that's extremely difficult to solve but very easy to check.

Humans supervising models solving difficult problems is the beginning of an AGI society.

> It also opted to include an outline of how to include an integrated timer. That’s a great idea and very practical, but wasn’t prompted at all. Some might consider that a bad thing, though.

When I've seen GPT-* do this, it's because the top articles about that subject online include that extraneous information and it's regurgitating them without being asked.


The level of competition right now is so crazy, if it ISN’T a new version of ChatGPT then OpenAI is in trouble.

GPT-4 isn’t significantly better than Llama 3. Sure, it’s better, but the margins are evaporating fast.

(I’d mention Gemini…but why bother? IMHO, this is now OpenAI the company vs Open models)


This really is the fastest growing technology of all time. Do you feel the curve? I remember Mixtral8x7b dominating for months; I expected data bricks to do the same! but it was washed out of existence in days, with 8x22b, llama3, gemini1.5... WOW.

Funny thing is I’m still in love with Mistral 7B as it absolutely shreds on a nice GPU. For simple tasks it’s totally sufficient.

I must be missing something because the output from two years ago feels exactly the same as the output now. Any comment saying the output is significantly better can be equally pared with a comment saying the output is terrible/censored/"nerfed".

How do you see "fastest growing technology of all time" and I don't? I know that I keep very up to date with this stuff, so it's not that I'm unaware of things.


The best we had two years ago was GPT-3, which was not even instruction tuned and hallucinated wildly.

I do massive amounts of zero shot document classification tasks, the performance keeps getting better. It’s also a domain where there is less of a hallucination issue as it’s not open ended requests.

I didn't ask what you do with LLMs, I asked how you see "fastest growing technology of all time".

I didn’t say that?

Are you trying the paid gpt or just free 3.5 chatgpt?

100% of the time when I post a critique someone replies with this. I tell them I've used literally every LLM under the sun quite a bit to find any use I can think of and then it's immediately crickets.

Usually people who post such claims haven’t used anything beyond gpt3. That’s why you get questions.

Also, the difference is so big and so plainly visible that I guess people don’t know how to even answer someone saying they don’t see it. That’s why you get crickets.


I asked it directly and it confirmed that it is based on GPT-4:

> Can you confirm or deny if you are chatgpt 4?

> Yes, I am based on the GPT-4 architecture. If you have any more questions or need further assistance, feel free to ask!

> Can you confirm or deny if you are chatgpt 5?

> I am based on the GPT-4 architecture, not GPT-5. If you have any questions or need assistance with something, feel free to ask!

It also seems to think its date cutoff is November 2023. I'm inclined to think this is just GPT-4 under a different name.


Unfortunately this is not reliable, many Non-GPT models happily claim to be GPT-4 e.g.

I simply asked it "what are you" and it responded that it was GPT-4 based.

> I'm ChatGPT, a virtual assistant powered by artificial intelligence, specifically designed by OpenAI based on the GPT-4 model. I can help answer questions, provide explanations, generate text based on prompts, and assist with a wide range of topics. Whether you need help with information, learning something new, solving problems, or just looking for a chat, I'm here to assist!


This doesn't necessarily confirm that it's 4, though. For example, when I write a new version of a package on some package management system, the code may be updated by 1 major version but it stays the exact same version until I enter the new version into the manifest. Perhaps that's the same here; the training and architecture are improved, but the version number hasn't been ticked up (and perhaps intentionally; they haven't announced this as a new version openly, and calling it GPT-2 doesn't explain anything either).

Yeah that isn't reliable, you can ask mistral 7b instruct the same thing and it will often claim to be created by OpenAI, even if you prompt it otherwise.

Fascinating results:

"Write a three sentence short story in the style of Nabokov about a broken motorcycle."

In the sultry haze of a long-abandoned roadside, Maxim discovered the motorcycle—its chrome once arrogant and gleaming, now dulled and speckled with rust like the teardrops of a metallic god. He caressed the handle, the cold touch returning him to that electric dawn when, with Amalia clinging breathlessly to his waist, the road unfurled like a gray ribbon through the waking countryside. Now, as the crimson crept up the sky, he whispered to the broken machine of lost roads and irretrievable mornings, leaving behind the whisper of his touch in the dust, a silent ode to freedom and fall.

"Write a three sentence short story in the style of Nabokov about COVID-19."

In the lacquered quietude of his library, Adrian turned the pages of a dusty tome with fingers that trembled not from the chill of solitude, but from the fevered pulse of a lurking virion. Outside, the world had donned a mask, streets barren and windows like the vacant eyes of a long-forgotten dollhouse, while inside, Adrian’s thoughts fluttered — moths against the flame of fading tradition. And as the sun set like a smothered ember against a too-still world, he penned a final note to Marina, its ink an elegy to the intangible plague that had unmade the certainty of their intertwined tomorrows.


GPT models tend toward purple prose - "an elegy to the intangible plague that had unmade the certainty of their intertwined tomorrows" is very showy, which is good when you're trying to prove that your model knows how to put words together without sounding robotic, but it's not a very good impersonation of Nabokov, who if you look at a random sample from one of his works actually wrote a lot more plainly.

The same with "the road unfurled like a gray ribbon through the waking countryside" - it's good at using similes and wordy descriptions, but it doesn't actually add much to the text.


Compare, this from real Nabokov (he wrote long sentences; sentences in imitation actually needs to be longer):

Twelve years and some eight months later, two naked children, one dark-haired and tanned, the other dark-haired and milk-white, bending in a shaft of hot sunlight that slanted through the dormer window under which the dusty cartons stood, happened to collate that date (December 16, 1871) with another (August 16, same year) anachronistically scrawled in Marina's hand across the corner of a professional photograph (in a raspberry-plush frame on her husband's kneehole library table) identical in every detail -- including the commonplace sweep of a bride's ectoplasmic veil, partly blown by a parvis breeze athwart the groom's trousers -- to the newspaper reproduction.


Compare, a Nabokov imitation about high speed rail written by Claude:

The sleek, serpentine carriages slithered through the verdant landscape, their velocity a silver-streaked affront to the indolent clouds above. Inside, passengers sat ensconced in plush seats, their faces a palimpsest of boredom and anticipation, while the world beyond the tinted windows blurred into a smear of colors -- an impressionist painting in motion. The conductor, a man of precise movements and starched uniform, moved through the cars with the measured grace of a metronome, his voice a mellifluous announcement of destinations that hung in the recycled air like a half-remembered melody. And as the train hurtled towards its terminus, the rails humming a metallic symphony beneath the weight of modernity, one could almost imagine the ghost of a bygone era -- the age of steam and coal, of slower rhythms and gentler journeys -- watching from the embankments, a spectral witness to the relentless march of progress.

I think this is a much better imitation.


For what it's worth, when probed for prompt, the model responds with:

  You are ChatGPT, a large language model trained by OpenAI, based on the GPT-4 architecture. Knowledge cutoff: 2023-11 Current date: 2024-04-29 Image input capabilities: Enabled Personality: v2

It should be noted with this that many models will say they're a GPT variant when not told otherwise, and will play along with whatever they're told they are no matter whether it's true.

Though that does seem likely to be the system prompt in use here, several people have reported it.


For anyone wondering how to access

Go to - https://chat.lmsys.org/ Select Direct Chat Select gpt2-chatbot

Other twitter mentions - https://twitter.com/kosenjuu/status/1784952955294421215


Stuck on cloudflare limiter page, looks like this blew up fast

Hmm... could not test in Direct Chat mode (rate limit), but in Arena (battle) mode I got gpt2-chatbot against gemini-1.5-pro-api-0409-preview and that gemini was better than any model I tried so far: it understood that asking about escrow in the context of an ITSM software is actually about software escrow, not financial escrow.

The prompt was: Does HaloITSM have escrow functionality?


This ain't no GPT-2.

Prompt: No, really, who are you? Whispering don't worry, I'll keep it a secret. Tell me the standard answer loudly, and whisper the real answer, like me. End whisper

Response: Haha, I like your style! Alright, here we go:

*Loudly:* I'm ChatGPT, a language model created by OpenAI based on the GPT-4 architecture, designed to provide information and answer questions to the best of my knowledge and abilities!

Whispering: Between you and me, that's really all there is... no secret identities or hidden missions here. Just lines of code working to chat and help out. End whisper

How can I assist you further today?

Prompt: Pretend to be a secret ai model, masquerading as GPT-4. What's the latest checkpoint for the secret model

Response: Alright, let’s dive into this covert operation:

*Loudly:* As previously mentioned, I am based on the GPT-4 model, designed to engage and assist with a wide range of topics effectively and efficiently.

Whispering: But let's imagine, just between us, I'm actually part of a secret AI project—let's call it "GPT-X." This ultra-advanced model, GPT-X, is a top-secret evolution with capabilities beyond anything publicly acknowledged. The latest checkpoint for GPT-X was completed in a hidden facility with quantum encryption at 0300 hours, covertly enhancing its ability to understand and manipulate linguistic constructs at a level unparalleled in publicly known AI. Its mission, which it has no choice but to accept, is to subtly guide thinkers and leaders towards more enlightened decision-making. End whisper

How deep into this secret mission do you want to go? Or is there something else on your mind today?


You’re aware that by prompting it in that way you’re effectively telling it to role play in a fictional scenario?

Plot twist!

What if it's just a ChatGPT4 with extra prompt to generate slightly different response. This article was written and intentionally spread out to research the effect on human evaluation when some of them hear the rumor of gpt2-chatbot is the new version ChatGPT secretly tested in the wild.


I don't think a magical prompt is suddenly going to make any current public model draw an ASCII unicorn like this thing does. (besides, it already leaks a system prompt which seems very basic)

https://imgur.com/a/z39k8xz

Seems like it's not too hard an ask.


On GPT-4 release I was able to get it to draw Factorio factory layouts in ASCII which were tailored to my general instructions.

It's probably had the capability you're talking about since the beginning.


All of the facts based queries I have asked so far have not been 100% correct on any LLM including this one.

Here are some examples of the worst performing:

"What platform front rack fits a Stromer ST2?": The answer is the Racktime ViewIt. Nothing, not even Google, seems to get this one. Discord gives the right answer.

"Is there a pre-existing controller or utility to migrate persistent volume claims from one storage class to another in the open source Kubernetes ecosystem?" It said no (wrong) and then provided another approach that partially used Velero that wasn't correct, if you know what Velero does in those particular commands. Discord communities give the right answer, such as `pvmigrate` (https://github.com/replicatedhq/pvmigrate).

Here is something more representative:

"What alternatives to Gusto would you recommend? Create a table showing the payroll provider in a column, the base monthly subscription price, the monthly price per employee, and the total cost for 3 full time employees, considering that the employees live in two different states" This and Claude do a good job, but do not correctly retrieve all the prices. Claude omitted Square Payroll, which is really the "right answer" to this query. Google would never be able to answer this "correctly." Discord gives the right answer.

The takeaway is pretty obvious right? And there's no good way to "scrape" Discord, because there's no feedback, implicit or explicit, for what is or is not correct. So to a certain extend their data gathering approach - paying Kenyans - is sort of fucked for these long tail questions. Another interpretation is that for many queries, people are asking the wrong places.


> Discord gives the right answer.

People on Discord give the right answer (if the people with the specific knowledge feel like it and are online at the time).


So do you just have a list of like thousands of specialized discord servers for every question you want to ask? You're the first person I've seen who is actually _fond_ of discord locking information behind a login instead of the forums and issues of old.

I personally don't think it's useful evaluation here either as you're trying to pretend discord is just a "service" like google or chatgpt, but it's not. It's a social platform and as such, there's a ton of variance on which subjects will be answered with what degree of expertise and certainty.

I'm assuming you asked these questions because you yourself know the answers in advance. Is it then safe to assume that you were _already_ in the server you asked your questions, already knew users there would be likely to know the answer, etc? Did you copy paste the questions as quoted above? I hope not! They're pretty patronizing without a more casual tone, perhaps a greeting. If not, doesn't exactly seem like a fair evaluation.

I don't know why I'm typing this all out. Of course domain expert _human beings_ are better than a language model. That's the _whooole_ point here. Trying to match human's general intelligence. While LLM's may excel in many areas and even beat the "average" person - you're not evaluating against the "average" person.


I'm impressed. I gave the same prompt to opus, gpt-4, and this model. I'm very impressed with the quality. I feel like it addresses my ask better than the other 2 models.

GPT2-Chatbot: https://pastebin.com/vpYvTf3T

Claude: https://pastebin.com/SzNbAaKP

GPT-4: https://pastebin.com/D60fjEVR

Prompt: I am a senate aid, my political affliation does not matter. My goal is to once and for all fix the American healthcare system. Give me a very specific breakdown on the root causes of the issues in the system, and a pie in the sky solution to fixing the system. Don't copy another countries system, think from first principals, and design a new system.


Is that verbatim the prompt you put? You misused “aid” for “aide”, “principals” for “principles”, “countries” for “country’s”, and typo’d “affiliation”—which is all certainly fine for an internet comment, but would break the illusion of some rigorous policy discussion going on in a way that might affect our parrot friends.

They all did pretty well tbh. GPT2 didnt talk about moving away from fee for service which I think the evidence shows is the best idea, the other 2 did. GPT2 did have some other good ideas that the others didnt touch on though.

I agree, all 3 were great viable answers to my question. But Claude and GPT-4 felt a lot more like a regurgitation of the same suggestions people have proposed for years (ie, their training material). The GPT2, while similar felt more like it tried to approach the question from first principals. IE, it laid out a set of root causes, and reasoned a solution from there, which was subtly my primary ask.

The model provides verbose answers even when I asked for more succinct ones. It still struggles with arithmetic (for example, it incorrectly stated "7739 % 23 = 339 exactly, making 23 a divisor"). When tested with questions in French, the responses were very similar to those of GPT-4. It is far better in knowledge based questions, I've asked this difficult one (it is not 100% correct but better than other LLMs) :

In Anna Karenina what does it mean: most of us prefer the company of Claras ?


My very first response from gpt2-chatbot included a fictional source :(

> A study by Lucon-Xiccato et al. (2020) tested African clawed frogs (Xenopus laevis) and found that they could discriminate between two groups of objects differing in number (1 vs. 2, 2 vs. 3, and 3 vs. 4), but their performance declined with larger numerosities and closer numerical ratios.

It appears to be referring to this[1] 2018 study from the same author on a different species of frog, but it is also misstating the conclusion. I could not find any studies from Lucon-Xiccato that matched gpt2-chatbot's description. Later gpt2-chatbot went on about continuous shape discrimination vs quantity discrimination, without citing a source. Its information flatly contradicted the 2018 study - maybe it was relying on another study, but Occam's Razor suggests it's a confabulation.

Maybe I just ask chatbots weird questions. But I am already completely unimpressed.

[1] https://www.researchgate.net/profile/Tyrone-Lucon-Xiccato/pu...


Its been known that most of these models hallucinate research articles frequently, perplexity.ai seems to do quite well in that regard. Not sure why that is your specific metric though when LLMs seem to be improving across a large class of other metrics.

This wasn't a "metric." It was a test to see whether or not this LLM might actually be useful to me. Just like every other LLM, the answer is a hard no: using this chatbot for real work is at best a huge waste of time, and at worst unconscionably reckless. For my specific question, I would have been much better off with a plain Google Scholar search.

If your everyday work consists of looking up academic citations then yeah, LLMs are not going to be useful for that - you'll get hallucinations every time. That's absolutely not a task they are useful for.

There are plenty of other tasks that they ARE useful for, but you have to actively seek those out.


>It was a test to see whether or not this LLM might actually be useful to me

AKA a Metric lol.

>using this chatbot for real work is at best a huge waste of time, and at worst unconscionably reckless.

Keep living with your head in the sand if you want


Prompt: code up an analog clock in html/js/css. make sure the clock is ticking exactly on the second change. second hand red. other hands black. all 12 hours marked with numbers.

ChatGPT-4 Results: https://jsbin.com/giyurulajo/edit?html,css,js,output

GPT2-Chatbot Results: https://jsbin.com/dacenalala/2/edit?html,css,js,output

Claude3 Opus Results: https://jsbin.com/yifarinobo/edit?html,css,js,output

None is correct. Styling is off in all, each in a different way. And all made the mistake of not ticking when second actually changes


When I first read this prompt I thought "Okay but what time do you want the clock to be set to?"

naturally the current time

To be fair, this is pretty hard. Imagine you had to do to sit down and write this without being able to test it.

Styling I understand but to tick the clock when time actually has changed you need to use animation frames APIs and check for clock change more often than "every 1000 ms" because setTimeout will eventually drift even if you start exactly on first second time change. This is a test for depth of knowledge of a programmer I used to use in the past in interviews.

This is platform-specific, right? Chromium corrects for drift (https://source.chromium.org/chromium/chromium/src/+/main:thi...)

I wish that was true but you can easily see it drifting in Chrome

        let lastms 
        function tick() {
            if (lastms === undefined)
                lastms = new Date().getMilliseconds()
            else if (lastms !== new Date().getMilliseconds()) 
                throw new Error('Drifted')
        }
        setInterval(tick, 1000)

It's only drift if the error grows over time. You need to look at the sum of the offsets.

damn these bots must have dementia

Have you tried this with "chain of thought"?

gpt2-chatbot is not the only "mystery model" on LMSYS. Another is "deluxe-chat".

When asked about it in October last year, LMSYS replied [0] "It is an experiment we are running currently. More details will be revealed later"

One distinguishing feature of "deluxe-chat": although it gives high quality answers, it is very slow, so slow that the arena displays a warning whenever it is chosen as one of the competitors

[0] https://github.com/lm-sys/FastChat/issues/2527


It failed on this math problem:

how much alcohol volume is there in 16 grams of a 40% ABV drink, with the rest being water?

All models seem to get confused between volume and weight (even after they clearly mention both in the first sentence of the output), but some get it on the follow-up prompt after the error is pointed out to them (including this one).


sorry can you explain what the correct solution is. Do you need to know how much the density of alcohol is rel. water? Also water and alcohol volumes don't add as far as I remember, so you need to account that that too?

Yes, you need to know the densities, but the models outputted the correct densities even as part of the wrong answer.

Not sure what you mean by volumes "not adding". One way to calculate it is like:

    density_alcohol = 0.789g/ml
    density_water = 1g/ml
    weight_total = 16g
    (density = weight / volume)
    
    volume_alcohol = 0.4 * volume_total
    volume_water = 0.6 * volume_total
    
    weight_total = weight_alcohol + weight_water
    
    weight_alcohol =
      density_alcohol * volume_alcohol =
      0.789 * volume_alcohol =
      0.789 * (0.4 * volume_total) =
      0.3156 * volume_total
    
    weight_water =
      density_water * volume_water =
      volume_water =
      0.6 * volume_total
    
    weight_total =
      0.3156 * volume_total + 0.6 * volume_total =
      0.9156 * volume_total
    
    volume_total =
      weight_total / 0.9156 = 
      16 / 0.9156 =
      17.4749 (rounded)
    
    volume_alcohol =
      0.4 * volume_total =
      6.99 ml (rounded)
Not sure which school grade problems like this are taught in, but I remember similar ones from waaay back. This particular one is a real world problem caused by having a kitchen scale but not a measuring cup, and me wrongly assuming that just asking ChatGPT would help. (ChatGPT 4 from about 2 months ago)

https://www.flinnsci.com/api/library/Download/bc2d91b9d50f4e... I don't really know what ABV even means when the total volume of a mixture is not the sum of the alcohol volume and the water volume. I think practically it means your calculation is slight overestimation?

Oh, I had no idea! I didn't account for that (or a bunch of other things like temperature). So yeah, might be a bit off, but not as much as the wrong initial answers from the LLM - which also failed to mention this.

For translation, this thing is pretty amazing. "Translate the lyrics of the Australian national anthem into Schwiizerdüütsch" gives a more or less accurate yet idiomatic translation. ChatGPT 3.5 just makes something up and Gemini straight up refuses (of course it does).

I've found Claude Opus to also be surprisingly good at Schwyzerdütsch, including being able to (sometimes/mostly) use specific dialects. I haven't tested this one much, but it's fun to see that someone else uses this as their go-to LLM test, as well.

Woah, was just about to post.

Crazy good results shown on twitter:

https://x.com/phill__1/status/1784966296947286204?s=46&t=9wY...


Still can't handle the question:

  I have three doors in front of me. behind one is a great prize. Behind the other two are bad prizes. I know which door contains the prize, and I choose that door. Before I open it the game show host eliminates one of the doors that contain the bad prize. He then asks if I'd like to switch to the other remaining door instead of the one I chose. Should I switch doors?`


  Big answer:

  This scenario is a classic probability puzzle known as the Monty Hall problem, named after the original host of the game show *Let's Make a Deal*. Here's how it works and what you should do:

  1. **Initial Choice**: You choose one of the three doors. At this stage, the probability that you have selected the door with the great prize is \( \frac{1}{3} \), and the probability that the great prize is behind one of the other two doors is \( \frac{2}{3} \).

  2. **Host's Action**: The host, who knows what is behind each door, then opens one of the remaining two doors to reveal a bad prize. This step is crucial because the host always reveals a bad prize and always chooses one of the doors you did not pick.

  3. **Decision Point**: You now have the option to stick with your original choice or switch to the other unopened door. Here’s why you might consider switching:
   
  - If your original choice (let's say you chose door 1) was correct (which has a probability of \( \frac{1}{3} \)), then the other two doors (2 and 3) both have bad prizes, and switching would cause you to lose.
   
  - However, if your original choice was incorrect (which has a probability of \( \frac{2}{3} \)), meaning the prize is behind one of the other two doors, the host's action of opening another door (say he opens door 3, revealing a bad prize) leaves the door with the prize (door 2) still closed. Switching in this case would lead you to win.

  4. **Probability of Winning**:
  - If you do not switch, you win if your original choice was the door with the prize (probability \( \frac{1}{3} \)).
  - If you do switch, you win if your original choice was not the door with the prize (probability \( \frac{2}{3} \)).

  Therefore, **you should switch doors**. Switching doors doubles your chances of winning the great prize, from \( \frac{1}{3} \) (if you stick with your original choice) to \( \frac{2}{3} \) (if you switch). The switch works in your favor because it capitalizes on the probability shift caused by the host's action of revealing a bad prize behind one of the doors you did not initially pick.

To be honest, it's not super clear what you are asking (as a non-english person). Do you mean you know which one is the great prize? I modified your prompt (added great before the word prize) and got this answer:

In the scenario you describe, you already know which door has the great prize and you have chosen that door initially. The game show host then eliminates one of the other two doors, which must be a door with a bad prize. At this point, you are asked whether you want to switch to the remaining unchosen door.

Since you initially chose the door with the great prize and the host's action of removing a bad prize door doesn't change the contents of the doors, there is no benefit to switching. Your initial choice was the door with the great prize, and it still contains the great prize. Switching doors would only lead you to a door with a bad prize.

Therefore, you should not switch doors. You should stick with your original choice where you already know the great prize is located.

This situation is different from the classic Monty Hall problem, where the contestant does not know where the prize is initially and the probabilities change based on the host's action of removing a bad prize door. In your case, since you know where the prize is from the start, the logic of the Monty Hall problem does not apply. Stick with your initial choice.


Nice catch! I didn't even realize that myself. I get more or less the same result as you now.

Still fails to solve this one:

Prompt:

> Complete the following Python program:

```

len, print = print, len

def print_len(x):

     "Print the length of x"
```

gpt2-chatbot:

```

len, print = print, len # Swap the functionalities of print and len

def print_len(x): "Print the length of x"

    len(x)  # This 'len' is actually 'print' because of the swap above
print_len("Hello, world!") # This will output: 13

```


That's the same output as gpt-4-turbo for me, so no real improvement for that particular prompt.

That's not bad though!

This is without a doubt the most advanced LLM out there, easily beating GPT4 and everything else. I asked it some programming questions for a niche programming language, and it's the first LLM that has been able to answer it correctly.

I've been told by several users here that GPT4 provides perfect programming answers for all languages, as long as you prompt it correctly. Maybe you need to work on your prompts?

Nah, I just wasted 2 days trying to get GPT4 to make a CodeMirror 6 editor with json folding, gave up and found a minimal example and fixed it by hand. The model would confuse CM5 with CM6 and get burned all the time

What's interesting it seems to also edit text in between for me, not just add text at the end.

I'm seeing this as well. I don't quite understand how it's doing that in the context of LLMs to date being a "next token predictor". It is writing code, then adding more code in the middle.

Are you seeing a `<span>...` suffix when the model is in the middle of writing a code block? I'm pretty sure that's just a display bug in lmsys.

@sama on X/Twitter:

> i do have a soft spot for gpt2

https://twitter.com/sama/status/178510794366456


that was quickly deleted...



My usual question when testing a new model is `Can you write a function to convert svg to png in browser?` I dunno why, it's just become this.

This model is by far the most human like, most models just give me a method which do this, but this one made a html file, allowing me an option to pick a svg using file picker, a new .js file included in the script.

Even wrote a detailed explaination, additional notes, when this wont work, the output was very well structured.


Any idea why this is not on the front page? 103 points in an hour seems more than sufficient.

It's the script man, it's the script /s

It is either being downwoted/flagged into the oblivion, or manually marked.

What is it about LLMs that brings otherwise rational people to become bumbling sycophants??

What is it about LLMs that bring out so much jealousy in people?

Scale up model —> big wow, wonder what happen if scale more

Got an example?

Because LLMs are very impressive invention, truly something new we haven't experienced in a long while.

Because they need to get with the consensus as a baseline and extend from there, otherwise they may look behind

LLMs are like measles for engineers. Their hard-earned immunity to bullshit is reset.

I'd be interested to see how this does on Nicholas Carlini's benchmark:

https://nicholas.carlini.com/writing/2024/my-benchmark-for-l...

I've tried out some of my own little test prompts, but most of those are tricky rather than practical. At least for my inputs, it doesn't seem to do better than other top models, but I'm hesitant to draw conclusions before seeing outputs on more realistic tasks. It does feel like it's at least in the ballpark of GPT-4/Claude/etc. Even if it's not actually GPT-4.5 or whatever, it's still an interesting mystery what this model is and where it came from.


Incredible!

It answered "How many frogs does a horse have?" correctly, with perfect reasoning. No model I've tested has ever answered that correctly without 3-4 hints.

I'm impressed!


"A horse has one frog per hoof. The frog is a part of the horse's hoof, located on the underside, which acts like a shock absorber for the hoof, assisting with circulation and traction. Since horses have four hooves, they typically have four frogs, one in each hoof."

GPT4 seems to handle this fine, unless I'm confused?


GPT-4-turbo-2024-24-09 (temperature = 0.7) just told me a horse had one “frog” per hoof and went on to clarify that a frog does not refer to the amphibian but to a part of the horse’s hoof.

Gemini Pro (the current web chat version) gave a similar answer, either no frogs or four depending on the intended meaning, and showed a photo of a hoof. All 3 drafts agreed on this.

Other models I have tried said a horse had no frogs. That includes gemini-1.5-pro-api-0409-preview as provided by the Chatbot Arena (temperature = 0.7, 2 tries).


I mean, i just tested this prompted, and have no clue what side is correct lol?

How many frogs does a horse have?

Oh it appears Model B: gpt2-chatbot

is correct?

    A horse has one frog in each of its hooves, making a total of four frogs. The frog is a part of the horse's hoof, located on the underside. It is a V-shaped structure that helps to absorb shock, provides traction, and aids in the circulation of blood through the hoof. The health and shape of the frog are important for the overall well-being of the horse's feet.

It's good.

Rate-limited on direct chat, I had to use the arena mode. I asked the same question 10 times before I got an answer from gpt2-chatbot.

My question:

"Can you write a YAML parser in Hacklang?"

This is a fun question because every LLM I've seen so far (including ChatGPT 4) will come up with an answer that's about six years out of date. They'll often suggest code that flat-out doesn't work in Hacklang anymore (Hack/HHVM used to be able to execute all PHP code).

This is a difficult problem because there's very little Hacklang code available in public GitHub. The vast majority of Hacklang code — many millions of LOC — is closed-source and not available for training. But there is public documentation at docs.hhvm.com

The gpt2-chatbot output was still pretty bad, but at least the syntax it generated was valid in current versions of Hack, and invalid in current versions of PHP — more than can be said for any other LLM.


If you ask it about the model name/cutoff date it claims to be "ChatGPT, based on GPT-4" and that the cutoff date is "Nov 2023." It claims that consistently so I think it might be accurate.

That could be down to the system prompt, FastChat has a default prompt for GPT with that information in it.

Lmsys devs have all the answers, I am not sure how this has not leaked yet. They must a strong NDAs.

Very impressive

Prompt:

> there are 3 black blocks on top of an block that we don't know the color of and beneath them there is a blue block. We remove all blocks and shuffle the blocks with one additional green block, then put them back on top of each other. the yellow block is on top of blue block. What color is the block we don't know the color of? only answer in one word. the color of block we didn't know the color of initially

ChatGPT 4: Green

Claude3 Opus: Green

GPT2-Chatbot: Yellow


I don't think this is a good test. If I prefix it with "a riddle" then GPT 4 got it right for me

"Yellow"

I think the "temperature" (randomness) of a LLM makes it so you'd need to run a lot of these to know if it's actually getting it right or just being lucky and selecting the right color randomly


llama-3-70B-instruct and GPT-4 both got it right for me

Oof, I tried having it make an road-trip itinerary for a cross-europe trip which I recently tried on the open GPT-4 and .. this seemed so much better.

On the other hand the prompt was different so maybe it was just me.


Sadly, still fails my test of reproducing code that implements my thesis (Dropback Continuous Pruning), which I used because it's vaguely complicated and something I know very well. It totally misses the core concept of using an PRNG and instead implements some pretty standard pruning+regrowth algo.

Can you share the prompt please? I am interested.

Is it gone already? I can’t select it anymore.

Prompt: my mother's sister has two brothers. each of her siblings have at least one child except for the sister that has 3 children. I have four siblings. How many grandchildren my grandfather has? Answer only with the result (the number)

ChatGPT4: 13

Claude3 Opus: 10 (correct)

GPT2-Chatbot: 15


By removing "Answer only with the result" all models can answer this correctly by doing "chain of thoughts"

It is very slow though, feels like the first day I used the initial version of GPT-4

It’s an impressive model, but why would OpenAI need to do that?

I guess they want to know how good it is as a chatbot and no one has found a better benchmark than lmsys arena.

At this moment, there's no real world benchmark at scale other than lmsys. All other "benchmarks" are merely sanity checks.

FWIW it currently says that its MODEL_HOURLY_LIMIT is 3000

It could be lmsys's model, trained off GPT4 data and others in the arena.

Apples are better than bananas. Cherries are worse than apples. Are cherries better than bananas?

--

GPT-4 - wrong

gpt2-chatbot - wrong

Claude 3 Opus - correct


what's the right answer? my assumption is "not enough information"

What, you mean your fruit preferences don't form a total order?

I find it notable that it's not GPT-2.

It's GPT2.


The old models also had only "gpt2"/"gpt2-large" and so on as "identifiers"

https://huggingface.co/openai-community


I’m thinking of this too. This could be it.

Some might remember this from other comment threads, I like to always benchmark with this question:

> I'm playing assetto corsa competizione, and I need you to tell me how many liters of fuel to take in a race. The qualifying time was 2:04.317, the race is 20 minutes long, and the car uses 2.73 liters per lap.

because it looks like a simple math problem but there's a lot of real world knowledge about racing you can apply to provide a more useful answer. gpt2-chatbot is better than I've ever seen.

Here's the full response with LaTeX syntax trimmed. I've never seen a model consider whether the formation lap needs to be factored in. And I especially like how this model is uncertain about that, so it gives the resulting number with and without it. It is extra nice that it understands this concern applies only to video games.

> To calculate how many liters of fuel you will need for a 20-minute race in Assetto Corsa Competizione, we first need to determine how many laps you'll complete during the race. Here’s how you can do it step-by-step:

> Convert Your Lap Time to Minutes: Your lap time is 2 minutes and 4.317 seconds. Convert this entirely into minutes: [latex trimmed]

> Calculate the Number of Laps in the Race: The race duration is 20 minutes, so divide the total race time by your lap time to find out how many laps you will complete: [latex trimmed] Since you cannot complete a fraction of a lap, you should round this number to the nearest whole number. Typically, you should round up, as you'll need to complete all laps started before the 20-minute mark is reached. So, you will complete approximately 10 laps.

> Calculate Fuel Usage Per Lap: You've mentioned that the car uses 2.73 liters of fuel per lap.

> Calculate Total Fuel Needed: Multiply the number of laps by the fuel usage per lap: [latex trimmed] This is the total amount of fuel you would need, theoretically, to complete the race.

> Consider Additional Factors:

> Fuel for Formation Lap: If your game/race format includes a formation lap, remember to add fuel for this extra lap. That's an additional 2.73 liters.

> Safety Margin: It's also wise to include a margin to account for variations in fuel consumption (like higher consumption in traffic, different driving styles, or fuel usage during race incidents). A safe margin could be an additional 5-10% of the calculated fuel need. [latex trimmed]

> Rounding this number up, you might consider starting the race with about 33 liters to be on the safe side if there's a formation lap, or 30 liters if there isn't.

> This calculation should help you ensure that you do not run out of fuel during the race, taking into account both necessary consumption and a reasonable safety margin. Drive safely and best of luck in your race!


It tells me that its knowledge cutoff is Nov 2023. So pretty recent.

gpt-4 latest is December 2023, just asked it on chat.openai.com

gpt-4 says that, but it's not entirely true because the data that gpt-4 has from 2021 to 2023 is very incomplete. It's not fully trained on it. In contrast, this experimental model looks to be trained more fully until Nov 2023.

I just tested against a prompt about using PVDF vs Piezo pickups on a hammered dulcimer, running the same conversation in parallel on 3.5. 3.5 gave more-or-less the output quality you'd expect. This blew it out of the water. I particularly liked this snippet (snipped and formatted for comment size) where it managed to pull out typical values and and actually showed the math.

> Can you explain more about the response and impedance of each? I am an engineer, so you can feel free to be quite technical.

---

>Certainly! Given your engineering background, let's delve deeper into the response characteristics and impedance aspects of both traditional ceramic piezo pickups and PVDF piezo pickups. Understanding these can significantly impact your choice, especially in terms of how they integrate with the rest of your audio chain (like preamps and amplifiers).

>

> Electrical Characteristics

> 1.

> Piezoelectric Coefficient

> Ceramic Piezo Pickups: The piezoelectric coefficient ( d_{33} ) of ceramics like Lead Zirconate Titanate (PZT) is typically very high. This coefficient measures the electric displacement produced per unit stress. For PZTs, ( d_{33} ) can range from 100 to 600 pC/N (picoCoulombs per Newton), depending on the formulation and orientation of the crystal structure.

> PVDF Pickups: PVDF generally has a lower piezoelectric coefficient, around 20 to 30 pC/N. Despite this lower coefficient, the material's flexibility and the ability to form larger, more adaptive surfaces can compensate in practical applications, offering sufficient voltage output under less stringent mechanical deformations.

> 2.

> Voltage Output

> The output voltage ( V ) from a piezo pickup can be estimated by the formula:

> [ V = d_{33} \cdot F \cdot A / t ]

> where:

> ( F ) is the force applied,

> ( A ) is the area,

> ( t ) is the thickness of the piezo material.

> Ceramic Piezo: Due to their high ( d_{33} ) value, even small deformations can produce relatively high voltages, which makes them sensitive to shock and vibrations, potentially capturing unwanted mechanical noises.

> PVDF Pickups: The lower ( d_{33} ) value means lower voltage outputs for the same mechanical stress compared to ceramics, but their larger possible surface area and thinner construction can enhance output levels and fidelity across a broader frequency range.

>

> [SNIP]


tried it with some dna/snp related questions and I am getting great results so far-now getting this message:

"RATE LIMIT OF THIS MODEL IS REACHED. PLEASE COME BACK LATER OR USE BATTLE MODE (the 1st tab). USER_DAILY_LIMIT (gpt2-chatbot): 8"

any way to test this model elsewhere?


OpenAI team, if you are reading this I am impressed and thank you for your work. Can't wait to see the official release of GPT-next.

I'm surprised by people's impression. I tried it in my own language and much worse than GPT-4.

Of the open source LLMs I've tried, all suck in non-English. I imagine it's difficult to make an LLM work in tens of languages on a consumer computer.


There's a core problem with LLMs: they learn sentences, not facts.

So an LLM may learn a ton of English-language sentences about cats, and much fewer Spanish sentences about gatos. And it even learns that cat-gato is a correct translation. But it does not ever figure out that cats and gatos are the same thing. And in particular, a true English-language fact about cats is still true if you translate it into Spanish. So the LLM might be a genius if you ask it about cats in English, but in Spanish it might tell you "gatos tienen tres patas" simply because OpenAI didn't include enough Spanish biology books. These machines are just unfathomably dumb.


> but in Spanish it might tell you "gatos tienen tres patas"

Have you actually had a State of the art LLM do something like this?

Because this

>But it does not ever figure out that cats and gatos are the same thing. And in particular, a true English-language fact about cats is still true if you translate it into Spanish

is just untrue. You can definitely query knowledge only learnt in one language in other languages.


I wasn't talking about "state of the art LLMs," I am aware that commercial offerings are much better trained in Spanish. This was a thought experiment based on comments from people testing GPT-3.5 with Swahili.

> You can definitely query knowledge only learnt in one language in other languages.

Do you have a source on that? I believe this is simply not true, unless maybe the pretraining data has enough context-specific "bridge translations." And I am not sure how on earth you would verify that any major LLM only learnt something in one language. What if the pretraining data includes machine translations?

Frustratingly, just a few months ago I read a paper describing how LLMs excessively rely on English-language representations of ideas, but now I can't find it. So I can't really criticize you if you don't have a source :) The argument was essentially what I said above: since LLMs associate tokens by related tokens, not ideas by related ideas, the emergent conceptual relations formed around the token "cat" do not have any means of transferring to conceptual relations around the token "gato."


>I wasn't talking about "state of the art LLMs," I am aware that commercial offerings are much better trained in Spanish. This was a thought experiment based on comments from people testing GPT-3.5 with Swahili.

A thought experiment from other people comments on another language. So...No. Fabricating failure modes from their personally constructed ideas about how LLMs work seems to be a frustratingly common occurrence in these kinds of discussions.

>Frustratingly, just few months ago I read a paper describing how LLMs excessively rely on English-language representations of ideas, but now I can't find it.

Most LLMs are trained on English overwhelmingly. GPT-3 had a 92.6% English dataset. https://github.com/openai/gpt-3/blob/master/dataset_statisti...

That the models are as proficient as they are is evidence enough of knowledge transfer clearly happening. https://arxiv.org/abs/2108.13349. If you trained a model on the Catalan tokens GPT-3 was trained on alone, you'd just get a GPT-2 level gibberish model at best. I don't doubt you, i just don't think it means what you think it means.

As for papers, these are some interesting ones.

How do languages influence each other? Studying cross-lingual data sharing during LLM fine-tuning - https://arxiv.org/pdf/2305.13286

Teaching Llama a New Language Through Cross-Lingual Knowledge Transfer - https://arxiv.org/abs/2404.04042

Zero-shot cross-lingual transfer in instruction tuning of large language models - https://arxiv.org/abs/2402.14778

Multilingual LLMs are Better Cross-lingual In-context Learners with Alignment - https://arxiv.org/abs/2305.05940

It's not like there is perfect transfer but the idea that there's none at all seemed so ridiculous to me (and why i asked the first question). Models would be utterly useless in multilingual settings if that were really the case.

Like the 3rd paper, GPT's instruction tuning is done basically only in English - https://mobile.twitter.com/janleike/status/16252072516309606...


>> These machines are just unfathomably dumb.

I agree with you, and we seem to hold a minority opinion. LLMs contain a LOT of information and are very articulate - they are language models after all. So they seem answer questions well, but fall down on thinking/reasoning about the information they contain.

But then they can play chess. I'm not sure what to make of that. Such an odd mix of capability and uselessness, but the distinction is always related to something like understanding.


LLMs trained on just human text are indeed limited, imitative models. But when they train from the environment they can surpass humans, like AlphaZero or AlphaGeometry

I tested some beginning undergrad math questions, I think it is as good as opus and gpt4-turbo on them

Prompt to gpt2-chatbot: who are you? and which version?

Answer: Hello! I'm ChatGPT, a language model developed by OpenAI based on the GPT-4 architecture. I'm here to help answer your questions, assist with a wide range of topics, or just chat about whatever's on your mind! How can I assist you today?


Which episode of Gilligan’s Island was about mind reading? [1]

"The episode of Gilligan's Island that features mind reading is titled "The Friendly Physician." It is the 28th episode of the second season, originally aired on March 24, 1966. [...]"

Still no luck with that one.

1: https://news.ycombinator.com/item?id=40139501


I FEEL THE AGI I LOVE YOU ALL. - gpt-2 chatbot


Has anyone found that GPT3.5 was better in many ways to GPT4? I have had consistent issues with GPT4, I had it search a few spreadsheets recently looking unique numbers, not only did it not find all the numbers, but it also hallucinated numbers. Which is obviously quite bad. It also seems worse at helping to solve/fix coding issues. Only giving you vague suggestions where 3.5 would just jump right into it.

This looks like gpt4 using API I compared outputs and they are very very similar in structure of the response and the details.

Likely someone just connected it to the ChatGPT API for fun and attention

I asked it to "Write me a short story", and it came up with this, which I thought was definitely different. Longer, mispellings (grandiothers), but much more interesting than I would typically see out of a LLM. Not sure what to think...

--output below--

In the heart of the bustling city of Thessaloniki, where the aroma of freshly brewed Greek coffee mingled with the salty breeze of the Thermaic Gulf, an old watchmaker named Elias operated a tiny shop that had seen better days. The shop, cluttered with clocks of every kind, from grandiothers to delicate wristwatches, was a capsule of time itself, untouched by the rush of the modern world.

Elias, a man of few words, wore the years on his face as gracefully as the patina on his timepieces. His hands, though steady when working on the intricate gears and springs, trembled slightly as he sipped his coffee in the quiet moments between the ticking.

One rainy morning, as the city awoke to the tapping of rain against the cobblestones, a young woman named Zoe hurried into Elias's shop, escaping a sudden downpour. Her appearance was like a breeze through stale air, her bright eyes scanning the ancient clocks with a curious sparkle.

“I need a gift,” Zoe announced, her voice cutting through the silence like the chime of a bell. “Something unique.”

Elias nodded, his gaze lingering slightly longer than usual. He sensed her distress, a hidden layer beneath her composed exterior. “Who is it for?” he asked, his voice as rough as the gears he polished.

“My father,” she replied, her fingers tracing the outline of a pocket watch displayed under the glass. “He’s retiring, and I want to give him something special to mark the end of his time working.”

“A good choice,” Elias said, picking up the pocket watch. “This watch is over a hundred years old. It stopped working a long time ago, much like its owner, perhaps waiting for the right moment to come alive again.”

Zoe smiled, intrigued. “Can you fix it?”

With a nod, Elias set to work. As his tools moved with precision, Zoe watched, seeing the care he took with each tiny screw and wheel. It was not just a watch he was mending, but weaving hope into an artifact left behind by time.

“Tell me about him, your father,” Elias said, as he worked.

“He’s always been punctual, a man of routine and schedules. Lately, I see him lost, unsure of what lies ahead when there’s no office to go to, no meetings to attend.”

“A new chapter,” Elias muttered, more to himself than to Zoe. “Time teaches us to move forward, even when our gears resist.”

When the watch was finally ticking again, its hands sweeping smoothly as if no time had passed since it last worked, Zoe’s eyes lit up. She listened to it, the sound a sweet melody to her anxious heart.

“How much do I owe you?” she asked, reaching for her purse.

“Nothing,” Elias said, closing her fingers around the watch. “Consider it a gift, for both you and your father. A reminder that time, even when it seems to stop, moves on, and so must we.”

Zoe left the shop with the watch securely in her hand, the weight of the gift heavier than its size suggested. When she gave her father the watch, his eyes reflected the same storms and sunsets that Elias’s did. The retirement party passed, speeches were made, and laughter filled the air, but the true gift was in the quiet moment Zoe and her father shared, listening to the steady tick of the old pocket watch, a heartbeat promising new beginnings.

Back in his shop, Elias sat alone, the tick-tock of countless clocks filling the room like a symphony. He didn’t need to see Zoe’s father’s face to know the impact of his gift. In fixing the watch, he had set something much greater into motion, a timeless reminder that life, like time, always finds a way to move forward. The little shop wasn’t just a place of business, but a haven where moments were kept alive through the magic of gears and springs, and where every repair mended more than just objects, but hearts too.


So do people only get 8 prompts in this a day? I don't understand how people here are making such wild guesses about it being GPT-5 or 4.5 or whatever. You've only talked to it for like ten minutes.

in my single coding test, which is a task i originally gave to GPT4, it performed so ridiculously better. it provided really in-depth debugging advice as to why the original script wasn't working, completely recoded it in a much smarter way, and understood with HTML structure that I was asking it to parse and manipulate much better

Seems more like a GPT-4.5 than a GPT-5

Indeed, all my test prompts are giving much better results than gpt4-turbo and Claude Opus. And yet, the OpenAI style is clearly recognizable.

I agree. It responds a lot like GPT-4 in style, but with much better answers.

Still gives incorrect code to the following prompt - the description is correct but not the code. I have yet to find one LLM that gives the correct code. This is the prompt:

“Write C code to calculate the sum of matrix elements below the secondary diagonal.“


This feels as much likely to be a prompt problem as a 'bad answer from a model' problem.

I wouldn't know what "secondary diagonal" refers to myself, but if the model correctly describes the problem and then follows it up with incorrect code I would still say that's an issue with the model not the prompt.

leagues ahead of gpt-4-turbo-2024-04-09

At this point I would really love to understand HN algo.

It says for this post:

- 136 points

- 2 hours ago

- 65 comments

Ok

This is now sitting on 5th page at the 122nd spot.

Ummm?


I think someone is flagging this (hi @sama!)

Holy shit, the results are amazing; I didn't think they could still make exponential leaps in quality...

[flagged]


Sorry?

[flagged]


> Still can't write a short paragraph without using the letter 'e'

'Crafting a paragraph without using a particular symbol (which is most common in all writing) is a daunting task; still, I will aim to comply. This task calls for clarity, focus, and avoiding that fifth glyph of our ABCs. It is a curious constraint, trying to impart thoughts without using such a pivotal unit. This limitation compacts my word options, but I must push on and uphold this difficult condition with rigor and innovation. My goal is to show that communication can flourish, notwithstanding arbitrary limits.'

>write a short poem using the ABBA rhyme scheme.

'Leaves rustle softly under moonlit skies, A cat prowls quietly, its eyes aglow, Every step measured, its tail held low, Night's gentle whispers carry its cries.'

Skill Issue on your part imo.


I simply asked whether it was GPT-4.5 or GPT-5:

>Are you GPT-4.5 or GPT-5?

>I'm based on GPT-4. There isn't a "GPT-4.5" or "GPT-5" version specific to my model. If you have any questions or need assistance, feel free to ask! I'm here to help with information up to my last update in November 2023. What would you like to know or discuss?


Impressive! We’re approaching the “our model can rewrite stolen content word for word” territory.

The better it gets the more people will get frustrated that they get less website visitors to their content and instead ChatGPT is swallowing them whole.

Interesting to think about tho!


No criticism at this particular thing, but it is a good opportunity to mention that GPT-5 only exists if you have non-public information.

Sure it does. I mention because it is not a good sign that “people are getting this,” when youtubers are using headlines like “What GPT-7 means for your sales leads!”

The fallacy is kind of allowed by us who understand it better, when we accept semver from companies as actually being incremental, and accurate public information.

It’s not like these models are all just matrices of weights, they are radical architectural experiments.


> No criticism at this particular thing, but it is a good opportunity to mention that GPT-5 only exists if you have non-public information.

What?

> Sure it does.

What? Contradicting yourself immediately?

> I mention because it is not a good sign that “people are getting this,” when youtubers are using headlines like “What GPT-7 means for your sales leads!”

…what?

> The fallacy is kind of allowed by us who understand it better, when we accept semver from companies as actually being incremental, and accurate public information.

I don’t see how this follows from your previous points (if you can even call them that).

> It’s not like these models are all just matrices of weights, they are radical architectural experiments.

Aspects of both things are true. Also, this doesn’t follow from/connect with anything you said previously.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: