Love for Big Pickle : r/opencodeCLI

Disclaimer: I'm not a vibe coder. I'm a senior backend dev, and I don't code on things I don't understand; at least 70% clarity is mandatory for me.

That said, I love Big Pickle.

The response speed is insane, and more importantly, the quality doesn't degrade while being fast. I've been using it for the past hour for refactoring, debugging, and small script creation, and it just works. "Great" feels like an understatement.

I don't care whether it's GLM-4.6, Opus, or something else. I only care about two things: high tokens/sec and solid output quality. Big Pickle nails both.

Whoever is operating this model at this speed, I genuinely love you.

My only concern: it's currently free. That creates anxiety. I don’t want the model to stop working in the middle of serious work.

Please introduce clear limits or a paid coding plan (ZAI-level or slightly above).
If one plan expires, I'll switch accounts or plans and continue; no issue.

Just give us predictability.



I think they self-host their free models and say they don't cost much to host or something, so they decided to provide them for free. I might be wrong, though.


My guess is that these are smaller model makers who want to market their models, so they give free access for a limited time.


The free ones on opencode zen come with clear TOS: you get it free, they get your data and feedback to improve the model. They will all eventually move to paid only.

Big Pickle is more than that: it's a stealth model. That means one of the big AI companies is testing a new model pre-release. There is no paid version because it's not yet released. And when it is released, we might never find out that it was previously called Big Pickle.

You have to take that into account if using free models.


Big Pickle is not a stealth model; it's GLM 4.6 with a funny name, hosted with one of their providers. Dax has confirmed this multiple times already.

https://twitter.com/thdxr/status/1984090146460020966


It may have been GLM 4.6 at the time he said that, but nothing prevents it from being changed.

Kilo has a new stealth model from a Chinese lab called Giga Potato. Similar naming convention: size + food. Could be a coincidence.

When it leaked that Mistral's model was the stealth one (Spectre, I think), they denied it and then announced it the following day.

So take what you see on X with a grain of salt and assume that using Big Pickle for free means you’re helping them train, debug, and scale to get it to a state that they are confident charging for.


Yeah, I read that, but it has been in stealth for a very long time.


Pretty sure it's K2 Thinking.


Dax has confirmed multiple times before: it's just GLM 4.6 with a funny name.


So why use it over GLM 4.7? Is it faster?


I've been using GLM models since 4.5, and it doesn't seem like 4.6 to me. When the context increased, it kind of started behaving on its own; K2 will do that. But I might be wrong.


It certainly used to be GLM-4.6, but I'm pretty sure it's been replaced with K2 Thinking now. If you look at the OpenCode desktop app, Big Pickle lets you change the reasoning effort, just like K2 Thinking. GLM-4.6/4.7 don't offer that option.


Could be; I completely forgot that it existed.


That anxiety about "this is awesome AND free, so it's probably going to vanish mid-project" is very real. Free tiers are nice for experimentation, but for serious backend work, predictability beats freebies.

What worked for me was building around a paid coding plan with known limits as the backbone, and then treating fast free models like Big Pickle as opportunistic accelerators. Opus (or similar) sets the architecture, GLM 4.7 and Big Pickle handle the implementation and refactor loops, and anything else fast just rides on top.

If you're looking for something closer to a predictable paid plan rather than a gamble on a free endpoint, Zai has coding plans where you can still get a 50% discount for the first year plus 30% off (current offers plus an additional 10% coupon code). I think it will expire soon, though; some offers are already gone! > https://z.ai/subscribe?ic=TLDEGES7AK


Thanks. I have the Zai Max plan; it's my workhorse, with ChatGPT for architectural decisions. But sometimes Zai goes very slow on simple tasks: GLM 4.7 took 28 seconds where Big Pickle took 7.5 seconds. On the other hand, when the depth increased, Big Pickle kind of left me and wrote its own code despite a correct plan.md being in place; that never happened with GLM 4.7. I completely agree with you.


Try GLM 4.7 on Cerebras. You can try it out on the free tier. The speed is actually insane. Fastest response I've ever seen for a smart coding model. It's addictive and I hope they offer it on their coding plan whenever there's availability again.


Interesting. I've found Big Pickle to be very slow when using it. Also found it to be very buggy. One time it just randomly switched to Chinese and all the output was in Chinese characters, no idea why lol.


😂😂 The switch to Chinese happened in Antigravity as well. When did you test this?


This was right before Christmas. The funny thing is it still understood me and kept doing what I asked, despite me having no clue what it was saying back lol.


Use GLM-4.7 with Fireworks or Cerebras.


Big Pickle was a total joke at first. I used it again on a whim after hitting limits and was blown away. It's also possible I got better at talking to these things in the interim, but it went from trash to cash.


I'm enjoying it too. I'm working with a pretty complex API, and then I literally built my own API on top of that API, and Big Pickle just doesn't care. Most AIs break when they try to work on my program; it's over 10k lines across 24 different classes with like 2 layers of API, and it just does it.


I asked the model to tell me what it thinks it is, and it's now claiming it's Claude Sonnet.


Is Big Pickle still being developed? I think there's a meaningful difference between GLM 4.6, used by the Big P, and GLM 4.7. And the cost seems competitive as well. https://arena.ai/leaderboard/code?q=glm
