PSA: Humans are scary stupid
Apologies for the harsh post title, but I wanted it to be evocative and sensationalist because I think everyone needs to see this.
This is in response to this submission made yesterday: Qwen3.5 4b is scary smart
I'm making this post as a dutiful mod here - I don't want this sub to spread noise/misinformation.
The submission claimed that Qwen3.5 4b was able to accurately identify what was in an image - except it was COMPLETELY wrong and hallucinated a building that does not exist. The poster clearly had no idea, and it got over 300 upvotes (85% upvote ratio). The top comment on the post points this out, but the vote counts suggest that most people not only blindly believed the claim but never opened the thread to read or participate in the discussion.
This is a stark example of something I find deeply troubling - stuff is readily accepted without any validation or thought. AI/LLMs are exacerbating this because they are not fully reliable sources of information. It's like that old saying, "do you think people would just go on the internet and lie?", but now on steroids.
The irony is that AI IS the tool to counter this problem - when used correctly (grounding in valid sources, cross-referencing multiple sources, using validated models with good prompts, parameters, reasoning enabled, etc.).
So requesting: a) posters, please validate before posting; b) readers, critically evaluate posts/comments before upvoting; c) use LLMs correctly (here, using a websearch tool would likely have given the correct result) and expect others on this sub to do so as well. A rough sketch of what c) can look like in code is below.
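To make c) concrete, here's a minimal sketch of "ground the claim before you post it". `ask_model()` and `web_search()` are hypothetical stand-ins for whatever LLM client and search tool you actually run, not a real API:

```python
# Sketch: cross-check a model's claim against retrieved sources
# before trusting it. Both helpers below are hypothetical stubs.

def ask_model(prompt: str) -> str:
    """Hypothetical LLM call; swap in your real client here."""
    raise NotImplementedError

def web_search(query: str, n: int = 5) -> list[str]:
    """Hypothetical search tool returning result snippets."""
    raise NotImplementedError

def validated_claim(question: str) -> tuple[str, bool]:
    claim = ask_model(question)
    snippets = web_search(question)
    # Ask the model to judge the claim ONLY against the retrieved
    # text, not its parametric memory.
    verdict = ask_model(
        "Based strictly on these sources, is the claim supported? "
        "Answer SUPPORTED or UNSUPPORTED.\n"
        f"Claim: {claim}\nSources:\n" + "\n".join(snippets)
    )
    return claim, verdict.strip().upper().startswith("SUPPORTED")
```

If the verdict comes back UNSUPPORTED, that's your cue to dig in manually before posting, not to post anyway.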
People will always upvote ideas that reinforce their existing beliefs. Truth is a distant second
I believe this to be true. Have my upvote.
This one guy thought evidence would make people change their minds. I linked three papers showing that's not true. He still thought evidence would work.
I see what you did there..
Reddit in a nutshell
P.S: I normally would have removed that post. I didn't because by the time I caught it, the damage was done (it already had several comments and upvotes). Instead I changed the flair to Misleading and am making this post, hoping that "show, don't tell" will be more helpful than silently removing it after the fact.
THIS IS THE PROBLEM - you NEED to remove these posts! This sub is becoming infected with these low-effort, no-thought posts.
I'm already removing a ton. If I'm a day late, most of the people who would see the post have already seen it, so removing it has marginal value.
Being exposed to misleading information that's clearly labeled as misleading helps people become more sensitive to that kind of thing, though. Let's hope people notice the banner or read the first comment.
I saw that post and just laughed yesterday
Practitioners here wouldn’t even trust Qwen 3 VL 235b with that type of task
I figured a 4b VL post like that must be a parody.
Saw the post and made sure to report it and upvote the callouts, but the underlying reason for yesterday is that this sub is a trusted source of news, and many of us have outsourced our trust to communities like this.
Very true. Which is why keeping that bar high is super important.
This thought actually gives me more certainty in removing low-effort posts!
I've noticed a ton of posts that present "findings" or results from AI, and comments flood in with praise, sometimes minutes or seconds after the post goes up. So clearly people aren't reading posts or articles before responding and upvoting.
We all talk about how important it is to be critical of AI.
We all assume that we ourselves are critical, but others are accepting it at face value.
We all think AI is a great tool and that hallucinations are not a problem for us since we can distinguish them, while others demonstrably cannot.
I think it will take at least a decade to make a dent in this fallacy, and in the meantime we will keep repeating these lines in every thread.
I think the people upvoting plausible but incorrect things on reddit thereby corrupting the training data are the real heroes standing between greedy companies and ASI.
You are assuming that the scraper bots and connected data pipelines would be smart enough to account for up/downvotes when using the data.
Well, that's normal - unfortunately. Except that here, the comment explaining why it's wrong made it to the top in time; often (in other subs) it's buried five pages down. Verifying is expensive, blindly trusting what seems plausible is easy - like with a lot of the vibe-coded success projects shared here.
People see what matches their opinion and they upvote. Yes, some read the comments, but when you compare view statistics per comment vs. per post, you can see that it's not that many. For example, one of my posts has 250k views, while my earliest and top-most comments underneath sit between 2k and 10k.
Even when people read the comments, Reddit tends to sometimes collapse interesting comments, which is why I like "expand all".
I appreciate this crashout, thanks king
Wikipedia's been the biggest wakeup call for me. A while back I stumbled on a Wikipedia article on a subject that probably doesn't come up much in most people's lives, but enough that it should get a steady stream of fresh eyes on it. What stuck out is that it's a subject I have enough of an academic background in to consider myself competent to critique. Within the first few paragraphs there was a mistake that was glaring in both how misleading it would be to the reader and how unaware of the subject one would need to be to accept it. The citation for it was laughably bad. But I thought it would be interesting to see how long something so obvious would take to be corrected.
About two years later, and it's still there. And it really struck me that Wikipedia is pretty much 'the' go-to for general-purpose information, and people obviously aren't checking the citations when reading it - just taking it at face value. I mean, obviously anyone should know that Wikipedia isn't to be taken as authoritative. We know it intellectually. But I still find myself doing it too, just loading up a page to quickly check on something I don't know about.
Well, be the change you want to see, right?
The worst that will happen, and unfortunately it probably will happen, is that some officious moron will revert your change.
Can we have a way for others to mark a post as potentially misleading? A flair, for example. Then people who actually read the post can re-vote on whether it's actually misleading or not.
Only mods can change the flair. It would be great if Reddit had a feature like that, but I guess the reporting function encompasses this.
The SLOP is so real.
This might be a crazy idea, but is there a way to keep track of the number of posts that get X upvotes within Y minutes of posting and automatically tag the ones being brigaded with "Brigading detected"? (Something like the sketch below.) I'm not sure it would have even helped here, but figured I'd ask to see if you have the metrics to find out.
I mean, I know our knee-jerk reaction is to downvote anything that seems to stink of manipulation, but I'd like to think that posts being brigaded in a positive way (meaning upvotes instead of downvotes) by a team of people who are actually bringing something truthful and new to the discussion would survive the tag, while posts being upvote-brigaded by a team pushing something untruthful or stale would be judged more harshly accordingly.
Obviously this would have to go through a testing phase to see if it actually produces the desired results. We wouldn't want Unsloth posts, for example, being downvoted as brigading just because there are a handful of people following Daniel, but I'd like to think such posts would survive the tag.
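For reference, here's roughly what that velocity check could look like. The thresholds are made up and would need tuning against the sub's real metrics, and fetching the post stats is left as a stub since I don't know what modlog/API access the mods actually have:

```python
# Sketch: flag a post if it gains more than X upvotes within
# Y minutes of creation. X and Y are assumptions, not real data.
from dataclasses import dataclass

UPVOTE_THRESHOLD = 100   # X: placeholder, tune on real sub metrics
WINDOW_MINUTES = 30      # Y: placeholder, tune on real sub metrics

@dataclass
class PostStats:
    post_id: str
    age_minutes: float
    upvotes: int

def brigading_suspected(stats: PostStats) -> bool:
    """True if the post crossed the upvote threshold inside the window."""
    return (stats.age_minutes <= WINDOW_MINUTES
            and stats.upvotes >= UPVOTE_THRESHOLD)

# Example: a post with 300 upvotes at the 20-minute mark gets tagged.
print(brigading_suspected(PostStats("abc123", 20, 300)))  # True
```

A real version would obviously want a baseline per posting hour and flair, since a thresholds-only rule would tag every legitimately hot release thread too.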
People are going to be mad if you do and mad if you don't. I just want to thank you for the work that you do. This sub is still one of my favorite places on the internet, and that would not happen without dedicated mods like yourself.
The mods of this sub have allowed anyone and everyone to post here with new accounts and no prior thought or investigation. The new people either inherently cannot understand that their questions are better suited to a cloud model, or they refuse to interact with AI for the simplest of questions, preferring that a human answer them instead.
So requesting: a) mods, please add a minimum membership time (1-2 months) before a user is allowed to post; b) do a better job of removing obvious slop and shitposts that should be answered with a cloud model (what the OP called "the irony"); and c) recognize that the problem is the mods, not the stupid users - you need to set up parameters to keep this sub from becoming the garbage most other "AI" subs have become. This sub was the gold standard a month ago and now it's a mess.
6 months minimum. Ideally before Covid so you know it’s not a normie but that would be draconian lol
Critical thinking is both a nontrivial skill and a hell of an effort. Also, people are lazy. What else did you expect?
The IQ on this sub is dropping rapidly probably due to growth.
Intervention is unfortunately necessary :(
This is why I always run image claims through multiple models and a reverse image search. Takes 30 seconds, saves credibility.
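Roughly what the multi-model pass looks like, as a sketch. `query_model()` is a hypothetical stand-in for whatever VLM clients you use (not a real API), and the strict-majority rule is my own assumption; agreement between models is weak evidence, but disagreement is a strong "verify manually" signal:

```python
# Sketch: ask several vision models the same question and only
# trust an answer that a strict majority agrees on.
from collections import Counter

def query_model(model: str, image_path: str, question: str) -> str:
    """Hypothetical VLM call; plug in your real clients here."""
    raise NotImplementedError

def cross_check(image_path: str, question: str,
                models: list[str]) -> str | None:
    answers = [query_model(m, image_path, question) for m in models]
    top, count = Counter(answers).most_common(1)[0]
    # Require a strict majority; otherwise return None = "go verify".
    return top if count > len(models) // 2 else None
```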
We as human beings have a limited cognitive bandwidth. When inundated with perpetually "infinite" information, we can be overwhelmed and fatigued.
It's not possible to validate and verify every piece of information we come across; we just don't have the time. This is why we rely on each other as a group to validate information.
Unfortunately, we just accept information as presented to us from time to time, and this is a cognitive loophole.
For example, there is a ton of information on YouTube. It is not physically possible or practical for every human to watch, validate, verify, and cross-check every piece of information presented to us. It would take multiple lifetimes to do so.
This is not to excuse it, but to illuminate the core issue. I upvoted it, but I'm feeling burnt out - so much so that I can barely keep up with the rapid pace at which current events are unfolding. I'm human and I need to take breaks to "refresh", which means I fall into this trap as most others do as well. Just because you understand a bias does not mean you can mitigate or prevent it (this is itself a cognitive bias; see Wikipedia's list of cognitive biases for a general overview and light introduction).
We're not wired in a way to handle these issues. But I'm sure it's possible to set up safeguards somehow; I'm just not sure what they are or what they would look like.
Regardless, I appreciate the attention to detail. As an aside, I've noticed that Qwen3.5 is not that great. It has potential, but it also has holes in its execution compared to previous releases. Not to say it's a total flop, but it's not great either.
@grok is this true
the upvote-first-read-later pattern is genuinely getting worse. people see a confident output and their brain just accepts it. what's wild is that hallucination detection is actually a solvable problem - grounding responses in sources, flagging low-confidence outputs - but most people just don't bother setting that up. the tool exists, the defaults are just bad...
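for the curious, one minimal version of the low-confidence flag: if your inference stack exposes per-token logprobs (llama.cpp, vLLM, and the OpenAI API all can), flag any answer whose mean per-token probability falls below a threshold. the 0.7 cutoff here is a made-up number, not a calibrated one:

```python
# Sketch: flag an answer as low-confidence from its token logprobs.
import math

def flag_low_confidence(token_logprobs: list[float],
                        threshold: float = 0.7) -> bool:
    """True if the geometric-mean token probability is below threshold."""
    mean_prob = math.exp(sum(token_logprobs) / len(token_logprobs))
    return mean_prob < threshold

# e.g. a run of tokens each around p=0.5 gets flagged:
print(flag_low_confidence([math.log(0.5)] * 10))  # True
```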
Super important
Patting myself on the back slowly for not upvoting that thread.
That said, I have no idea of that pic's location; otherwise I would've pointed it out or joined the top comment there.