
This is all broadly true, historically. Automating jobs mostly results in creating more jobs elsewhere.

But let's assume you have true, fully general AI. Further assume that it can do human-level cognition for $2/hour, and it's roughly as smart as a Stanford grad.

So once the AI takes your job, it goes on to take your new job, and the job after that, and the job after that. It is smarter and cheaper than the average human, after all.

This scenario goes one of three ways, depending on who controls the AI:

1. We all become fabulously wealthy and no longer need to work at all. (I have trouble visualizing exactly how we get this outcome.)

2. A handful of billionaires and politicians control the AI. They don't need the rest of us.

3. The AI controls itself, in which case most economic benefits and power go to the AI.

The last historical analog of this was the Neanderthals, who were unable (for whatever reason) to compete with humans.

So the most important question is: how close are we actually to this scenario? Is it impossible? A century away? Or something that will happen in the next decade?


> But let's assume you have true, fully general AI.

That's a very strong assumption, and a very narrow setting that would itself be one of the counterexamples.

AI researchers in the 80s were already telling us that AI was just around the corner, five years away. It didn't happen. I wouldn't hold my breath this time either.

"AI" is a misnomer. LLMs are not "intelligence". They are a lossy compression algorithm of everything that was put into their training set. Pretty good at that, but that's essentially it.


Yes, Chomsky's earlier positions include the idea that recursive natural grammars cannot be learnt by machines because of the "poverty of the stimulus." But this result is only true if you ignore probabilistic grammars. (See the Norvig article for some footnotes linking to the relevant papers.)

And of course, LLMs generate perfectly reasonable natural language without any "universal grammar", or indeed, much structure beyond "predict the next token using a lot of transformer layers."
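That "predict the next token" loop is easy to demonstrate concretely. Here's a minimal sketch using GPT-2 via Hugging Face transformers, chosen purely because it's small; the model and prompt are illustrative, not anything from the papers Norvig cites:

    # Greedy next-token prediction: the entire "grammar" of an LLM.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    text = "Colorless green ideas"
    for _ in range(10):
        inputs = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            logits = model(**inputs).logits
        next_id = int(logits[0, -1].argmax())  # most probable next token
        text += tokenizer.decode([next_id])

    print(text)

No parse trees, no innate grammar: just repeated conditional probability over tokens.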

I'm pretty sure that most of Chomsky's theoretical model is dead at this point, but that's a long discussion and he's unlikely to agree.


Chomsky had a stroke a while back, which apparently left him unable to speak. But I guarantee that there are many linguists who would not agree that his model is dead.

As for LLMs, at present they require orders of magnitude more training data than children are exposed to, so it's unclear they have anything to say about how humans learn language.


> For example spaceships swarming with low skill level crew members that swab the decks and replace air filters.

This is largely a function of what science fiction you read. Military SF is basically about retelling Horatio Hornblower stories in space, and it has never been seriously grounded in science. This isn't a criticism, exactly.

But if you look at the award-winning science fiction of the 90s, you have, for example, A Fire Upon the Deep, the stories that were republished as Accelerando, the Culture novels, etc. All of these stories assume major improvements in AI, and most of them involve breakneck rates of technological change.

But these stories have become less popular, because the authors generally thought through the implications of (for example) AI that was sufficiently capable to maintain a starship. And the obvious conclusion is, why would AI stop at sweeping the corridors? Why not pilot the ship? Why not build the ships and give them orders? Why do people assume that technological progress conveniently stops right about the time the robots can mop the decks? Why doesn't that technology surpass and obsolete the humans entirely?

It turns out that humans mostly want to read stories about other humans. Which is where many of the better SF authors have been focusing for a while now.


This reminds me of my favorite note [1] from Ursula Le Guin on technology:

> Its technology is how a society copes with physical reality: how people get and keep and cook food, how they clothe themselves, what their power sources are (animal? human? water? wind? electricity? other?) what they build with and what they build, their medicine — and so on and on. Perhaps very ethereal people aren’t interested in these mundane, bodily matters, but I’m fascinated by them, and I think most of my readers are too.

> Technology is the active human interface with the material world.

[1] https://www.ursulakleguin.com/a-rant-about-technology


Yeah, that tracks. If we're being real, there won't ever be much actual human exploration beyond Earth; it'll all be done with fully automated systems. We're just not physically made for the radiation and extremely long periods of idle downtime. Star Wars has the self-awareness to call itself fantasy, even though 99% of all other sci-fi is pretty much that too.

Seeing drones do all the work unfortunately isn't very interesting though.


While it doesn’t touch on AI at all (that I remember, I think there is some basic ship AI but it’s not a major plot point and it never “talks”) the Honor Harrington series is “Horatio Hornblower in space” and I highly recommend it.

Also I love the Zones of Thought series and The Culture.


Plausible SF plot: some (sort of) human guinea pigs try to escape the robots' biotech lab ship.


I've seen recent interesting papers on reasoning models with training costs from US$6 to US$4,500. The problem is that you need a bunch of fast RAM for efficient training. But you can do some limited fine-tuning (Q-LoRA, etc.) of models up to 14B parameters on a 24 GB graphics card, and full fine-tunes of 1.5B models.

It's very affordable for a small university research group. And not totally out of reach for hobbyists.
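For a sense of what that looks like in practice, here's a rough sketch of loading a ~14B model in 4-bit and attaching LoRA adapters with Hugging Face peft, so training fits on a 24 GB card. The model name and hyperparameters are placeholders, and exact APIs shift between library versions:

    # Rough sketch: load a ~14B model in 4 bits, attach LoRA adapters.
    # Model name and hyperparameters are placeholders, not a recipe.
    import torch
    from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
    from transformers import AutoModelForCausalLM, BitsAndBytesConfig

    bnb = BitsAndBytesConfig(load_in_4bit=True,
                             bnb_4bit_compute_dtype=torch.bfloat16)
    model = AutoModelForCausalLM.from_pretrained(
        "Qwen/Qwen2.5-14B-Instruct",  # placeholder ~14B causal LM
        quantization_config=bnb,
        device_map="auto",
    )
    model = prepare_model_for_kbit_training(model)

    # Train small low-rank adapter matrices instead of the full weights.
    lora = LoraConfig(r=16, lora_alpha=32, target_modules="all-linear",
                      task_type="CAUSAL_LM")
    model = get_peft_model(model, lora)
    model.print_trainable_parameters()  # a tiny fraction of the 14B

From there, a standard Trainer loop updates only the adapter weights, which is why the whole thing fits in 24 GB of VRAM.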


Really $6? Where was that?


Once the problem gets narrow enough, do you risk training a model that reinvents a straightforward classic algorithm at far higher cost?


Well, in this case there is a much more straightforward method with the same CP-SAT solver used to create the puzzles. This is more of a fun experiment to see if we can train LLMs to solve these kinds of logical deduction problems.
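For anyone unfamiliar with CP-SAT: here's a toy logical deduction puzzle solved directly with Google's OR-Tools, the same family of solver mentioned above. The puzzle itself is invented for illustration:

    # Toy logical deduction puzzle solved with OR-Tools CP-SAT.
    # The puzzle is invented for illustration.
    from ortools.sat.python import cp_model

    model = cp_model.CpModel()

    # Three people live in houses 0..2, all in different houses.
    alice = model.NewIntVar(0, 2, "alice")
    bob = model.NewIntVar(0, 2, "bob")
    carol = model.NewIntVar(0, 2, "carol")
    model.AddAllDifferent([alice, bob, carol])

    # Clues: Alice lives somewhere left of Bob; Carol isn't in house 0.
    model.Add(alice < bob)
    model.Add(carol != 0)

    solver = cp_model.CpSolver()
    if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
        for person in (alice, bob, carol):
            print(person.Name(), "->", solver.Value(person))

The solver finds a consistent assignment directly, no training run required.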


I've been experimenting with vlm-run (plus custom form definitions), and it works surprisingly well with Gemini 2.0 Flash. Costs, as I understand it, are also quite low for Gemini. You'll get the best results with simple to medium-complexity forms, roughly the same ones you could ask a human to process with less than 10 minutes of training.

If you need something like this, it's definitely good enough that you should consider kicking the tires.
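If it helps, the underlying pattern is a Pydantic schema plus a vision model; vlm-run wires this up for you, but a hand-rolled version looks roughly like the sketch below. The W2Form schema, prompt, and file name are hypothetical, and this isn't vlm-run's actual API:

    # Sketch of schema-driven form extraction with Gemini 2.0 Flash.
    # The W2Form schema, prompt, and file name are hypothetical.
    import google.generativeai as genai
    import PIL.Image
    from pydantic import BaseModel

    class W2Form(BaseModel):  # hypothetical "custom form definition"
        employee_name: str
        employer_name: str
        wages: float

    genai.configure(api_key="YOUR_API_KEY")
    model = genai.GenerativeModel("gemini-2.0-flash")

    prompt = ("Extract the fields from this form and return JSON "
              f"matching this schema: {W2Form.model_json_schema()}")
    response = model.generate_content(
        [prompt, PIL.Image.open("w2.png")],
        generation_config={"response_mime_type": "application/json"},
    )
    form = W2Form.model_validate_json(response.text)  # validate output
    print(form)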


BTW, check out the Gemini qualitative results here in our hub: https://github.com/vlm-run/vlmrun-hub?tab=readme-ov-file#-qu....

It gives you an idea of where today's models fail (Gemini Flash, OpenAI GPT-4o and GPT-4o-mini, open-source ones like Llama 3.2 Vision, Qwen2.5-VL, etc.).


Very cool! If you have more examples / schemas you'd be interested in sharing, feel free to add to the `contrib` section.


"Pick & place" is a term for a kind of robot that can pick up scattered items from a conveyor belt and arrange them in a regular fashion.

The really fast multi-arm versions can be hypnotic to watch. You can see an example at 1:00 in this video: https://youtu.be/aPTd8XDZOEk

The limitation of industrial pick & place robots is that they're configured for a single task, and reconfiguring them for a new product is notoriously expensive.

Magma's "pick & place" demo is much slower and shakier than a specialized industrial robot. But Magma can apparently be adapted to a new task by providing plain English instructions.


This model can't code at all.

It does high school math homework, plus maybe some easy physics. And it does them surprisingly well. Outside of that, it fails every test prompt in my set.

It's a pure specialist model.


> On the other hand, I do know that it's a widespread feeling among salespeople, that a person who interacts with them, without buying anything, is stealing from them.

It's less this, and more that some customers have no business buying certain products, and the sooner you can filter them out of your sales funnel, the happier everyone will be a year from now.

Let's say you sell specialized enterprise SaaS that normally gets integrated at the API level. This means:

- Lots of potential customers are too small to benefit from the product, or they don't understand what problem the product is actually solving.

- Closing a sale probably involves lawyers and custom contracts, which costs $$$.

- Customers need to either know how to use an API, or they need to have budget to pay the SaaS provider to write integration code.

- Once the sale is closed, there's likely to be a labor-intensive onboarding process.

- You don't actually want customers who are going to try it for a year, hate it, and bail. You want people who love it and who will renew every year, or who will buy more without another sales process.

So sales people are trained to think about these issues, and to prioritize leads who have relevant problems, a good baseline understanding of what the product does, and enough budget to be able to properly set up and use the product.


> healthcare is breaking under the load.

I live in a part of the US with high average incomes and an absolutely excellent hospital system.

And it's breaking, too. If you go to the ER and you're not literally bleeding to death, it will be a 5 or 10 hour wait. I saw someone wait over 3 hours with a visibly and severely dislocated bone.

Non-emergency visits for anything more complicated than "put some ice on it and take some NSAIDs" can easily approach $1,000, and a routine childbirth can run over $50,000, I think?

Departments are horribly understaffed, the administration pays themselves buckets of money and manages things from 30,000 feet with Excel, and at one point they employed 50 programmers to deal with constantly shifting medical coding rules for dozens of insurance companies.

Insurance for a family often runs $1,000 to $1,500 per month for the employee part, with the employer spending plenty more. And everything about insurance is a corrupt nightmare.

It all barely holds together somehow, at one of the highest costs in the world. And when our local system eventually gets around to it, they provide excellent care—but nothing dramatically better than a private hospital in Paris, and at a much higher price.


Please pick any semi-advanced economy other than the USA when talking about healthcare. The USA is well known for its corrupt healthcare system. You are picking the worst of the worst as an example.


They're picking the US as an example because the person who started this discussion was saying that the European model is in their view unsustainable. It's possible that that person wants to change Europe into Singapore or something, but given this site, I consider it much more likely that they meant "unsustainable compared to the US".

