But it's not necessarily good Hacker News material. We get links that don't work, and publishers get free promotion without providing anything. We'll say something about viable business models, and then somebody will post an archive.org link, bypassing both the paywall and the viable model.
I flag links that do not work, not because I'm opposed to subscriptions (I subscribe to some online publications), but because I think Hacker News should only link to articles that are actually on the internet.
Just an internationally known, 137-year-old news outlet that ranks #3 in web traffic for finance news... Not knowing about FT says a lot more about the poster than about FT.
His thoughts and opinions are not the FT's, so while the combination of his article title and a paywall might appear ironic, in reality it's just happenstance, with no deeper meaning or hypocrisy.
The "walled gardens" that he and others speak about in this context do not refer to sites/apps that cost money; the term has a very different meaning. But perhaps it wouldn't be fair to expect you to know this if the paywall prevented you from reading the article. Fear not, now you can: https://archive.ph/4Vvms
It could be argued that the emergence of the web and search engines in particular has established this as a common pattern long before AI was around. I'm not convinced that AI represents a dramatic change to this behavior, though the point about anthropomorphizing AI likely acts as a magnifier.
I think the main difference is the degree of anthropomorphizing that happens with new chatbots. I mean, most kids in the 2000s didn't believe that they were literally asking Jeeves a question, but a lot of users today actually think of AI as an anthropomorphic being.
Same things I use it for as well - crap like "update this class to use JDK21" or "re-implement this client to use AWS SDKv2" or whatever.
And it works maybe... 80% of the way, and I spend all my time fixing the remaining 20%. Anecdotally, I don't "feel" like this really accelerates me or reduces the time the change would take if I just implemented the translation manually.
Amazon publicly claims it has saved hundreds of millions on JVM upgrades using AI, so while the work feels trivial (because before, it would have ended up in the "just don't do it" pile), it's a relevant use case.
I think this is overestimating the impact of LLMs.
Fact is, even if they are capable of fully replicating or even replacing actual human thought, at best they regurgitate what has come before. They are, effectively, a tutor (as another commenter pointed out).
A human still needs to consume their output and act on it intelligently. We already do this, except with other tools/mechanisms (i.e. other humans). Nothing really changes here...
I personally still don't see the actual value of LLMs being realized relative to their cost to build anytime soon. I'll be shocked if any of this AI investment pays off beyond some minor curiosities. In ten years we're going to look back at this period the same way we look at cryptocurrency now: a waste of resources.
> A human still needs to consume their output and act on it intelligently. We already do this, except with other tools/mechanisms (i.e. other humans). Nothing really changes here...
What changes is the educational history of those humans. It's like how the world is getting obese: on average, there are areas where we empirically don't choose our long-term interests over our short-term ones. Apparently homework is one of those things, according to teachers like the one in TFA. Instead of doing their own homework, students are having their "tutor" do it for them.
Hopefully the impact of this will be like the impact of calculators. But I also fear it will be like having tutors do your homework and take your tests until you hit a certain grade, when suddenly the tools you rely on stop working and you have no practice doing things any other way.
I appreciate your faith in humanity. However, you would be surprised at the lengths people will go to avoid thinking for themselves. Ex: a person I sit next to in class types every single group discussion question into ChatGPT. When the teacher calls on him, he reads the answer word for word. When the teacher follows up with another question, you hear "erh, uhm, I don't know" and he fumbles out an answer. Especially in the context of learning, people who have self-control and use AI deliberately will benefit. But those who use AI as a crutch to keep up with everyone else are ill-prepared. The difference now is that shoddy work/understanding from AI is passable enough that somebody who doesn't put in the effort to understand can get a degree like everybody else.
I'd suggest this is a sign that most "education" or "work" is basically pointless busy work with no recognizable value.
Perpetuating a broken system isn't an argument about the threat of AI. It just highlights a system that needs revitalization (and AI/LLMs are not that tool).
Same here. I've seen it happen most strongly since the company switched from a growth (OrderProductSales optimization) approach to one that maximizes cash flow. Basically a switch from an explore to an exploit mindset, which cynically can be connected directly to "enshittification" as a philosophy. It's done a number on me, since I originally joined the company for its "peculiar" culture, something that has long since died.
I do appreciate the other major theme of the announcement today: removal of bureaucracy and pointless layers of management. I'm hoping this will lead to a collapse of some of these silly little empires/kingdoms that L7-L8s have built up for themselves in the past 6 years.
Honestly, it's just easier to block/drop Facebook entirely and actually talk to the people I want to talk to directly.
Sure I miss out on some things, but I still have friends and family and I still talk to them. I won't make any broad moralistic/judgemental statements here, but for me at least I've found this to be a return to a healthier relationship with a number of people.
If that's the root reason, then there is zero reason that footage should be open to consumption by the manufacturer (and made generally available to Tesla employees). That is owner data, not company data, and it should be stored in a cryptographically secure manner accessible only to the owner. This is entirely possible to implement, but it isn't done because it would forfeit part of the information asymmetry that Tesla enjoys over its customers/market.
We don't have any evidence that it's generally open to consumption by all Tesla employees, and I'd be shocked if it was. I also don't think there's any large company whose cloud video storage encrypts video files this way; Tesla isn't uniquely dumb about this.
It's already there, except the ads are baked into most content as "sponsored videos". They make it easy to skip over the ads (seriously, just fast forward 20-60 seconds depending on the video).
For better or worse, the vast majority of my media consumption is youtube these days and of all the subs I pay for, it's the one I get the most value out of. I don't get the cynicism.
What's the alternative here? Just offer the service with minimal ads and hope people decide to sign up for the ad-free version, a value proposition that makes little sense precisely because the ads are minimal?
They aren't bullying anyone. They are trying to make a business model work as efficiently as possible. Anything that relies on ad revenue is going to be predatory like this.
Links to a paywall trying to get me to pay for some subscription service I've never heard of and would never sign up for sight unseen.
mmmhmmm