
I feel like being a journalist in a warzone already exposes someone to a sufficient number of threats for the benefit of human society, and we shouldn't simply accept them also being exposed to an entirely different set of completely unnecessary threats from a pile of sociopaths running their own sick gambling dead pools.

> Regarding number 2: "Shareholders" would include anyone who owns any stock at all, including a lot of middle class people with a simple S&P 500 ETF in their portfolio.

Yes, but shares are not at all uniformly distributed. Tim Cook owns 3.28 million shares of AAPL. For comparison, the 50 million Vanguard customers have to divide 1.3 billion shares amongst them, averaging about 26 shares of AAPL each.

> And the increase in productivity allowed more people to become capital owners, AKA entrepreneurs. The explosion in software entrepreneurs, for example.

The majority of those end up getting bought by larger software companies.

Overall capital ownership is increasingly concentrated among a small number of elites.


This excellent article reminded me a lot of something I tried to get at a while back:

https://journal.stuffwithstuff.com/2010/11/26/the-biology-of...

That article didn't seem to resonate with anyone when I wrote it. I've been thinking about it more lately in the era of LLMs.


"Progressive disclosure" is the name of the UX principle that aims to provide a continuum between the user need for simplicity versus fine-grained control.

> if companies reported dollars in and dollars out live to shareholders at least we would have an idea of how the company is doing in a general sense.

Goodhart's law is knocking on your door right now.


Help me understand what you are saying here. For those who don't know, this one is "when a measure becomes a target, it ceases to be a good measure".

I'm not advocating for a single metric that can be gamed. A business is fundamentally about dollars in and dollars out. Maybe add receivables in there and a few other metrics from the P&L. I'm deliberately not being prescriptive about purely cash in and cash out.

I do think there is a low-friction way for companies to report certain metrics daily that, over time, would give their shareholders a sense of the company's health and trajectory.


Dollars/receivables in and dollars/deliverables out is just a question of rate, unless I'm missing something.

If a $10 billion company has a dollar in/out rate of $1,000,000 per second due to actual organic business, a company with only $2,000,000 can set up an LLC it buys from and sells to, and legally 'swap' $1,000,000 a second back and forth in services "bought and sold" to mimic the appearance of the $10B company and generate business interest, confidence, and investment.

That's an extreme example, but the point is that real-time money flow has nothing to do with the actual 'health' of a company.


> - OSS is valuable for decentralizing power and influence

That was the intention and hope, but I think the past twenty years have shown that it largely had the opposite effect.

Let's say I write some useful library and open source it.

Joe Small Business Owner uses it in his application. It makes his app more useful and he makes an extra $100,000 from his 1,000 users.

Meanwhile, Alice Giant Corporate CEO uses it in her application. It makes her app more useful by exactly the same amount per user, but because she has a million users, now she's a hundred million dollars richer.

If you assume that open source provides additive value, then giving it to everyone freely will generally have an equalizing effect. Those with the least existing wealth will find that additive value more impactful than someone who is already rich. Giving a poor person $10,000 can change their life. Give it to Jeff Bezos and it won't even change his dinner plans.

But if you consider that open source provides multiplicative value, then giving it to everyone is effectively a force multiplier for their existing power.

In practice, it's probably somewhere between the two. But when you consider how iterative these systems are, even a slight multiplicative effect means that over time it mostly enriches the already rich.
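To make that additive-versus-multiplicative distinction concrete, here's a minimal back-of-the-envelope sketch. The $100k/$1B starting points, the $10k boost, and the 5% per-iteration gain are made-up numbers for illustration, not anything from the argument above:

    #include <stdio.h>

    int main(void) {
        /* Made-up starting wealth for a small and a large player. */
        double small = 1e5;   /* $100k */
        double large = 1e9;   /* $1B   */

        /* Additive value: everyone gets the same absolute boost.
           The ratio between them shrinks, i.e. an equalizing effect. */
        double boost = 1e4;   /* $10k of value from the library */
        printf("additive:       ratio %.0f -> %.0f\n",
               large / small, (large + boost) / (small + boost));

        /* Multiplicative value: everyone gets the same relative boost,
           compounded over ten iterations of the system.
           The absolute gap widens every round. */
        double s = small, l = large;
        for (int i = 0; i < 10; i++) {
            s *= 1.05;
            l *= 1.05;
        }
        printf("multiplicative: gap $%.0f -> $%.0f\n", large - small, l - s);

        return 0;
    }

Under additive value the ratio between the two shrinks; under multiplicative value the absolute gap grows every round.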

Seven of the ten richest people in the world got there from tech [1]. If the goal of open source was to lead to less inequality, it's clearly not working, or at least not working well enough to counter other forces trending towards inequality.

[1]: https://en.wikipedia.org/wiki/The_World%27s_Billionaires


When presented with a choice between:

1. Take a job making $$$$$$$ at a company making the world worse.

2. Take a job making $$$ at a company not making the world worse.

Very few people have a personality such that they'll pick 2.


Exactly what I was asking OP; her/his comment sounded like people would pick the latter (I agree with you).

> The implicit unfounded assumption is whether that's actually worth more than a well written orderly response.

It's not implicit or unfounded. The parent comment is explicitly saying that's what they prefer. And, as an actual human, their preference is intrinsically valid for them.

If I like my kid's crappy cooking over a Michelin-star meal made by a robot... then I get to like my kid's crappy cooking more. I have that right. There is no social consensus when it comes to what I want. You can't argue whether my preference is correct or not; it's my preference.


As a software developer and human being, I know people often say they prefer one thing while actually preferring something else. That's human nature.

People have strong feelings about AI in general, and that can definitely cloud what they will say about it. Everybody hates AI but, like CGI in movies, they likely only hate the AI or CGI that they notice.


Believing that, say, the use of AI will primarily enrich billionaires that are already doing societal harm is not clouding one's view of AI. It is one's view of AI.

To say otherwise is to say that worrying about lung cancer is clouding one's view of smoking.

> they only likely hate the AI or CGI that they notice.

No, this is simply not true at all. I dislike the use of AI even more when I don't notice it. My goal in getting on the Internet is to connect with other actual people and their creativity. I want actual people to be more connected to each other, and AI makes that worse, especially when it's good enough that people don't even realize they are being intermediated by corporations pumping out simulated humanity.


> Believing that, say, the use of AI will primarily enrich billionaires that are already doing societal harm is not clouding one's view of AI. It is one's view of AI.

That's fine. Nobody is forcing you to use AI. I dislike it when people force their ideas onto others.

> My goal getting on the Internet is to connect with other actual people and their creativity.

It's too bad your goal doesn't include interacting with people who don't speak your language and use AI to translate for them. Or people who struggle with writing in general. I don't think it's as black and white as you make it out to be.


> Nobody is forcing you to use AI. I dislike it when people force their ideas onto others.

I'm still being forced to live in a world filled with people who do use it and whose behavior affects me.

We had the President of the United States posting AI-manipulated propaganda on social media. Millions of voters saw that, regardless of whether or not I happen to personally use ChatGPT.

It doesn't matter if I light up a cigarette myself if I have to spend all day in a crowded bar where everyone else is smoking.

> I don't think it's as black and white as you make it out to be.

I'm not saying it's black and white. All I'm saying is that your description of someone's strong feelings about AI as "clouding" their stance is incorrect. You can be clear-headed about feeling something is a large net negative for the world.


> I'm still being forced to live in a world filled with people who do use it and whose behavior affects me.

My point... way at the top... is exactly that. People's behavior does have an effect, but it always has.

The President of the United States posting manipulated propaganda is the problem; using AI just makes it more obvious. It's actually better, right now, that it is so obvious. But anyone can do that, and has done it, with lesser tools to better effect.

People posting bullshit on the Internet has always been a problem. I'm not even sure how an AI ban would be enforceable. While I don't think I have the solution, I think it makes more sense to look at this as a content problem rather than a tool problem, in both quality and quantity.


> But it wouldn't surprise me if someday LLM-enhanced output becomes sufficiently superior to average human output that the choice to stick with authentic human output will be more painful.

If your definition of "superior" includes some amount of "provides a meaningful connection to another living being", then LLM output will rarely be superior even when it's factually and grammatically correct.


When someone's communication is casual and informal, without any context, you really can't distinguish between:

* The author is being flippant and not taking the situation seriously enough.

* The author is presuming a high-trust audience that knows that they have done all the due diligence and don't have to restate all of that.

In this case, it's a devlog (i.e. not a "marketing post") for a language that isn't at 1.0 yet. A certain amount of "if you're here, you probably have some background" is reasonable.

The post does link directly to the PR and the PR has a lot more context that clearly conveys the author knows what they are doing.

It is weird reading about (minor) breaking language changes mentioned sort of in passing. We're used to languages being extremely stable. But Zig isn't 1.0 yet. Andrew and friends certainly take user stability seriously, but you sign up for a certain amount of breakage if you pick the language today.

As someone who maintains a post-1.0 language, I can say there really is a lot of value in breaking changes like this. It's good to fix things while your userbase is small. It's maddening to have to live with obvious warts in the language simply because the userbase got too big for you to feasibly fix them, even when all the users wish you could. (Witness: the broken precedence of the bitwise operators in C.)
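For anyone who hasn't been bitten by that C wart, here's a tiny illustrative snippet (the FLAG name and values are made up): because == binds more tightly than &, the unparenthesized check parses the wrong way.

    #include <stdio.h>

    #define FLAG 0x2

    int main(void) {
        int flags = 0x6;  /* FLAG is set */

        /* Reads like "(flags & FLAG) == 0", but == binds tighter than &,
           so it parses as "flags & (FLAG == 0)", i.e. flags & 0. */
        if (flags & FLAG == 0)
            printf("flag is clear?\n");   /* never prints */

        /* The parentheses every C programmer has learned to add. */
        if ((flags & FLAG) == 0)
            printf("flag is clear\n");
        else
            printf("flag is set\n");      /* prints */

        return 0;
    }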

It's better for all future users to get the language as clean and solid as you can while it's still malleable.

