
When I did my 20% on Go at Google, about 10 years ago, we already had a semi-formal rule that channels must not appear in exported function signatures. It turns out that using CSP in any large, complex codebase is asking for trouble, and this holds even for projects where members of the core Go team wrote the CSP code themselves.

If you take enough steps back and really think about it, the only synchronization primitive that exists is a futex (and maybe atomics). Everything else is an abstraction of some kind. If you're really determined, you can build anything out of anything. That doesn't mean it's always a good idea.

Looking back, I'd say channels are far superior to condition variables as a synchronized cross-thread communication mechanism - when I use them these days, it's mostly for that. Locks (mutexes) are really performant and easy to understand and generally better for mutual exclusion. (It's in the name!)
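
A rough sketch of that division of labor (my own toy example; the names and numbers are made up): a channel hands a result from a worker goroutine back to the caller, while a plain mutex guards a shared counter.

    package main

    import (
        "fmt"
        "sync"
    )

    // expensiveComputation stands in for real work done on another goroutine.
    func expensiveComputation(n int) int { return n * 2 }

    func main() {
        // Channel: cross-goroutine communication. The receive doubles as
        // the synchronization point with the worker.
        result := make(chan int)
        go func() { result <- expensiveComputation(21) }()

        // Mutex: plain mutual exclusion around shared state.
        var (
            mu   sync.Mutex
            hits int
            wg   sync.WaitGroup
        )
        for i := 0; i < 4; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                mu.Lock()
                hits++
                mu.Unlock()
            }()
        }
        wg.Wait()

        fmt.Println(<-result, hits) // 42 4
    }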


> When I did my 20% on Go at Google, about 10 years ago, we already had a semi-formal rule that channels must not appear in exported function signatures.

That sounds reasonable. From what little Erlang/Elixir code I’ve seen, the sending and receiving of messages is also hidden as an implementation detail in modules. The public interface did not expose concurrency or synchronization to callers. You might use them under the hood to implement your functionality, but it’s of no concern to callers, and you’re free to change the implementation without impacting callers.
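
A minimal sketch of what that looks like in Go, assuming this is roughly what the rule is after (the package name fetcher and the helper fetchOne are hypothetical, not from any real codebase): the exported function blocks and returns plain values, and the channel plus goroutines stay private.

    // Package fetcher: the exported API is a plain blocking call; the
    // fan-out and the error channel are implementation details that
    // callers never see and that can be swapped out later.
    package fetcher

    import (
        "io"
        "net/http"
        "sync"
    )

    // FetchAll retrieves all URLs concurrently and returns the bodies in
    // input order. Note: no channels in the signature.
    func FetchAll(urls []string) ([]string, error) {
        out := make([]string, len(urls))
        errs := make(chan error, len(urls)) // private to this function
        var wg sync.WaitGroup
        for i, u := range urls {
            wg.Add(1)
            go func(i int, u string) {
                defer wg.Done()
                body, err := fetchOne(u)
                if err != nil {
                    errs <- err
                    return
                }
                out[i] = body
            }(i, u)
        }
        wg.Wait()
        close(errs)
        if err, ok := <-errs; ok {
            return nil, err // surface the first error, if any
        }
        return out, nil
    }

    // fetchOne is a hypothetical helper; any blocking I/O would do here.
    func fetchOne(u string) (string, error) {
        resp, err := http.Get(u)
        if err != nil {
            return "", err
        }
        defer resp.Body.Close()
        b, err := io.ReadAll(resp.Body)
        return string(b), err
    }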


How large do you deem to be large in this context?

I had success in using a CSP style, with channels in many function signatures in a ~25k line codebase.

It had ~15 major types of process, probably about 30 fixed instances overall in a fixed graph, plus a dynamic sub-graph of around 5 processes per 'requested action'. So those sub-graph elements were the only parts which had to deal with tear-down and clean-up.

There were then additionally some minor types of 'process' (i.e. goroutines) within many of those major types, but they were easier to reason about as they only communicated with that major element.

Multiple requested actions could be present, so there could be multiple sets of those 5 process groups connected, but they had a maximum lifetime of a few minutes.

I only ended up using explicit mutexes in two of the major types of process, where they happened to make the most sense and hence reduced system complexity. There were about 45 instances of the 'go' keyword.

(Updated numbers, as I'd initially misremembered/miscounted the number of major processes)


What is "20% on Go"? What is it 20% of?

At least historically, Google engineers had 20% of their time to spend on projects not related to their core role.

This still exists today. For example, I am on the payments team but I have a 20% project working on protobuf. I had to get formal approval from my management chain and someone on the protobuf team. And it is tracked as part of my performance reviews. They just want to make sure I'm not building something useless that nobody wants and that I'm not just wasting the company's time.

I see why they do this, but man it almost feels like asking your boss for approval on where you go on vacation. Do people get dinged if their 20% time project doesn't pan out, or they lose interest later on?

I assume this means "20% of my work on go" aka 1 out of 5 work days working on golang

Google historically allowed employees to self-direct 20% of their working time (onto any Google project, I think).

I think the two basic synchronisation primitives are atomics and thread parking. Atomics allow you to share data between two or more concurrently running threads, whereas parking allows you to control which threads are running concurrently. Whatever low-level primitives the OS provides (such as futexes) are more an implementation detail.
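
As a toy illustration of getting by with just those ingredients (a sketch, not production code): a spinlock built from a single atomic flag, with runtime.Gosched standing in for real parking, which in practice goes through the runtime and, on Linux, a futex.

    package main

    import (
        "fmt"
        "runtime"
        "sync"
        "sync/atomic"
    )

    // SpinLock is a toy lock made from one atomic flag. A real lock would
    // park the waiting thread instead of yielding in a loop.
    type SpinLock struct{ held atomic.Bool }

    func (l *SpinLock) Lock() {
        for !l.held.CompareAndSwap(false, true) {
            runtime.Gosched() // crude stand-in for parking
        }
    }

    func (l *SpinLock) Unlock() { l.held.Store(false) }

    func main() {
        var (
            l  SpinLock
            wg sync.WaitGroup
            n  int
        )
        for i := 0; i < 8; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                l.Lock()
                n++
                l.Unlock()
            }()
        }
        wg.Wait()
        fmt.Println(n) // always 8
    }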

I would tentatively make the claim that channels (in the abstract) are at heart an interface rather than a type of synchronisation per se. They can be implemented using mutexes, pure atomics (if each message is a single integer), or any number of other ways.

Of course, any specific implementation of a channel will have trade-offs. Some more so than others.
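
To make that concrete, here is a minimal sketch of the claim (a hypothetical MiniChan type; Go's real channels live in the runtime and work quite differently): a bounded, channel-like queue built from nothing but a mutex and a condition variable, behind the same Send/Recv shape.

    // Package minichan: a bounded "channel" built from a mutex and a
    // condition variable. Capacity must be at least 1 in this sketch;
    // unbuffered rendezvous semantics are deliberately not handled.
    package minichan

    import "sync"

    type MiniChan[T any] struct {
        mu    sync.Mutex
        cond  *sync.Cond
        buf   []T
        limit int
    }

    func New[T any](capacity int) *MiniChan[T] {
        c := &MiniChan[T]{limit: capacity}
        c.cond = sync.NewCond(&c.mu)
        return c
    }

    // Send blocks while the buffer is full, then enqueues v.
    func (c *MiniChan[T]) Send(v T) {
        c.mu.Lock()
        defer c.mu.Unlock()
        for len(c.buf) >= c.limit {
            c.cond.Wait()
        }
        c.buf = append(c.buf, v)
        c.cond.Broadcast() // wake any blocked receivers
    }

    // Recv blocks while the buffer is empty, then dequeues a value.
    func (c *MiniChan[T]) Recv() T {
        c.mu.Lock()
        defer c.mu.Unlock()
        for len(c.buf) == 0 {
            c.cond.Wait()
        }
        v := c.buf[0]
        c.buf = c.buf[1:]
        c.cond.Broadcast() // wake any blocked senders
        return v
    }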


To me, message passing is its own thing. It's the most natural way of thinking about information flow in a system consisting of physically separated parts.

What you think is not very relevant if it doesn't match how CPUs work.

I guess I'm officially listed as a "staff engineer". I have been at this for 20 years, and I work with multiple teams in pretty different areas, like the kernel, some media/audio logic, security, database stuff... I end up alternating a lot between using Rust, Java, C++, C, Python and Go.

Coding assistant LLMs have changed how I work in a couple of ways:

1) They make it a lot easier to context switch between e.g. writing kernel code one day and a Pandas notebook the next, because you're no longer handicapped by slightly forgetting the idiosyncrasies of every single language. It's like having smart code search and documentation search built into the autocomplete.

2) They can do simple transformations of existing code really well, like generating a match expression from an enum. They can extrapolate the rest from 2-3 examples of something repetitive, like converting from Rust types into corresponding Arrow types.

I don't find the other use cases the author brings up realistic. The AI is terrible at code review and I have never seen it spot a logic error I missed. Asking the AI to explain how e.g. Unity works might feel nice, but the answers are at least 40% total bullshit and I think it's easier to just read the documentation.

I still get a lot of use out of Copilot. The speed boost and removal of friction lets me work on more stacks and, consequently, lead a much bigger span of related projects. Instead of explaining how to do something to a junior engineer, I can often just do it myself.

I don't understand how fresh grads can get use out of these things, though. Tools like Copilot need a lot of hand-holding. You can get them to follow simple instructions over a moderate amount of existing code, which works most of the time, or ask them to do something you don't exactly know how to do without looking it up, and then it's a crapshoot.

The main reason I get a lot of mileage out of Copilot is exactly because I have been doing this job for two decades and understand what's happening. People who are starting in the industry today, IMO, should be very judicious with how they use these tools, lest they end up with only a superficial knowledge of computing. Every project is a chance to learn, and by going all trial-and-error with a chatbot you're robbing yourself of that. (Not to mention the resulting code is almost certainly half-broken.)


This is pretty much how I use LLMs for coding. I already know what I want; I just don't want to type it out. I ask the LLM to do the typing for me and then I check it over, copy/paste it in, and make any adjustments or extensions.


This is the way.

Just last night I did a quick test on Cursor (first time trying it). Opened up my IRC bot project and asked it to "add relevant handlers for IRC messages".

It immediately recognised the pattern I had used before and added CTCP VERSION, KICK, INVITE and 433 (nickname already in use). It didn't try to add everything under the sun and just added those. Took me 20 seconds.


I mean this in the nicest way possible: this paragraph, with all the repetition and constant use of the word "expert", is completely unhinged. I really recommend re-reading what you write.


The term expert is used frequently in US government settings, per the US Office of Personnel Management: https://www.opm.gov/frequently-asked-questions/assessment-po...

Anyone above the lowest pay grades gets categorized as some type of "expert". As the gov tries to justify higher pay to keep up with inflation and compete with private job markets, more people become categorized as "experts" to fill higher pay grades. (For perspective, you can't afford to live independently in the DC metro area unless you're in the top 1/3 of pay grades.) I can totally see how someone throwing the term around could appear unhinged to an outsider, but the reality is that the US government as a whole lives in its own unhinged little world.


> Anyone above the lowest pay grades gets categorized as some type of "expert".

I've read that at the F.B.I., anyone not pushing a broom gets the title "agent."


It was absolutely not like that, at least up until 10 years ago. Agents and the operational staff were totally separate.


I am not OP and I see nothing of the sort you are implying. The writing is dry humor and funny. The expert repetition of the word "expert" for the obvious non-expert expert delivers a good bit of the story.


It's a bit dramatic, but "unhinged" is excessive. I imagine the repetition is a stylistic choice. It builds up the conclusion, and turns a one-line anecdote into a story.


It echoes the "cosmonauts just used a pencil!" copypasta.


(It's a bot)


"he logged into his remote profile"

Yes.


And his post history. It's always one sentence about who he is, then a paragraph about one of his many careers slightly related to the OP.


What you're describing is not how it works. Chiefly, the hiring pipelines are not set up for a single role, but a whole family of them. They are filled ahead of need. (Or were, at a time when this would've been taking place.)

There are other inaccuracies, but suffice it to say, this comment section is full of comments by people who have never been hiring managers talking about how hiring works.


I re-read the wiki page on the DSA multiple times. It does explicitly spell out that at least one "diverse" candidate must be on the slate for each role. Yes, candidates are considered for multiple roles as they go through the hiring pipeline, but that doesn't change the fact that it prohibits moving forward with a hire if the candidate pool for the role does not include a diverse candidate.

If this is wrong, by all means explain how the DSA actually works.


Nothing you just said contradicts the OP in any way, as those details don't change any aspect of the argument.


The phrase "confidently incorrect" comes to mind


Started 15-20 years ago, whereas Uber and Tesla hopped on the bandwagon late and tried to play catchup. I remember talking about the self-driving project at Google in 2008 or something like that. (IIRC they were trying to use Haskell for something.)


We started January 2009.


And this is why I now have to read 30 page design docs that could have been 3 pages and said the same thing.

Please try to understand why people have such a strong dislike of flowery writing, especially in technical texts. If you read a lot of papers or design docs, it makes your life miserable.


Yes, it's the usual advice of how artists/authors/scientists make something: 1) Make the thing, 2) Try removing each part, 3) If the work fails without that part, put it back.

For example, adverbs are good when readers might have the wrong image without them. E.g., "Alice [quickly] walked." Most of the time, writing is better without words like "very" or "quite."


When it comes to technical writing the only thing I can really discuss is documentation, and the key thing I'm personally looking for there is structure.

It could be about basically anything, just please, pretty please, for the love of god, make it structured. And I don't mean sections with catchy headings, I mean as structured and reference-like as possible.

I want to minimize the amount of time I spend reading prose and searching around, as well as the chance of missing things. I want to hit CTRL+F and be put where I need to be stat, and have that be enough. Structure alone can convey a lot of the idea behind how something works - please trust me to be able to use it to make basic leaps in logic.

A bad example of this is the AWS documentation: it's a mish-mash of prose and structured reference. A good example is the AWS CLI documentation (although if it led with example usages first, that'd be even better).


Writing good technical text is an art. There is a certain amount of fluff that helps, and it’s almost unnoticeable when it’s there. Without it, it’s too terse. Quite often, my complaint of technical documentation is “it did exactly what the docs said it would do, except in a situation that I didn’t expect it to do that”.


This is amazing, but crashes with a 500 every 30 seconds.


This is nihilistic nonsense. The author's problem is that he's only ever seemingly worked on web stuff. People stay in their domain far too often and then come up with big statements like "I have 20 years of experience and don't know what I'm doing." Is that maybe because you stick around a domain and layer defined by people with an average 2 years of experience, many of whom learned their job from a Youtube tutorial?

It's possible for organizations to get better, even good at building software. The foundations of the field haven't changed much, people just don't learn about them and go on to build towers of overwrought abstraction, which is the thing that keeps constantly changing.

If you think of React and Redux as foundational, then everything you say holds water. Go open a TCP socket.


Rant accepted :-)

One thing that certainly has changed, over these years, is that the size of applications is much, much larger now.

And, certainly, we never had to think about security in the 90's, for regular-degular business apps, anyway. Now, the security dimension is its own barrel of worms, both affecting our software design and requiring network security specialists as well, configuring our boxen. I doubt very many of us are working on pure intranet apps.

So, yeah, the roots are the same, but the trunk is much larger, higher and much more expansively bushy, and its environment is considerably more dangerous.


It's definitely harder, as you say. Not only do you have to think about technical aspects of security and privacy, but also the legal requirements for those things, and how they differ across countries.

But the complexity is not so much worse than it was in 2016, and yet in 2016 we could manage it.

The type of things I see in the FAANG world lately would not be happening 10 years ago: CI that randomly fails 5% of the time because everybody knows React, but nobody knows how to set up a Linux machine. The fact that USB drivers on M1 macs are still broken and nobody at Apple knows how to fix them, or that Apple seems to have no one on staff who knows what EDID is.

We're not failing because of new complexity, it's the stuff we put up to manage the old complexity that we can no longer service. And that won't get better until we accept that the answer is not "rebuild it from scratch, but this time let's have new grads do it with no training."

> I doubt very many of us are working on pure intranet apps.

I think this is part of the problem. The reason why your Mac will kernel panic if you plug in two external screens from the same production run is because nobody is working on "purely intranet apps" like the display driver.

I don't want to sound like I believe we were somehow smarter 10 years ago. But you had relatively solid foundations to build on and people were incentivized and given mentorship to attain mastery. We need to bring that back.


Yes to all that, but the blame -- as always -- belongs to the money people. They are the ones chasing shiny new sh_t instead of fixing what their existing systems are f_cking up royally. It bears remembering that those money folks are the ones choosing the CTOs, which is less likely to do with their technical solidity, but more likely to be because they will fall in line. But that's just my educated guess, by understanding money-centric folks and how they almost always operate. The love of money has corrupted and is corrupting so many branches of modern life.


> It bears remembering that those money folks are the ones choosing the CTOs, which is less likely to do with their technical solidity, but more likely to be because they will fall in line.

Wish somebody had told me that 25 years ago. I woke up to this reality way too late in my life and career. Only now have I started to get that it's about emitting the right signals: mostly that you are meek and malleable, if you want to get hired at certain places.

I of course refused to do that, many times, to the detriment of my career. Though maybe it was just shitty luck 7-8 times in a row, who knows. Also Eastern Europe is far from a good environment, so...


Same here, but I never sold my soul to them, so, while not having nearly as much money, I have my self respect. I hope you feel good about fighting the good fight, even if you did so by remaining in your natural, more innocent, state.

Peace be with you.


The culture around Rust is perfectly calibrated to set off middle-aged C/C++ developers.

1) It's very assertive without having a lot of experience or a track record.

2) It's extremely online, filled with anime references and Twitter jokes, obsessed with cuteness, etc...

3) It's full of whipper-snappers with 6 months of experience who think they can explain your job to you, even though you've been doing it for 30 years.

IME many arguments around Rust aren't focused on its technical properties, but boil down to "I just plain don't like you people."

If Rust can successfully become "boring", a lot of the opposition will go away.


I faced really strong criticism lately for criticizing (in the context of C++) a Safe C++ proposal that basically copies Rust into C++.

With all good and bad things this entails.

It seems there is a push from some people (which I hope is not successful in its current shape; I personally find it the wrong approach) to just bifurcate the C++ type system and standard library.

More context here if you are curious: https://www.reddit.com/r/cpp/comments/1g41lhi/memory_safety_...

What I found was, frankly, a lot of intellectual dishonesty in some of the arguments pushing in that direction.

Not that it is not a possible direction, but claims like "there is no other way" or "it is the only feasible solution" are made without exploring other alternatives. Swift/Hylo-style value semantics and subscripts (subscripts are basically controlled references for mutation without a full borrow checker), or adding compile-time enforced law-of-exclusivity semantics without a new type system, are possible (and better, IMHO) alternatives... well, you can see the thread.

There seems to be a lot of obsession with "the only right way is the Rust way", and everything else is directly discarded by the authors and some of the people supporting them.

I think there are a lot of strong feelings there.


You hire juniors on potential, seniors to fix a gap. Right now, 90% of hiring is focused on plugging gaps in the next 6 months - that's just where we are in the cycle.

As to why we're here, I think it's obvious at this point that tech massively overhired between 2016 and 2022.

Finally, the growth areas haven't completely disappeared, they've just shifted. If you're a junior right now, you should be getting into ML or something, not webdev. Plenty of entry level roles in ML.

