Author here. Please note this is an early draft/stream-of-consciousness. Feel free to read and share anyway, but my actual published articles are held to a higher standard!
I really didn't get on with that one. Felt very much like a book that could have easily been shortened down to an essay and suffered for the additional length.
I disagree about "next". I wasn't confused by the original usage. "Next" is more associated with "subsequent" than "upcoming". The "future" component is contextually inferred.
Probably nobody at all got confused by that word choice.
It didn’t take me long to parse out the meaning but the phrasing was confusing.
“The next book he wrote, Noise, ….” would have been better, or “After that book, he wrote Noise….”.
I absolutely was confused for a second or two and thought “wait, are we talking about a different person? He isn’t going to have a ‘next’ book unless he had one queued up?”.
Did I need the explanation above? Not really, I’d come to the right conclusion on my own but I can imagine someone who isn’t a native speaker (reader?) might stumble on that more and I enjoyed the confirmation.
You seem quite interested in right versus wrong. I wonder if you will be intellectually honest if/when I reveal errors, mistakes, oversimplifications, and so on?
> I looked up the word in a few different dictionaries and the top entry aligns more with "subsequent" in every one.
Even if you had looked at every dictionary, would you claim such a process resolves ambiguity in general? I hope not.
As you know, there are entries other than the first in a dictionary. Multiple entries mean there are multiple usages: there can be ambiguity. Sometimes usage diminishes or eliminates ambiguity, but not always.
> I looked up the word in a few different dictionaries...
You only took a small sample. How can you offer this as definitive evidence? You can't.
In case you didn't check it or overlooked it, here is the first entry from the Apple Dictionary:
> 1 (of a time or season) coming immediately after the time of writing or speaking: we'll go next year | next week's parade.
Anyhow, my argument does not rely on pointing to a dictionary and saying "I'm right" and "you are wrong". I am saying:
1. Reasonable people see ambiguity (in this specific case and in general).
2. No one person is the arbiter of what is ambiguous for others.
3. Claiming there is a definitive process to resolve ambiguity for everyone is naive.
The sheer irony of your unwarranted pedantic critique of the usage of “next” is that all HN threaded comments, including yours, have a “next” link in their headers which clearly does NOT refer to unwritten future comments.
This is inaccurate. Here is what "troll" means to many people: "a person who makes a deliberately offensive or provocative online post." My response clarified without being offensive. I was careful to word it neutrally. I hope that a charitable reader can see this.
To put it in the terms of Kahneman's Thinking, Fast and Slow: it is worth considering whether the commenter above got triggered first (a System 1 emotional reaction) and then later sought to rationalize (System 2) a "reason" for that: namely "he's a pedantic troll".
> If people can construct a simple and coherent story, they will feel confident regardless of how well grounded it is in reality. - Daniel Kahneman
I'm reasonably sure this is not what happened, judging by my own recollection of when I have been tempted to write similar things, and my discussions with people who have written similar things. However, your story is both simple and coherent.
It's much easier to point out others' alleged irrational thinking, but the main purpose of books like this is to help you better understand your own thinking.
> It's much easier to point out others' alleged irrational thinking, but the main purpose of books like this is to help you better understand your own thinking.
That sounds right. I only can make probabilistic guesses as to what is happening in someone else's brain. By posing a question to someone else, there is some chance that person may ask it of themselves. If not today, then perhaps in future.
> all HN threaded comments, including yours, have a “next” link in their headers which clearly does NOT refer to unwritten future comments.
You are claiming this resolves written ambiguity in comments? I hope not.
Also: I hope you recognize that the "next" link doesn't appear until a comment has a reply. So think about what someone sees when they are writing a comment; namely, there is no "next" link visible. This undermines your implied argument, which seems to be "the word 'next' is visible in the headers, which will make the use of 'next' in the comment itself unambiguous".
Forgive me for saying so, but what a silly argument -- on either level. You are clearly upset and bothered and resorting to rationalization and attacks.
This kind of defensive and dismissive response is so common we should make a name for it! Perhaps we could call it the "It Was Obvious To Me" Fallacy.
Here's one way to commit the fallacy: when someone points out a communication issue, mock them for being "too literal" or "pedantic" rather than acknowledging the ambiguity existed.
> I literally thought some unpublished book. But you shouldn't have doubled down on 'next'. Your first para was enough.
Thanks for the feedback.
To focus on "should" for a second: had I not written my second paragraph, I would not have made my main point. I'm trying to get people to pay attention to ambiguity more broadly and tamp down this all-too-common tendency to think "the way I see things is obvious and/or definitive", which pervades Hacker News like a plague. Perhaps working with computers too much has damaged our cognitive machinery: human brains are not homogeneous, nor are they deterministic parsers of meaning.
Perhaps the second paragraph got some people thinking a little bit. We are discussing Kahneman's life's work after all. This is a perfect place to discuss our flawed intellectual machinery and our biases. Kahneman would be happy if people here improved their self-understanding and communication with each other.
Meta-commentary on how people might react to the above comment: some people will think "X is so obvious" and frown upon people who think "X is not obvious".
> The irony is that people who insist something is "obvious" are often demonstrating a lack of awareness about how communication actually works. Clear communicators tend to be more empathetic about ambiguity precisely because they understand how easily misunderstandings occur. [1]
> “The curse of knowledge is the single best explanation I know of why good people write bad prose.” [2]
[1] Claude Sonnet 4.5 in response to """Some people that think "X is so obvious" that they will frown upon people who think "X is not obvious"."""
In safety-critical systems, we distinguish between accidents (actual loss, e.g. lives, equipment, etc.) and hazardous states. The equation is
hazardous state + environmental conditions = accident
Since we can only control the system, and not its environment, we focus on preventing hazardous states, rather than accidents. If we can keep the system out of all hazardous states, we also avoid accidents. (Trying to prevent accidents while not paying attention to hazardous states amounts to relying on the environment always being on our side, and is bound to fail eventually.)
One such hazardous state we have defined in aviation is "less than N minutes of fuel remaining when landing". If an aircraft lands with less than N minutes of fuel on board, it would only have taken bad environmental conditions to make it crash, rather than land. Thus we design commercial aviation so that planes always have N minutes of fuel remaining when landing. If they don't, that's a big deal: they've entered a hazardous state, and we never want to see that. (I don't remember if N is 30 or 45 or 60 but somewhere in that region.)
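To make the distinction concrete, here is a minimal sketch in Python. The names and numbers (including N = 30) are made up for illustration, not real operational limits; the point is that the thing you monitor and alarm on is the hazardous state, not the accident.

```python
# Minimal sketch: flag the hazardous state ("landed with less than the
# final reserve") rather than wait for the accident ("ran out of fuel").
# Numbers and names are illustrative, not real operational limits.

FINAL_RESERVE_MINUTES = 30  # assumed value of N, per the discussion above

def is_hazardous_state(fuel_remaining_minutes: float) -> bool:
    """The condition we design the system never to enter."""
    return fuel_remaining_minutes < FINAL_RESERVE_MINUTES

def landing_report(fuel_remaining_minutes: float) -> str:
    if is_hazardous_state(fuel_remaining_minutes):
        # No accident happened, but only the environment's cooperation
        # stood between this flight and one. Investigate.
        return "HAZARDOUS STATE: landed below final reserve, investigate"
    return "OK: landed with required reserve intact"

print(landing_report(52))  # OK
print(landing_report(18))  # HAZARDOUS STATE
```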
For another example, one of my children loves playing around cliffs and rocks. Initially he was very keen on promising me that he wouldn't fall down. I explained the difference between accidents and hazardous states to him in children's terms, and he slowly realised that he cannot control whether or not he has an accident, so it's a bad idea to promise me that he won't have an accident. What he can control is whether or not bad environmental conditions lead to an accident, and he does that by keeping out of hazardous states. In this case, the hazardous state would be standing within a child-height of a ledge when there is nobody below ready to catch him. He can promise me to avoid that, and that satisfies me a lot more than a promise not to fall.
If you haven't done so: please write a book. Aim it towards software professionals in non-regulated industries. I promise to buy 50 to give to all of my software-developing colleagues.
As for 'N', for turboprops it is 45, for jets it is 30.
I want to write more about this, but it has been a really difficult subject to structure. I gave up halfway through this article, for example, and never published it – I didn't even get around to editing it, so it's mostly bad stream of consciousness stuff: https://entropicthoughts.com/root-cause-analysis-youre-doing...
I intend to come back to it some day, but I do not think that day is today.
Ok. I am impressed with your ability to take such complex subjects and make them plain; you are delivering very high quality here. The subject is absolutely underserved in the industry as far as I'm aware, and I would love to have a book that I can hand out to people working on software in critical infrastructure and life sciences that gets them up to speed. The annoying thing is that software skills are valued much higher than the ability to accurately model risks, because risk is only seen as a function of small choices standing by themselves. A larger, overall approach is what is very often called for, and it would help to have a tool in hand both to make that case and to give the counterparty the vocabulary and the required understanding of the subject in order to have a meaningful conversation.
Edit: please post your link from above as a separate submission.
Just started reading the linked text after reading your comment and I agree, this is high quality education, and enjoyable. It's an art, really. Thank you for sharing your work and please keep it up.
Just a thought I had while reading your introduction: this is applicable even to running a successful business model. I'm honestly having trouble even putting it into words, but you have my analytical mind going now at a very late hour... Thanks!
Your writing is good, please keep at it. I think it would help a lot if you made it clearer when you're talking about root-cause analysis for software, for aviation, for other domains, or in general.
Also, your train of thought is pretty deep; bulleting runs out of steam and gets visually confusing, especially with the article's table of contents on the right-hand side, where you're only using less than 50% of the screen width. I suggest numbered/lettered lists, section headings, and using the full screen width.
See also: various points in the Evil Overlord list[0]. Selected examples:
#12: One of my advisors will be an average five-year-old child. Any flaws in my plan that he is able to spot will be corrected before implementation.
#60: My five-year-old child advisor will also be asked to decipher any code I am thinking of using. If he breaks the code in under 30 seconds, it will not be used. Note: this also applies to passwords.
#74: When I create a multimedia presentation of my plan designed so that my five-year-old advisor can easily understand the details, I will not label the disk "Project Overlord" and leave it lying on top of my desk.
Google’s SRE STPA starts with a similar model. I haven’t read the external document, but my team went through this process internally and we considered the hazardous states and environmental triggers.
That being said: I have - for some years now - started to read air accident board reports (depending on your locale, they may be named slightly differently). They make for a fascinating read, and they have made me approach debugging and postmortems in a more structured, more holistic way. They should be freely available on your national transportation safety board's website (NTSB in America, BFU in Germany, ...)
> Trying to prevent accidents while not paying attention to hazardous states amounts to relying on the environment always being on our side, and is bound to fail eventually.
The reason they had less than 30 minutes of fuel was that the environment wasn't on their side. They started out with a normal amount of reserve and then things went quite badly; the reserve was sufficient, but only just.
The question then is, how much of an outlier was this? Was this a perfect storm that only happens once in a century and the thing worse than this that would actually have exhausted the reserve only happens once in ten centuries? Or are planes doing this every Tuesday which would imply that something is very wrong?
This is why staying out of hazardous conditions is a dynamic control problem, rather than a simple equation or plan you can set up ahead of time.
There are multiple controllers interacting with the system (the FADEC computer in the engines, the flight management computer in the plane, pilots, ground crew, dispatchers, air traffic controllers, the people at EASA drafting regulations, etc.), trying to keep it out of hazardous conditions. They do so by observing the state the system and the environment are in ("feedback"), running simulations of how it will evolve in the future ("mental models"), and making adjustments to the system ("control inputs") to keep it out of hazardous conditions.
Whenever the system enters a hazardous condition, something made these controllers insufficient. Either someone had inadequate feedback, or inadequate mental models, or the control inputs were inoperative or insufficient. Or sometimes an entire controller that ought to have been there was missing!
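As a rough sketch of that loop (purely illustrative Python, not any real avionics or STPA tooling), each controller repeatedly observes, projects forward, and picks a control input that keeps the projected state out of the hazard set:

```python
# Toy model of the control loop described above: observe (feedback),
# project forward with a mental model, then pick a control input that
# keeps the system out of hazardous states. No real controller is this
# simple; the structure is the point.

def control_step(observe, predict, is_hazardous, actions):
    state = observe()                       # feedback
    for action in actions:                  # candidate control inputs
        projected = predict(state, action)  # mental model / simulation
        if not is_hazardous(projected):
            return action                   # first input that keeps us safe
    return "escalate"                       # no safe input: hand off, declare emergency

# Tiny made-up usage with a fuel-flavoured toy model:
chosen = control_step(
    observe=lambda: {"fuel_min": 40},
    predict=lambda s, a: {"fuel_min": s["fuel_min"] - (10 if a == "hold" else 25)},
    is_hazardous=lambda s: s["fuel_min"] < 30,
    actions=["hold", "continue", "divert"],
)
print(chosen)  # -> "hold" in this toy example
```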
In this case it seems like the hazard could have been avoided any number of ways: ground the plane, add more fuel, divert sooner, be more conservative about weather on alternates, etc. Which control input is appropriate and how to ensure it is enacted in the future is up to the real investigators with access to all data necessary.
-----
You are correct that we will never be able to set up a system where all controllers always keep it out of hazardous states perfectly. If that were possible, we would never have any accident ever; we would only have intentional losses, calculated to be worth the additional efficiency they buy.
But by adopting the right framework for thinking about this ("how do active controllers dynamically keep the system out of hazards?") we can do a pretty good job of preventing most such problems. The good news is that predicting hazardous states is much easier than predicting accidents, so we can actually do a lot of this design up-front without first having an accident happen and then learning from it.
> This is why staying out of hazardous conditions is a dynamic control problem
I don't think this philosophy can work.
If you can't control whether the environment will push you from a hazardous state into a failure state, you also can't control whether the environment will push you from a nonhazardous state into a hazardous state.
If staying out of hazardous conditions is a dynamic control problem requiring on-the-fly adjustment from local actors, exactly the same thing is true of staying out of failure states.
The point of defining hazardous states is that they are a buffer between you and failure. Sometimes you actually need the buffer. If you didn't, the hazardous state wouldn't be hazardous.
But the only possible outcome of treating entering a hazardous state as equivalent to entering a failure state is that you start panicking whenever an airplane touches down with less than a hundred thousand gallons of fuel.
My understanding is that the SOP for low fuel is that you need to declare a fuel emergency (i.e., "Mayday Mayday Mayday Fuel") once you reach the point where you will land with only reserve fuel left. The point OP was making is that the entire system of fuel planning is designed so that you should never reach the Mayday stage as a result of something you can expect to happen eventually (such as really bad weather). If you land with reserve fuel, it is normally investigated like any other emergency.
Flight plans require you to look at the weather reports of your destination before you take off and pick at least one or two alternates that will let you divert if the weather is marginal. The fuel you load includes several redundancies to deal with different unexpected conditions[1] as well as the need to divert if you cannot land.
There have been a few historical cases of planes running out of fuel (and quite a few cases of planes landing with only reserve fuel), and usually the root cause was a pilot not making the decision to go to an alternate airport soon enough or not declaring an emergency immediately -- even with very dynamic weather conditions you should have enough fuel for a go-around, holding, and going to an alternate.
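To give a feel for the layering, here is a back-of-the-envelope sketch in Python. The structure (taxi + trip + contingency + alternate + final reserve) is the common pattern; the exact percentages and minima vary by regulator and type of operation, so treat every number below as made up for illustration.

```python
# Rough sketch of layered fuel planning. Quantities in kg; all numbers
# are illustrative assumptions, not any particular regulation.

def planned_fuel(trip, alternate, taxi=200,
                 contingency_fraction=0.05, final_reserve=1100):
    contingency = contingency_fraction * trip  # buffer for the unexpected en route
    return taxi + trip + contingency + alternate + final_reserve

total = planned_fuel(trip=5000, alternate=1500)
print(f"Load {total:.0f} kg")
# Every layer has to be eaten through before you even touch the final reserve.
```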
That's very enlightening. I'm casually interested in traffic safety and road/junction designs from the perspective of a UK cyclist and there's a lot to be learnt from the safety culture/practices of the aviation industry. I typically think in terms of "safety margins" whilst cycling (e.g. if a driver pulls out of a side road in front of me, how quickly can I avoid them via swerving or brake to avoid a collision). I can imagine that hazardous states can be applied to a lot of the traffic behaviour at junctions.
I find a useful exercise is to have a cheat sheet of historic flood heights in some area, tell someone the first recorded high, and ask them how high they would make the levee and how long they think it would last. People's sense for extremal events is bad.
That's a great exercise. Where I live a lot of people died because in the past we were not able to make that guess correctly. A lot was learned, at great expense.
Aren't low-speed slips something that makes planes flip upside-down when not used very carefully? (Inadvertent rudder changes corrected with opposite aileron resulting in a snap roll.)
Yes, since one is cross-controlled they must be used very carefully. It's really obvious when one is cross-controlling. It's the only time outside of really powerful crosswinds that you see what's below and ahead of you out of the side window. That view is what makes it fun.
You're probably thinking of a skid, which is when you put too much rudder in the same direction as the ailerons. Then the lower (and slower because it's on the inside) wing stalls first (and goes lower still) and away you go. Often when turning to land, so there's not enough altitude to recover.
A cross controlled stall can result in a spin (which is probably what you mean by flip upside down). The rudder changes aren't inadvertent, they're intentionally opposite the aileron input - the goal is essentially to fly somewhat sideways, so the fuselage induces drag.
In general forward slips are safe, but yes you have to make sure you keep the nose down/speed up. There's little in aviation that isn't dangerous if you aren't careful.
In visual design, positive space is the things that occupy space. The areas left unoccupied by things are called negative space.
So if you hang a massive painting, that painting takes up positive space. The parts of the wall that are not covered by that painting make up the negative space.
I've just never encountered a situation where that's a necessary distinction. If I say "the painting takes up too much space on the wall" I don't need to say "the painting has too much positive space" nor "the painting removes too much negative space".
Just last week I was hanging photos with my wife in our home and after she had proposed a placement I told her "I don't like the balance of the negative space there". I could have said "I don't feel like the parts of the wall not taken up by photos are balanced there" but "negative space" is a convenient abstraction. (Note that this is different from the photos themselves being unbalanced, which is also a concern but was not a problem then.)
Think of it like a foreach loop. Sure, it's equivalent to the corresponding for(;;)-style loop but it's also a convenient mental shortcut.
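To spell out the loop analogy (Python has no for(;;), so an index-juggling while loop stands in for it):

```python
items = ["photo_a", "photo_b", "photo_c"]

# foreach-style: the convenient abstraction
for item in items:
    print(item)

# the equivalent lower-level loop: same behaviour, more bookkeeping
i = 0
while i < len(items):
    print(items[i])
    i += 1
```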
I think there is a very large difference between saying, e.g., "there is too much space" (the total area is too large) vs "there is too much negative space" (there are not enough things in the area). I think there's a better argument that "negative space" is redundant with "empty space", but personally I don't mind the term so I will not make that argument.
I think this is a good example of the specific, limited way in which this phrase is useful. It's similar to the - very specific - phrase "price point", which people often use to just mean generic "price" now when they want to sound businessy.
If you are doing visual design and you want to call out the parts of the space you are working in where you _aren't doing anything_, that is the 'negative space'.
If you are producing a letterform, all the parts of the object you are producing which are not filled by the letter are the 'negative space'. The "space" is the whole area, including the letter.
People intentionally play with the distinction in optical illusions:
I used Feeder on my Android phone for the longest time. Recently set up a NixOS server and enabled FreshRSS on it, with FocusReader as the Android client. It is very nice to manage feeds on a server and have the read/unread status sync across devices.
If you have only used device-local readers before and have a server to spare, I recommend at least trying it!
I have freshrss on a VPS and use the web interface as my client on computers and my phone. Is FocusReader a big upgrade over the native web experience?