I like this. Very much falls into the "make bad states unrepresentable" camp.
The issue I see with this approach is when developers stop at this first level of type implementation: everything is a type, nothing works well together, tons of types seem to be subtle permutations of each other, things get hard to reason about, etc.
In systems like that I would actually rather be writing a weakly typed dynamic language like JS or a strongly typed dynamic language like Elixir. However, if the developers continue pushing logic into type-controlled flows - e.g. move conditional logic into union types with pattern matching, leverage delegation, etc. - the experience becomes pleasant again. Just as an example (probably not the actual best solution), the "DewPoint" function could just take either type and just work.
Yep. For this reason, I wish more languages supported bound integers. Eg, rather than saying x: u32, I want to be able to use the type system to constrain x to the range of [0, 10).
This would allow for some nice properties. It would also enable a bunch of small optimisations in our languages that we can't have today. Eg, I could make an integer that must fall within my array bounds. Then I don't need to do bounds checking when I index into my array. It would also allow a lot more peephole optimisations to be made with Option.
Weirdly, rust already kinda supports this within a function thanks to LLVM magic. But it doesn't support it for variables passed between functions.
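The compile-time version needs language support, but the runtime-checked half of the idea is easy to sketch, e.g. in TypeScript with a branded type (all names here are hypothetical):

```typescript
// A number statically tagged as lying in [0, 10); the brand is erased at
// runtime, and the checked constructor is the only way to mint one.
type BoundedIndex = number & { readonly __brand: "BoundedIndex" };

function boundedIndex(x: number): BoundedIndex {
  // Runtime enforcement of the [0, 10) invariant.
  if (!Number.isInteger(x) || x < 0 || x >= 10) {
    throw new RangeError(`${x} is not in [0, 10)`);
  }
  return x as BoundedIndex;
}

// Consumers can rely on the invariant instead of re-checking it:
function tenth(xs: readonly number[], i: BoundedIndex): number | undefined {
  return xs[i];
}
```

The compiler of course can't exploit this for bounds-check elision; it only centralizes the validation the way the parent comment wants the type system to.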
Academic language designers do! But it takes a while for academic features to trickle down to practical languages—especially because expressive-enough refinement typing on even the integers leads to an undecidable theory.
Aren't most type systems in widely used languages Turing complete and (consequently) undecidable? Typescript and python are two examples that come to mind.
But yeah maybe expressive enough refinement typing leads to hard to write and slow type inference engines
I think the reasons are predominantly social, not theoretical.
For every engineer out there that gets excited when I say the words "refinement types" there are twenty that either give me a blank stare or scoff at the thought, since they a priori consider any idea that isn't already in their favorite (primitivistic) language either too complicated or too useless.
Then they go and reinvent it as a static analysis layer on top of the language and give it their own name and pat themselves on the back for "inventing" such a great check. They don't read computer science papers.
I proposed a primitive for this in TypeScript a couple of years ago [1].
While I'm not entirely convinced myself whether it is worth the effort, it offers the ability to express "a number greater than 0". Using type narrowing and intersection types, open/closed intervals emerge naturally from that. Just check `if (a > 0 && a < 1)` and its type becomes `(>0)&(<1)`, so the interval (0, 1).
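Absent that primitive, the closest stock-TypeScript emulation I know of is a brand plus a user-defined type guard, so the `a > 0 && a < 1` check at least happens at one audited boundary (names here are hypothetical):

```typescript
// The open interval (0, 1) as a brand; the guard below is its only producer.
type UnitOpen = number & { readonly __brand: "(0,1)" };

function inUnitOpen(a: number): a is UnitOpen {
  return a > 0 && a < 1;
}

function logit(p: UnitOpen): number {
  // Safe: p is strictly between 0 and 1, so neither operand hits log(0).
  return Math.log(p / (1 - p));
}

const x = 0.5;
if (inUnitOpen(x)) {
  logit(x); // ok: x has been narrowed to UnitOpen
}
```

Unlike the proposal, the narrowing here doesn't compose arithmetically; the brand only records that the check happened.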
My specific use case is pattern matching http status codes to an expected response type, and today I'm able to work around it with this kind of construct https://github.com/mnahkies/openapi-code-generator/blob/main... - but it's esoteric, and feels likely to be less efficient to check than what you propose / a range type.
There's runtime checking as well in my implementation, but it's a priority for me to provide good errors at build time
Nim has this built in:

    type
      Foo = range[1 .. 10]
      Bar = range[0.0 .. 1.0] # float works too

    var f: Foo = 42   # Error: cannot convert 42 to Foo = range 1..10(int)
    var p = Positive 22  # Positive and Natural types are pre-defined
You can do this quite easily in Rust, but you have to overload operators to make your type make sense. That's also possible; you just need to define what type you get after dividing your type by a regular number (and vice versa, a regular number by your type), or what should happen when adding two of your types if the sum is higher than the maximum value. This is quite verbose, though it can be reduced with generics or macros.
You can do it at runtime quite easily in rust. But the rust compiler doesn’t understand what you’re doing - so it can’t make use of that information for peephole optimisations or to elide array bounds checks when using your custom type. And you get runtime errors instead of compile-time errors if you try to assign the wrong value to your type.
What the GP described could be achieved with dependent types, but could also be achieved with a less powerful type system, and the reduced power can sometimes lead to enormous benefits in terms of how pleasant it actually is to use. Check out "refinement types" (implemented in Liquid Haskell for example). Many constraints can be encoded in the type system, and an SMT solver runs at compile time to check if these constrains are guaranteed to be satisfied by your code. The result is that you can start with a number that's known to be in [0..10), then double it and add five, and then you can pass that to a function that expects a number in [10..20). Dependent types would typically require some annoying boilerplate to prove that your argument to the function would fall within that range, but an SMT solver can chew through that without any problem.
The full-blown version that guarantees no bounds-check errors at runtime requires dependent types (and consequently requires programmers to work with a proof assistant, which is why it's not very popular). You could have a more lightweight version that instead just crashes the program at runtime if an out-of-range assignment is attempted, and optionally requires such fallible assignments to be marked as such in the code. Rust can do this today with const generics, though it's rather clunky as there's very little syntactic sugar and no implicit widening.
AIUI WUFFS doesn't need a full blown proof assistant because instead of attempting the difficult problem "Can we prove this code is safe?" it has the programmer provide elements of such a proof as they write their program so it can merely ask "Is this a proof that the program is safe?" instead.
This is also approximately true of Idris. The thing that really helps Wuffs is that it's a pretty simple language without a lot of language features (e.g., no memory allocation and only very limited pointers) that complicate proofs. Also, nobody is particularly tempted to use it and then finds it unexpectedly forbidding, because most programmers don't ever have to write high-performance codecs; Wuffs's audience is people who are already experts.
ATS does this. Works quite well since multiplication by known factors and addition of type variables + inequalities is decidable (and in fact quadratic).
This can be done in typescript. It’s not super well known because of typescript’s association with frontend and JavaScript. But typescript is a language with one of the most powerful type systems ever.
Among popular languages like golang, rust, or python, typescript has the most powerful type system.
How about a type with a number constrained between 0 and 10? You can already do this in typescript.
You can even programmatically define functions at the type level. So you can create a function that outputs a type between 0 to N.
    type Range<N extends number, A extends number[] = []> =
      A['length'] extends N ? A[number] : Range<N, [...A, A['length']]>;
The issue here is that it’s a bit awkward - you want these types to compose, right? If I add two constrained numbers, say one with a max value of 3 and another with a max value of 2, the result should have a max value of 5. Typescript doesn’t support this by default with default addition. But you can create a function that does this.
    // Build a tuple of length L
    type BuildTuple<L extends number, T extends unknown[] = []> =
      T['length'] extends L ? T : BuildTuple<L, [...T, unknown]>;

    // Add two numbers by concatenating their tuples
    type Add<A extends number, B extends number> =
      [...BuildTuple<A>, ...BuildTuple<B>]['length'];

    // Create a union: 0 | 1 | 2 | ... | N-1
    type Range<N extends number, A extends number[] = []> =
      A['length'] extends N ? A[number] : Range<N, [...A, A['length']]>;

    function addRanges<A extends number, B extends number>(
      a: Range<A>,
      b: Range<B>
    ): Range<Add<A, B>> {
      return (a + b) as Range<Add<A, B>>;
    }
The issue is that to create these functions you have to use tuples to do addition at the type level, and you need recursion as well. Typescript recursion stops at 100, so there are limits.
Additionally, it’s not intrinsic to the type system. You’d need Peano numbers built into the number system, and built in by default into the entire language, for this to work perfectly. That means the code inside the function is not type checked, but if you assume that code is correct, then the function type checks when composed with the other primitives of your program.
Complexity is bad in software. I think this kind of thing does more harm than good.
I get an error that I can't assign something that seems to me assignable, and to figure out why I need to study functions at type level using tuples and recursion. The cure is worse than the disease.
It can work. It depends on context. Like let's say these types are from a well renowned library or one that's been used by the codebase for a long time.
If you trust the type, then it's fine. The code is safer. In the world of the code itself, things are easier.
Of course like what you're complaining about, this opens up the possibility of more bugs in the world of types, and debugging that can be a pain. Trade offs.
In practice people usually don't go crazy with type-level functions. They can do small stuff, but usually nothing super crazy. So typescript by design sort of fits the complexity dynamic you're looking for. Yes, you can write type-level functions that are super complex, but the language is not designed around it and doesn't promote that style either. But you CAN go a little deeper with types than, say, a language with less power in the type system like Rust.
Typescript's type system is turing complete, so you can do basically anything with it if this sort of thing is fun to you. Which is pretty much my problem with it: this sort of thing can be fun, feels intellectually stimulating. But the added power doesn't make coding easier or make the code more sound. I've heard this sort of thing called the "type puzzle trap" and I agree with that.
I'll take a modern Hindley-Milner variant any day. Sophisticated enough to model nearly any type information you'll have need of, without blurring the lines or admitting the temptation of encoding complex logic in it.
>Which is pretty much my problem with it: this sort of thing can be fun, feels intellectually stimulating. But the added power doesn't make coding easier or make the code more sound.
In practice nobody goes too crazy with it. You have a problem with a feature almost nobody uses. It's there and Range<N> is like the upper bound of complexity I've seen in production but that is literally extremely rare as well.
There is no "temptation" to encode complex logic in it at all, as the language doesn't promote these features. They're just available if needed. It's not well known, but typescript types can easily be used 1 to 1 with any Hindley-Milner variant. It's the reputational baggage of JS and frontend that keeps this fact from being well known.
In short: Typescript is more powerful than Hindley-Milner, a subset of it has one-to-one parity with it, and the parts that are more powerful than Hindley-Milner aren't popular or widely used, nor does the flow of the language itself promote their usage. The features are just there if you need them.
If you want a language where you do this stuff in practice take a look at Idris. That language has these features built into the language AND it's an ML style language like haskell.
I have definitely worked in TS code bases with overly gnarly types, seen more experienced devs spend an entire workday "refactoring" a set of interrelated types and producing an even gnarlier one that more closely modeled some real world system but was in no way easier to reason about or work with in code. The advantage of HM is the inference means there is no incentive to do this, it feels foolish from the beginning.
    > 1 + "1"
    (irb):1:in 'Integer#+': String can't be coerced into Integer (TypeError)
        from (irb):1:in '<main>'
        from <internal:kernel>:168:in 'Kernel#loop'
        from /Users/george/.rvm/rubies/ruby-3.4.2/lib/ruby/gems/3.4.0/gems/irb-1.14.3/exe/irb:9:in '<top (required)>'
        from /Users/george/.rvm/rubies/ruby-3.4.2/bin/irb:25:in 'Kernel#load'
        from /Users/george/.rvm/rubies/ruby-3.4.2/bin/irb:25:in '<main>'
Some people mistakenly call dynamic typing "weak typing" because they don't know what those words mean. PSA:
Static typing / dynamic typing refers to whether types are checked at compile time or runtime. "Static" = compile time (eg C, C++, Rust). "Dynamic" = runtime (eg Javascript, Ruby, Excel)
Strong / weak typing refers to how "wibbly wobbly" the type system is. x86 assembly language is "weakly typed" because registers don't have types. You can do (more or less) any operation with the value in any register. Like, you can treat a register value as a float in one instruction and then as a pointer during the next instruction.
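JavaScript is usually placed near the weak end for a different reason: implicit coercion. A quick illustration (note that TypeScript inherits the `number + string` rule and types the result as `string`, so even the checker allows it):

```typescript
const n = 1;
const s = "1";
// JS coerces the number to a string rather than raising an error,
// so sum is "11", not 2 -- contrast with Ruby's TypeError above.
const sum = n + s;
```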
Ruby is strongly typed because all values in the system have types. Types affect what you can do. If you treat a number like it's an array in ruby, you get an error. (But the error happens at runtime because ruby is dynamically typed - thus typechecking only happens at runtime!)
It's strongly typed, but it's also duck typed. Also, in ruby everything is an object, even the class itself, so type checking there is weird.
Sure it stops you from running into "'1' + 2" issues, but won't stop you from yeeting VeryRawUnvalidatedResponseThatMightNotBeAuthorized to a function that takes TotalValidatedRequestCanUseDownstream. You won't even notice an issue until:
- you manually validate
- you call a method that is unavailable on the wrong object.
I recall a type theorist once defined the terms as follows (can't find the source): "A strongly typed language is one whose type system the speaker likes. A weakly typed language is one whose type system the speaker dislikes."
So yeah I think we should just give up these terms as a bad job. If people mean "static" or "dynamic" then they can say that, those terms have basically agreed-upon meanings, and if they mean things like "the type system prohibits [specific runtime behavior]" or "the type system allows [specific kind of coercion]" then it's best to say those things explicitly with the details filled in.
It would be weak if that was actually mutating the first “a”. That second declaration creates a new variable using the existing name “a”. Rust lets you do the same[1].
Rust lets you do the same because the static typing keeps you safe. In Rust, treating the second 'a' like a number would be an error. In ruby, it would crash.
These are two entirely different a's you're storing reference to it in the same variable. You can do the same in rust (we agree it statically and strongly typed, right?):
    let a = 1;
    let a = '1';
Strong typing means I can't do `1 + '1'`; variable names and shadowing have nothing to do with a language being strongly typed.
In the dynamic world being able to redefine variables is a feature not a bug (unfortunately JS has broken this), even if they are strongly typed. The point of strong typing is that the language doesn't do implicit conversions and other shenanigans.
Well yeah, because variables in what you consider to be a strongly typed language are allocating the storage for those variables. When you say `int x` you're asking the compiler to give you an int-shaped box. When you say `x = 1` in Ruby, all you're doing is saying that in this scope the name x now refers to the box holding a 1. You can't actually store a string in the int box, you can only say that from now on the name x refers to the string box.
The “Stop at first level of type implementation” is where I see codebases fail at this. The example of “I’ll wrap this int as a struct and call it a UUID” is a really good start and pretty much always start there, but inevitably someone will circumvent the safety. They’ll see a function that takes a UUID and they have an int; so they blindly wrap their int in UUID and move on. There’s nothing stopping that UUID from not being actually universally unique so suddenly code which relies on that assumption breaks.
This is where the concept of “Correct by construction” comes in. If any of your code has a precondition that a UUID is actually unique then it should be as hard as possible to make one that isn’t. Be it by constructors throwing exceptions, inits returning Err or whatever the idiom is in your language of choice, the only way someone should be able to get a UUID without that invariant being proven is if they really *really* know what they’re doing.
(Sub UUID and the uniqueness invariant for whatever type/invariants you want, it still holds)
> This is where the concept of “Correct by construction” comes in.
This is one of the basic features of object-oriented programming that a lot of people tend to overlook these days in their repetitive rants about how horrible OOP is.
One of the key things OO gives you is constructors. You can't get an instance of a class without having gone through a constructor that the class itself defines. That gives you a way to bundle up some data and wrap it in a layer of validation that can't be circumvented. If you have an instance of Foo, you have a firm guarantee that the author of Foo was able to ensure the Foo you have is a meaningful one.
Of course, writing good constructors is hard because data validation is hard. And there are plenty of classes out there with shitty constructors that let you get your hands on broken objects.
But the language itself gives you direct mechanism to do a good job here if you care to take advantage of it.
Functional languages can do this too, of course, using some combination of abstract types, the module system, and factory functions as convention. But it's a pattern in those languages where it's a language feature in OO languages. (And as any functional programmer will happily tell you, a design pattern is just a sign of a missing language feature.)
I find regular OOP language constructor are too restrictive. You can't return something like Result<CorrectObject,ConstructorError> to handle the error gracefully or return a specific subtype; you need a static factory method to do something more than guaranteed successful construction w/o exception.
Does this count as a missing language feature by requiring a "factory pattern" to achieve that?
The natural solution for this is a private constructor with public static factory methods, so that the user can only obtain an instance (or the error result) by calling the factory methods. Constructors need to be constrained to return an instance of the class, otherwise they would just be normal methods.
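A TypeScript sketch of that shape (the `Result` type is a hand-rolled stand-in rather than a library type, and `Port` is just an illustrative invariant):

```typescript
type Result<T, E> = { ok: true; value: T } | { ok: false; error: E };

class Port {
  // Private: the static factory below is the only way to obtain a Port,
  // so every Port in the program satisfies the range invariant.
  private constructor(readonly value: number) {}

  static create(n: number): Result<Port, string> {
    if (!Number.isInteger(n) || n < 1 || n > 65535) {
      return { ok: false, error: `${n} is not a valid port` };
    }
    return { ok: true, value: new Port(n) };
  }
}

const good = Port.create(8080); // ok-variant holding a Port
const bad = Port.create(-1);    // error-variant, no Port ever exists
```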
Convention in OOP languages is (un?)fortunately to just throw an exception though.
In languages with generic types such as C++, you generally need free factory functions rather than static member functions so that type deduction can work.
> You can't return something like Result<CorrectObject,ConstructorError> to handle the error gracefully
Throwing an error is doing exactly that though; it's exactly the same thing in theory.
What you are asking for is just more syntactic sugar around error handling, otherwise all of that already exists in most languages. If you are talking about performance that can easily be optimized at compile time for those short throw catch syntactic sugar blocks.
Java even forces you to handle those errors in code, so don't say that these are silent there is no reason they need to be.
This is why constructors are dumb IMO and the Rust way is the right way.
Nothing stops you from returning Result<CorrectObject, ConstructorError> from a CorrectObject::new(..) function, because it's just a regular function, and struct field visibility takes care of you not being able to construct an incorrect CorrectObject.
I don't see this having much to do with OOP vs FP but maybe the ease in which a language lets you create nominal types and functions that can nicely fail.
What sucks about OOP is that it also holds your hand into antipatterns you don't necessarily want, like adding behavior to what you really just wanted to be a simple data type because a class is an obvious junk drawer to put things.
And, like your example of a problem in FP, you have to be eternally vigilant with your own patterns to avoid antipatterns like when you accidentally create a system where you have to instantiate and collaborate multiple classes to do what would otherwise be a simple `transform(a: ThingA, b: ThingB, c: ThingC): ThingZ`.
Finally, as "correct by construction" goes, doesn't it all boil down to `createUUID(string): Maybe<UUID>`? Even in an OOP language you probably want `UUID.from(string): Maybe<UUID>`, not `new UUID(string)` that throws.
> Even in an OOP language you probably want `UUID.from(string): Maybe<UUID>`, not `new UUID(string)` that throws.
One way to think about exceptions is that they are a pattern matching feature that privileges one arm of the sum type with regards to control flow and the type system (with both pros and cons to that choice). In that sense, every constructor is `UUID.from(string): MaybeWithThrownNone<UUID>`.
The best way to think about exceptions is to consider the term literally (as in: unusual; not typical) while remembering that programmers have an incredibly overinflated sense of ability.
In other words, exceptions are for cases where the programmer screwed up. While programmers screwing up isn't unusual at all, programmers like to think that they don't make mistakes, and thus in their eye it is unusual. That is what sets it apart from environmental failures, which are par for the course.
To put it another way, it is for signalling at runtime what would have been a compiler error if you had a more advanced compiler.
Unfortunately many languages treat exceptions as a primary control flow mechanism. That's part of why Rust calls its exceptions "panics" and provides the "panic=abort" compile-time option which aborts the program instead of unwinding the stack with the possibility of catching the unwind. As a library author you can never guarantee that `catch_unwind` will ever get used, so its main purpose of preventing unwinding across an FFI boundary is all it tends to get used for.
Just Java (and Javascript by extension, as it was trying to copy Java at the time), really. You do have a point that Java programmers have infected other languages with their bad habits. For example, Ruby was staunchly in the "return errors as values and leave exception handling for exceptions" before Rails started attracting Java developers, but these days all bets are off. But the "purists" don't advocate for it.
I've recently been following red-green-refactor but instead of with a failing test, I tighten the screws on the type system to make a production-reported bug cause the type checker to fail before making it green by fixing the bug.
I still follow TDD-with-a-test for all new features, all edge cases and all bugs that I can't trigger failure by changing the type system for.
However, red-green-refactor-with-the-type-system is usually quick and can be used to provide hard guarantees against entire classes of bug.
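As a hypothetical TypeScript example of that workflow: suppose the production bug was a call site swapping two string ids. Tightening the ids into branded types makes the checker go red at the buggy call, and fixing the call makes it green (all names invented for illustration):

```typescript
// Before the tightening, both ids were plain strings, so swapped
// arguments type-checked. Brands make the swap a compile error.
type UserId = string & { readonly __brand: "UserId" };
type OrderId = string & { readonly __brand: "OrderId" };

const userId = (s: string) => s as UserId;
const orderId = (s: string) => s as OrderId;

function cancelOrder(user: UserId, order: OrderId): string {
  return `cancelled ${order} for ${user}`;
}

cancelOrder(userId("u-1"), orderId("o-9"));    // green
// cancelOrder(orderId("o-9"), userId("u-1")); // red: no longer compiles
```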
I like this approach, there are often calls for increased testing on big systems and what they really mean is increased rigor. Don't waste time testing what you can move into the compiler.
It is always great when something is so elegantly typed that I struggle to think of how to write a failing test.
What drives me nuts is when there are tests left around that are basically testing the compiler and were never "red" then "greened". It makes me wonder if there is some subtle edge case I am missing.
As you move more testing responsibilities to the compiler, it can be valuable to test the compiler’s responsibilities for those invariants though. Otherwise it can be very hard to notice when something previously guaranteed statically ceases to be.
I found myself following a similar trajectory, without realizing that’s what I was doing. For a while it felt like I was bypassing the discipline of TDD that I’d previously found really valuable, until I realized that I was getting a lot of the test-first benefits before writing or running any code at all.
Now I just think of types as the test suite’s first line of defense. Other commenters who mention the power of types for documentation and refactoring aren’t wrong, but I think that’s because types are tests… and good tests, at almost any level, enable those same powers.
I don't think tests and types are the same "thing" per se - they work vastly better in conjunction with each other than alone, and are weirdly symmetrical in the way that they're bad substitutes for each other.
However, I'm convinced that they're both part of the same class of thing, and that "TDD" or red/green/refactor or whatever you call it works on that class, not specifically just on tests.
Documentation is a funny one too - I use my types to generate API and other sorts of reference docs and tests to generate how-to docs. There is a seemingly inextricable connection between types and reference docs, tests and how-to docs.
Types are a kind of test. Specifically they’re a way to assert certain characteristics about the interactions between different parts of the code. They’re frequently assertions you’d want to make another way, if you didn’t have the benefit of a compiler to run that set of assertions for you. And like all tests, they’re a means to gain or reinforce confidence in claims you could make about the code’s behavior. (Which is their symmetry with documentation.)
Union types!! If everything’s a type and nothing works together, start wrapping them in interfaces and define an über type that unions everything everywhere all at once.
Welcome to typescript. Where generics are at the heart of our generic generics that throw generics of some generic generic geriatric generic that Bob wrote 8 years ago.
Because they can’t reason with the architecture they built, they throw it at the type system to keep them in line. It works most of the time. Rust’s is beautiful at barking at you that you’re wrong. Ultimately it’s us failing to design flexibility amongst ever increasing complexity.
Remember when “Components” were “Controls” and you only had like a dozen of them?
Remember when a NN was only a few hundred thousand parameters?
As complexity increases with computing power, so must our understanding of it in our mental model.
However you need to keep that mental model in check, use it. If it’s typing, do it. If it’s rigorous testing, write your tests. If it’s simulation, run it my friend. Ultimately, we all want better quality software that doesn’t break in unexpected ways.
Union types are great. But alone they are not sufficient for many cases. For example, try to define a data structure that captures a classical evaluation tree.
You might go with:
    type Expression = Value | Plus | Minus | Multiply | Divide;

    interface Value { type: "value"; value: number; }
    interface Plus { type: "plus"; left: Expression; right: Expression; }
    interface Minus { type: "minus"; left: Expression; right: Expression; }
    interface Multiply { type: "multiply"; left: Expression; right: Expression; }
    interface Divide { type: "divide"; left: Expression; right: Expression; }
And so on.
That looks nice, but when you try to pattern match on it and have your pattern matching return the types that are associated with the specific operation, it won't work. The reason is that Typescript does not natively support GADTs. Libs like ts-pattern use some tricks to get closish at least.
And while this might not be very important for most application developers, it is very important for library authors, especially to make libraries interoperable with each other and extend them safely and typesafe.
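For concreteness, an evaluator over that union works fine with an ordinary discriminated switch; the GADT limitation only bites when you want the match itself to refine a type parameter (say, an `Expression<T>` whose branches produce different `T`s):

```typescript
type Expression = Value | Plus | Minus | Multiply | Divide;

interface Value { type: "value"; value: number; }
interface Plus { type: "plus"; left: Expression; right: Expression; }
interface Minus { type: "minus"; left: Expression; right: Expression; }
interface Multiply { type: "multiply"; left: Expression; right: Expression; }
interface Divide { type: "divide"; left: Expression; right: Expression; }

// The switch narrows `e` in each branch; TypeScript also sees it as
// exhaustive, so no fallthrough return is needed.
function evaluate(e: Expression): number {
  switch (e.type) {
    case "value": return e.value;
    case "plus": return evaluate(e.left) + evaluate(e.right);
    case "minus": return evaluate(e.left) - evaluate(e.right);
    case "multiply": return evaluate(e.left) * evaluate(e.right);
    case "divide": return evaluate(e.left) / evaluate(e.right);
  }
}

// (1 + 2) * 3
const expr: Expression = {
  type: "multiply",
  left: {
    type: "plus",
    left: { type: "value", value: 1 },
    right: { type: "value", value: 2 },
  },
  right: { type: "value", value: 3 },
};
```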
I am confused by this; isn't bisect just some git ergonomics - flagging a checked-out commit as a success/failure and then checking out the next midway commit depending on whether it was a success or failure, to enable classic binary search?
It might be slightly more tedious, but couldn't you just do the same thing manually? It would add just a couple minutes to the search and you would still save weeks. I like the ergonomics but only use it once every couple years.
In most major European cities I have been in the airport to train connection is pretty subpar unless traveling light.
There is typically no integration of the airport baggage handling with the train baggage handling. So you need to move everything with carts that you can't take on the train.
An international trip to Europe for a longer duration is also a significant trip and not something you want to "one bag". Add in jet lag, multiple young kids, car seats, stroller etc. and it quickly becomes easy to see why the train is cumbersome for the initial airport to lodging connection.
I assume young kids, car seats, etc. change the equation. As an adult doing basically urban travel, a few weeks with carry-on is perfectly doable in my experience.
Sadly I have encountered this on multiple different kinds of European tech platforms. There is some deep cultural disconnect on understanding how/why American tech companies are successful.
Most often they seem to ape most major US tech platforms' functionality but critically somehow miss the "make something people want" part and instead make something that:
- Sort of works? Has all the major screens but the whole experience just feels off and not well thought out.
- Is basically a way for locals to prey on tourists. Or is easily abused to scam etc.
Bluntly that is not a viable business model. Additionally tourism as a whole will not build a durable and innovative economy.
There is this distinct disinterest in serving the customer. Making the experience delightful, frictionless, feeling good is oddly foreign. I basically gave up trying to use local things unless I have to because when things go poorly customer support is basically non-existent.
I know Uber, AirBnb etc for better or worse. I don't want to deal with whatever surprising edge case or unexpectedly subpar experience is normed on the local platforms.
> It's interesting to claim that tourism in Europe won't be "durable" at a moment when tourism in the US is sharply declining..
Sorry to shatter your illusions, but for April 2025 (most recent month with final data) <https://view.officeapps.live.com/op/view.aspx?src=https%3A%2...>, Canadian visits are down 20% yoy but overall worldwide visits to US are up 1.3% yoy, including 17% yoy rise in Mexican visitors.
I don't understand what you are trying to claim.
1. Booking.com is owned by Booking Holdings which is an American parent company.
2. US GDP growth has been massively outperforming the EU since 2008.
I am saying tourism is not something governments should want to heavily optimize an economy for. No amount of taking money from people on vacation will translate into building a more competitive or innovative economy.
Apparently Priceline.com Inc. took over Booking.com (founded in the Netherlands) for € 110 million, and then changed its name to Booking Holdings to reflect the fact that Booking.com was much bigger than Priceline.com. Indeed a great example of "American innovation" :)
How so? At least Booking.com shows me the total price for an accommodation up front, without any additional fees or surprises coming up later in the booking process.
The same cannot be said for AirBnB: if I go to the home page right now it lists a bunch of bookings for e.g. "€ 80 for 2 nights", while when I click through the total price is €160. So apparently what they meant is "€80 per night". I'd call that much more of a dark pattern than anything I've seen Booking do.
Booking.com runs some incredibly evil tactics. Generally they take about 20% of the booking fees. But they will do things like delist you if you have lower prices anywhere else, and then undercut your prices on their website.
My parents ran a small motel - the only hotel for miles around. But on top of the fees, if they weren't paying for additional promotions Booking would find unrelated distant hotels even when searching in the area. People would sometimes mistakenly book for a motel states away.
I know this is an anecdote but I was curious if Europeans can tell me if this is a one-off experience or if there is something more to it.
I was booked to catch a DBS train from Brussels to Berlin at 9:45 am. I get to the station at 9:25 looking for the train, can't find it. I go to the counter and get told the train left early at 9:15, then "Not my fault" (the first words out of the DBS attendant's mouth).
I got this same thing from a Swiss Air attendant when something happened. Nearly the first words were "Not My Fault"
I'm not sure I've ever heard that from a customer service rep in the USA, and it was shocking to hear those words as the first, almost conditioned/scripted words from these reps.
I only brought it up because it seemed to fit the previous comment about poor customer service.
I think there is some cultural difference between the US and Europe where in the US it's seen as somewhat OK to hold the customer service agents as personally responsible for the failings of the company, and treat them accordingly. Customer service agents in Europe dealing with Americans may feel the need to point out that they're not personally responsible for fear of said treatment. That (hopefully) doesn't mean that they won't try to help you, just that they hope you won't be angry at them personally.
It may sometimes be useful to verbalize this explicitly by saying "I know you're not responsible for this, but can you please do XYZ to solve the issue", and if it's a reasonable request I assume they'd be happy to comply. Depending on the country and culture, you may also need to be slightly more direct in asking (nicely!) for what you want, rather than hoping that the customer service rep will "make it right" by guessing what you want. You may perceive that as bad service but I think it's mainly about differing communication styles.
No I've never heard that. I'm an American living in Europe for 20 years. For Swissair you're more likely to hear "it's your fault" because Americans don't understand some concepts that are normal here, like reserving your seat, or, not Swissair related, wire transferring your chalet fee bank-to-bank rather than going through a third party like Airbnb.
I actually have with Eurostar. They changed the schedule and then it was kind of a mess to rebook with lots of finger pointing. (They had sent an email but it didn't land in my primary Gmail tag and I never saw it.)
My cynical take is that small phones don't exist because they are not the product. Similar to vape pens, the product is the addictive substance the device loads. In this case it's apps and ads. A smaller screen probably negatively impacts KPIs on many levels, at Google/Apple/Meta/X and on down through the ecosystem.
I understand that Apple did not make enough money to make it worth their while to continue the iphone mini line. However, it does seem like there is a profitable business for someone there given how beloved it was/is.
I only traded out my iPhone 12 mini just recently for an iPhone 16 pro (likely the last Apple product I will ever buy, but that's another story) and aside from the camera it is basically the same. Just heavier, awkward to hold and slightly worse designed.
No major player wants a smaller screen because it has downstream impacts on the pipeline of addictive material and ad pixels they can stuff into ocular nerves.
What was so odd was how Apple fumbled the iPhone mini launch by launching the iPhone SE first. At that point there hadn't been a small phone for a few years, so there was pent-up demand. The SE came out and it was a big success; lots of people wanted it because it was a small phone.
Then a few months later they launched the mini, expecting it to sell even more or something. Somehow they missed that everyone who wanted a small phone had just bought the SE, and the gap just wasn't long enough for it to be worth upgrading to the much better mini.
Had they waited for a year to pass, the mini might have done much better, because those who wanted a more powerful phone could find an excuse for an upgrade after a year; less than 6 months, not so much.
This is my take as well. I bought the SE 2nd gen because of its size, a longer support cycle, and granular app permissions on iOS. It was my first iPhone and has probably been my last when its time is due.
My phone isn't some entertainment device, it's a utility tool. I don't need it to be "smart", it should be useful on the go. The persona sketched by GP just isn't me: Messaging, maps, weather, 2FA, and calculator come first, email (read only) and news feed second, the camera is a third for documenting purposes (if even, I'd rather take my full frame). The easier it is to carry this thing around and the longer lasting its build quality, the better. Why would I pay almost double (USD 699 VS 399 on launch) for a less robust mini with sharper edges?!
If Apple were to continue the offer of rehashed designs from previous generations (preferably with rounder edges) for an SE line, limit its dimensions to never go beyond 140x67.5x8mm, and make it last for solid 5-year release cycles, then count me in as your most loyal customer. As it currently stands I'm looking out for a small sized phone from any manufacturer. I would even lower my expectations on support cycle and build quality quite a bit (if reasonably priced) before I'd give in on the size.
I've been an iPhone user since late 2007. I currently use an SE 2022 and love it.
I've gone iPhone -> 3GS -> 4 -> 5s -> 6s -> 7 -> SE 2020 -> SE 2022.
The Mini never interested me. I love the SE. I love the home button and TouchID. I love the traditional size. If I want more I have an iPad Pro (12.9" original 2015 model bought in 2015 -- the battery still lasts 2 weeks with my usage pattern) or M1 Mac Mini with a 32" 4k screen.
If they don't make a new SE model I don't know what I'll do. I guess, firstly, get a new battery for it before it's out of the support window. Maybe sometime next year. And then see how long app updates support whatever the last OS version it will run is.
The ONLY thing I'd change in my SE, if it was possible, is more than 4 GB of RAM. The latest models have 8 GB and the others at the time the SE was sold already had 6 GB.
With recent system updates I'm getting a lot more of applications restarting when I switch back to them. This is mostly not a huge problem, except that the X app loses your place in the "Following" stream if you're more than a few hours behind and the app reloads.
I'm still using a 13 mini, it's fractionally too large, I think the original SE is perfection.
Regardless, battery life is horrendous now, and it's starting to lag and fail so when the new ultra watch is released I'm going to replace my phone with it.
Getting the battery replaced fixed mine (and seemed to mildly improve system performance, although maybe that’s placebo), might be worth a shot if you like the form factor.
Depending on how degraded the original battery was it isn't necessarily placebo. If iOS detects a severely degraded battery it will clock down the CPU slightly to cope with it, sacrificing a little performance to keep the device stable.
With 3rd party batteries it can't do this, so it doesn't (I think, will admit I'm not entirely sure exactly how iOS deals with 3rd party batteries it can't determine the status of), and if you replaced it with an official part then it would have been in good condition, so regardless which road you took, it's possible that you went from a state where the OS was clocking down, to one where it wasn't anymore.
> If iOS detects a severely degraded battery it will clock down the CPU slightly
I currently have this problem (iPhone 11). It's not slight at all. Keyboard input sometimes has up to a full 1000ms latency, and that's with autocorrect, suggestions, and spellcheck turned off. Scrolling in most apps is jumpy rather than smooth. When this phone dies, I don't know what I'll get. Hopefully a good Linux phone exists by then.
The problem is not just the battery; it's the battery, processor and price.
The iPhone 13 Mini made up around 3% of total iPhone sales, so there's clearly a market for compact, mid-range phones ($600-$700). You can manufacture them in China or India for somewhere between $250 and $400, depending on the battery, camera, and overall performance.
The real challenge is that the retail price of a mid-range Android phone can't go over the $500 mark. People in developing countries are always stuck trying to balance quality with price. And for $500 they expect a prime phone nowadays.
It has a few dings on the frame and I'm not especially attached to the form factor; more significantly, I am addicted to it and need a viable alternative.
Same here. Stuck with SE2 till it stopped catching pokemon properly. Currently pleased with the iPhone 13 mini. I think part of why I like small phones is I carry a laptop and hate web browsing / typing on the phone. It's mostly a modem and camera for me.
Also having a laptop means the battery doesn't matter that much as you can just charge it off that.
I love my iPhone 13 Mini. Its only issues are battery life (now), and non-competitive camera. I'm personally happy with the photos it takes, but then I look at my girlfriend's shots and get FOMO.
While I doubt it's economical, I'd love a small, simple phone with juiced up camera. I'd be fine with worse battery life as external batteries can remedy that in a pinch.
I have a 12 mini, it's about as large a phone as I'd want. I wish the back and/or bezel were a little "grippier" as the phone as it's made is so slippery it almost demands a case, but that adds bulk.
Unlike many it seems, I don't care much about the camera. I'd probably want some sort of camera for scanning QR codes, or snapping a quick photo of something I want to look up later, but otherwise I don't take photos or videos on my phone. I don't use any social media on my phone other than text messaging. This makes the smaller battery size/capacity a non-issue.
Since Apple no longer makes a reasonably-sized phone I'll probably go back to Android after this one dies or becomes unsupported.
I also think it's silly to carry a $1,000 device around with you everywhere, so a "premium" small phone is probably a non-starter for me. My favorite phones were the ~$200 Moto-G phones I had before I got the iPhone (it was a gift).
I don't know. I think the SE was there to generate services income (Apps, Apple Music, etc.) from people who wouldn't buy an iPhone otherwise. The design was intentionally very stale to avoid cannibalization of their flagships. I don't think a lot of people who bought flagship iPhones before would go to an SE. Imagine going from an iPhone X or XS to an SE, it's a big downgrade. People were not buying the SE because of the size, but because it was cheap (the iPhone 16e, the cheaper model now, has the same size as the 16).
My wife, I, and several people I know had the iPhone 12 or 13 Mini. Their battery life was pretty terrible, and word soon got out. I think this was in the end what killed it for people who normally buy Apple flagships and were considering a Mini. It was very hard to get through the day with a Mini.
Besides the abysmal battery life, I think the market for small phones is maybe simply not there. Samsung keeps around one smaller model (base S-series) and arguably the Z Flip is a smaller model (but persistent hardware issues). If there was a large demand for flagship-class small phones, I am sure some Android manufacturers would make them.
They could have made the SE large but slow (instead of small and slow) and avoided all cannibalization future and present.
My hypothesis about the supposed non-existence of the small phone buyer is that they very much do exist (personally, haven't bought anything other than whatever was the smallest Xperia at the time in more than a decade), but that this group has little overlap with the group willing to buy for list price on release day. But the perception of success of a given phone is very much dominated by the latter, the long tail of buyers isn't really seen. Even if the release day premium over mid-lifecycle street price (in countries where price fixing is not allowed) goes to the retailer and is of very little interest to the manufacturer.
Manufacturers should just move compacts to a three year cycle and forget everything about hyper-optimizing desirability for the kind of buyer who spends too much time reading questionable review sites.
I’ve never had battery issues with my mini. But then again, I just want an unobtrusive tool. You can make a lot more money selling phones that are targeted to compulsive/addictive “whales” than you make selling normal phones for normal people.
Your theory makes sense until you consider the SE and Mini as the same category of small iPhones. If the only reason the Mini failed was bad launch timing, then why hasn't Apple launched anything small (SE or Mini) after 2022? Isn't 2024 or even 2025 the perfect time to launch an upgrade for the SE or Mini? They now have enough data since the last launch of a small phone.
The problem is that people who want a small phone prioritize the size.
Most of them don't care about the premium features of larger phones. So the Mini was a weird niche within a niche. Small phone with premium price and features.
The Mini and SE2 were virtually identical in physical size. For the 16e they should have used the iPhone 12/13 Mini body and the 13 Mini screen. Use the 15 Pro SoC with 8GB memory, and the 15 camera. Sell it for an SE price. Now you have fused the small phone and budget iPhone markets.
I can confirm that Apple misunderstood the market:
I was eager to buy a new iPhone because I had just finished my master's degree and started a new job, had a bit of money, and then the SE2 launched. My 5s or SE1 had started to age, and as a beginner app developer a current phone was important. I was so happy, because I could not see myself using one of those bigger phones, even though the SE2 was still bigger than my 5s/SE1. A few months later the mini was released and my initial reaction was "OMG this is THE perfect phone, but I just got a new one... I can not afford to buy another one".
You’re way too cynical and have let your cynicism cloud history.
The first phablets were probably the Galaxy Note line starting in 2011 which was met with some skepticism due to the size of them. These were well before the edge to edge screen days. So you had 5.7 inch screens with a bezel.
They were huge but I would routinely see small women pull these things out of their hand bags and press a device that obscured almost their whole face and start chatting.
Things steadily got bigger from there. The general population WANTED this.
Parent's take is not that bigger phones shouldn't exist; it's about why smaller phones stopped being produced, which is a fairly different angle.
> women
To note, the initial smartphones were already too big for the taste of many: a clamshell feature phone was almost a third of the size of the original iPhone. From that POV, going to a phone that is twice as big is less of a barrier, as they had to keep it in a bag/purse in the first place.
The return of foldables is also pretty well received in that regard.
Just tonight, I saw a friend of mine, pull a new foldable Razr from her purse.
They are cool phones, but I do iOS. I still use a 13 Mini, and will continue to do so, for quite some time.
As to the point of this article, I seem to recall a couple of very small Android phones, some years ago (about credit-card sized). I guess they didn’t sell well.
IMHO this is just not viable in the current world.
I agree with the line the article sets (5.4" for 1080p, almost the size of the Pixel 4a), as mainstream apps will properly work at that size. I still have a working 4a, and some banking apps are getting pretty cramped, for instance. And many websites already need furious panning and zooming.
A credit card size phone would only work for people who basically hate their phones I think.
>A credit card size phone would only work for people who basically hate their phones I think.
Probably. It's people who know they have to own a smartphone for so many things, like parking their car, but don't really want one.
This was a number of years back but I know a then tech executive who got a phone (I think it may have been a feature phone at the time) only because their nanny absolutely insisted.
Completely agree. Although not even on "small phones", my S23 isn't small but the design of these apps has regressed so much that I barely see any useful information.
On my old WAP phone I could see my bank balance and maybe the last transaction or two. Now half the screen's taken up with upselling account levels, invest in shares, buy crypto, you've been pre-approved!
It's the padding! And the UX teams that add them into the designs!
My cynical take is that an unholy pact was formed between FE devs and UX designers:
By adding in "design" and "user experience" you essentially reduce features, complexity and general "dev time" of every single user-screen or page or component. They're no longer cram-packed with oodles of features, toggles, buttons, menus, etc. Most pages are glorified lists of things, with maybe a menu on each item if you are lucky. Devs dev less, have less bugs, just use FE-library of the day and go home happy because they made a CRUD screen essentially.
Meanwhile, UX designers get to play around and constantly fiddle with design because let's all be honest, nothing will ever be truly good and in a perfect "user experience" space because complexity and functionality are never what the user is happy about having, until they need it.
I don't think so. Yesterday I was browsing phones and there was a Google Pixel 9 Fold on display, closed and showing something. That has a display on the outside and a foldable display on the inside.
I opened it, and most of the screen looked like a big, roundish black blob of ink, centred on the fold, on top of the Android animations working perfectly underneath, but only visible at the edges. I was impressed that the rest of the screen around it worked perfectly, but it was unusable due to the size of the black blob.
Something had broken at or near the fold while it was on display.
All other devices were in great condition; it was a well-maintained store.
Apple did a horrible job marketing the mini. I ran into a lot of people who saw my 12 mini and said they would prefer that size, but didn’t know it existed.
When I went to buy it, and the case, the employees at the Apple Store questioned me and tried to push me toward the normal iPhone. This is the first and only time I’ve ever felt Apple Store employees steering purchasing decisions. I had to go in there knowing what I wanted, and had to assert that it was what I wanted repeatedly.
Are people buying big phones because they are addicted to their screens, or are people addicted to their screens because of big phones?
> When I went to buy it, and the case, the employees at the Apple Store questioned me and tried to push me toward the normal iPhone.
Probably because they knew that customers would come back to complain about the abysmal battery life of the Mini? I had a 12 Mini, I loved that phone, but man was it hard to get through the day on a single charge.
I generally only charge my 13 mini every other day or so.
The only time I recently struggled getting through the day was when on vacation and constantly using google maps & translate. But that is with a 3 year old phone.
Worth noting that (so I've heard) the most impactful hardware change between the 12 mini and the 13 mini was improvements to the battery life. I've never struggled with the battery life on my 13 mini, either, but the handful of people I know with a 12 mini have always bemoaned it.
13 mini here, also not charging every day. My screen time is around 2 hours a day, which IMO is still too much. I try to keep the battery between 20 and 80 percent.
Yes, and people are using their phones for what they previously used TVs, laptops, music players and other dedicated devices for. It's a bit of a cycle.
There's also the accessibility factor. Many people become farsighted later in life. It's much easier to see things on a big phone, especially with increased zoom. (I see this all of the time when I fly.)
> There's also the accessibility factor. Many people become farsighted later in life. It's much easier to see things on a big phone, especially with increased zoom. (I see this all of the time when I fly.)
Or for those of us with higher end myopia whose lenses effectively “shrink” everything they see. I’m -6.75 in each eye and my glasses make my everything seem significantly smaller than it is.
Sometimes I look at my phone or monitor without my glasses and am momentarily shocked at how large they seem and then saddened when I put them back on.
The other problem is that more and more content now is designed for (or only tolerable on) larger phone screens. Go to any website these days on a smaller phone like an iPhone mini and more than 50% (being charitable here) of the screen will be taken up by garbage like ads, cookie banners, popups, etc.
It's a vicious cycle. Phone manufacturers make the screen bigger, app and website developers realize they can cram more junk on the page, consumers demand larger screens as a result, return to step 1.
This is HN. OP is 100% right to be flabbergasted that people on this site are not using the best and brightest of the ad blockers available. I know I am.
Ironically, I have the opposite problem with website design. So many sites are clearly designed for mobile screen sizes, with a teensy-tiny strip of text on my large monitor. It's very unpleasant to read lines of text that short, so on a lot of sites I have to go into dev tools and set the text width to 1200px to make it an actual comfortable reading experience. I should not have to mess with CSS to make websites readable, but here we are.
I want larger phones because I am at that particular stage of middle age where I should probably start using reading glasses, but I'm also damned if I'm going to start carrying reading glasses everywhere with me.
I’m even older, walking around with reading glasses on my head all day. Had an iPhone 13 mini, miss the form factor, would prefer a mini as my next phone. But for most mobile use, on the couch, watching videos, etc, I use an iPad, not my phone. For me a large phone is mostly a tablet that’s too small.
> The general population wants larger phones because they are addicted to their screens.
I would rephrase that to: the general population wants larger phones because phones are de facto PCs these days. They can watch movies, browse the news, listen to music, FaceTime, Maps, ...
Outside of business applications like Word / Excel, phones basically handle 90% of people's requirements for "computers".
I’m not sure if your comment is sarcastic, but in case it isn’t: my friend group had a get-together two days ago, three out of six had a laptop with them, and it even came in handy when I started talking about the problem I’m working on, somebody got interested and I was able to show the plots and calculations. Also we have PowerPoint Parties somewhat regularly, where most of us bring their computers to make last minute changes or simply have a known environment.
Not sarcastic at all. I have never seen people (not even my nerdy friends, least of all normies) bring laptops to a friend hangout. People might bring laptops if they were getting together to work on stuff, but then that isn't just hanging out any more, there's a purpose to the gathering. I would be astonished if someone could show that non-techy people ever brought laptops when they would hang out. Techy people (like your friend group), maybe. But not your average person.
When I was still working full-time, a co-worker told me their kid had told them they didn't need or want a computer. Probably changes at some point with long writing assignments, etc. but still.
I do increasingly think about whether I need to bring a laptop on various trips. It can be handy but I try to pack light and another few pounds is a lot for me. I've experimented with a newish tablet but it's a bit too in-between for my taste.
I remember people would always be surprised about how home computer ownership was not that high but smartphones (well, Japanese "garakei") were ubiquitous.
Well, do you think this is a good state of affairs? On one hand, phones are pretty accessible devices, on the other hand there are many aspects of phones that are objectively pretty terrible for consumers (talking about cost and difficulty of repair, walled garden ecosystems, and generally being geared towards consuming things and a lot less effective at producing them than laptops and desktops.)
(Tangential: of course I don't blame anyone for bringing their phone with them everywhere but if you're going to go to a friend group hangout, consider how annoying it is when you're trying to talk to someone and they're clearly checked out browsing some slop on Twitter or talking to someone else entirely. Take a damn break from the phone!)
Yeah and I also remember how Apple fans said "this is ridiculous, nobody needs a screen that big that doesn't fit in your pocket easily", and here we are 15 years later mourning the iPhone Mini/SE.
10 years ago, if your phone was bigger than 5 inches, it looked ridiculous. You'd pull it out and people would look at you like you'd just escaped from a nut house
I am 6ft tall and feel like my hands are above average in size. I have a regular iPhone 16 pro. I still don’t understand how people use bigger devices.
Do they like using two hands? I can’t single hand a phone any larger without having to shift it in my hand.
I don’t want to use two hands on my phone outside of typing.
People type a lot though. It's also better for video, games, reading, general browsing. If you value one handed operation above all that though, then obviously smaller is better.
I just refuse to accept that the first phablet I ever saw, the Galaxy Note, which covered the person's face and looked absolutely comical in their hands, was smaller than my current, very regular-sized phone.
The Samsung Galaxy Note (the first one) had a screen size of 5.3 inches.
The Samsung Galaxy Note 2 had a screen size of 5.5 inches. I had one and regularly had strangers ask me if that was really a phone. I had friends say, "Give me a call on your tablet" as a joke.
I loved it. Now my 6.1 inch iPhone feels on the small side.
The Dell Streak (shoutout to the other 3 people who bought one) had a 5 inch screen in 2010, a notable jump from contemporary phones like the iPhone 4 which was still 3.5", and other Android devices like the HTC Droid series which were around 3.7" and slowly starting to creep upwards to differentiate themselves from the iPhone. I think the largest Android devices you could get at the time were still smaller than 4".
I remember Dell showing this off at the All Things D conference, and Walt Mossberg of the Wall Street Journal asked the Dell spokesperson to put it up to his head, and told him it looked like a waffle. To this day it's all I think of when I see someone holding a massive phone up to the side of their head.
That thing could really stand out in a crowd. I was at a baseball stadium for a concert that year, and spotted someone with a Dell Streak as I was heading down to the field. In a sea of people that was the one phone I spotted. I stopped to ask the guy about it briefly.
I remember Steve Jobs berating phablets as "the Hummer of phones" and dissing 7-inch Android tablets as too small, and disparagingly saying users would need to "file down their fingers" to use them - without considering how much smaller Apple users' fingers would need to be to use 3.5-inch iPhones.
I also remember the viral, doctored image showing the reachability of phone screens which "proved" that 3.5 inches was the "ideal" phone size.
It's amusing how people try to memory-hole their negative reactions and the pieces written about the Note. People's mocking web pages have disappeared. Arguments based on the size of the human hand completely forgotten. The very notion that a 5.7 inch screen is big and unwieldy is now met with disdain.
When I first saw the ad on tv, my reaction was "Holy moley, wow, who's going to buy that monstrosity?"
And then a few weeks later I bought one. All the guys in my office laughed and said "Wow, look at that huge thing, it's ridiculous". I chuckled and agreed, though I was quietly enjoying the larger screen.
Smaller phones have always been limited in performance, battery life, app support and camera quality. Camera is the most important factor and battery is the second.
General population doesn't buy phones every year and they don't want a nerfed phone when they have to pay 500-1000 $/€s. So they gravitate towards higher end ones.
Companies including Apple have always treated the small size as an entry to mid-segment phone. The only exception I know is the Sony Z3 and Z5 Compact, which suffered heat and battery swelling issues due to Qualcomm messing up the 810 series SoCs.
Companies also want you to buy the most expensive phone. So they market the premium models and train their store personnel to sell more of the premium line. If they stop intentionally nerfing the smaller phones, I think there is a market there. However, it will still be smaller.
I used to lug a dedicated camera around all the time. Except for special purposes I just bring a phone these days. And I'm not the only one. I do know people who do a lot of nature photography but I also know people who always had a camera with them who now reserve them for "serious" portraiture and things like that.
It helps, but less than you'd think. The main board's power doesn't dramatically change, and because the full space under the screen isn't battery, reducing the screen size by 40% might cause the battery size to be reduced by 60%
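The effect above falls out of simple geometry. Here's a toy calculation (illustrative numbers only, on the assumption that the main board keeps a fixed footprint while the screen shrinks):

```python
# Toy model: the main board keeps a fixed footprint, so only the
# leftover area under the screen is available for battery.
# All numbers are illustrative, not real device measurements.

BOARD_AREA = 33.0  # fixed main-board footprint (arbitrary units)

def battery_area(screen_area: float) -> float:
    """Area left over for the battery after placing the board."""
    return screen_area - BOARD_AREA

full_phone = battery_area(100.0)   # large phone: 67.0 units for battery
small_phone = battery_area(60.0)   # screen area cut by 40%: 27.0 units

battery_cut = 1 - small_phone / full_phone
print(f"screen -40% => battery -{battery_cut:.0%}")  # battery -60%
```

Because the fixed board area eats a larger fraction of a smaller phone, the battery loses disproportionately more than the screen does.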
This is the part that frustrates me. Apple keeps introducing software “solutions” for hardware problems. Reachability, Screen Time, Focus Modes, etc. A smaller phone naturally solves most of these problems. Small phones act as more of a utility device for when you’re away from a proper computer. This is all I want my phone to be. I really think they got it right the first time in 2007.
I ended up switching from a 13 mini (I had the 12 mini as well) to a 16 Pro. I was having a lot of battery life issues, and kept running into apps that clearly didn’t fully test with the smaller screen. I also really missed having a telephoto lens.
My phone usage went up; my laptop/desktop usage went down. I don’t like that. Compared to a normal computer, a phone is still worse in almost every way, other than mobility. It’s just now tolerable enough to put up with more of the time. I’m writing this on the phone, it would have been easier on a keyboard and mouse.
"Small phones act as more of a utility device for when you’re away from a proper computer. This is all I want my phone to be."
You, like me, are not representatives of a market phone manufacturers are interested in. Utilitarian and minimal use only sells one phone every few years.
They are catering to the overwhelming market that spends upwards of 5 hours screen time per day, watches movies and TV, plays games, and generally spends as much time as possible on them, with as much payment and ad revenue as that comes with on top of the original device sale.
I always personally liked the idea of computers being fixed, or semi-fixed (like a laptop), as a place to work or study, and then leave once that is done. The replacement of computers and laptops by tablets and phones is a wider cultural shift from computers being tools and productive technology to entertainment and consumerist technology, in my opinion.
This was the internal debate I was having with myself. I bought the 13 mini the day the 14 was released and saw they were killing the mini line. The goal was to keep it until it was literally dead, replacing the battery as needed. The battery of the 13 mini was supposed to be better than the 12 mini, but that was not my experience.
The battery also wasn’t the main issue, just a contributing factor. I was OK using the battery as a signal that I was using the phone too much and a prompt to reevaluate my usage. Seeing software bugs related to the screen size, and figuring that would only get worse now that new phones didn’t come in the smaller size… that’s what made me think I might as well get the transition over with.
I’ve had battery issues with the 16 Pro, but those are software bugs. Some days my phone will give me a low battery warning by noon. I end up killing all the apps, charging it up again, and then it’s fine. It’s happened about 4 or 5 times, but I haven’t been able to tell what’s doing it.
The 13 mini didn't solve any of these issues for me, and it had worse battery life on top. I upgraded to a 16 Pro Max. My laptop usage also went down from there. Total screen time probably stayed the same, but now I mostly carry just a phone instead of a phone and a laptop. If you want something less addictive there's probably the Apple Watch, but you still need the phone to configure it.
and now you're strapped to a device 24/7 just for the sake of using it less.
>No major player wants a smaller screen because it has downstream impacts on the pipeline of addictive material and ad pixels they can stuff into ocular nerves.
There are lots of phone manufacturers who have no ads business. They just make phones so why would they care?
Size is dictated by trouser pocket size/handbag size and usage. Editing photos and movies to upload onto social media is probably better on a big screen.
Also screen size is dictated by common panel sizes, as low volume will mean a higher price.
The existence of folding screens and the iPad Mini suggests people want larger screen real estate.
I think photos are a big deal, but IMO it's more about the photo quality. And if you put a nice fancy camera on the phone, suddenly the device gets pretty expensive.
So while there are people who want "small screen + nice camera", there are also people who want "small screen + small price", and many people who _don't want the small screen at all_. You have this phone that can cost a lot of money (in a pretty messy market where most phone models don't seem to make money anyway), and you're going to cut off chunks of the market?
So we end up with small screen + shitty camera and specs etc. And people here who want a small phone (but really want a small phone that isn't miserable to use) are still unsatisfied.
I have an iPhone mini, and my understanding is that I also give up quite a bit of battery life by not having the full-sized version. The market definitely prefers long runtimes free from frequent charging, while I sometimes need to carry a charge pack, though only when I expect to need it.
> There are lots of phone manufacturers who have no ads business. They just make phones so why would they care?
They are still bound to the screen resolutions dictated by the platform/environment. A maker selling an Android phone with a 480x640px screen would face a huge uphill battle to see any sales.
Going for a smaller physical screen means higher DPI, so higher production costs and more quality-control issues. It can make more sense to buy a cheaper, low-DPI screen and make the whole device bigger to match the needed pixel count.
Agreed. I'd prefer a modern iPhone the size of an iPhone 4; it was perfect. I made the same "upgrade" from 12 mini to 16 Pro, and the 16 Pro is so large and heavy. Feels like we're moving backwards in time.
For... well, most people. Half of people are women, so I don't know how they do it. I'm a man, with man hands, and modern phones are not one hand operable. You need two hands. Even if you can do a particular operation with one hand, the phone is unsteady and it's awkward.
I think people with large hands are definitely the minority. So, we're not optimizing for hand size. We're optimizing for engagement, I think.
It was observed a long time ago on HN that women, with their tiny hands, loved huge phones - since they were using small phones two-handed anyway - and it was the men who complained that small, one-handed phones stopped being sold.
You don’t speak for most people. You can only speak for yourself. The feelings of “Most people” are clear as demonstrated by the market; they not only find large phones fine, they find them preferable.
> modern phones are not one hand operable
So? I mean for me they are so it’s irrelevant, but what should it matter if they are not? The market obviously does not share your interest in devices to be operated in such a manner as a priority or something of particular importance.
That being said, it’s of course unfortunate that if that is your preference, that nothing in the market caters for it. Your preferences and wants are obviously entirely valid and it’s a shame there is no interest even from a boutique vendor in meeting them.
I have plenty of preferences for products that are not catered to, as I am sure is true for us all, and of course I don’t love it, but I must live in the reality that the larger market doesn’t always want what I do.
> We’re optimizing for engagement
The market is optimizing for what consumers asked for, which was larger devices. You say I am in a minority, I claim equally that you are in a minority as well.
We are in agreement - you appear to be replying to my comment piece by piece without reading all of it.
I'm speaking about the one-hand operability, which I then conclude must not be very important and obviously the market prefers something else.
I will only address this part:
> The market is optimizing for what consumers asked for
This is hopelessly naive. It is true in the same sense that butane rings in cigarettes are optimizing for "what consumers asked for": more pleasant-to-smoke cigarettes. Consumers don't know what they want, they're fed whatever is going to make the most money by advertisers. And they will like it, because there is no other choice.
The market is not some perfectly rational machine. It is, often, a self-eating beast, concerned with its own self-preservation to such a degree that it destroys itself. Had the tobacco industry chilled, they wouldn't have been eviscerated by legislation. But no - they had to target children, they had to make the death sticks as addictive as possible. As if to put a bright flashing sign on themselves that says "look at me! Regulate me!"
> Consumers don't know what they want, they're fed whatever is going to make the most money by advertisers. And they will like it, because there is no other choice.
Except we know this is not the reality in this case, as the world's most successful mobile device marketer has made multiple attempts to create and market smaller devices, which time and time again the majority of consumers have rejected.
The majority having a preference not matching your own doesn't need to be a conspiracy of consumer stupidity. Apple held out for a long time on making larger devices and ultimately caved to consumer sentiment, they didn’t grow that sentiment, they reacted to it.
Yes, again, similar to how a Tobacco consumer would reject older styles of Cigarettes. They were objectively worse - less nicotine, less impact on the brain, slower burning, and uneven burning. I used to smoke, ask me how I know.
> conspiracy of consumer stupidity
You misunderstand. Consumers aren't stupid, they're human. Human are remarkably easy to exploit. Exploiting the human mind is orders of magnitude easier than exploiting a computer.
I mean, you put a shiny machine in front of a human and tell them there's little to no chance they'll win money and they'll destroy themselves in front of it. Drain their bank accounts, ruin their marriage. You don't even have to lie - you can tell them gambling is bad, you can tell them they won't win, but that doesn't actually affect the exploit. Monkey brain see bright light, dopamine hits.
It's really quite simple, and you're a market-minded man so you should be able to deduce this: it's all about incentives. You can continue to believe that the devices best for advertisers also happen to be what consumers want most. I think that's painfully naive, almost child-like.
I mean, look at smart TVs. Why do we have those? Do consumers prefer them? Sure. Is it to everyone's benefit that consumers prefer it? Certainly. So then we must ask - how did consumers come to prefer them? Was it, maybe, forced? Were they, maybe, exploited?
Just consider this. If I want to enter the Tobacco market, anywhere in the world, should I enter with a nicotine-free cigarette, or even a low-nicotine cigarette? Would those be successful? No, I think, the company would sink remarkably fast. We'd have no sales, consumers wouldn't buy it.
At 193 cm in height, I have large hands too. I currently use a Zenfone 10 and a Galaxy S10e before that, and I can grip them both just fine in one hand, but I can't also control them with that same hand without awkward contortions and a reliance on gravity.
The only phones I've had that I could comfortably use one-handed were my old BlackBerry Q10 (2013) and BlackBerry Classic (2014). The Q10 because it's short enough to hold between my thumb and ring finger such that I could use my index and middle fingers on the touch screen (slightly unorthodox but it worked really well), and the larger Classic because it has an optical thumbpad and excellent software support for it (it was so good I rarely used the touch screen at all). And both had physical keyboards.
I don't have especially small hands and I can't stand my Nokia XR20 (which isn't even close to the biggest phone out there). If I can't reach every corner of the screen with my thumb while holding the phone, it's uncomfortable and unpleasant to use. Sadly that is most phones these days.
The margins are also worse, which is way, way closer to a manufacturer’s bottom line than the software ecosystem.
There is demand for larger phones, yes, but manufacturers also charge more for bigger devices and most of that is margin. Following their own logic, they also charge less for smaller phones.
If your customers are sticky, then many of the people who buy the smaller phone would have otherwise bought a bigger phone for more money. Introducing a smaller phone brings down profits.
Bigger equals better in consumer perception. I imagine that's driven in part by top-tier phones being larger to fit the additional battery capacity needed for higher-performance processors, so all larger devices carry some premium cachet.
> I only traded out my iphone 12 mini just recently for an iphone 16 pro (likely the last apple product I will ever buy but thats another story) and aside from the camera it is basically the same. Just heavier, awkward to hold and slightly worse designed.
Just did the exact same thing 5 months ago… I still miss my 12 mini. I would strongly consider buying a 13 mini instead if it were even still being sold.
I have a 13 mini. It's beat up, battery life is getting worse (even though I rarely use it), and both cameras are smashed (in my pocket during a motorcycle accident), but I look at all the options now and figure I'll just keep using this one. I'd rather be using an iPhone 4, but I need some features that one didn't have to work with a glucose monitor.
I ended up replacing my 12 mini with a Galaxy Z Flip 6 from Samsung. The only real annoyance is that Apple hasn't enabled RCS for Danish telecom companies yet. Well, that and sand... We'll see how long it lives though. The reason I originally went to Apple was because my first smartphone (a Galaxy S2) sort of did the planned obsolescence thing exactly the same way my two buddies' Galaxy S2s did at the time. If the Flip lives for 5-ish years then I'll likely never go back to Apple. Unless they make a phone that will actually fit comfortably in my pockets again.
The little half screen on the flip is useless though. Basically nothing works on it.
FM radio uses the headphone cable as an antenna, so with the move to bluetooth I assume it got squashed for similar reasons. The other aspect is it assumes that if you want live radio you're happy needing an active data connection and allowance, and that any local stations have a stream available. One of the things I love about streaming (via RadioDroid, etc) is that you can get a station from anywhere on the planet, but sometimes you want something a bit more basic.
> FM radio uses the headphone cable as an antenna, so with the move to bluetooth I assume it got squashed for similar reasons.
Some may prefer Bluetooth headphones, and there are countless apologists who now retroactively parrot the manufacturers' excuses for why headphone jacks were eliminated, but it wasn't something the _users_ asked for or wanted.
"Oh, but phones are waterproof now," they claim! Well, so was the Samsung Galaxy S5 I bought in 2014. And by the way, it also had a user-removable battery, removable storage, an FM radio, and an IR blaster. All these years later, you can't find a new phone with all of those features and it's very difficult to find a flagship phone with even _one_ of them.
Just like the 16e, the Mini and SE were meant to push up the sales of their "other" phones. Otherwise they would not have had both a Mini and an SE. I mean, it was a joke.
But given Hanlon's razor, and the way Apple has been on a screwing-up spree of late, I doubt it was anything intentional. They f'ed up, not knowing what to do at all. They still don't.
I think most major players have the same incentives and minor players don't have the economies of scale to make it work economically.
Also, the longer I used my iPhone mini while the rest of the world moved to comically large phones, the more apparent it became that nobody is thinking about small-screen form factors in design, and when they do, it's only around ad placement.
But, for example, what is the money flow from google/advertising in general to Motorola, that makes them not want to release a small screen model in their lineup of cheap phones?
Instagram, Tiktok, and Google have gotten users addicted to consuming content, and larger screens help with that.
We are helplessly addicted to digital cocaine, and so we demand large phones, and so motorola will not make money selling a small phone.
It's like the parent said: our addiction is the product, and so just like a chain-smoker will say "I want to quit" as they buy 5 packs a day, a modern smartphone user will say "I want a smaller screen and to look at ads less" as they hopelessly buy a 10 inch phablet and can't go 5 minutes without pulling it from their pocket to check tiktok.
It is not that the money from advertising flows, it is that the addicted users have already been ruined, and will not buy the devices they say they want.
Sure, but that's something totally different. Basically just "customers don't want it and won't buy it". I understood the root comment to imply some kind of more direct incentive: "A smaller screen probably negatively impacts KPIs on many levels" - if advertising KPIs are supposed to be given precedence over demand from consumers there has to be at least some kind of mechanism for it.
"No major player wants a smaller screen because it has downstream impacts on the pipeline of addictive material and ad pixels they can stuff into ocular nerves." -- what is the direct (or indirect) pressure that the major players can exert over some more or less independent hw manufacturer like Motorola? I'm not saying it's impossible, it reminds me of e.g. the situation where (pre iphone) carriers blocked phones from having wifi because they wanted them to be dependent on their network, but if something like this is happening it should be possible to roughly point out how.
Couldn’t we make it thicker though? The rumored iPhone air is the exact opposite of what I want. Give me a thicker phone with a smaller screen and I’d pay Pro prices for it.
Thicker things don't fit in pockets as well, they're unwieldy.
I've gotten my EDC down to one leather ID sleeve with my credit card and driver's license in it, and my phone. This is probably still thicker than it should be, but it's soft so I don't feel the bulk or edges.
This is it. For a while, battery life got worse with more powerful chips. But then Samsung went all in on big 6"+ phones and it got better again.
Now even at 80% original capacity, a Samsung can still last me through the day, given that I'm not watching videos constantly. The iPhone 6 back in the day would go to 40% in 3 hours, then suddenly to 5% in minutes.
Plus most people replace their laptop with a phone now. So the big screen size is a must.
That’s how I see it. Screen size scales with area (x²) while battery size scales with volume (x³). Since battery life is a critical feature, a bigger screen fits better battery life.
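To make the scaling argument concrete, here is a rough sketch. It assumes, as a simplification, that all of a phone's linear dimensions (width, height, and thickness) grow by the same factor; real phones keep thickness roughly constant, so the actual battery gain lands somewhere between the quadratic and cubic cases.

```python
# Rough scaling sketch: screen area grows as s^2, battery volume as s^3,
# under the simplifying assumption that width, height, and thickness
# all scale by the same factor s.
s = 1.15  # make every linear dimension 15% larger

screen_area_gain = s ** 2     # area grows quadratically
battery_volume_gain = s ** 3  # volume grows cubically

print(f"screen area:    +{(screen_area_gain - 1) * 100:.0f}%")    # +32%
print(f"battery volume: +{(battery_volume_gain - 1) * 100:.0f}%") # +52%
```

So a modestly larger phone buys disproportionately more battery, which is one reason battery life improved as phones grew.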
I think the other thing is that pretty much everyone has a smartphone, Android or iOS, so the revenue model has changed: for Android it's YouTube/movies, and for iOS it's Apple TV.
I posted a bit too late and didn't see your post first, which more or less is exactly along my thinking.
Modern phones are sold (even at profit) with the intent that there is more payments/ad revenue coming down the line, for movies, TV, games and web browsing/social media. A big screen makes that experience better for people and advertisers. It's a cynical take, but the entire business model is based on building and promoting addiction.
They have no interest in selling phones for utility purposes only, even though that's largely how they advertise the phones, because advertising a 5 hour plus daily screen time isn't sexy at all.
Out of curiosity, why's it your last Apple product?
Watching lots of Louis Rossmann has put me almost ideologically against Apple (even though they design great hardware and smooth UX within their ecosystem), but I'm not good at forming coherent points to present to Apple loving friends.
For me so far, I think it's about control over what I buy - but the rebuttal is always "you're buying a product from them, if you don't like it then tough".
The opinion I got from Louis's content is that in a sense he is right, but also almost every brand is even worse. Apple does pretty much nothing to help third-party repair and sometimes actively impedes it, but most other tech products do that while also not even having first-party repair options.
I remember when Samsung had removable batteries, I went in to a Samsung store to buy a replacement for my S5 battery and they told me they didn't sell them, only new phones. Meanwhile I can take my iPhone in to any Apple store and they will replace the battery for me.
So yeah Apple does need to be forced to massively improve their practices but so does pretty much the entire tech industry aside from a few small projects that focus on being repairable.
I just don't see the value add anymore and the company appears to have lost its product vision and the design sensibilities are slipping. Apple is controlled by a geriatric board and a logistics expert and it shows.
I feel I am more frequently encountering software bugs, vaporware (dESiGnEd fOr ApPle InTelLiGeNce), and ridiculous "innovation" (Genmoji). The hardware advances are not very relevant to me; I don't need VR or augmented reality. I want a computer to get out of my way and solve problems for me so I can spend time in plain old reality. The hardware upgrades I DO care about are ridiculously overpriced (RAM upgrades are abusively expensive).
I prefer my computer to be a tool to get a job done; I don't want the computer itself to be a hobby. I also do not want to be forced to use AI, and I dislike the rent-seeking and tollbooth behavior of iMessage and the App Store. Now that Linux has more paved paths, things increasingly "just work", and hardware has basically caught up, I don't see a good reason to support Apple's non-vision with my money.
What Linux computer can you buy with the battery life, quietness, lack of heat and speed of a modern ARM based Mac?
Battery life, probably none. For the rest it's pretty ok now - I recently got a ThinkPad T14. Performance-wise it's in M1/M2 territory and yes the fans can spin up, but they are not very loud.
I have used MacBooks since 2007, but I have started using the ThinkPad more and more. Why?
I put in 64GiB RAM and a 2TB SSD and it cost me almost nothing. The laptop plus these expansions was 1400 or 1500 Euro; a MacBook with 64GiB RAM and a 2TB SSD would cost me 5000 Euro. When the battery has had its time, I can replace it by removing a few screws. I added a PCI cellular modem. The expandability and maintainability are just great.
Even though the GPU in my MacBook Pro (M3 Pro) blows away the ThinkPad's GPU on paper, the ThinkPad with Wayland actually renders everything super-smoothly on my 120Hz 4K screen, while on the MacBook the difference between 60Hz and 120Hz is barely noticeable. On the ThinkPad I can run NixOS, which is generally much nicer than macOS.
The primary things my MacBook has over my ThinkPad are battery life and a bunch of really good Mac applications like the Affinity Suite. But since more and more applications are switching to Electron, that has become less of a problem. Heck, I even have 1Password with fingerprint unlock, etc., as if it were a MacBook.
As far as phones - your alternative is to buy an Android phone with an operating system by an ad company that is also pushing AI just as hard.
Or I don't know, you buy a Pixel, install GrapheneOS, and you have better privacy than on an iPhone? And no F1 movie ads too.
I think there are a lot of offerings out there now. Maybe not quite matching on battery life, but Apple's chip advantage is steadily evaporating. I typically don't need more than 8 hours of battery personally.
I've heard good things about Framework computers. As a more efficient chip or battery comes out, you just upgrade that component if your use case requires it.
By the way, it's not a lack of heat in the Air. The M4 will hit 105°C and start throttling pretty soon in sustained workloads. At any rate, modern Ryzen laptop CPUs have narrowed the gap with Apple Silicon performance-wise. It's mostly battery life that's still lagging behind. It not only requires a mainboard optimized for power use (which is pretty good nowadays on modern laptops), but also very strong OS integration. I am not sure if non-Apple laptops will get that far, because Linux and Windows simply have to target much more hardware.
At any rate, non-Apple laptops have other benefits, like being able to get 64GiB/128GiB memory and large SSDs without breaking the bank.
In the end it's all a trade-off. If you are a sales representative that needs all-day battery life, MacBook is probably the only option. If you are a developer that needs something portable to hop between desks or on the train, but usually have access to a power socket (yay, Dutch/German trains), a few hours of battery is enough and you might prefer to get an insane amount of memory/storage, a built-in cellular modem, and an ethernet port instead.
Most people don't really need more than 2 hours of battery life anyway[1] as their laptops barely ever leave the house. More than 8 hours of battery is nice to have, but it's really an important parameter only for a specific population[2], while for others it is just a convenience. I wouldn't trade my Linux setup for an OS/desktop I don't like just because it lasts longer on battery, when I never need more than a couple of hours unplugged.
[1] which means you need a 4 to 6h range when new if you don't plan to replace the battery too often
[2] students, construction companies, people who are always on the road...
Is that where we are going? Most people don’t need a laptop that has more than 2 hours battery life?
When I was in the office full time in the bad old days, you would be in a conference room and every one would plug their laptops in.
After I started working remotely and still doing business trips, one charge could last a full day either going back and forth between conference rooms, in “war rooms” etc and no one with M series MacBooks even worried about charging.
Heck my MacBook Pro (work laptop) can last a full day on power with my portable USB C powered external monitor where the power and video come from one cord.
I spent almost 10 hours at a coworking space and didn't even worry about charging my M4 MacBook Pro. Apple Silicon is a game changer: incredible performance and long battery life, generally totally silent, no thermal throttling. 10 hours may be extreme, but it's nice to be able to go to a coffee shop and not worry about not having charged your laptop since last week.
I used to run Linux on a laptop (10+ years ago) and you couldn't even close the laptop lid without risking it not going to sleep and overheating in your bag.
It is exactly what I am saying, it is nice, a convenience. But that's it.
I don't worry about closing my thinkpad lid. Well I do because I disable sleep on lid close and prefer using the dedicated button for that. But my thinkpad goes to sleep when I ask it to.
I have an Asus Vivobook S14 laptop with an Intel Core Ultra 258V processor. In Linux, it gets 12-15 hours of real usage (i.e. not manufacturer "playing videos off local storage with wifi off and the screen brightness all the way down" battery life numbers). If I'm doing something like web browsing or streaming videos, the laptop doesn't get hot and the fan doesn't turn on. I've only had the fan turn on when I'm doing something intensive like compiling GCC or video encoding. It feels just as fast as my ARM MacBook Air.
I'd sacrifice some battery life to have a ThinkPad (example: T14 Gen 5), with the superior keyboard, TrackPoint, and smaller touchpad (the Mac one is annoyingly large).
I have stopped caring so I caved in to work policy and got an iPhone, and I really do not see the point. It is just a thing no better or worse than an Android...
That’s cool, but you represent a tiny slice of the market that as devices get more powerful, isn’t addressable in the low volumes needed to make you happy.
When the chips needed to make a phone are priced like toys, maybe you’ll find the product for you.
I feel that the problem with small phones is rooted in software. Obviously you would need to run a smaller resolution. My sweet spot was the iPhone 4S: a 3.5" display with 640x960 resolution. If you tried to run modern Android at this resolution, you would hit multiple obstacles, from popular apps to popular websites scaling badly.
I don’t know. If I have a “big” phone (anything bigger than the iPhone mini, at least for me), I’ll leave it at home most of the time. But if it’s small, I’ll take it with me everywhere. So I can actually be bombarded with crap for more of the time if I use a small phone.
Phones are big now because people want better cameras and longer battery life. Also, people are spending 4-5 hours more per day on screens than they did 15 years ago, so they want bigger screen for reading, playing games and watching videos.
I know I can’t claim to be “the norm”, but all I want is a smaller phone with 1–2 day battery life. I don’t need a better screen, a better camera, or ever more compute. (It’s a freaking smartphone, not a game console!) All I need my phone to do is run a web browser, messaging apps, maps, my banking app, and random little apps some organisations force you to use – like my university’s app, my city’s public transport app, or half the restaurants here. Things that could easily be done with 10–15 year old hardware. Sadly, the industry chose to keep power consumption constant while increasing computational power, instead of focusing on smaller devices and longer battery life at the same performance.
Again, I’m not angry that current phones exist; I’m just sad there aren’t (good) alternatives – at least that I know of.
Yep, in my area a couple of dining places only list their menu in an app [1], a bunch of places have some kind of membership program accessible through the app, and one standup club requires (!) an app to order drinks and food.
[1]: There’s a paper one in the restaurant, of course, but I like to choose beforehand.
I thought the theory behind the 12 and 13 minis was that Apple had a huge surplus of parts that they needed to offload, and making the minis was a profitable way of doing that.
It’s an interesting take, but I believe most people just prefer a bigger phone. I know it’s weird to those of us who like the opposite and funnily enough it’s often women who have gigantic phone, which they can’t put in their tiny jean pocket.
I can’t explain it, and every time it gets explained to me, it’s that they like it better because it’s got a bigger battery and a bigger screen. I just don’t understand how you can live your life with a brick constantly on you, but it’s what people want.
The market just adapted to the demand, and it’s not a 40k “petition” that will change much.
> I know it’s weird to those of us who like the opposite and funnily enough it’s often women who have gigantic phone, which they can’t put in their tiny jean pocket.
Most women carry their phone in a handbag anyway, as the pockets on most women's pants are way too small either way [1].
The pants thing is another baffling mystery. I know exactly zero women who want their pants to not have pockets. Yet manufacturers absolutely refuse to add pockets. It seems like a complete market failure.
I used to buy ZenFones, but they're huge now. It feels like there's some combination of poor sales and parts commonality that causes the problem, not a shadowy conspiracy, since I don't think ASUS and other manufacturers have a significant way to benefit from phone addiction.
Phones had smaller screens when you needed the keypad to interact with the largest number of features.
Phone screen sizes grew as the applications that could use screen space grew in demand.
People are watching 1080p films on the train now. The people who want smaller screens are usually willing to deal with a larger one; people who want larger screens usually can't manage their use cases on a smaller screen. Larger screens also tend to mean a larger case, requiring less miniaturisation of the components.
None of this explains why it's just impossible to get small phones.
You have people who want them unusably large and people who want them to fit in your hand. The solution in every other market is that products are manufactured to fit both sets of needs. You don't see pants coming in one size with the advice "wear a belt".
>You don't see pants coming in one size with the advice "wear a belt".
Great example. Because people who are shorter than average tend to have to get pants taken up, and people who are vastly taller than average tend to go to specialty stores.
The average height of pants is largely dictated by what the market will permit, requiring people to make adjustments or leave the market. Having a 2d matrix of height and width defined pant sizes is too complex for the market to bother with.
Technology is worse: anything that requires tooling is done the least number of times possible. While small-phone enjoyers are disadvantaged, they aren't disadvantaged enough to force them out of the market. Larger tooling is easier to make and caters to all other preferences.
> Technology is worse, anything that requires tooling is done the least number of times possible. While small phone enjoyers are disadvantaged, they arent disadvantaged enough to force them out of the market. Larger tooling is easier to make and caters to all other preferences.
No, you're making up a claim that you know perfectly well is false. Just blank most of your day out of your mind, and then... what? Why?
You don't like pants? Televisions come in dozens of different sizes. Laptops come in dozens of different sizes. Are phones different in some way?
>No, you're making up a claim that you know perfectly well is false. Just blank most of your day out of your mind, and then... what? Why?
I can't even parse this. What am I blanking?
>You don't like pants? Televisions come in dozens of different sizes. Laptops come in dozens of different sizes. Are phones different in some way?
Where did I claim not to like pants?
Laptops come in tons of different sizes. So do phones.
They tried sub-10-inch laptops, in the form of netbooks; the form factor barely exists anymore outside of hobbyists. Netbook enthusiasts either have to exit the market or go for something 10 inches or larger, because it's not worth the tooling to deal with a niche market.
Phones come in dozens of different sizes too, what are you on about? TVs come in a greater range of sizes because they're designed for different viewing distances and room configurations. Phones don't have this variable, you hold them in your hand.
I don't think it's meaningful. There are not enough people who would buy such a device to make it profitable to design and manufacture. Your priors -
"You have ... people who want them to fit in your hand"
Are incorrect. The number of people who will actually buy small devices is ... small. The number of people so interested in small devices that they'll overlook things like lower battery life and whatever other compromises are needed to achieve the smaller size is likely even smaller.
It's not like it hasn't been tried in the past, people in this thread talk about iPhone minis disappearing - Apple couldn't make them a success. Sony couldn't make them a success either and stopped making them AFAICT. As a market segment you're too small to warrant the investment in designing a small flagship. And if nobody's investing in a small flagship, small midmarket isn't going to happen either.
There do appear to be niche manufacturers in this segment (take a look at https://www.reddit.com/r/smallphones/). If the untapped demand is so huge, I would expect to see them become much more mainstream over time.
Unless you're asserting the number of people who will actually buy small devices is zero (which I would hope you aren't given the evidence to the contrary in this thread), his priors are in fact correct. If there exists any number of people willing to buy small phones, then the statement "you have people who want them to fit in your hand" is true.
> If there exists any number of people willing to buy small phones, then the statement "you have people who want them to fit in your hand" is true.
But the subtext is that this is enough of a population to make a viable market; that, in fact, any number of people, however small, makes a viable market. It's just not a reasonable prior.
So I'm asserting that it may as well be zero as far as the big manufacturers are concerned, that with such a small audience it's not profitable. Further, that this dynamic does indeed play out in other markets.
OP is looking for a conspiracy as to why phone manufacturers are leaving money on the table. The truth is they aren't. This situation is exactly what you'd expect when there's no real market - a few niche providers making a few niche products for die-hards (without the scale, support or quality of the majors) and not making a lot of money at it, while the rest of the market ignores them.
That makes no sense. The only phone companies that make money from how often you use your phone and buy apps on it are Apple and Google. If there were a market for it other companies would make them.
As for the mini phones: because of physics, the battery life is atrocious. That was one of the main drivers for me to get a larger phone. Well, that and because I can pull down the Control Center and use the widget to make everything on my phone larger and still be able to use it without wearing my glasses. With my glasses, I keep everything at the smallest size.
Sorry, in my national market. Apple has 51%, Samsung has around 28% and Google 5%. These 3 hold almost 85% of the UK market.
Samsung, from what I remember used to side-load tons of apps (some alongside identical stock Android apps), so it's in their best interest to maximise screen time through media apps, which generally work best on a bigger screen.
It's against all of these companies interests to sell you a phone which you quickly use and put away. They all have every incentive to keep you staring, because they're all getting extra money from every ad you see in a game, TV show, movie, web browsing, or wherever else they can slot it.
Pointless rant. Apple does not earn more or less depending on how many ads it can show.
The market has spoken, it's not worthwhile for Apple to produce small phones.
There are a million companies that are not Google that could also produce mini phones and don't for the same exact reason: most people want large screens to enjoy videos and photos.
Nobody cares about small screens and pockets, everyone holds their phones in their hands or purses at all times.
Like it or not, Apple keeps cancelling smaller phone lines because they don't sell well. That's it. If they sold really well then they'd keep selling them, but they don't.
I would also love more capable small phones personally, but I can't deny that people overall don't seem to want them.
I thought that was the case but I tried going small.
I owned an iPhone 13 mini. Basically the perfect small phone if there ever was one.
The downsides are extensive and the upsides are few.
- Battery life sucked. Since a phone is a 3D object, making it bigger substantially increases battery capacity. A small body also makes packaging difficult, especially if the goal is a flagship-quality phone; good luck fitting in good hardware with a lot of features.
- Eyestrain. It’s small.
- Typing. It sucks. The phone is small.
And it turns out the upside of one-handed operation is limited. A simple PopSocket or OhSnap! will make large phones easy to use in one hand.
Plus, if pocketability is your issue, you can buy a folding phone like a Motorola Razr and still get a nice big screen when you pull it out.
I disagree. I'm reading and typing this from an iPhone 13 mini. I use a big one for work, so it's not like I don't know what I'm missing. I very strongly prefer the small form factor.
I mean, are the phone makers themselves really making money off ads, or are those totally separate companies? I don't disagree that this brings in business, but I don't agree that it's a significant motivator in terms of phone sizes.
The best way I have found to integrate this approach is Test Driven Development.
When done well, every test you write (watching it fail first, then writing the barest amount of code you think will make it pass) is a mini-proof. Your test setup and assertions are what cover your pre/post conditions. Base cases are the invariant.
The key here is to be disciplined: write the simplest test you can, see the test fail before writing code, then write the smallest amount of code possible to make the test pass. Repeat.
The next level is how cohesive or tightly coupled your tests are. Being able to make changes with a minimal test-breakage "blast radius" increases my confidence in a design.
I am not a fan of Test Driven Development, not at all.
Having your invariants and pre/post conditions correct is not enough; you also need to do the right thing. For example, say you have a function that adds two durations in the form hh:mm:ss, with mm < 60 and ss < 60 as invariants. Testing those is a good thing, but it won't tell you that your addition is correct. Imagine your result is always 1s too high: you can even test associativity and commutativity and it will pass. Having these correct is necessary but not sufficient.
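A minimal sketch of that point, with made-up function names: an addition that is always one second too high still keeps the invariants (mm < 60, ss < 60) and still passes commutativity and associativity checks.

```python
def to_seconds(d):
    """Parse 'hh:mm:ss' into a total number of seconds."""
    h, m, s = (int(x) for x in d.split(":"))
    return h * 3600 + m * 60 + s

def from_seconds(n):
    """Render a number of seconds back into normalized 'hh:mm:ss'."""
    return f"{n // 3600:02d}:{n % 3600 // 60:02d}:{n % 60:02d}"

def add_durations(a, b):
    # BUG: always one second too high, but normalization keeps
    # the invariants (mm < 60, ss < 60) intact.
    return from_seconds(to_seconds(a) + to_seconds(b) + 1)

a, b, c = "01:30:45", "00:45:30", "02:15:59"

# Invariants hold: minutes and seconds stay below 60.
for d in (add_durations(a, b), add_durations(b, c)):
    _, mm, ss = d.split(":")
    assert int(mm) < 60 and int(ss) < 60

# Commutativity and associativity hold too...
assert add_durations(a, b) == add_durations(b, a)
assert add_durations(add_durations(a, b), c) == add_durations(a, add_durations(b, c))

# ...yet the result is simply wrong:
print(add_durations("00:00:01", "00:00:01"))  # "00:00:03", not "00:00:02"
```

Associativity survives the bug because the off-by-one accumulates identically on both sides; only a test against a known-correct expected value catches it.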
Problem is, when you write tests first, especially tight, easy to run unit tests, you will be tempted to write code that pass the tests, not code that does the right thing. Like throwing stuff at your tests and see what sticks.
I much prefer the traditional approach of first solving the problem the best you can, which may or may not involve thinking about invariants, but always with the end result in mind. Only when you are reasonably confident in your code do you start testing. If it fails, you will have the same temptation to just pass the test, but at least you wrote the code at least once without help from the tests.
Maybe that's just me, but when I tried the "tests first" approach, the end result got pretty bad.
> Tests do not ensure that your functions are correct; they ensure that you are alerted when their behavior changes.
I agree with that part and I am not against tests, just the idea of writing tests first.
> helps you design good interfaces
I am sure plenty of people will disagree, but I think testability is overrated and leads to code that is too abstract and complicated. Writing tests first will help you write code that is testable, but everything is a compromise: to make code more testable, you have to sacrifice something, usually in the form of added complexity and layers of indirection. Testability is good, of course, but it is a game of compromises, and for me there are other priorities.
It makes sense at a high level though, like for public APIs. Ideally, I'd rather write both sides at the same time, as to have a real use case rather than just a test, and have both evolve together, but it is not always possible. In that case, writing the test first may be the next best thing.
I don't think it makes sense to do it any other way. If a test case doesn't map onto a real scenario you're trying to implement the code for, it doesn't make any sense to write it.
I find that people who write the test after tend to miss edge cases or (when they're trying to be thorough) write too many scenarios - covering the same code more than once.
Writing the test first and the code that makes it pass next helps to inextricably tie the test to the actual code change.
>but it is not always possible
I don't think I've written any production code in years where I gave up because it was intrinsically not possible.
> I don't think I've written any production code in years where I gave up because it was intrinsically not possible.
What I meant by "not possible" is writing both sides of the API at the same time. For example, you write a library for overlaying maps on video feeds, it is good if you are also writing the application that uses it. For example a drone controller. So in the early phase, you write the library specifically for your drone controller, changing the API as needed.
But sometimes, the drone controller will be made by another company, or it may be a project too big not to split up, that's the "not possible" part. And without a clear, in control use case, you have to make guesses, and writing tests can help make good guesses.
>What I meant by "not possible" is writing both sides of the API at the same time. For example, you write a library for overlaying maps on video feeds
If I were doing this I would probably start by writing a test that takes an example video and example map and a snippet of code that overlays one on to the other and then checks the video at the end against a snapshot.
>But sometimes, the drone controller will be made by another company, or it may be a project too big not to split up, that's the "not possible" part. And without a clear, in control use case, you have to make guesses, and writing tests can help make good guesses.
This is the figuring out the requirements part. If you are writing an API for another piece of software to call you might have to do some investigation to see what kind of API endpoint it expects to call.
The flow here for me is if the code is doing the wrong thing I:
- Write a test that demonstrates that it is doing the wrong thing
- Watch it fail
- Change the code to do the right thing
- Ensure the test passes
And in return I get regression prevention and verified documentation (the hopefully well structured and descriptive test class) for almost free.
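The red-green flow above can be sketched with a hypothetical bug (the function and the bug are made up for illustration): a leap-year check that wrongly treated 1900 as a leap year.

```python
def is_leap_year(year):
    # The original (hypothetical) buggy version was `return year % 4 == 0`,
    # which wrongly reported 1900 as a leap year.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Step 1: write a test that demonstrates the wrong behavior.
# Step 2: run against the buggy version and watch this assertion fail.
assert not is_leap_year(1900)

# Steps 3 and 4: change the code (above) to do the right thing and
# ensure the test passes; it then stays behind as a regression guard.
assert is_leap_year(2000)
assert is_leap_year(2024)
```

The assertion written in step 1 is exactly the "verified documentation" mentioned above: it records both the bug report and the intended behavior.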
I don't think any amount of testing absolves the programmer from writing clear, intention-revealing code that is correct. TDD is just a tool that helps ensure the programmer's understanding of the code evolves with the code. There have been so many times where I write code and expect a test to fail/pass and it doesn't. This catches subtle drift in understanding.
I do TDD and write proofs as tests. TDD practitioners never said TDD is a substitute for thinking.
> Problem is, when you write tests first, especially tight, easy to run unit tests, you will be tempted to write code that pass the tests, not code that does the right thing. Like throwing stuff at your tests and see what sticks.
I never had that problem, but I knew how to code before I started TDD.
A test is not a proof, and you can prove something works without ever even running it. There are also properties of a system which are impossible to test, or are so non-deterministic that a test will only fail once every million times the code is executed. You really need to just learn to prove stuff.
The most complex piece of code I have ever written, as a relevant story, took me a month to prove to everyone that it was correct. We then sent it off to multiple external auditors, one of which had designed tooling that would let them do abstract interpretation with recursion, to verify I hadn't made any incorrect assumptions. The auditors were confused, as the code we sent them didn't do anything at all: it had a stupid typo in a variable name... I had never managed to figure out how to run it ;P. But... they found no other bugs!
In truth, the people whom I have met who are, by far, the worst at this are the people who rely on testing, as they seem to have entirely atrophied the ability to verify correctness by reading the code and modeling it in some mathematical way. They certainly have no typos in their code ;P, but because a test isn't a proof, they always make assumptions in the test suite which are never challenged.
Actually, a test _is_ a proof! Or more specifically, a traditional test case is a narrow, specific proposition. For example, the test `length([1, 2, 3]) = 3` is a proposition about the behavior of the `length` function on a concrete input value. The proof of this proposition is _automatically generated_ by the runtime, i.e., the proof that this proposition holds is the execution of the left-hand side of the equality and observing it is identical to the right-hand side. In this sense, the runtime serves as an automated theorem prover (and is, perhaps, why test cases are the most accessible form of formal reasoning available to a programmer).
What we colloquially consider "proof" are really more abstract propositions (e.g., using first-order logic) that require reasoning beyond simple program execution. While the difference is, in some sense, academic, it is important to observe that testing and proving (in that colloquial sense) are, at their core, the same endeavor.
Interesting. Could you show me a formal proof that can't be expressed in logic (i.e. code) and then tested?
My thought here is that since proofs are logic, and so is code, you can't have a proof that can't be represented in code. Now, admittedly, this might look very different from, say, typical JUnit unit tests, but it would still be a test validating logic. I am not saying every system is easily testable or deterministic, but overall, all else equal, the more tested and testable a system is, the better it is.
IME things that are very hard to test are often just poorly designed.
Consider a function that gets an array of integers and a positive number, and returns the sum of the array elements modulo the number. How can we prove using tests, that this always works for all possible values?
Not discounting the value of tests: we throw a bunch of general and corner cases at the function, and they will ring the alarm if in the future any change to the function breaks any of those.
But they don't prove it's correct for all possible inputs.
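A sketch of the sum-modulo example above, with a hand-rolled property check (libraries like Hypothesis automate this kind of input generation). The properties hold for every sampled input, yet the sample is still not all possible inputs, which is exactly the gap between testing and proof.

```python
import random

def sum_mod(xs, m):
    """Sum of the elements of xs, modulo the positive number m."""
    return sum(xs) % m

# Sample many random inputs and check properties that must hold for
# all of them: the result range, and a split/recombine law.
random.seed(0)
for _ in range(1000):
    xs = [random.randint(-10**6, 10**6) for _ in range(random.randint(0, 20))]
    m = random.randint(1, 10**6)
    k = random.randint(0, len(xs))
    r = sum_mod(xs, m)
    # Python's % yields a result in [0, m) for positive m, even for
    # negative sums, so this range property should always hold.
    assert 0 <= r < m
    # Summing two halves separately and recombining must agree.
    assert r == (sum_mod(xs[:k], m) + sum_mod(xs[k:], m)) % m

print("1000 sampled cases passed")  # ...still only a sample, not a proof
```

A proof would instead argue over all lists and all positive m at once; the tests only ever witness finitely many cases.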
Tests can generally only test particular inputs and/or particular external states and events. A proof abstracts over all possible inputs, states, and events. It proves that the program does what it is supposed to do regardless of any particular input, state, or events. Tests, on the other hand, usually aren't exhaustive, unless it's something like testing a pure function taking a single 32-bit input, in which case you can actually test its correctness for all 2^32 possible inputs (after you convinced yourself that it's truly a pure function that only depends on its input, which is itself a form of proof). But 99.99% of code that you want to be correct isn’t like that.
As an example, take a sorting procedure that sorts an arbitrarily long list of arbitrarily long strings. You can't establish just through testing that it will produce a correctly sorted output for all possible inputs, because the set of possible inputs is unbounded. An algorithmic proof, on the other hand, can establish that.
That is the crucial difference between reasoning about code versus merely testing code.
I agree with that but I would say that if I required formal verification of that kind I would move the proof based rationale into the type system to provide those checks.
I would add that tests can be probabilistically exhaustive (e.g. property-based testing) and can answer questions beyond what proof-based reasoning can provide, e.g. is this sorting of arbitrary strings efficient and fast?
Proofs are arguably still better than tests at evaluating efficiency, at least for smaller components/algorithms in isolation. While there are cases where constant factors that can't be described well in a proof matter, in most cases, the crucial element of an algorithm's efficiency lies in how the complexity scales, which can be proven in the vast majority of cases. On the other hand, relying solely on benchmarking introduces a lot of noise that can be difficult to sort through.
TDD is also great for playing around with ideas and coming up with a clean interface for your code. It also ensures that your code is testable, which leads to improved readability.
You'll know quickly where you're going wrong: if you struggle to write the test first, for example, it's a symptom of a design issue.
That being said, I wouldn't use it as dogma, like everything else in CS, it should be used at the right time and in the right context.
I agree on the dogma aspect. Plenty of times I have abandoned it. However, I did find it very helpful to spend my first couple of years in a strict Extreme Programming (XP) shop. The rigor early on was very beneficial, and it's a safety net for when I feel out of my depth in an area.
I tend to go the other way, I don't use TDD when I am playing around/exploring as much as when I am more confident in the direction I want to go.
Leaving a failing test at the end of the day as a breadcrumb for me to get started on in the morning has been a favored practice of mine. Really helps get the engine running and back into flow state first thing.
The dopamine loop of Red -> Green -> Refactor also helps break through slumps in otherwise tedious features.
I know a team using it to replace ancient, massive mainframe-based systems with modern distributed systems, and the gist is that the language is fine, but mostly ideal for use cases that leverage the Erlang VM (BEAM) stack.
The downside they run into is that the ecosystem isn't there; at least a couple of guys wish they had just used Kotlin/Java for library interoperability, with so much existing code already built and battle-tested for specific purposes.
To put it simply, the BEAM lets you swallow all of your dependent services into a consistent API, no matter network distance or machine dependability. In Python it feels like my main thread, DB, job queue, and OS are all speaking different languages. With Elixir I don't spend much time at all getting different services to work together, at least an order of magnitude less.
Elixir is not perfect, but for me working alone dependency hell was the bottleneck with Python. Now the bottleneck is adding features, which is right where it should be.
I think that's a good point. Our largest pain point with Elixir is definitely the size of the community and the associated dearth of niche libraries. The technology behind it, though, is solid enough that once those libraries exist, things really take off. My team wrote several open source medical libraries for Elixir and we've seen it really expand into the healthcare market.
I think in many cases the ecosystem issues are overblown. For the common 90% of use cases there are battle tested libraries out there.
For the less common ones, we tend to just roll our own which in most cases isn't that bad if you have reference implementations.
I think the most under-appreciated aspect of Elixir is how it helps reduce complexity. And there isn't a silver bullet here, but the tooling, immutability, pattern matching, process-based concurrency model, etc are all design decisions that, IMHO lead to simpler, more robust code.
(Caveat: of course, like any language, you can make a mess of things.)
I wonder how hard it would be to generate synthetic credit card numbers for each subscription service and then just cancel that "card".
I feel there is a whole cadre of consumer tech that is defensive against corporate taxes/tolls on our time. Eg: auto phone tree navigator, only allowing calls from double opted in contacts etc.
Sometimes the company will continue to seek payment and put the missed payments on your credit report.
That should be illegal as well. If people stop paying for a continual service, like a streaming service or a magazine, then the service should just stop; companies shouldn't be able to accrue credit and continue seeking payment, just cancel the service and be done.
If something like a magazine wants a year payment upfront, then let them charge for a full year before the first magazine is delivered.
There are many banks that offer virtual cards. Meaning you can generate unique numbers and individually disable those card numbers.
A related thing: with Revolut you have disposable cards that can only be charged a single time. Unfortunately I have had a bad time trying to use disposable cards. One time I tried it, the merchant did a single reversible charge for like a dollar to verify the card, and then they couldn't charge the actual amount, so the purchase failed. Another time, for a subscription service (I wanted to try their free 30-day trial without forgetting to cancel in time), they apparently got metadata telling them the card was disposable and refused it, so I had to use the non-disposable card number after all.
Can you not instead set the cap at a certain amount? You can do that on privacy, and can also set it to reset the cap after a certain amount of time (for subscriptions)
The tech sector has grown and changed so much. It has gotten much more "professional", which is arguably good, but this in turn promotes a fair amount of "corporate stooge" behavior. I am guilty here for sure; it is really easy to focus on levels, promo packets, and OKRs, especially as you age and responsibility grows, and to forget what made this industry amazing in the first place.
Good reminder to focus on direction and interests and what you feel should be built. Reminds me a bit of the opening section of "The Art of Doing Science and Engineering", which I only came across because I liked other Stripe Press books.
You also meet more interesting and passionate people if you pick a direction vs a destination.
Brie, author of the profile here. Funny you mention Art of Doing Science and Engineering. There was a footnote to You and Your Research in an early draft but it hit the cutting room floor in edits. (Also, I helped get Stripe Press off the ground–including tracking down rights to Art of Doing Science and Engineering–so it warms my heart to hear that's how you first came to the essay/speech).
When I met you at Stripe you seemed to me the person with strategic foresight and iron discipline— the kind that gets endless opportunities without even trying. I was hopelessly floundering by comparison, and not in a good Kevin Kelly way. I don’t know if people will think of you in 300 years (the day is young!) but you were definitely a role model for what discipline and great execution look like.
You describe a way of living that is probably much more common than the ramen scurvy CEO lifestyle, but it doesn't get written about because people want to read about financial success and winning at zero sum games.
The typical "success" archetype is often at the peak of some hierarchy (e.g. CEO) where the vast majority in the game literally cannot occupy the top positions. So in those situations most participants are losers. Sounds like you found a way to quietly opt out of that framing of success e.g. in your time at Stripe.
Thank you for normalizing shiny object syndrome floundering!
I always wondered why Stripe Press was a thing. Why was a financial services company publishing books about the lives of great engineers? I'm very happy you did though, the books themselves are a great read, not to mention they are very beautiful. I really liked "The Dream Machine" in particular
Why did you want to start Stripe Press in the first place? How did you get the support to do it?