You can, because those games are the best according to the preferences of interactive fiction connoisseurs, and the preferences of connoisseurs never match those of the masses.
E.g. beer connoisseurs love IPAs, while most people find them way too bitter.
Beer connoisseurs don't love IPAs. Modern IPAs are the most like Budweiser rice beers that you can get in the fancy beer world, so that's what people who aspire to look like connoisseurs prefer. They prefer them to be very bitter, and/or flavored with exotic fruits, because they can't judge quality and rely on distinctness.
I started playing Infocom games before there were online hints or even Invisiclues. I knew one of the authors quite well and resorted to (literally) calling him from time to time :-)
This is a good point! I have personally decided to save some of the all-time greatest games until I am better at text adventures and can enjoy them with fewer hints.
Mailing lists aren't federated. Everyone has to email one particular address at one particular domain; whoever controls that email address + domain can censor/block emails. (That's a good thing when you're blocking spam!)
If you're OK with the fact that mailing lists are somewhat centralized, there are actually a ton of great alternatives to pure mailing lists.
All popular open-source web forums support email notifications, and most of them support posting by email (I know phpBB and Discourse do), and all of them have sitemaps with crawlable archives.
You can run your own mail server and name server on top. The network of mail is very much federated.
In mail we have so many freedoms. We have become so locked into technology that we have to introduce a term like “federation” to signify the interoperability and freedom of a single component. Mail is federation layered upon federation.
The fact that you can just use a mailings list address as a member of another mailing list gives you even more federation possibilities. All with the simplest of all message exchange protocols.
> You can run your own mail server and name server on top. The network of mail is very much federated.
While I completely agree with that in theory (and I also love mail), I don't think it stands the reality test, because email deliverability tends to be a nightmare.
How do you solve this? Do you use a third party SMTP?
I ran multiple mail servers for years until about 10 years ago (moved out of the industry). The deliverability problem, as far as I know, hasn't really changed that much in the last decade. The key was to configure DKIM, SPF, only use secure protocols and monitor the various black/block-lists to make sure you aren't on them for very long. In my experience, if you end up on a few bad lists, and don't react quickly, the reputation of your domain goes down rapidly and it's harder to get off said lists.
You also want some spam filtering, which, these days, is apparently much more powerful with local LLMs. I used to just use various bayesian classification tools, but I've heard that the current state of affairs is better. Having said that, when you've trained the tool, it does a pretty good job.
It's not "plug-and-play", but it's not that hard. Once you've got it up and running the maintenance load goes to almost zero.
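For anyone setting this up today, the DNS side of DKIM/SPF (plus DMARC) is just a handful of TXT records. A rough sketch, where example.com, the selector name, and the key are all placeholders:

```
; SPF: which hosts may send mail for this domain
example.com.                 IN TXT "v=spf1 mx a:mail.example.com -all"

; DKIM: public key published under a selector (key shortened here)
mail._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=MIIBIjANBgkq..."

; DMARC: tell receivers what to do when SPF/DKIM fail, and where to send reports
_dmarc.example.com.          IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc@example.com"
```

The monitoring part is then mostly about reading the DMARC aggregate reports and checking the major block lists.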
> It's not "plug-and-play", but it's not that hard. Once you've got it up and running the maintenance load goes to almost zero.
This is where I disagree. It might not be that hard, but the maintenance is really not zero: as you just described, you need a reputable IP as a prerequisite, plus constant monitoring of block lists.
Just having DKIM, SPF, and DMARC really was not enough, last time I checked, to get delivered to, say, Outlook.
I just realised, and this could be a red herring, that almost all of the domains I've administered were based in Australia. I suppose it's possible that the IP ranges I'm dealing with have a better reputation than those from other countries. I have administered a few domains from US companies and IPs, but they've often been based in known data centres, which may help their cause. I can't really speak to the reliability of hosting a mail server on a consumer / small business IP in the US / Europe / Asia. It's possible that all known, common IPs in these areas have a natural disadvantage when it comes to reputation. I suppose try running a tunnel from your server to a small VPS in a known data centre? Not ideal, but it may help.
It would be annoying if entire US/European/Asian ISP IP ranges were immediately blocked. We should have moved on from that for many reasons unrelated to email.
The monitoring of block lists is much more important than people assume. I haven't looked into it in detail, but it always seemed like the reputation was based on a ratio of number of messages to known bad messages. If you have a moderately busy server, and you manage to keep off the block lists (or at least proactively remove yourself from them), then the reputation gets higher and higher, and the maintenance goes down.
If you're a domain that only receives occasional messages, and you end up on Spamhaus and co, you're gonna have a problem. It seems that reputation at small scale is viral. You need actively good reputation and response time. But, honestly, it seemed that it didn't take more than about 3 months per domain I administered until they were just accepted by the net as valid, good actors.
The fact that it is a nightmare is a bit of a myth. Granted, not everybody can do it, but that's not necessary.
And then there are many mail providers other than Gmail. It's just that nobody cares, plus the fact that a ton of (most?) people were effectively forced by Google to create a Gmail account.
> The fact that it is a nightmare is a bit of a myth. Granted, not everybody can do it, but that's not necessary.
I agree to some extent. But it is more involved than deploying a Discourse instance in my opinion.
> And then there are many mail providers other than Gmail. It's just that nobody cares and probably the fact that a ton of (most?) people were forced to create a Gmail account by Google.
100% agree. This is the tradeoff I went for. I would love for it to be easier to self host but you can definitely use another provider.
It's not about receiving. Receiving is the easy part. It is about the delivery of your own mail.
> you stop giving money to your mail host and get a different one.
I was entertaining the "host your own mail server" thought, I agree that if you don't host it yourself then you can change your provider if it fails you.
Who needs the transmission more - the sender, or the recipient?
Much of the time, when it's for signup verification, especially for a free service, they just write "don't use @live.microsoft.com" underneath the email address box. The user wants to be signed up for the service more than the service provider wants a new user, at least by enough to use an alternate email address. Enough cases like this, and the user quits @live.microsoft.com.
Basically never, because it would require re-standardizing the DOM with a lower-level API. That would take years, and no major browser implementor is interested in starting down that road. https://danfabulich.medium.com/webassembly-wont-get-direct-d...
Killing JavaScript was never the point of WASM. WASM is for CPU-intensive pure functions, like video decoding.
Some people wrongly thought that WASM was trying to kill JS, but nobody working on standardizing WASM in browsers believed in that goal.
People should look at it from the same point of view Google takes with the NDK on Android.
"However, the NDK can be useful for cases in which you need to do one or more of the following:
- Squeeze extra performance out of a device to achieve low latency or run computationally intensive applications, such as games or physics simulations.
- Reuse your own or other developers' C or C++ libraries."
And I would argue WebGL/WebGPU are better suited, given how clunky WebAssembly tooling still is for most languages.
That's the issue with WASM. Very, very few people want a limited VM that's only good for CPU-intensive pure functions. The WASM group has redirected a huge amount of resources to create something almost nobody wants.
I strongly agree. Algebraic types aren't "scary," but "algebraic types" is a bad term for what they are. In all popular languages that support "sum types" we just call them "unions."
Your favorite programming language probably already supports unions, including Python, TypeScript, Kotlin, Swift, PHP, Rust, C, and C++. C# is getting unions next year.
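For instance, a minimal TypeScript sketch of the kind of union the parent means (the names here are made up for illustration):

```typescript
// A union of two shapes, distinguished by a literal "kind" tag.
type Circle = { kind: "circle"; radius: number };
type Square = { kind: "square"; side: number };
type Shape = Circle | Square;

function area(s: Shape): number {
  // The switch narrows the union: each branch sees only its own member.
  switch (s.kind) {
    case "circle":
      return Math.PI * s.radius * s.radius;
    case "square":
      return s.side * s.side;
  }
}
```

The compiler checks the switch for exhaustiveness, which is the practical payoff people actually want from "sum types."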
The article never mentions the word "union," which is overwhelmingly more likely to be the term the reader is acquainted with. It mentions "sets" only at the start, preferring the more obscure terms "sum types" and "product types."
A union of type sets is kinda like a sum, but calling it a "sum" obscures what you're doing with it rather than revealing it. The "sum" counts the possible values in the union, but all of the most important types (numbers, arrays, strings) already have an absurdly large number of possible values, so computing the actual number of possible values is a pointless exercise, and a distraction from the union set.
Stop trying to make "algebraic" happen. It's not going to happen.
> In all popular languages that support "sum types" we just call them "unions."
When I was doing research on type theory in PL, there was an important distinction made between sum types and unions, so it’s important not to conflate them.
Union types have the property that Union(A, A) = A, but the same doesn’t hold for sum types. Sum types differentiate between each member, even if they encapsulate the same type inside of it.
A more appropriate comparison is tagged unions.
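The distinction shows up directly in TypeScript, if you'll forgive a sketch (names invented for illustration):

```typescript
// Untagged union: Union(A, A) = A. This type is just string.
type Collapsed = string | string;

// Sum-type-style encoding: both branches wrap the same payload type,
// but the tag keeps them distinct, so nothing collapses.
type Sum =
  | { tag: "left"; value: string }
  | { tag: "right"; value: string };

function side(x: Sum): "left" | "right" {
  return x.tag; // "left" and "right" remain distinguishable at runtime
}
```

The tagged encoding is exactly the "tagged union" the parent suggests as the more appropriate comparison.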
> Stop trying to make "algebraic" happen. It's not going to happen
It's been used for decades, there's no competitor, and ultimately it expresses a truth that is helpful to understand.
I agree that the random mixture of terminology is unhelpful for beginners, and it would be better to teach the concepts as set theory, sticking to set theoretic terminology. In the end, though, they'll have to be comfortable understanding the operations as algebra as well.
No, seriously, you literally never need to understand the operations as algebra. You just need to know how to use your language's type system.
None of the popular languages call them sum types, product types, or quotient types in their documentation. In TypeScript, it's called a "union type," and it uses a | operator. In Python, `from typing import Union`. In Kotlin, use sealed interfaces. In Rust, Swift, and PHP, they're enums.
There are subtle differences between each language. If you switch between languages frequently, you'll need to get used to the idiosyncrasies of each one. All of them can implement the pattern described in the OP article.
Knowing the language of algebraic types doesn't make it easier to switch from one popular language to another; in fact, it makes it harder, because instead of translating between Python and TypeScript or Python and Rust, you'll be translating from Python to the language of ADTs (sum types, tagged unions, untagged unions) and then from the language of ADTs to TypeScript.
People screw up translating from TypeScript to ADTs constantly, and then PL nerds argue about whether the translation was accurate or not. ("Um, actually, that's technically a tagged union, not a sum type.")
The "competitor" is to simply not use the language of algebraic data types when discussing working code. The competitor has won conclusively in the marketplace of ideas. ADT lingo has had decades to get popular. It never will.
ADT lingo expresses no deeper truth. It's just another language, one with no working compiler, no automated tests, and no running code. Use it if you enjoy it for its own sake, but you've gotta accept that this is a weird hobby (just like designing programming languages) and it's never going to get popular.
If you're so confident that algebraic data types will never become popular, I don't understand why you feel it's so important to convince a few nerds not to talk about them.
I don't think anyone here is advocating making the topic compulsory. There will always be some people who are interested in theory and others who aren't.
> None of the popular languages call them sum types, product types, or quotient types in their documentation.
It seems like you may want to spend a bit more time with the ML family of languages. Sure, you can argue the degree of what constitutes "popular" but OCaml and F# routinely make statistics on GitHub and Stack Overflow. The ML family will represent tuples as products directly (`A * B`), and sometimes use plus for sum types, too (though yes `|` is more popular even among the ML family), which they do call sum types.
> Knowing the language of algebraic types doesn't make it easier to switch between one popular language to another
The point of ADTs is not to build a universal type system but to generalize types into set thinking. That does add some universals, like a sum type should act like a sum type and a product type should act like a product type, but not all types in any type system are just sum types or just product types. ADT doesn't say anything about the types in a type system, only that there are standard ways to combine them. It says there are a couple of reusable operators to "math" types together.
> ADT lingo expresses no deeper truth.
Haskell has been exploring the "deeper truths" of higher level type math for a long while now. Things like GADTs explore what happens when you apply things like Category Theory back on top of Algebraic Data Types, that because you have a + monoid and * monoid, what sorts of Monads do those in turn describe.
Admittedly yes, a lot of those explorations still feel more academic than practical (though I've seen some very practical uses of GADTs), but just because the insights for now are mostly benefiting "the Ivory Tower" and "Arch Haskell Wizards" doesn't mean that they don't exist. The general trend of this sort of thing is that first academia explores it, then compilers start to use it under the hood of how they compile, then the compiler writers find practical versions of it to include inside their own languages. That seems to be the trend in motion already.
Also, just because it's a relatively small audience building compilers doesn't mean we don't all benefit from the things they explore/learn/abstract/generalize. We might not care to know the full "math" of it, but we still get practical benefits from using that math. If a compiler author does it right, you don't need to know mathematical details and jargon like "what is a product type", you indeed just use the fruits of that math like "Tuples". The math improves the tools, and if the language is good to you, the math provides you with more tools and more freedom to generalize your own algorithms beyond simple types and type constructs, whether or not you care to learn the math. (That said, it does seem like a good idea to learn the type math here. ADTs are less confusing than they sound.)
> The Email Verification Protocol enables a web application to obtain a verified email address without sending an email, and without the user leaving the web page they are on. To enable the functionality, the mail domain delegates email verification to an issuer that has authentication cookies for the user. When the user provides an email to the HTML form field, the browser calls the issuer passing authentication cookies, the issuer returns a token, which the browser verifies and updates and provides to the web application. The web application then verifies the token and has a verified email address for the user.
> User privacy is enhanced as the issuer does not learn which web application is making the request as the request is mediated by the browser.
Yeah you could! The only caveat is that either the whole app, or at least the part of the app showing the view you want to replace, would have to be running via the interpreter.
We're very interested in using the interpreter to improve the Swift developer experience in more ways like that.
I bet you could make a ton of money just selling a better dev experience as an xcode add-on to Swift devs without even having the AI component. (But making an app on my phone with AI is unbelievably neat!).
This article proposes two options, one where you add a `<script>` tag in the XML body, which breaks validating parsers, and another approach where you attach CSS to the XML document with `<?xml-stylesheet type="text/css" href="styles.css"?>`, but this only works with `Content-Type: text/xml` and not `application/rss+xml` or `/atom+xml`.
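For concreteness, the second approach is just a processing instruction ahead of the feed root (a sketch; styles.css is whatever stylesheet you serve), which per the above only takes effect when the document is served as text/xml:

```xml
<?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet type="text/css" href="styles.css"?>
<rss version="2.0">
  <channel>
    <title>Example feed</title>
  </channel>
</rss>
```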
It would be great if browsers supported a header (or some other out-of-band signal) that would allow me to attach some CSS + JS to my XML without any other changes to the XML content, and without changing the Content-Type header.
Specifically, I'd love to be able to keep my existing application/rss+xml or application/atom+xml Content-Type header and serve up a feed that satisfies a validating parser, while attaching CSS styling and/or JS content transformations.
Browsers don't want to add new ways of running script. That said, I wonder if `<?xml-stylesheet type="text/javascript" href="script.js"?>` could work. It's kinda weird, but `xml-stylesheet` can already run script via XSLT, so it isn't a new way to run script.
The punchline of this article is that all the implementations they tried (WatermelonDB, PowerSync, ElectricSQL, Triplit, InstantDB, Convex) are all built on top of IndexedDB.
"The root cause is that all of these offline-first tools for web are essentially hacks. PowerSync itself is WASM SQLite... On top of IndexedDB."
But there's a new web storage API in town, Origin Private File System. https://developer.mozilla.org/en-US/docs/Web/API/File_System... "It provides access to a special kind of file that is highly optimized for performance and offers in-place write access to its content."
OPFS reached Baseline "Newly Available" in March 2023; it will be "Widely Available" in September.
WASM sqlite on OPFS is, finally, not a hack, and is pretty much exactly what the author needed in the first place.
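For anyone who hasn't seen the API yet, a rough sketch of what those in-place writes look like. This only runs inside a dedicated Web Worker, since `createSyncAccessHandle` isn't exposed on the main thread, and the file name here is just a placeholder:

```js
// Sketch only: OPFS sync access from inside a dedicated Web Worker.
async function writePage() {
  const root = await navigator.storage.getDirectory();
  const file = await root.getFileHandle("db.sqlite3", { create: true });
  const access = await file.createSyncAccessHandle();

  const page = new Uint8Array(4096); // e.g. one SQLite page
  access.write(page, { at: 0 });     // in-place write at a byte offset
  access.flush();                    // force the bytes to storage
  access.close();
}
```

The sync access handle is what lets a WASM SQLite VFS do real seek/write/flush semantics instead of emulating them over IndexedDB transactions.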
We do see about 10x the database row corruption rate w/ WASM OPFS SQLite compared to the same logic running against native SQLite. For read-side cache use-case this is recoverable and relatively benign but we're not moving write-side use-case from IndexedDB to WASM-OPFS-SQLite until things look a bit better. Not to put the blame on SQLite here, there's shared responsibility for the corruption between the host application (eg Notion), the SQLite OPFS VFS authors, the user-agent authors, and the user's device to ensure proper locking and file semantics.
Yeah, I did fail to mention OPFS in the blog post. It does look very promising, but we're not in a position to build on emergent tech – we need a battle-tested stack. Boring over exciting.
Not sure anything in the offline-first ecosystem qualifies as "boring" yet. You would need some high-profile successful examples that have been around for a few years to earn that title
Maintenance mode doesn't mean "this is so mature we don't have anything else to add", it means "we don't want to spend any more time on it so we'll only fix bugs and that's it".
Some notable companies using Replicache are Vercel and Productlane. It's a very mature product.
The Rocicorp team have decided to focus on a different product, Zero, which is far less "offline-first" in that it does not sync all data, but rather syncs data based on queries. This works well for applications that have unbounded amounts of data (ie something like Instagram), but is _not_ what we want or need at Marco.
The majority of the cost in a database is often serializing/deserializing data. By using IDB from JS, we delegate that to the browser's highly optimized native code. The data goes from JS vals to binary serialization in one hop.
If we were to use OPFS, we would instead have to do that marshaling ourselves in JS. JS is much slower than native code, so the resulting impl would probably be a lot slower.
We could attempt to move that code into Rust/C++ via WASM, but then we have a different problem: we have to marshal between JS types and native types first, before writing to OPFS. So there are now two hops: JS -> C++ -> OPFS.
We actually explored this in a previous version of Replicache and it was much slower. The marshalling between JS and WASM killed it. That's why Replicache has the design it does.
I don't personally think we can do this well until WASM and JS can share objects directly, without copies.
You really can't go wrong browsing our list of the best games of all time. https://ifdb.org/search?browse
All of the top-rated games have walkthroughs or other hints for when you get stuck. My top advice for new players: use the hints.