> 1. The number of sets per year increased too much, there are too many cards being printed to keep up
My local shop has an entire wall of the last ~70 sets, everything from cyberpunk ninjas to gentlemanly academic fighting professors to steampunk and everything in between. I think they're releasing ~10 sets per year on average? 4 main ones and then a bunch of effectively novelty ones. I hadn't been in a store in years (most of my stuff is 4th edition from the late 1990s). I did pull the trigger on the Final Fantasy novelty set recently though, for nostalgia's sake.
But yeah it's overwhelming, as a kid I was used to a new major set every year and a half or so with a handful of new cards. 10 sets a year makes it feel futile to participate.
One thing that's sorely needed in the official documentation is a "best practice" guide for backup/restore from "cold and dark": you lose your main DB in a fire and are now restoring from offsite backups for business continuity. Particularly in the 100-2TB range, where probably most businesses lie, backup/restore can take anywhere from 6 to 72 hours, often in less-than-ideal conditions. Like many things with SQL there are many ways to do it, but an official roadmap for the order of operations (roles/permissions, schema, data, etc.) would be very useful. You will figure it out eventually, but in my experience the dev and prod DB size delta is so large that many things that "just work" at the sub-1GB scale really trip you up at 200-500GB. Finding out you did one step out of order (manually, or via a badly written script) halfway through the restore process can mean hours and hours of rework. Heaven help you if you didn't start a screen session on your EC2 instance when you logged in.
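To make the "order of operations" point concrete, here's a minimal sketch of a restore runbook written down as code rather than memory, so a mis-ordered step can't cost you hours of rework. The hostnames, database name, and dump filenames are hypothetical, and the commands are only printed (dry run), not executed; adapt them to your own backup layout.

```python
# A "cold and dark" restore as an explicit, ordered list of steps.
# All hosts/paths below are placeholders -- adapt before use.
RESTORE_STEPS = [
    # 1. Roles and tablespaces first: restoring objects fails on ownership
    #    if the roles don't exist yet (from a pg_dumpall --globals-only dump).
    ("restore globals (roles, tablespaces)",
     "psql -h newhost -f globals.sql postgres"),
    # 2. Schema before data, so COPY has tables to load into.
    ("restore schema (pre-data)",
     "pg_restore -h newhost -d appdb --section=pre-data appdb.dump"),
    # 3. Bulk data load, parallelized across jobs.
    ("restore data",
     "pg_restore -h newhost -d appdb --section=data -j 8 appdb.dump"),
    # 4. Indexes, constraints, and triggers last: far faster to build once
    #    the data is in than to maintain row-by-row during the load.
    ("restore post-data (indexes, FKs, triggers)",
     "pg_restore -h newhost -d appdb --section=post-data appdb.dump"),
    # 5. Fresh planner statistics before letting the app back on.
    ("analyze",
     "psql -h newhost -d appdb -c 'ANALYZE'"),
]

def plan(steps=RESTORE_STEPS):
    """Return step names in execution order (dry run, nothing is executed)."""
    return [name for name, _cmd in steps]

if __name__ == "__main__":
    for i, name in enumerate(plan(), 1):
        print(f"{i}. {name}")
```

Running the actual commands from the script (with logging and an abort-on-error policy) instead of typing them by hand is what saves you when you're 40 hours into a 72-hour restore.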
Yes, but did you have to write your own, or did you pull it from an official repo? I'm all for customizing things, but we're a long, long way from pg 8.0; something beyond the bare-bones official pg_dump and pg_restore binaries with their very agnostic and vanilla man pages would be tremendously useful.
Of course that’s preferable, but OP is specifically asking about the cold restore case, which tends to pose different problems, and is just as important to maintain and test.
Offsite replica is only applicable if the cause is a failure of the primary. What if I’m restoring a backup because someone accidentally dropped the wrong table?
nah, on a long enough timeline everything will go wrong. blaming the person who finally managed to drop the table is dumb: if your system can't recover from literally everything that could happen to it, it's not done.
Postgres is not great with off-site replicas unless write volume is low; the replication protocol is very chatty. One of the reasons Uber mentioned when moving to MySQL in their engineering blog.
This is oft quoted, but if you read the posts, Uber discovered they didn't want SQL (or, apparently, transactions etc.), and implemented a NoSQL store that happened to use MySQL as a backend, and that was a much bigger change than moving off PG.
> One of the reasons Uber mentioned when moving to mysql in their engineering blog
If I'm not mistaken, this was in 2016 (that's 10 years next year, time flies when you're having fun) -- which is practically an eternity in IT. I'm no DBA but I'm fairly sure many changes have been made to Postgres since then, including logical replication (which can be selective), parallel apply of large transactions in v16, and so on.
I'm not saying this means their points are invalid, I don't know Postgres well enough for that, but any point made almost 10 years ago against one of the most popular and most actively developed options in its field should probably be taken with a pinch of salt.
> I'm not saying this means their points are invalid, I don't know Postgres well enough for that, but any point made almost 10 years ago against one of the most popular and most actively developed options in its field should probably be taken with a pinch of salt.
Heh, I remember the countless articles after that debacle back then pointing out all the reasons why their migration was entirely pointless and could've been summed up as "devs not knowing the tools they're working with" before starting multi-million-dollar projects to fuel their CV-driven development.
So even if you aren't willing to do so, their rationale for the migration was fully debunked even back then.
If you can have a secondary database (at another site or on the cloud) being updated with streaming replication, you can switch over very quickly and with little fuss.
While I totally agree here on replication/RAID vs. backups, I must say that having even a weak (in terms of hardware resources) replica somewhere in a closet is much, much better than a system with just a single master.
Which is what you must do if minimizing downtime is critical.
And, of course, your disaster recovery plan is incomplete until you've tested it (at scale). You don't want to be looking up Postgres documentation when you need to restore from a cold backup, you want to be following the checklist you have in your recovery plan and already verified.
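On "tested (at scale)": a recovery test is only meaningful if "restore succeeded" is checked mechanically rather than by eyeballing. A minimal sketch, comparing per-table row counts captured from the primary against the restored copy; the table names and counts below are hypothetical, and in practice each dict would come from a query such as `SELECT relname, n_live_tup FROM pg_stat_user_tables` on each side.

```python
def verify_restore(expected: dict, restored: dict) -> list:
    """Compare per-table row counts; return a list of human-readable
    mismatches. An empty list means the check passed."""
    problems = []
    for table, count in expected.items():
        got = restored.get(table)
        if got is None:
            problems.append(f"{table}: missing from restored database")
        elif got != count:
            problems.append(f"{table}: expected {count} rows, got {got}")
    return problems

# Hypothetical counts captured before the disaster vs. after the restore.
primary = {"orders": 1_204_331, "customers": 88_412}
restore = {"orders": 1_204_331}
for problem in verify_restore(primary, restore):
    print(problem)  # prints: customers: missing from restored database
```

Row counts are a crude check (checksums over key columns are stronger), but even this catches the "one step out of order" failures mentioned upthread, and it belongs in the recovery checklist next to the restore commands themselves.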
> in the 100-2TB range where probably most businesses lie
Assuming you mean that range to start at 100GB, I've worked with databases that size multiple times but as a freelancer it's definitely not been "most" businesses in that range.
The Linux desktop hasn't changed appreciably since the advent of Windows 2000, perhaps even NT4. That's 1996, or 29 years ago. XP changed the color of the Start button and rounded the edges, and Windows 8 had a purple theme, but it's been a remarkably consistent design. I think the only reason Microsoft has made any changes to the taskbar is so that the marketing department had something visually different to show consumers, since it's such a central part of the GUI. KDE and XFCE are so similar I often forget which one to install on a new computer.
The only improvement I've seen is on the Mac: the Command+Space launcher, which is functionally like hitting Win and typing the app you want. Graphical file browsers haven't changed since the original Mac and/or Win 3.1. The Mac has never had a good tree view IMO, but they do have a version of it.
The only reason UIs would change at this point is to keep UI/UX folks employed and busy, and give the marketing department something new to talk about.
You don't need to use RESTful JSON to get two computers to communicate with each other, either. You can just implement your own thing, and if someone wants to interface with it, they can write their own adapter! Who needs decades of battle-tested tooling? This can be a Not Invented Here safe space. Why not.
You think you're being sarcastic, but you're missing the point: integrating third-party SaaS tools into your SaaS backend means one MCP server per service, which can quickly add up, not to mention it's bad design. I'm not opposed to the MCP protocol itself; it's just not needed if you're providing a SaaS that talks to many vendors.
Had the iPad not launched immediately opposite it, I can envision a world where HP goes through two or three revisions and has a solid device with its own "personality", much like how Microsoft has their "Surface" line of glued-together tablets and "laptops" which sorta compete with the iPad and MacBook Air even if they hardly market them. The fact that Microsoft eventually succeeded in the space seems to indicate HP could have as well. I can also see the business case where the new CEO isn't interested in rubber-stamping a new product line that's going to lose money for him every quarter for the next three years against the glowing sun that is the iPad. There are better ways to burn political capital as a C-level.
The thing with Windows tablets and Android tablets is that in both cases the software development only has to justify its net increase in spend over just doing phone apps; since HP didn't have a good market of phone apps to begin with, they'd basically have needed to justify the entire software development cost on lower sales.