
No, even drives written once have started showing a rise in (correctable) errors after 2 years: https://news.ycombinator.com/item?id=43739028

That's a slight rise in ECC corrections, which is entirely expected. Flash storage is expected to rely on error correction as part of normal functioning; it's not an abnormal condition.

I think you're talking about two different things; "adaptives" are usually stored in EEPROM or the MCU's NOR flash, which offers basically better-than-SLC levels of reliability, especially as they're written once and treated as ROM after that.

OptiNAND is an "SSHD" and thus has the same concerns with retention as an SSD. https://en.wikipedia.org/wiki/Hybrid_drive


> moving changes from Windows 95 to Windows NT involved manually doing three-way merges for all of the files that changed since the last drop. I suspect that this manual process was largely automated, but it was not as simple as a git merge.

The first release of git was in 2005, around a decade after Windows 95.


You don’t need git to get something “as simple as a git merge”; diff3 is from 1979 (https://en.wikipedia.org/wiki/Diff3).

Diff3 is from 1979 (https://en.wikipedia.org/wiki/Diff3), so three-way merges (https://en.wikipedia.org/wiki/Merge_(version_control)#Three-...) predate git by decades.


Wow! I am stunned how wrong that feels. I remember adopting git in the first year, and it still feels fairly recent. That it only took 10 years from Win95 to git, and 20 years from git to now, is truly uncanny. Win95 feels like a genuinely old thing and git like a fairly recent thing.

There have been two massive shifts that create before-and-after feelings in tech. One is going from “the computer is that super-typewriter that can send mail” to internet culture, and the second is going from being online on PCs to being always online on smartphones.

Win95 feels like era 1; XP and git were already in era 2.

Once those two changes were done, by around 2010, there's been no game changer; if anything we've regressed through shittyfication (we seem to have fewer social networks than in the original Facebook era, for example, as most of them turned into single-player feed consumption).

Maybe pre and post LLMs will feel like an era change in a decade as well?


Time started moving faster after smartphones began to steal our reflective moments.

I don't know how old you are, but if you are in your 40s it's just because you were a kid when Win95 came out, and time seems longer when you are a kid (less routine, everything new, more attention all the time, etc.)

Three way merges were a thing before 2005... The author was merely comparing with today's tools.

I wonder what percentage of people on HN have ever used subversion or cvs, let alone older systems.

I'm still using Subversion as it serves solo developer needs perfectly.

Only if you don't branch often. The way I code, I branch for every feature or bugfix. Even on my personal projects.

CVS was released in 1990. Subversion was released in 2000.

Google still uses a clone of Perforce internally (and various wrappers of it). Perforce was released in 1995.


Perforce is standard in gamedev currently. As a programmer first and foremost, I prefer git but I've certainly come to appreciate the advantages of Perforce and it's an overall better fit for (bigger) game projects.

I remember the days of NT4 and the guy that would lock a file, leave for the day and you couldn't check it out :D Good times!

Same year I deleted all our customers' websites by simply dragging the hosting folder somewhere into C:\programs or something by mistake... A double click + lag turned into a drag and drop! Whoops!

I was pale as a ghost as I asked for the zip drive.

We had to reboot the file server first, which we did with a swift kick to the power button.

At least today we employ very secure mechanisms, like YAML rollouts of config, to keep things interesting.


I remember moving from SCCS to RCS because it was considered superior.

sccs, I was using it as late as the 90s.

But the percentage is probably small, yes.


Sun used SCCS until they moved to Mercurial in the early 2000s.

And even then, it's easy for merges to turn into chaos; git has no semantic awareness (no surprise there), and sometimes similar patterns will end up collapsed into a single change and conflict.

Funny how fast Git became entrenched as the way of doing things, though. Around 2010 I said in passing, in a forum discussion about how a FOSS project was getting along, “…you’d think someone could send in a patch…”, and I immediately got flamed by several people because no one used patches any more.

Funnily enough, the Linux kernel still uses patches (and of course Git has helpers to create and import patches).

Don’t they get emailed patches from git? Sorry if I’m super ignorant here; it’s interesting to me if they do!

You can use `git format-patch` to export a range of commits from your local git tree as a set of patches. You can then use `git send-email` to send that patch set out to the appropriate mailing list and maintainers (or just do it in one step, send-email accepts a similar commit range instead of patch files). It talks directly to an SMTP server you have configured in your `.gitconfig` and sends out e-mail.

Of course, `git send-email` has a plethora of options, e.g. you'd typically add a cover letter for a patch set.

Also, in the Linux kernel tree, there are some additional helper scripts that you might want to run first, like `checkpatch.pl` for some basic sanity checks and `get_maintainer.pl` that tells you the relevant maintainers for the code your patch set touches, so you can add them to `--cc`.

On the receiving side, as a maintainer, you'd use `git am` (apply mail) that can import the commits from a set of mbox files into your local git tree.
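For anyone who wants to see it end to end, here's a minimal sketch of that round trip (the commit range, output directory, and e-mail addresses below are made up for illustration):

    # contributor side: export the last 3 commits as a patch series with a cover letter
    git format-patch -3 --cover-letter -o outgoing/

    # kernel-specific helpers, run from the kernel tree
    ./scripts/checkpatch.pl outgoing/*.patch
    ./scripts/get_maintainer.pl outgoing/0001-*.patch

    # send the series via the SMTP server configured in .gitconfig
    git send-email --to=some-subsystem-list@vger.kernel.org --cc=maintainer@example.org outgoing/*.patch

    # maintainer side: apply the series from a downloaded mbox
    git am series.mbox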


maybe merging patch files was a thing way before git?

As a comparison, CVS is from 1990, SVN from 2000 (and RCS from 82)

> this manual process was largely automated

Priceless.


The title looked like an AI image generator prompt, and I was curious what the output image would be.

One key point about retention which is not often mentioned, and indeed neither does this article, is that retention is inversely proportional to program/erase cycles and decreases exponentially with increasing temperature. Hence why retention specs are usually X amount of time after Y cycles at Z temperature. Even a QLC SSD that has only been written to once, and kept in a freezer at -40, may hold data for several decades.
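To put rough numbers on the temperature part: retention specs are typically derived from an Arrhenius acceleration model, something like the following (the ~1.1 eV activation energy is just a commonly quoted assumption for NAND retention, not a universal constant):

    AF = exp( (Ea / k) * (1/T_use - 1/T_stress) )    # temperatures in kelvin, k = 8.617e-5 eV/K

With Ea = 1.1 eV, a week of unpowered storage at 55 °C corresponds to very roughly a year at 25 °C, which is why retention testing is done with hot bakes and why cold storage stretches the spec so dramatically.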

Manufacturers have been playing this game with DWPD/TBW numbers too --- by reducing the retention spec, they can advertise a drive as having a higher endurance with the exact same flash. But if you compare the numbers over the years, it's clear that NAND flash has gotten significantly worse; the only thing that has gone up, multiplicatively, is capacity, while endurance and retention have both gone down by a few orders of magnitude.

For a long time, 10 years after 100K cycles was the gold standard of SLC flash.

Now we are down to several months after less than 1K cycles for QLC.


I'm sad that drives don't have a 'shutdown' command which writes a few extra bytes of ECC data per page into otherwise empty flash cells.

It turns out that a few extra bytes can turn one year of retention into a hundred years.


There are programs with which you can add any desired amount of redundancy to your backup archives, so that they can survive any corruption that affects no more data than the added redundancy.

For instance, on Linux there is par2cmdline. For all my backups, I create pax archives, which are then compressed, then encrypted, then expanded with par2create, then aggregated again in a single pax file (the legacy tar file formats are not good for faithfully storing all metadata of modern file systems and each kind of tar program may have proprietary non-portable extensions to handle this, therefore I use only the pax file format).
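A minimal sketch of that pipeline with common tools (the file names, the 10% redundancy level, and the zstd/gpg choices are just examples of the general approach):

    pax -w -x pax /home/me/photos > backup.pax          # archive in pax format
    zstd backup.pax                                     # compress -> backup.pax.zst
    gpg -c backup.pax.zst                               # encrypt -> backup.pax.zst.gpg
    par2create -r10 backup.pax.zst.gpg                  # add ~10% recovery data (.par2 files)
    pax -w -x pax backup.pax.zst.gpg *.par2 > backup.final.pax   # aggregate into one pax file

    # later, after extracting backup.final.pax, verify and repair if needed:
    par2verify backup.pax.zst.gpg.par2
    par2repair backup.pax.zst.gpg.par2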

Besides that, important data should be replicated and stored on 2 or even 3 SSDs/HDDs/tapes, which should preferably be stored themselves in different locations.


Unfortunately, some SSD controllers flatly refuse to return data they consider corrupted; even if you have extra parity that could potentially restore the corrupted data, your entire drive might refuse to read.

Huh?

The issue being discussed is random blocks, yes?

If your entire drive is bricked, that is an entirely different issue.


Here’s the thing. That SSD controller is the interface between you and those blocks.

If it decides, by some arbitrary measurement defined by some logic within its black-box firmware, that it should stop returning blocks at all, then it will do so, and you have almost no recourse.

This is a very common failure mode of SSDs. As a consequence of some failed blocks (likely exceeding a number of failed blocks, or perhaps the controller’s own storage failed), drives will commonly brick themselves.

Perhaps you haven’t seen it happen, or your SSD doesn’t do this, or perhaps certain models or firmwares don’t, but some certainly do, both from my own experience, and countless accounts I’ve read elsewhere, so this is more common than you might realise.


This is correct: you still have to go through the firmware to gain access to the block/page on “disk”, and if the firmware decides the block is invalid, then it fails.

You can sidestep this by bypassing the controller on a test bench though. Pinning wires to the chips. At that point it’s no longer an SSD.


Blind question with no attempt to look it up: why don't filesystems do this? It won't work for most boot code but that is relatively easy to fix by plugging it in somewhere else.

Wrong layer.

SSDs know which blocks have been written to a lot, have given a lot of read errors before, etc., and often even have heterogeneous storage (such as a bit of SLC for burst writing next to a bunch of MLC for density).

They can spend ECC bits much more efficiently with that information than a file system ever could, which usually sees the storage as a flat, linear array of blocks.


This is true, but nevertheless you cannot place your trust only in the manufacturer of the SSD/HDD, as I have seen enough cases where the SSD/HDD reports no errors but nonetheless returns corrupted data.

For any important data you should have your own file hashes, for corruption detection, and you should add some form of redundancy for file repair, either with a specialized tool or simply by duplicating the file on separate storage media.

A database with file hashes can also serve other purposes than corruption detection, e.g. it can be used to find duplicate data without physically accessing the archival storage media.
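One way to keep such a catalog with nothing but standard tools (the paths are examples):

    # build the hash catalog at archive time
    find /mnt/archive -type f -print0 | xargs -0 sha256sum > archive.sha256

    # later: detect corruption by re-reading the media
    sha256sum --quiet -c archive.sha256

    # find duplicate content without touching the archive media at all
    sort archive.sha256 | uniq -w64 --all-repeated=separate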


Verifying at higher layers can be ok (it's still not ideal!), but trying to actively fix things below that are broken usually quickly becomes a nightmare.

IMO it's exactly the right layer, just like for ECC memory.

There's a lot of potential for errors when the storage controller processes and turns the data into analog magic to transmit it.

In practice this is a solved problem, but only until someone makes a mistake; then there is a lot of trouble debugging it, between the manufacturer inevitably denying its mistake and people getting caught up on the usual suspects.

Doing all the ECC stuff right on the CPU gives you all the benefits against bitrot and resilience against all errors in transmission for free.

And if all things go just right we might even be getting better instruction support for ECC stuff. That'd be a nice bonus


> There's a lot of potential for errors when the storage controller processes and turns the data into analog magic to transmit it.

That's a physical layer, and as such should obviously have end-to-end ECC appropriate to the task. But the error distribution shape is probably very different from that of bytes in NAND data at rest, which is different from that of DRAM and PCI again.

For the same reason, IP does not do error correction, but rather relies on lower layers to present error-free datagram semantics to it: Ethernet, Wi-Fi, and (managed-spectrum) 5G all have dramatically different properties that higher layers have no business worrying about. And sticking with that example, once it becomes TCP's job to handle packet loss due to transmission errors (instead of just congestion), things go south pretty quickly.


The filesystem doesn't have access to the right existing ECC data to be able to add a few bytes to do the job. It would need to store a whole extra copy.

There are potentially ways a filesystem could use hierarchical ECC to store just a small percentage extra, but it would be far from theoretically optimal and would rely on the assumption that only a few logical blocks of the drive become unreadable, and that those logical blocks aren't correlated in write time (which I imagine isn't true for most SSD firmware).


CD storage has an interesting take: the available sector size varies by use, i.e. audio or MPEG-1 video (VideoCD) uses 2352 data octets per sector (with two media-level ECCs), while actual data uses 2048 octets per sector, where the extra EDC/ECC can be exposed by reading "raw". I learned this the hard way with VideoPack's malformed VCD images; I wrote a tool to post-process the images to recreate the correct EDC/ECC per sector. Fun fact: ISO9660 stores file metadata simultaneously in big-endian and little-endian form (AFAIR VP used to fluff that up too).

Reed-Solomon codes, or forward error correction in general, are what you’re discussing. All modern drives do it at low levels anyway.

It would not be hard for a COW file system to use them, but it can easily get out of control paranoia wise. Ideally you’d need them for every bit of data, including metadata.

That said, I did have a computer that randomly flipped bits when writing to storage (eventually traced to an iffy power supply), and PAR (a Reed-Solomon forward error correction tool) worked great for getting a working backup off the machine. Everything else I tried would end up with at least a couple of bit-flip errors per GB, which made it impossible.


You can still do this for boot code if the error isn't significant enough to make all of the boot fail. The "fixing it by plugging it in somewhere else" could then also be simple enough to the point of being fully automated.

ZFS has "copies=2", but iirc there are no filesystems with support for single disk erasure codes, which is a huge shame because these can be several orders of magnitude more robust compared to a simple copy for the same space.
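For reference, that ZFS knob is a per-dataset property (the dataset name below is made up); note that it stores full extra copies of each block rather than an erasure code:

    zfs set copies=2 tank/important
    zfs get copies tank/important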


That does sound like a good idea (even if I’m sure some very smart people know why it would be a bad idea)

I guess the only way to do this today is with a raid array?

Because no one is willing to pay for SLC.

Those QLC NAND chips? Pretty much all of them have an "SLC mode", which treats each cell as 1 bit, and increases both write speeds and reliability massively. But who wants to have 4 times less capacity for the same price?


4 times less capacity but 100x or more endurance or retention at the same price looks like a great deal to me. Alternatively: do you want to have 4x more capacity at 1/100th the reliability?

Plenty of people would be willing to pay for SLC mode. There is an unofficial firmware hack that enables it: https://news.ycombinator.com/item?id=40405578

1TB QLC SSDs are <$100 now. If the industry was sane, we would have 1TB SLC SSDs for less than $400, or 256GB ones for <$100, and in fact SLC requires less ECC and can function with simpler (cheaper, less buggy, faster) firmware and controllers.

But why won't the manufacturers let you choose? The real answer is clearly planned obsolescence.

I have an old SLC USB drive which is only 512MB, but it's nearly 20 years old and some of the very first files I wrote to it are still intact (I last checked several months ago, and don't expect it's changed since then.) It has probably had a few hundred full-drive-writes over the years --- well worn-out by modern QLC/TLC standards, but barely-broken-in for SLC.


The real answer is: no one actually cares.

Very few people have the technical understanding required to make such a choice. And of those, fewer people still would actually pick SLC over QLC.

At the same time: a lot of people would, if facing a choice between a $50 1TB SSD and a $40 1TB SSD, pick the latter. So there's a big incentive to optimize on cost, and not a lot of incentive to optimize on anything else.

This "SLC only" mode exists in the firmware for the sake of a few very specific customers with very specific needs - the few B2B customers that are actually willing to pay that fee. And they don't get the $50 1TB SSD with a settings bit flipped - they pay a lot more, and with that, they get better QC, a better grade of NAND flash chips, extended thermal envelopes, performance guarantees, etc.

Most drives out there just use this "SLC" mode for caches, "hot spot" data and internal needs.


Agreed. I have some technical understanding of SLC’s advantages, but why would I choose it over QLC? My file system has checksums on data and metadata, my backup strategy is solid, my SSD is powered most days, and before it dies I’ll probably upgrade my computer for other reasons.

Funnily enough, I just managed to find this exact post and comment on Google 5 minutes ago, when I started wondering whether it's actually possible to use 1/4 of the capacity in SLC mode.

Though what makes me wonder is that some reviews of modern SSDs mention that the pSLC capacity is somewhat less than 25% of the total, like a 400GB pSLC cache for a 2TB SSD:

https://www.tomshardware.com/pc-components/ssds/crucial-p310...

So you get more like 20% of SLC capacity at least on some SSDs


> I have an old SLC USB drive which is only 512MB, but it's nearly 20 years old and some of the very first files I wrote to it are still intact (I last checked several months ago

It's not about the age of the drive. It's about how much time it spent without power.


> If the industry was sane

Industry is sane in both the common and capitalist sense.

It's 2025 and people still buy 256Tb USB thumbdrives for $30, because nobody cares about anything except the price.


To be honest, you can buy a 4TB SSD for $200 now, so I guess the market would be larger if people were aware of how easy it would be to make such SSDs work exclusively in SLC mode.

I want this myself. I remember when the UBIFS module (or some kernel setting) for the Debian kernel was set to MLC instead of SLC. You could store 4x more data, but at the cost of really bad reliability: a SINGLE bad shutdown and your partitions would be corrupted to the point of not being able to boot properly any more, requiring a reflash of the NAND.

Endurance going down is hardly a surprise given that the feature size has gone down too. The same goes for logic and DRAM memory.

I suspect that in 2035, hardware from 2010 will still work, while hardware from 2020 will be less reliable.


I concur; in my experience ALL my 24/7 drives from 2009-2013 still work today, and ALL my 2014+ drives are dead; they started dying after 5 years, and the last one died 9 years later. Around 10 drives in each group. All the older drives are below 100GB (SLC); all the newer ones are above 200GB (MLC). I reverted back to older drives for all my machines in 2021 after scoring 30x unused X25-E on ebay.

The only MLC I use today is Samsung's best industrial drives, and they work, sort of... but no promises. And SanDisk SD cards: if you buy the cheapest ones they last a surprising amount of time; 32GB ones lasted 11-12 years for me. Now I mostly install 500GB-1TB ones (recently, so they've only been running for 2-3 years) after installing some 200-400GB ones that still work after 7 years.


Completely anecdotal, and mostly unrelated, but my NES from 1990 is still going strong. Two PS3’s that I have owned simply broke.

CRTs from 1994 and 2002 still going strong. LCD tvs from 2012 and 2022 just went kaput for no reason.

Old hardware rocks.


> LCD tvs from 2012 and 2022 just went kaput for no reason.

Most likely bad capacitors. The https://en.wikipedia.org/wiki/Capacitor_plague may have passed, but electrolytic capacitors are still the major life-limiting component in electronics.


MLCCs look ready to take over nearly all uses of electrolytics.

They still degrade with time, but in a very predictable way.

That makes it possible to build a version of your design with all capacitors '50 year aged' and check it still works.

Sadly no engineering firm I know does this, despite it being very cheap and easy to do.


For what it's worth, my LCD monitor from 2010 is doing well. I think the power supply died at one point, but I already had a laptop supply to replace it with.

Specifically old Japanese hardware from the 80s and 90s - this stuff is bulletproof

I still have a Marantz amp from the 80's that works like new, it hasn't even been recapped.

As far as I'm aware flash got a bit of a size boost when it went 3D and hasn't shrunk much since then. If you use the same number of bits per cell, I don't know if I would expect 2010 and 2020 or 2025 flash to vary much in endurance.

For logic and DRAM the biggest factors are how far they're being pushed with voltage and heat, which is a thing that trends back and forth over the years. So I could see that go either way.


I also seem to remember reading that retention is proportional to temperature at the time of writing, i.e. the best-case scenario is to write the data while the drive is hot, then store it in a freezer. Would be happy if someone can confirm or deny this.

I know we're talking theoretical optimums here, but: don't put your SSDs in the freezer. Water ingress because of condensation will kill your data much quicker than NAND bit rot at room temperature.

I'm interested in why SSDs would struggle with condensation. What aspect of the design is prone to issues? I routinely repair old computer boards, replace leaky capacitors, that sort of thing, and have cleaned boards with IPA and rinsed in tap water without any issues to anything for many years.

Would an airtight container and a liberal addition of desiccants help?

Sure. Just make sure the drive is warm before you take it out of the container - because this is when the critical condensation happens: you take out a cold drive and expose it to humid room-temperature air. Then water condenses on (and in) the cold drive.

Re-freezing is also critical: the container should contain no humid air when it goes into the freezer, because the water will condense and freeze as the container cools. A tightly wrapped bag, desiccant and/or purging the container with dry gas would prevent that.


A vacuum sealer would probably help to avoid the humid air, too.

What about magnetic tape?

For long term storage? Sure, everybody does it. In the freezer? Better don't, for the same reason.

There are ways to keep water out of frozen/re-frozen items, of course, but if you mess up you have water everywhere.



I definitely remember seeing exactly this.

That's how it has to work. To increase capacity you have to make smaller cells, where charge can more easily diffuse from one cell to another. Also, to make the drive faster, the stored charge has to be smaller, which also decreases endurance. The SLC vs. QLC comparison is even worse, as QLC is basically a clever hack to store 4 times more data in the same number of physical cells - it's a tradeoff.

Yes, but that tradeoff comes with a hidden cost: complexity!

I'd much rather have 64GB of SLC at 100K WpB than 4TB of MLC at less than 10K WpB.

The spread functions that move bits around to even out the writes, and the caches, will also fail.

The best compromise is of course to use both kinds for different purposes: SLC for small main OS (that will inevitably have logs and other writes) and MLC for slowly changing large data like a user database or files.

The problem is now you cannot choose because the factories/machines that make SLC are all gone.


> The problem is now you cannot choose because the factories/machines that make SLC are all gone.

You can still get pure SLC flash in smaller sizes, or use TLC/QLC in SLC mode.

> I'd much rather have 64GB of SLC at 100K WpB than 4TB of MLC at less than 10K WpB.

It's more like 1TB of SLC vs. 3TB of TLC or 4TB of QLC. All three take the same die area, but the SLC will last a few orders of magnitude longer.


SLC is still produced, but the issue is that there are no SLC products (that I'm aware of) for the consumer market.

> Even a QLC SSD that has only been written to once, and kept in a freezer at -40, may hold data for several decades.

So literally put your data in cold storage.


That is literally the origin of the term.

I do wonder why there’s no way to operate QLC as if it were MLC, other than the manufacturer not wanting to allow it.

There is a way to turn QLC into SLC: https://news.ycombinator.com/item?id=40405578


Thanks! I missed this the first time around!

AFAIK all mobile networks use NAT unless you pay a lot more for a special service with a public static IP.

Not that I believe any of this BS in the first place, but I've always found it quite amusing that traditional blown-film plastic bags are being replaced with "reusable" ones... which are also made of the same plastics, except in textile form and thus easily shed fibers everywhere.

You can buy microscopes for pretty cheap if you’d like to look for microplastics yourself. But regardless I’m curious what you think happens to the plastic you use. Where does the little bit you scrape away go when you cut on a plastic cutting board? What happened to the fluffy fleece jacket that’s no longer fluffy? This stuff doesn’t biodegrade so it’s gotta go somewhere.

It's going back where it came from. I really don't give a shit about this new hysterical idiocy.

So it turns itself back into oil and seeps into the well it originated from? You know this sounds like putting your hands over your ears and shouting 'lalala I can't hear you'?

The thing I'm wondering is, if you don't care, why make the effort to comment at all? Clearly you care enough to do so. What are you afraid will happen by merely acknowledging what is the case? Whenever someone presents the finding of facts as hysterical, I'm left wondering who is actually the hysterical one.

The microplastic particles in our air aren't hysterical. They are just there. Research revealing they are present isn't hysterical either, nor is research about the consequences. At most, such research is more or less accurate, or distorted. I'm starting to think you are the one who is hysterical in this matter.

But for what reason? I can think of only three:

you agree with the dangers but find it so overwhelming that you want to shut it down

you fear losing the benefits of plastic and want to undermine any action on the subject

you just can't take any kind of panic, regardless of the reasons and to maintain your sanity, you vehemently push away anything that might otherwise makes you feel alarmed


Where do you think it came from and how does it get back there?

Due to the inevitable march of entropy, sadly nothing really goes back where it came from. Living things are a beautiful and notable exception.

> Detection of microplastics in human tissues and organs: A scoping review

> Conclusions

> Microplastics are commonly detected in human tissues and organs, with distinct characteristics and entry routes, and variable analytical techniques exist.

> In addition, we found that atmospheric inhalation and ingestion through food and water were the likely primary routes of entry of microplastics into human body.

https://pmc.ncbi.nlm.nih.gov/articles/PMC11342020/


It seems obvious to me why the heavier bags are better: they don't immediately blow away to the ocean or wherever else. We're also charged $1.50 for them where I am (or you get a paper bag), so people who want to save $4.50+ on a grocery run (which is a ton of people) will bring their own.

The problem with that is, in places where delivery is ubiquitous, people use the reusable bags the same as they used the single-use bags, and there's no way to return them, so now people are disposing of much more resource-intensive bags the same way they did the single-use ones.

Cloth bags exist.

Okay, feel free to ignore everything about PFAS, etc.

A century of progress is getting destroyed thanks to radical misguided "environmentalism".

Asbestos and lead in petrol and CFCs were also progress. But we decided to progress further to reduce the chance of dying of cancer. And we did!

Asbestos is harmless if not inhaled, and the most dangerous forms of it were banned long ago. White asbestos (chrysotile) is relatively safe especially if used as an encapsulated filler. Here's an interesting study of chrysotile miners, exposed to very high levels of it daily: https://asbest-study.iarc.who.int/

The effects of CFCs are still disputed, but they replaced far more dangerous refrigerants... but somehow people are being convinced into using propane and butane again.

On the other hand, I'll say that leaded petrol was bad, but that's because it was designed to be dispersed into the atmosphere, and the effects of lead poisoning are quite clear.

This microplastics bullshit has not passed the test of time or (real) science. It's not "progress", it's become radical ideology. Here's something which may enlighten you: a ton of articles which claim to have discovered "microplastics" really only infer their presence by detecting the decomposition products of long hydrocarbon chains, which are of course present in polymers like polyethylene, the world's most common plastic; but guess what else has long hydrocarbon chains? Fats and oils. As in biological matter.


Not in China; progress there is picking up pace, powered by solar. China is also preempting the loss of labour as people get older, and is building out fully robotic factories, "dark factories", so called because the lights are turned off, not on, when they are running, as there are no humans on the floor. On the plastics side, China is by far the largest manufacturer of all things plastic, and buys load after load of US natural gas that gets pumped straight into cracking plants to be converted into polymers; but you can be sure that they, and others, are working systematically to find a polymer with the right properties for use that then either breaks down completely or is inherently benign and inert. My main point is that China, and nowhere else, will decide how the whole plastic thing goes, and what we are losing here in the West is agency and credibility.

> But in the end, the 386 finished ahead of schedule, an almost unheard-of accomplishment.

Does that schedule include all the revisions they did too? The first few were almost uselessly buggy:

https://www.pcjs.org/documents/manuals/intel/80386/


According to "Design and Test of the 80386", the processor was completed ahead of its 50-man-year schedule from architecture to first production units, and set an Intel record for tapeout to mask fabricator.

Except for the first stepping, A0, whose list of bugs is unknown (it also implemented a few extra instructions that were dropped in the next revisions instead of having their bugs fixed), the other steppings have errata lists that are not significantly worse than those of most recent Intel or AMD CPUs, which also have long lists of bugs, with workarounds in most cases at the hardware or operating-system level.

That's a rabbit hole I didn't know about re: removed instructions. Thanks for that bit of trivia.

https://www.pcjs.org/documents/manuals/intel/80386/ibts_xbts...


> They thought the camera’s file system was unencrypted, when it was encrypted.

Unfortunately this situation is likely to get more common in the future as the "security" crowd keep pushing for encryption-by-default with no regard to whether the user wants or is even aware of it.

Encryption is always a tradeoff; it trades the possibility of unauthorised access with the possibility of even the owner losing access permanently. IMHO this tradeoff needs careful consideration and not blind application.


This is why I always shake my head when the Reddit armchair security experts say "The data wasn't even encrypted!? Amateur hour!" in response to some PII leak.

Sure, sure buddy, I'll encrypt all of my PII data so nobody can access it... including the web application server.

Okay, fine, I'll decrypt it on the fly with a key in some API server... now the web server had unencrypted access to it, which sounds bad, but that's literally the only way that it can process and serve the data to users in a meaningful way! Now if someone hacks the web app server -- the common scenario -- then the attacker has unencrypted access!

I can encrypt the database, but at what layer? Storage? Cloud storage is already encrypted! Backups? Yeah, sure, but then what happens in a disaster? Who's got the keys? Are they contactable at 3am?

Etc, etc...

It's not only not as simple as ticking an "encrypted: yes" checkbox, it's maximally difficult, with a very direct tradeoff between accessibility and protection. The sole purpose of encrypting data is to prevent access!


I like the approach of mega.nz...

Server stores encrypted blobs. Server doesn't have the keys.

Entire application is on the client, and just downloads and decrypts what it needs.

Obviously your entire application stack needs to be developed with that approach in mind, and some things like 'make a hyperlink to share this' get much more complex.
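A generic sketch of that client-side-encryption idea with everyday tools (this is not Mega's actual protocol, and the upload URL is made up): the server only ever stores and returns ciphertext, and all key handling stays on the client.

    # encrypt locally, then upload only the ciphertext
    openssl enc -aes-256-cbc -pbkdf2 -salt -in notes.txt -out notes.txt.enc
    curl -T notes.txt.enc https://blobstore.example.com/u/12345

    # retrieve: download the blob and decrypt locally with the same passphrase
    curl -o notes.txt.enc https://blobstore.example.com/u/12345
    openssl enc -d -aes-256-cbc -pbkdf2 -in notes.txt.enc -out notes.txt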

Re: encrypting data that would be served via web server: why would anyone bother to encrypt data meant to be shared externally worldwide? It makes no sense to begin with…

Nah bro, you just gotta use homomorphic encryption! /s

That said, encryption at rest is still good in terms of theft or mis-disposal.


This has already happened to Windows users when BitLocker disk encryption is enabled by default and they do something that causes the encryption key to be lost.

You can have the key saved in your Microsoft account.


> You can have the key saved in your Microsoft account.

I find it very hard to believe that those who want their disk encrypted also want Microsoft to have the key.


Microsoft isn't going to release it without a warrant. But you have to trust their security not to leak it.

Unless the JEDI contract is up for renewal.

What does that have to do with bitlocker?
