AMD Ryzen 6000 Series Mobile CPUs Feature Microsoft's Pluton Security (phoronix.com)
107 points by no_time on Jan 5, 2022 | hide | past | favorite | 152 comments



Extremely disappointing.

Also, CES didn't bring any news about the next Threadrippers. :(

They did tease the next Zen architecture, but sadly not many details.

But having Microsoft-designed hardware as a backdoor in my system? Absolutely not.


While listening to the podcast "Moore's Law Is Dead", I heard repeatedly that AMD has no interest in ever launching new Threadrippers again.


That is almost certainly wrong. Threadripper is what AMD is also using for its workstation line of products (eg, https://www.lenovo.com/us/en/thinkstation-p620/ ).

So unless AMD just doesn't want a piece of that pie anymore despite only just entering the market, there's absolutely more Threadrippers coming. No doubt about that at all. AMD isn't going to leave the Xeon-W just entirely unchallenged when AMD is otherwise firing on all cylinders.

What they probably won't do is continue to have different sockets for the "Pro" and "non-Pro" Threadrippers, though. But that'd be different from them just not having any more Threadrippers at all. It'll probably just end up in the same category as Ryzen & Ryzen Pro where it's the same socket, just one is sold to consumers and one is sold to OEMs.


I don't think MLID has said anything besides that there won't be a Zen 3 Threadripper, because Intel still doesn't have anything that comes close to Zen 2 TR.

Now, a Zen 4 Threadripper is extremely compelling because of DDR5 and PCIe 5.0.


I could definitely see AMD not bothering with a Zen 3 Threadripper. The current TR socket mess doesn't look fun to update, and there's a new socket right around the corner anyway.

But AMD is going to want to establish consistency with OEMs still, so Intel not having anything competitive right now is merely a reduction in pressure on AMD, not something that eliminates it entirely. A Zen 3 Threadripper Pro as a result still seems more likely than not, even if consumer Threadripper doesn't get a Zen 3 treatment. I'm sure AMD would also prefer to simplify the chiplets it's fabricating as well. Why keep making Zen 2 chiplets for TR Pro specifically and nothing else?


To be fair, I'd be completely okay with them basically making TR and TR Pro with even more L3 cache and leaving it at that. I'd still buy.


That's definitely not happening. The L3 is part of the chiplet, so they aren't going to take Zen 2 and rev it to have more L3. Rather they'd just swap it out with a Zen 3 chiplet instead, which is far less work for AMD to do. Especially since TR & TR Pro already have way more L3 than anything else (128-256MB L3).

I highly doubt we'll see a V-Cache version of TR/TR Pro prior to the jump to Zen 4, especially since so far the only 3D V-Cache part AMD has officially announced is the 5800X3D, a single-chiplet product ( https://www.anandtech.com/show/17152/amd-cpus-in-2022-ces )


Yep, I've definitely fallen behind on the info there. Looks like you are right.

I do wonder though, what improvements will they bring to the next Threadrippers? There are some rumours about a launch at the start of March 2022; I am very curious what they can unveil. Any theories?


A March 2022 Threadripper product would all but certainly just be a Zen 3 update. So compare the performance gains going from a 3950X to a 5950X or from Epyc 7002 to 7003. Just that but applied to Threadripper (so something like +15% performance)


Thank you. In the HEDT space a 15% improvement is quite good though.

And I am not sure Zen 4 Threadrippers would even be available this year, but who knows. Plenty of rumours. I better start saving up money because I want to invest in a workstation that will last me at least 10 years (minus SSD wear). And Zen 4 / PCIe 5.0 / DDR5 look preeeeeeeetty good.


I don’t understand the logic of waiting so long to launch an already outdated platform when Zen 4 is nearly ready to go. It’s just so late in the product cycle.

I agree that a March launch would mean it would be on Zen 3.


Zen 4 is on TSMC 5nm and the new line of laptop SKUs running Zen 3+ is on 6nm, so I'd guess that they are just waiting for sufficient volumes to release another 7nm Zen 3 product. Epyc & Ryzen almost certainly pull much higher volumes for AMD than TR/TR Pro, and although Ryzen at least has been mostly available for the past couple of months, it was in short supply for quite a while there.

I'd guess they planned on TR / TR Pro being a quicker followup after Epyc 7003 launched, but then chip capacity just wasn't there to justify it. And now it probably is, especially as their new stuff is on a different node.


But from a product perspective if there are still no competitors to the extant 2019 Threadripper SKUs, what's the point of such a minor bump today? I don't buy the capacity argument for the exact reason you mention: it's low volume. That's the whole beauty of the Zen architecture, right? They can shift capacity around after the wafer stage.

The way I see it, Zen 2 Threadripper is still a good enough product for its goals of a shit load of cores. Zen 3 didn't increase core counts, but Zen 4 does. My suspicion is that AMD probably overproduced Zen 2 TR and is letting its supply run down until it can launch another category redefining product built on Zen 4. Plus, it would look even better in their comparison charts.


I occasionally watch the same YouTube channel, and I have not gotten this same impression. Just a month ago, he said that at least Zen 3 Threadripper Pro and Zen 4 Threadripper were still planned[1]. Can you point to the piece of content where he suggests this?

[1] https://youtu.be/TX6bREO3Nd4?t=380


I'm very sorry but I can't. I listen to podcasts while I do chores so ... yeah.

edit: So, I'm sure there's some other older thoughts/info presented on his podcasts, but it's the latest thing I heard from him talking about the subject:

https://www.youtube.com/watch?v=b8gVyjvSZkg&t=5713s (12-minute section)


Sorry if this came across as me saying he is leaking that there's no desire to launch Threadripper. To my knowledge he did not leak such a thing, but he was talking about Threadripper not making sense in a new podcast episode: https://www.youtube.com/watch?v=b8gVyjvSZkg&t=5713s . I have not conveyed this well enough in my original comment.


...What? But why!

Sigh, less and less choice every day.

EDIT: What about this potential leak? https://www.digitaltrends.com/computing/amd-to-release-ryzen...


Probably because they're server CPUs that can be sold for more. It's too bad if this is true; I really want more PCIe lanes, though Threadripper's 128 lanes are overkill, I'd be fine with 40.

Other than that, performance is plenty on a 5950X for all my imaginable use cases.


Silicon supply and engineering resources are a major issue for HEDT platforms.

Threadripper uses a massive amount of high-quality silicon, and Epyc has much better margins.

Supporting HEDT is living hell for Intel/AMD. You're often using all kinds of consumer hardware with otherwise enterprise-grade gear. Addressing every little PCIe device issue and catering to a very, very small market has very little ROI.


Not really, that's not how Threadripper works thanks to the chiplet design. It's not necessarily "high quality silicon" since it has a higher TDP budget. But regardless remember that in this case the 5800X and 5950X are also competing with Epyc for that same 8c chiplet silicon.

For something like Intel's HEDT then yes, you're absolutely correct. That involves HEDT-specific binning and it eats into Xeon-W profits very directly. But for Threadripper it's not nearly as clear cut. In fact it seems more like it's silicon that failed to validate for Epyc since it's half the IO die of Epyc (for the non-Pro Threadripper anyway).

Otherwise you're talking about AMD just taking Ryzen-quality chiplets, putting more of them on a substrate, and selling them at a huge markup.

Take the Threadripper 3970X as just a simple example here. It's 4x 8c Zen 2 chiplets that can hit a peak of 4.5GHz at 280W TDP. Meanwhile the Ryzen 3950X is 2x 8c Zen 2 chiplets that hit a peak of 4.7GHz at 105W TDP. So the 3970X is 2x the silicon of the 3950X, but lower-quality silicon, and AMD charged more than 2x for the 3970X.

That's the brilliance of chiplets and Threadripper. It's Ryzen-class chiplets with half of an Epyc IO die (the IO die being 12nm GloFo means it's not really competing for high-end prices anyway). It's binning they're already doing for their primary product lines, not a specialized HEDT-specific process.


Sure but I'd be paranoid about not having ECC RAM whereas Threadrippers support it officially. But I am sure there's a curated list of Ryzen motherboards that do in fact support ECC...


The Gigabyte B550 VISION motherboards claim to support ECC both on box and on the "key features" page[1]:

  Reliability
  ECC Memory
  To protect against data corruption
  Error Correction Code (ECC) memory corrects errors in your data
  as it passes in and out of memory to ensure reliability for
  critical applications.
[1] https://www.gigabyte.com/Motherboard/B550-VISION-D-rev-10#kf


I just tried out a Vision D-P (B550) and the IOMMU groups were terrible. All PCIe cards were in one big group, so good luck passing through specific hardware to a VM. It might not matter in a workstation or it might be fixed by a BIOS update, but shrug.
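For reference, IOMMU grouping can be inspected from sysfs on any Linux box; a minimal sketch (the sysfs path is the standard kernel one, the helper name is my own):

```python
import os

def list_iommu_groups(base="/sys/kernel/iommu_groups"):
    """Map IOMMU group number -> list of PCI addresses in that group."""
    groups = {}
    if not os.path.isdir(base):
        return groups  # IOMMU disabled or unsupported on this system
    for group in sorted(os.listdir(base), key=int):
        devices_dir = os.path.join(base, group, "devices")
        groups[int(group)] = sorted(os.listdir(devices_dir))
    return groups

# Devices sharing a group generally must be passed to a VM together,
# which is why one big group makes passthrough impractical.
for group, devices in list_iommu_groups().items():
    print(f"group {group}: {' '.join(devices)}")
```

A board with good grouping shows each slot's device in its own group; the situation described above would show every card lumped into a single group.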

I went with an Asus instead. The Asus manual says "ECC memory support varies by CPU" which is questionable for the 5700G APU I went with, and I didn't feel like digging to find out if harder to obtain ECC UDIMMs would be beneficial. (This was for a router / personal server. My workstation is ECC on a libreboot/KGPE)


> My workstation is ECC on a libreboot/KGPE

Is it all-AMD? If it is, could you link to your setup? I am looking to build an all-AMD workstation this year and I need all the PCIe lanes that I can get.


This would usually be a tangent, but it's quite apropos to the main topic. My workstation is the last amd64 hardware without ME/PSP. It's a few generations old - dual Opteron 6380's with 112G of RAM. I couldn't get it to train with 8 sticks of RAM, but it's solid and reliable with 7. Last time I stuck a Kill-a-watt on it, I think it draws around 140W at idle. An alternative to it would be something like the Raptor Talos.

https://libreboot.org/docs/hardware/kgpe-d16.html

https://www.coreboot.org/Board:asus/kgpe-d16

It's got 5 usable PCIe slots (including the physically-flipped "PIKE" one), I think a total of 40 PCIe lanes from both sockets that I believe you could theoretically split out with the maximum amount of bifurcation, since the BIOS is Free Software.

My new Ryzen build is for a personal server that resides at a lower trust level, and targets around 30W draw. B550 motherboards generally have around 28 PCIe lanes going to expansion slots (20 from the proc, 8 from the chipset). The best you can do for number of slots is to bifurcate the main "graphics" x16 into 3 x4 slots (maybe 4 with a non-APU) using a "VROC" card, but the BIOS has to support that (Gigabyte and Asus seem to generally, but check manual). I believe X570 motherboards have a few more PCIe lanes coming from the chipset.

If you really need more PCIe lanes, I hear Threadripper/EPYC is the way to go. But I don't have any personal experience. If you just need more PCIe slots, you can find PCIe x1 -> four slot switch-based splitters inexpensively on ebay/aliexpress.


I see. Thanks a bunch, definitely bookmarking your comment. I have a bit more requirements for hardware but your setup might actually turn out awesome for a server.

> Last time I stuck a Kill-a-watt on it, I think it draws around 140W at idle

Another tangent (sorry!): do you know of good power meters that don't have to sit right at the outlet? I really need several but I want them to have extensions because I want to glue them on my wall and monitor them in real time, not having to stick my ass in the air while trying to crawl under my desk (where all the outlets are), just so I can see the measurements.


I'd probably just get an appropriate power strip and some Kill-a-watts. Alternatively you could look at smart devices with energy monitoring. The TP-LINK HS110 connects to wifi and measures current, power, and total energy with a community-documented local-network protocol [0], but they have been discontinued and prices shot through the roof. The replacement is KP115 but I have no idea if it still uses the same protocol or not. And I have no idea how accurately any of these devices handle weird (eg non-resistive) current waveforms.

[0] it wants to connect to "cloud" as well, but works fine without giving it Internet access.
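For the curious: the community write-ups describe that local protocol as JSON commands over TCP port 9999, obfuscated with a trivial XOR autokey cipher (initial key 171). A sketch of just the obfuscation layer, based on those public reverse-engineering notes (function names are my own, and I can't vouch for the newer KP115 using the same scheme):

```python
def tplink_encrypt(plaintext: bytes, key: int = 171) -> bytes:
    """XOR autokey: each output byte becomes the key for the next byte."""
    out = bytearray()
    for byte in plaintext:
        key = key ^ byte
        out.append(key)
    return bytes(out)

def tplink_decrypt(ciphertext: bytes, key: int = 171) -> bytes:
    """Inverse: XOR with the running key, then advance key to the cipher byte."""
    out = bytearray()
    for byte in ciphertext:
        out.append(key ^ byte)
        key = byte
    return bytes(out)
```

A real energy query would wrap something like `{"emeter":{"get_realtime":{}}}` this way (TCP framing adds a 4-byte length prefix). It's obfuscation, not security, which is part of why the local protocol was so easy to document.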


Thank you. Any observations on whether the CPU performs worse? It's a widely circulated meme that AMD CPUs hugely benefit from RAM as fast as you can get, whereas ECC is a bit slower.


ECC is not inherently slower. Registered modules are slower (by the added latency of the register). ECC-or-not and registered-or-not are two orthogonal features of the DIMM and in theory all four combinations are possible. On the other hand non-registered ECC is somewhat rare (and for some reason often ridiculously expensive) and registered non-ECC probably is not manufactured at all (it does not make much sense).


I see. So I should chase the fastest unregistered ECC modules I can find then? I'll do that when the time comes.



Much appreciated and I bookmarked it. Thanks.


I honestly don't care if my machine miscalculates, I don't run any important workloads. All content is in some cloud, all code is on github, all configuration is on github and by definition everything is hashed and stored in multiple locations.

EDIT: miscalculates as rarely as a computer normally does


Well, that's a solid point. I will still consider ECC RAM for my home servers but you are right that for a workstation it is a bit too paranoid.


Cheers, though I would say that I don't like that ECC isn't the standard. There's nothing but market segmentation stopping ECC from being mainstream, other than that it would make a lot of server gear redundant for a lot of applications. But as I said, it really doesn't matter for my application. For something that should be online for extended periods of time, though, go ECC.


My understanding (I got this from listening to The WAN Show, but I don't have a timestamp off hand) is that they want to stop selling them to consumers directly (rather than to OEMs), presumably because volume was too low to support it.


> But having Microsoft-designed hardware as a backdoor in my system?

I think there is as of yet insufficient data to support that claim. A security processor that guarantees safety for critical data is a good piece of hardware for those that require such functionality, as long as it's optional, not backdoored, not controlled by a remote corporation, has an open API, and interactions with the system can be audited:

> Windows devices with Pluton will use the Pluton security processor to protect credentials, user identities, encryption keys, and personal data. None of this information can be removed from Pluton even if an attacker has installed malware or has complete physical possession of the PC.


"A security processor that guarantees safety for critical data is a good piece of hardware for those that require such functionality"

I am not the first to point out that malware writers are a big beneficiary of such features -- it's a great way to avoid inspection/detection of their code.

"as long as it's optional"

DRM systems ultimately become de facto mandatory, even if they remain optional in theory.

"not controlled by a remote corporation"

Hardware DRM systems ("trusted computing") are inherently controlled by remote corporations; at the end of the day someone has to certify hardware-bound keys, and someone has to revoke leaked keys. Companies routinely try to obscure this crucial detail but if you dig deeply enough in the documentation you will inevitably find it.


> Hardware DRM systems ("trusted computing") are inherently controlled by remote corporations; at the end of the day someone has to certify hardware-bound keys, and someone has to revoke leaked keys.

And that is not bad technology to have in itself; the question is who gets to sign, and it's a political one. There exists in principle an acceptable threshold that various small OS vendors and Linux/*BSD distributions can pass and virus authors cannot, while allowing for some sort of hardware three-finger salute that enables custom self-signed roots for organizations and developers that can handle them safely, i.e. a special use case not required by the majority of the population, who simply trust a vendor.

I don't claim the Microsoft tech allows these (most likely not), but I believe these are the correct demands and criticism we should make, approach it as a political issue.

Fighting ideologically against a technology and ignoring the larger political objectives is foolhardy. If the technology delivers value, some vendor will bundle it with pretty pink buttons and corner the market, forcing you to use it too because everybody else does and you have no other option. It's how we ended up with solid blocks of DRM from Apple in the hands of billions of consumers that won't even allow you to run your own choice of software, let alone alter the operating system.

I just removed a crypto-miner malware from an IT-illiterate friend's windows computer. There are great many people in this category. If we can't fix their computers by flipping a TPM switch that ensures some level of platform integrity, they will simply go out and buy an Apple device, "because it works better". That's their subjective view, good luck teaching them to value software freedom and learn good security practices.


"If the technology delivers value"

The technology does not deliver value for users. It has always been intended to benefit Microsoft and their media partners, with a bit of window-dressing meant to trick users into believing that they somehow gain from it. This is possible only because the current market for personal computers has almost no meaningful competition.

"I just removed a crypto-miner malware from an IT-illiterate friend's windows computer"

...and malware authors will use DRM systems like this to make it harder to detect their malware. Instead of, "Hm, CRYPTOMINER.EXE is spinning the CPU" it will be "Something seems to be spinning the CPU but the platform DRM is preventing me from figuring out what is happening."

"some level of platform integrity"

Except that this system does not ensure platform integrity. Sure, firmware and bootloader signing can protect against malware or at least give users the ability to reset their system to a good state, but we are talking about a DRM system and that is a very different story.

Of course, Microsoft's track record on bootloader security is mixed. They have been willing to allow major Linux distros to get a signed "shim" that can be used to bootstrap grub, but they also made a deliberate and arbitrary decision to forbid vendors of ARM systems from allowing users to disable secure boot. The result is that users who want to run their own bootloader, with whatever risk that entails, have less choice in hardware and are forced to spend more. So while UEFI was generally a win for end user security, it came with a strategic effort by Microsoft to exert greater control over user devices -- something which only benefits Microsoft and which was done only as part of their long-term effort to sell DRM to media companies.

The end of all this will be a world where everything looks like the "mobile" ecosystem or video game consoles -- users will not be allowed to run any software that Microsoft did not approve of unless they pay 4-5x more for a computer that has fewer restrictions. Sure, it will make life harder for malware writers -- assuming the approved software does not have tons of exploits -- but it will also mean that Microsoft's interests never get challenged. The only reason anyone will be allowed to run Libreoffice will be competition authorities, and in all likelihood software will be made available based on a user's region (so EU users get to run Libreoffice, but not US users). It will be a net negative for users and for the next generation of developers, who, rather than learning by staying up late with their own computers, will at best only be able to program on their school's computers and only if they are in a wealthy enough school district.


> I think there is as of yet insufficient data to support that claim.

I don't disagree per se but:

1. I don't give corporations the positive benefit of the doubt. They have done way too many attacks on privacy and I think it'd be naive to think they won't get tempted to have a below-the-kernel backdoor access to everyone's systems. They absolutely would, given the chance. Let's not give them the chance is what I am advocating for.

2. It's not about the advertised good use of the tech. We should first and foremost look for the abuse potential. If it's there then the tech should be dropped. Failing that (because it's too idealistic, I realize that) then we should make double and triple sure our routers and switches are secure and will stop suspicious traffic (if that's even possible for a home user...). Or have systems in place that don't let in every single firmware update.


What if the backdoor is designed by Intel instead of Microsoft?

Spoiler alert, if you have any Intel CPU released after 2006-2008, you have one.

https://libreboot.org/faq.html#intel


Oh I know. That's why I am moving to AMD gradually. But I hear they are not much better. Scary stuff, dude.


AMD's PSP is pretty much the same thing.

In the past few years AMD has started including a BIOS option to disable it. However, I have never seen a convincing explanation of how exactly that option works. The only thing I know is that Linux complains about it at boot on my B450M (from 2019):

    Aug 30 23:52:07 kobold kernel: [    4.811829] ccp 0000:07:00.1: ccp: unable to access the device: you might be running a broken BIOS.
    Aug 30 23:52:07 kobold kernel: [    4.811831] ccp 0000:07:00.1: psp: unable to access the device: you might be running a broken BIOS.
Intel includes no such option, but on the other hand there is stuff like https://github.com/corna/me_cleaner/. In the absence of any detailed information about how AMD's PSP disable option actually works, I guess I would trust this a little more. However it requires getting your hands dirty, attaching a programmer directly to the chip (on the motherboard), and is not without risk.


Yep, and now I am bitterly regretting never picking up electronics skills. :(

I entertained the idea of me_cleaner many times but yeah, I really can't afford to lose any of my machines and I can't trust almost anyone to do it properly except me -- and I can't do it.

Can you clarify on AMD's PSP? Did you start getting the message after you (supposedly) deactivated it in BIOS? Or is the message only visible when it's active? Or is it always visible?


> I really can't afford to lose any of my machines

You might consider picking up a cheap used motherboard from ebay or the like to experiment with, if you're really interested.

> Did you start getting the message after you (supposedly) deactivated it in BIOS?

Yes.


Pretty weird. I'd think the entire point of Intel ME and AMD PSP is that they are completely invisible to the user of the machine. Putting in a BIOS option... kind of sounds like the failed Do-Not-Track HTTP header, which a lot of advertising networks ironically used as one more data point to pinpoint who you are. (lol)


Nah, they're not invisible. They provide a wide range of anti-features that are visible. TPM, DRM, AMT, stuff like that. Of course, the scariest parts are the ones that are invisible, and I have no idea if the BIOS flag disables everything, or just the parts you can see.


Do you have a link that does a deep dive on all the features that the management engines are supposed to provide?


Nope. Wikipedia lists a couple of them... then the other 90% of the article is security vulnerabilities, lol.


Not much better in theory, and the implementation details are not very well documented to say the least, same with Intel ME. Sometimes there are uefi settings that let you turn these off. Me_cleaner can at least cripple the ME. So strange that we have to deal with this, along with many hardware having an independent system running god knows what.


Yeah, I know about it. If I only could do it... :(


I wonder how this is similar/different to the Qualcomm take on it? SemiAccurate seems to not be a fan: https://semiaccurate.com/2021/12/01/qualcomm-8cx-gen-3-too-d...


No wonder they are not a fan. We are rapidly accelerating towards a worst-case dystopia where a handful of companies backed by state power control what you hear, say, and execute on your machine.

I predict at least one major world power will mandate this technology in consumer electronics to be able to connect to the internet by 2030, at its current rate of adoption.


They say it is remotely accessible but present no evidence.

They present device-unique keys as impossible with no evidence despite that being exactly how TPMs have worked for decades.

They didn't even mention what is the biggest concern: firmware. This thing is basically a secure microcontroller, and MS has implied that it can run different software to achieve different use cases (at launch it emulates a TPM). The important question is: does it only load firmware signed by Microsoft? If not, I don't have a big issue with it.


Not much of a security processor if you can throw whatever firmware you want on it and trick the host OS into thinking it hasn't been tampered with.


Tricking the host OS into thinking whatever the user wants it to think is a feature of a security processor.

How to design secure hardware: The system board contains two copies of the firmware, for which the source code is published. The first copy is fully read-only and cannot be modified after manufacture. The second copy is in flash memory. A switch or jumper on the system board determines which copy is used at boot.

Now you can trust your software/firmware (if you could trust it from the factory), because the read-only copy can be used to boot the system in order to flash the read-write copy to a trusted (possibly newer or third-party) firmware version.

This also means that a bad firmware update can't brick your hardware, because you can recover it by using the read-only copy.

There is no requirement for the hardware to prove anything to the OS. If the hardware is compromised, this is not actually possible anyway, because it could lie. What you need is for the hardware to be able to prove something to the user, i.e. that the firmware update or operating system they just installed was actually installed and not ignored or modified by some already-compromised read-write firmware.
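That last idea can be shown with a toy sketch: assume the read-only copy, when selected by the jumper, hashes the read-write flash contents and displays the digest, which the user compares against the value published with the firmware they intended to install. This is purely an illustration, not any shipping design:

```python
import hashlib

def flash_digest(firmware_image: bytes) -> str:
    """What a trusted read-only boot stage could display to the user:
    a digest of the currently flashed read-write firmware."""
    return hashlib.sha256(firmware_image).hexdigest()

# The user compares the displayed digest against the published one.
published = flash_digest(b"firmware v2 contents")
assert flash_digest(b"firmware v2 contents") == published  # install took effect
assert flash_digest(b"tampered firmware") != published     # tampering is visible
```

The trust here flows to the user, not to a remote party: a compromised read-write firmware can't fake the digest, because the read-only copy is the one computing and displaying it.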


nit: in order for something to be read-only it's got to be a mask ROM or otherwise burnt-in (rather than merely undocumented flash), which implies being small. So the read only part really just needs to be a well-documented bootloader rather than a whole BIOS partition.

Also whenever you hear "security" you need to think "whose security?". Like when you go to the airport they're all on about "security", but it's not your security - you yourself are being made less secure by having to disassemble your person at the checkpoint for their security.

Similarly a "security processor", as defined by Big Tech [0], has the goal of making them more secure at the expense of your own security. They can't have you running whatever pesky user-empowering software you want and all that. You might end up believing some of those radical ideas from the 90's!

[0] which includes Big Hardware, which we're unfortunately reliant upon due to economies of scale.


You can totally use a multi-stage loader, storing stage 0 and encryption keys on mask ROM. You have to make sure you don’t use a shoddy crypto implementation (or else you end up with something like Wii boothax), but the general concepts aren’t too out there:

1. Load stage zero from maskrom

2. Stage zero loads stage one from flash along with a public signing key from maskrom

3. Stage zero verifies the signature of stage one from vendor. If it’s not valid, halt.

4. Stage zero jumps into stage one.

5. Stage one and later perform the rest of the attestation and booting process.

The hardest part is bootstrapping, but it’s really not THAT hard…
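The five steps above can be sketched as a toy model. Note that a real design verifies an asymmetric signature against a *public* key in the mask ROM; the HMAC used here is only a dependency-free stand-in (and a weaker one, since a key that can verify can also sign), and every name is made up:

```python
import hashlib
import hmac

# Stand-in for the key material burned into the mask ROM (hypothetical).
MASKROM_KEY = b"vendor-key-burned-into-maskrom"

def sign_stage1(stage1_image: bytes) -> bytes:
    """Done by the vendor at build time; the signature ships in flash."""
    return hmac.new(MASKROM_KEY, stage1_image, hashlib.sha256).digest()

def stage0_boot(stage1_image: bytes, signature: bytes) -> str:
    """Stage zero (steps 1-4): verify stage one from flash, halt if invalid."""
    expected = hmac.new(MASKROM_KEY, stage1_image, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):  # step 3: verify
        raise SystemExit("stage one signature invalid: halting")
    return "jumping to stage one"  # step 4
```

Step 5 (the rest of attestation) would repeat the same verify-then-jump pattern, with stage one checking the next payload, and so on down the chain.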


Storing a public signing key in maskrom is the exact opposite of user empowering hardware! The public key will end up being defined by the manufacturer or the system integrator, leading to the very loss of control we're trying to avoid.

The only two solutions I've seen for preserving user freedom while preventing evil maid attacks are either making it so that all system state can be easily read out and verified by external tools, or allowing for a mutable signing key that can only be changed by waiting an appropriate time delay (say on the order of days).


> in order for something to be read-only it's got to be a mask ROM or otherwise burnt-in (rather than merely undocumented flash), which implies being small.

Not necessarily. Suppose you use ordinary flash but the write lines are physically not connected to it so there is no way to write to it from software.

In theory someone could desolder it and replace the contents, but if the attacker is desoldering your hardware, you've already lost.

> Also whenever you hear "security" you need to think "who's security?".

They try to sell people the "security processor" as a feature. It's important to make everyone aware of what it really is.


> The first copy ... cannot be modified after manufacture

A second flash chip with a write disable line would also work, but that's not what you described. For one difference, it allows for evil maid attacks with an external flasher.

> But they try to sell people the "security processor" as a feature. It's important to make everyone aware of what it really is.

I agree, but it's not productive to just unilaterally define "security processor" the other way and talk past the problem. Either qualify it ("user representing security processor"), or otherwise make it clear that "security" doesn't always imply something good for the user (in fact in consumer marketing material it's more often harmful than good).



> secret.club

> Imagine you want to watch your favorite show on Netflix in 4k, but your hardware trust factor is low? Too bad you’ll have to settle for the 720p stream. Untrusted devices could be watching in an instance of Linux KVM, and we can’t risk your pirating tools running in the background!

Oh no, video platforms don't want me pirating their stuff. Video game companies don't want me to cheat with aimbots and wallhacks. And all it needs is "has this user tampered with the OS to the point that I can't figure out if they've tampered with my process". This is 1984 at long last.


I find this to be missing the point.

No one is arguing against all forms of verification of trust; hell, even fully open public domain content would benefit from simple verification. What people are (rightfully) angry over is the fact that: A: it's not open for anyone to audit/modify, and B: it's a forced dictation upon the customer, not the bad guy.

In essence in the name of "trust" don't trust the customer, even though customers would gladly see individual implementations such as anti-cheat be implemented.


Yes, as a consumer and gamer I would love to see anti-cheat measures that work.

But i will never consent to having the gaming session recorded by a camera or using Microsoft Pluton for that matter.

That is simply a step too far for comfort, and not being able to purchase a CPU without Pluton tells me that this will be forced upon the consumer.


I admit that's the weakest article out of the ones I posted, but it still makes a few good points defusing commonly repeated lines like "I can just patch out the bad stuff", which we all read during the W11 announcement.


I think it's the point that trusted computing is optional. If you don't want to provide that trust by running W11 with the appropriate chain of cryptographic assurances, you're not going to be able to experience services and products that require it - why should Netflix give you a full 4k stream when you've chosen to disable the widevine extension in your browser? They can't tell their media provider "we try our best to prevent piracy" if they do that. The same goes for things like Valorant and the inevitable age where online games require TPM attestation - if someone runs their computer in an untrusted state that makes it nigh impossible for the anti-cheat to figure out they're running aimbot/wallhacks (or other cheats that do best by reading the process's memory), why should they let that computer play, when it could ruin the experience for other players?


> I think it's the point that trusted computing is optional. If you don't want to provide that trust by running W11 with the appropriate chain of cryptographic assurances, you're not going to be able to experience services and products that require it

So, it's important that it's optional, so everybody will do their best to make it mandatory.

> why should Netflix give you a full 4k stream when you've chosen to disable the widevine extension in your browser? They can't tell their media provider "we try our best to prevent piracy" if they do that. The same goes for things like Valorant and the inevitable age where online games require TPM attestation - if someone runs their computer in an untrusted state that makes it nigh impossible for the anti-cheat to figure out they're running aimbot/wallhacks (or other cheats that do best by reading the process's memory), why should they let that computer play, when it could ruin the experience for other players?

Ah yes, of course; just like, in the name of preventing anything that they possibly could, it's totally reasonable for them to demand that you leave your webcam on and stream a screen share and the webcam to their servers at all times, since that will make it extremely difficult to cheat. Privacy and security concerns are irrelevant, since after all they have to do their very best possible job, right?


> it's totally reasonable for them to demand that you leave your webcam on and stream a screen share and the webcam to their servers at all times, since that will make it extremely difficult to cheat.

This is already how tests for school (including higher education) have been done, especially since the pandemic started[0]. In most situations, like playing a game, you can choose to play a different game (perhaps one not tied to being internet connected), but with higher education it's effectively forced on you to get a degree.

0: https://web.respondus.com/using-lockdown-browser-with-a-webc...


"I think it's the point that trusted computing is optional"

It is optional the way the Internet is optional; in theory you can live without it, but it is no longer practical to do so. If companies keep pushing these systems, eventually they will become a requirement all over the place. Microsoft's track record with UEFI bootloader restrictions makes it pretty clear what that future looks like: cheap devices used by the masses will be more heavily restricted, while those who can shell out 3-4x the money can get a computer that at least allows them to run whatever software they want to run.

"why should Netflix give you a full 4k stream when you've chosen to disable the widevine extension in your browser?"

Why should Netflix dictate what hardware and operating system I get to run? A basic design principle of the Web, which Netflix relies on to avoid having to provision set-top boxes for their customers, is that anyone can implement a client without first seeking permission. In an ideal world Netflix would have to respect the basic design principles of the Web and of the Internet in order to benefit from those systems, but we obviously do not live in an ideal world.

The largest companies in tech and media are trying to rewrite the rules of the consumer markets they do business in for their own benefit. For the entertainment companies DRM is a convenient way to avoid copyright laws (i.e. the part where copyrights expire and where fair use is a defense against infringement claims), and for tech companies DRM is a strategic play that allows them to control the devices they sell to users and monetize that control by selling DRM features to media companies. The legal and financial structures in place today encourage this behavior, and the concentration of power and lack of effective competition are making it possible.


This is a balance of power issue, and corporations will always try to shift more power to themselves when the technology to enable that is available.

I don't want Netflix to have the ability to verify the software I'm running before it gives me the stream I paid for. None of the things they could do with that power, such as forcing me to watch ads before the stream I've already paid for, or charging me different prices based on the kind of device I'm using benefit me.

We as technologists should be wary of technologies that further shift the balance of power away from users.


> This is a balance of power issue, and corporations will always try to shift more power to themselves when the technology to enable that is available.

Exactly. This is why I said below that this is a political issue. The question is whether politicians will see big tech as a threat and restrict it, or as a potential ally and merge with its power.


You are right, Netflix and Valorant can choose to require this and the world will go on as before. However, this tech has the potential to put you at the mercy of a handful of digital lords who may or may not let you access your bank's website (if they decide to enforce attestation, just like on phones), or even participate on the internet at all, if you piss them off.

I certainly wouldn't want to live in a world where this is a possibility.


Why don't Netflix and Riot just sell me their own box/console that they trust, instead of requiring TPM devices in my machine?


Because they can. And people won't do anything about it.


> why should Netflix give you a full 4k stream when you've chosen to disable the widevine extension in your browser?

... because we are paying customers, want stuff in standard formats instead of proprietary ephemeral garbage, and we can just torrent it in 3 minutes? ...


So we're presumed guilty of stealing and cheating unless we can prove to The Company that our computers are obedient consumers. And if our own computer makes a mistake and testifies against us, it's our word against a black box.


  - This (whatever it is) will lead to vendor lock in and more incompatibilities as usual
  - It will cause privacy problems, despite little hip kids pretending that caring about privacy is not cool, then again video cards are already full of privacy problems, and just connecting to an online game uniquely identifies you. These are not the end of the world but they are a misfeature.
  - It will cause security problems. Every time someone implements some trash like this that mentions it "has security" in an abstract content-free marketing description, it causes vulns. Cloudflare leaking bank passwords to other websites and the RCE in Intel ME come to mind
  - This is the 1000th time an OS/hardware vendor has proposed a "security" gimmick that we are all forced to put up with because of mindless consumers. The previous 999 things have failed to even remotely achieve their goal.
  - Whatever it is does not conceptually solve game cheating; the multiplayer design meta has been broken and in flux for the last 30 years. Right now anyone can make their own client for a game. Is this allowed? Oh no, it renders shadows with a different set of pixels, what will we do? Oh wait, so do video cards probably (not into 3D so no idea).

> Oh no, video platforms don't want me pirating their stuff.

Video piracy is literally impossible to prevent, so why even bring that up?


> > Imagine you want to watch your favorite show on Netflix in 4k, but your hardware trust factor is low? Too bad you’ll have to settle for the 720p stream. Untrusted devices could be watching in an instance of Linux KVM, and we can’t risk your pirating tools running in the background!

OT: sounds like Emule.


Well, viewers will have their 4k shows one way or the other, so it's not the strongest of arguments. This system deals better with cheaters, that's for sure, but even then, I'd like this trust to work both ways: if the software doesn't trust my computer, that's fine, I could give control, but then my files, network shares etc should be locked away from that software too. In fact this is what I'm currently doing by dual-booting, and separating my Windows network into a separate vlan. It's fine for games to not trust my gaming system, but then I'm not trusting it either with my private stuff.


> NGSCB would facilitate the creation and distribution of digital rights management (DRM) policies pertaining the use of information.

ugh


I think everyone should pump the brakes on being upset until the details actually come out. Presumably this chip is the same one they use on their "cloud" servers, including the stuff that runs bare-metal Linux, so there should be drivers for Linux.

That's a ton of assumptions, all probably wrong, but do we always have to immediately jump to the negative assumptions? Especially when it's meant to interface with the cerberus OCP project.


I suspect most Linux users aren't looking for drivers but rather a way to disable the functionality entirely. The issue with these types of features is that they tend to have unexpected surprises (i.e. back doors, bugs) while not providing enough documentation or access to said functionality to assess them. So it comes down to 'trust us' and it's functionality specified by Microsoft, so... no.


Given Microsoft's history, I think it's safe to assume the worst until proven otherwise.


You don't have to assume anything. Take a look at SafetyNet on android and apply the exact same thing to PCs.


I think it's quite obvious what "chip-to-cloud security" [1, official MS Blog] and "remote attestation" [2, Azure blog] is.

To quote Microsoft:

> However, AS3 must also authenticate the device itself. It does that via a protocol called remote attestation:

> [...] 2. The device signs these values with Pluton’s private ECC attestation key and sends them back to AS3.

> 3. AS3 already has the device’s public ECC attestation key and can therefore determine whether the device is authentic, was booted with genuine software, and if the genuine software is trusted. [...]
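
Mechanically, the quoted protocol is a plain ECDSA sign/verify exchange. Here's a rough sketch of steps 2 and 3 in Python using the `cryptography` package; the in-software key generation and the measurement format are hypothetical stand-ins (on real hardware the private attestation key is provisioned at manufacture and never leaves the chip):

```python
# Sketch of the quoted remote-attestation flow: the device signs its boot
# measurements with a private ECC key, and the server ("AS3" in MS's example)
# verifies them with the matching public key it already has on file.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

# Device side: stand-in for the per-device attestation key held in hardware.
device_key = ec.generate_private_key(ec.SECP256R1())

def device_attest(boot_measurements: bytes) -> bytes:
    """Step 2: sign the measured boot values with the private attestation key."""
    return device_key.sign(boot_measurements, ec.ECDSA(hashes.SHA256()))

# Server side: AS3 already knows the device's public key from enrollment.
as3_known_pubkey = device_key.public_key()

def as3_verify(boot_measurements: bytes, signature: bytes) -> bool:
    """Step 3: check the values really came from the enrolled device."""
    try:
        as3_known_pubkey.verify(signature, boot_measurements,
                                ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

measurements = b"firmware-hash=...;bootloader-hash=..."  # hypothetical format
sig = device_attest(measurements)
assert as3_verify(measurements, sig)          # genuine, unmodified boot chain
assert not as3_verify(b"tampered values", sig)  # any change fails verification
```

The crypto itself is unremarkable; the policy question is what AS3 does with a failed check, which is exactly the lever the rest of this comment is about.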

It is OBVIOUS exactly what this is. This is technology that will progress as follows:

1. Your streaming platform will only work if you are using a Microsoft Pluton hardware-backdoor agent. The blog post literally talks about verifying "genuine" and "trusted" software. That's as close as you can get to saying 'DRM' without explicitly saying 'DRM'.

2. Your banking apps, and eventually websites, will not work unless your device features a Microsoft Pluton hardware-backdoor agent; just like Google's "SafetyNet".

3. Your government is going to introduce QR code check-ins (like those here in Sydney, Australia; which technically carry a prison sentence if you don't comply), and the government QR scanner app will only work on devices with SafetyNet, Microsoft Pluton, etc.

4. Eventually, it will technically be impossible to connect to the internet, or your ISP, unless you are running ring -1 backdoored hardware. Your ISP will 'remotely attest' that your hardware and software is 'trusted'.

5. Think Apple's client-side CSAM scanning is bad? Just wait for governments to require that all OS vendors scan local content (photos, soon text) against a government-controlled secret database of 'illegal' material. To protect the kids, and stop terrorism, of course.

6. Much like DMCA's anti-circumvention clause, attempting to disable Microsoft Pluton, or comparable agents, will become a felony, because you are circumventing a 'control measure'.

[1]: https://blogs.windows.com/windowsexperience/2022/01/04/ces-2...

[2] : https://azure.microsoft.com/en-au/blog/anatomy-of-a-secured-...

P.S: 'The Pluton security processor’s firmware will be updateable through Windows Update along with standard industry control'.


I read this as:

Step 1: Microsoft Pluton

Step 2: ...

Step 3: Eating Babies

You've taken a perfectly legitimate requirement in modern computing environments (the need to secure against attacks end to end, from the software you're running, through the entire ecosystem, even to the supply chain) and extended it into something nefarious.

Sure, it could be used to create a world where no one can run unapproved code. Or it could be used to create a world where you can OPT IN to running approved or unapproved code. e.g. iOS vs Android side-loading.

Consumers will likely retain the choice, on Windows, to override the system and run unsafe code. However enterprise / managed computer systems likely will prevent it. I mean what security team or sysadmin wants office workers to be able to install and run arbitrary code on corporate machines or networks?

Let's try to avoid catastrophizing everything.


> You've taken a perfectly legitimate requirement in modern computing environments: The need to secure against attacks end to end, from the software you're running, through the entire ecosystem, even to the supply chain, and extending it to something nefarious.

If the way to achieve end to end security is by giving full control to someone else then I don't think that's worth it.

It is a catastrophe if people end up losing control of their systems like they have with phones and consoles.


A system that you have total control over is a system that you are totally responsible for administrating.

Most people don't want that. Most people don't have time for that. For them, an Xbox that can browse the web, run Office, and watch streamers, remotely administered by a company with name recognition and trust like Microsoft, is enough. And that even goes for many Hacker News readers; if I had a nickel for every time I've heard something like "Oh, desktop Linux? No thank you, that's for college students with plenty of time on their hands. I'm a professional dev, but I have kids and a mortgage, so I need something that Just Works," I could buy a fancy meal for me and my girlfriend with the proceeds.

A big part of Just Works in today's pervasively interconnected world involves delegating the security of your device to a third party, either your company's IT or Apple, Microsoft, or Google for personal devices.


Remote attestation has been hypothesized about for decades, and its implications have been thoroughly pondered. The entirety of its functionality is to allow remote parties you're interacting with to know exactly what software you're running. This totally destroys the idea of independent parties each running software that represents their own interests, with neutral protocols mediating between them. So yes, it is perfectly reasonable to indict the technology based on its straightforward implication of further exacerbating existing power imbalances.


You're assuming the host/client relationship is one-way, and that the parties are "independent". In many cases I've seen proposed in research, remote attestation is used to secure a single party's app that is spread over the multiple devices they run it on.

For example, if I run an app, which also requires me to run some code in a container in a data center, I as the client, want to remotely attest that the code in the data center is not running in a tampered fashion.

Or perhaps my watch, AR glasses, and phone are collaborating; these three devices come from different manufacturers and are running different software, and I want to ensure that none of them has been tampered with.

And let's not forget IoT devices, which are the killer application I've seen suggested for remote attestation.

"with neutral protocols mediating between them". Protocols aren't implementations. There are cases when I am interacting with someone else and I want them to prove their identity to me, including the identity of their software. Maybe I'm running a supposedly secure chat client in an authoritarian state, and I want to know that the chat client they're running isn't a trojan they got from a government security service.

You don't need remote attestation to get to the bad places you suggest. App Stores have been like that for years without attestation. Consoles have been locked down for decades. Relatively simple, unsophisticated copy protection can prevent most people from running what they want; no fancy crypto protocols needed.


> You're assuming the host/client relationship is one way, and that the parties are "independent".

Yes I am, because that is the alarming case. A main use of the "end to end" integrity you talk about will be from powerful centralized companies right to people's eyeballs and fingertips, thus computationally disenfranchising individuals.

It's cute to be able to get better security properties out of a cloud host [0], but not really germane to the larger societal effects. And to be honest these things sound like a solution looking for problems. Clouds are for non-personal data only. Devices from different manufacturers should still be representing me and therefore not mutually suspicious. The Internet of Trash is a problem due to lack of updates and open protocols, not a lack of centralized control over those devices.

> You don't need remote attestation to get to the bad places you suggest

The ill effects you're describing are created by the original "innovation" of treacherous computing: the locked-down platform. To solidify that regime of control, you need to keep non-locked-down devices out. That is where remote attestation comes in.

Right now if a powerful party that I need to interact with demands to know what software my device is running, I can lie, and make sure my custom client follows what they're expecting so they don't know the difference. They can up the game and lock their client down more and more, but ultimately once someone gets it to spill its secrets, it's fair game for open access. Remote attestation is then a security vulnerability such that I cannot use a third party client and lie about it.

In general, authoritarianism always looks appealing and will always be framed in terms of the benefits it brings. But it is anathema to an open society and will inevitably be abused by bad actors.

[0] furthermore you can never discount that as power coalesces, large businesses won't get exceptions to break various security properties for various necessary-sounding reasons. So independent actors won't even be able to rely on the claimed security properties.


But that is EXACTLY what we're going to end up with. In this case, past performance really does imply future actions.


Really? So I can't run unsigned software on Windows these days?


There were at least 3 attempts by Microsoft in the past to do just this (Windows RT, S, and X).

So while you can still run unsigned software on x86 Windows today, if Microsoft's previous attempts at ARM machines had succeeded, it would indeed have been impossible to run unsigned software on some Windows installations.


I'm in agreement here: on-chip functionality with support from Intel, AMD, and Qualcomm.

This sets off all the red flags, and yes, sure, we can tell ourselves that it will be OK.

Without any mention of the consequences of disabling this, I think it's quite reasonable to assume the worst.

This is functionality that is in no way a net benefit for the consumer, period.


Well, this just sucks. This means Microsoft owns my computer. They can access parts of my CPU without my knowledge. WTF, AMD!



> They can access parts of my CPU without my knowledge.

Let me suggest you look into something called "Windows Update" which seems to be an intentional backdoor/RCE that Microsoft has baked into their operating system.


WU has no hardware-backed way of forcing you to comply. In the absolute worst case you could load a driver or patch the OS from Linux to get it to function as you wanted. Pluton gives MS leverage to make you comply, or you fall out of their favor (Store, DRM, Office).


Depends on the manufacturer. Many of them already include virtual hardware in the BIOS that will install some form of manufacturer spyware from Windows Update.


Most of us concerned about this slow tech-authoritarian march haven't been running MS Windows for decades. The problem is that by treacherous hardware, we'll get forced back into running more proprietary software. Instead of being able to run a Free web browser that renders content according to our own interests ("user agent"), remote attestation will allow websites to force us into running the locked down browser of their choice on a supported platform of their choice.


You have a degree of control over that; you know when updates are installed, the list of updates, you can disable updates. Even if it is not 100% in your control, you have a fair amount.


> This means Microsoft owns my computer.

Yes, that's what Operating Systems tend to do.

Why are you running an OS from a company you don't trust?


Who said they are running windows? The security processor will be present no matter what OS you use.


That's very different from the security processor doing anything if you're not using Windows, though.

If this processor can be disabled in the BIOS, like existing TPMs typically can be, then does this matter? If Microsoft's "hardware backdoor" is only powered on when using Microsoft's OS anyway then what's the problem?

If this can't be easily turned off, then yes that's absolutely a problem. But there's no evidence that's the case, and would be a divergence from existing TPMs in that regard.


How could this co-processor, designed by Microsoft, be able to spy on you while you're running Linux, an OS it doesn't understand, and Linux itself is actively trying to keep itself hidden from such a co-processor.

I'm disappointed in the HN crowd going into hysterics when the reality is simple; no hardware module can spy on an OS that knows exactly how the module works and can actively deny it any access.


Very easily - for example it can just dump any sequences in memory that look like encryption keys and send them to the mothership.

This can be initiated through the actual "owners" of this chip or the next malware authors that exploit this undocumented black box that received zero scrutiny by the public security community.


That's not "very easily" - how is it sending them to the mothership? It needs a network connection, and a wifi stack & WPA keys to connect to things are not something it's likely to have.

The typical backdoor concern for things like Intel's Management Engine is via ethernet, which it definitely does have access to, and where it's a lot easier for it to stand up and make a connection. But wifi requires credentials, or some way for this security chip to inject additional packets. The former isn't really plausible without OS participation, and the latter would be very detectable if it happened.

The owners of the chip can't exactly magic commands directly to the chip, after all. And communication with the outside world is trivially detected unless they go through the extreme cost of giving it its own cell modem or similar.


What you refer to as the "former" completely bypasses the OS, by having direct access to the ethernet controller.

https://en.wikipedia.org/wiki/Intel_Management_Engine

The WiFi could connect to a neighboring device supporting this function; Amazon developed Sidewalk[0], for example. Or: my country has an ISP that lets you connect from any subscriber's wifi, as long as you pass your credentials. The subscribers can turn this off, but it's opt-out.

So those are two actual, already-deployed mechanisms that solve the "how is it sending to the mothership" problem. And I expect more to come - looking at 5G, for example.

[0] https://www.theregister.com/2020/11/24/amazon_sidewalk_opt_o...


>no hardware module can spy on an OS that knows exactly how the module works and can actively deny it any access.

I don't think so. If you skim through the Intel ME wiki article (ME being one such system), it works almost completely independently of the computer, having separate, privileged access to different hardware, like the ethernet card, and doing whatever it pleases - including working while the computer itself is turned off.

https://en.wikipedia.org/wiki/Intel_Management_Engine


It's not "hysterics" - you're misunderstanding the problem. The problem isn't the presence of such functionality on your own computer. As you state, you're free to ignore it [0]. Or even use remote attestation for your own purposes, which could be handy.

The problem we're reacting against is that when these chips exist widely enough, it becomes plausible for others to assert that everyone's computing device has one. At that point it's possible for third parties you want to interact with (eg your bank, government, entertainment provider, or even mundane web service) to start insisting that you use such chip's functionality to prove exactly what software you are running, for their "security" [1]. At which point it becomes impossible to run anything besides approved locked-down proprietary environments when interacting with such services.

As others have said, look at "SafetyNet" on Android. It's new enough that you can still pick banks that don't insist on dictating your computing environment, but with our moribund markets and monkey-see-monkey-do executives, the trend can only go in one direction.

[0] assuming no side channel between it and the network card(s).

[1] which they'll Orwellianly brand as "your security"


Have you ever considered the possibility that SafetyNet, for example, exists because banks want to guarantee the transactions are indeed done by the user?

It all goes back to the concept of the chain of trust. If a company can vouch for the code all the way from them to where it is running, that is an incredible improvement to security. Apple does it for its encryption and startup sequence, having relegated macOS to a read-only, signed volume on your disk that they can cryptographically guarantee has not been modified one bit. That, to me, is a great improvement in the security of our lives, and our banking.


Yes of course, that is their obvious goal. Desiring ever more power to assure safety is one of the common motivations of authoritarianism in general. It's incompatible with a free society where power is distributed throughout many independent parties.


Fundamentally, I agree with you. However, the inability to access a banking system, or any system, without a cryptographic proof of your OS' source is such an overreach. When a company stops accepting faxes to conduct business, do we decry the slide to authoritarianism?


My reference to authoritarianism isn't just a general frustration or condemnation, but rather because this technology specifically facilitates authoritarianism. It increases the ability of the more powerful party in a relationship to dictate the behavior of the less powerful party in a relationship. Sure, more powerful parties can always attempt to dictate such things, but the degree to which they succeed depends on how enforceable those terms are. Remote attestation directly enables such enforcement.

Since central planning is unscalable, this inherently leads to mandates that are ignorant, arbitrary, and capricious. For example, perhaps a bank makes it against their terms to access your account through a virtual machine, due to a lack of understanding technology and a general worry - it's one more moving part and this user is doing something nonstandard that they don't understand, so to them risk goes way up. Yet I myself do all my banking access in a VM, for various security and practical reasons. But still I don't have to worry about any bank actually attempting to enforce such a clause, since unless they really start digging through my system stats with javascript and/or raw sockets, they won't even have any indicator of such. Whereas remote attestation would allow them to cut right through all of that and enforce such a thing (actually it would prevent it by default), thereby (increasingly) preventing me from operating my personal computing environment how I see fit.


You're decrying a technology not for its current benefits, but for its future potential unrealized risks. I doubt banking through a VM will ever be disallowed. There are too many benefits.

Banning banking through a non-signed OS, sure.


It's not a "risk". It is a situation implied by the capabilities of a technology, with straightforward market incentives that will make that situation highly relevant. Your position is akin to asserting that a large asteroid headed directly at earth is merely a "risk" without any specific argument for why the projections are wrong.

And unlike abstractly debating this stuff in the early 2000s, we have already seen what has occurred with platform signing keys! Increasingly locked-down proprietary OSes, arbitrary top-down restrictions on what software can be run, and ever-present centralized control. All in the name of "security", which really means corporate predictability (aka "authoritarianism"). So really, your comment comes after another asteroid has already hit and obliterated the moon, as was predicted by the same projections, and yet you're still treating the straightforward implication as a hypothetical.

And was your response to my example supposed to be a counterargument? I don't see how general VMs would ever be allowed [0]. But sure, assuming they are, go through and replace "VM" with "non-signed OS" if you'd like. So I'm supposed to buy and set up MS Windows [1] to access an online bank, simply to appease some out-of-touch risk assessor?

[0] it would mean verifying arbitrary dom0 OS's (eg NixOS) and arbitrary hypervisors (eg libvirt+kvm)

[1] for which my current security policy is to disallow it from connecting to the Internet after it has touched any personal information


Because Microsoft is not a company that can be trusted. While they limit themselves to their OS it is fine; that is not our business. But once they start to tamper with the hardware that competing OSes run on, they can't be trusted. History shows they will use every available measure to harm the competition.


Exactly.


No, that happens only if you reward them with your money (buy that CPU).


You literally have no other option. Intel, AMD, and Qualcomm have all announced they will be adopting Pluton. Apple devices have black boxes dubbed "security chips" that could very well function the same way if Apple decides to utilize their features to control users.

The world can't run on nerd vanity hardware like RaptorComputing POWER workstations and Honeycomb LX2's


> The world can't run on nerd vanity hardware like RaptorComputing POWER workstations and Honeycomb LX2's

It... Totally can? The only problem with Raptor is that they don't make any cheap hardware. In terms of performance and functionality, they're good.


Ask marcan for more info :) but IIRC Apple's secure element does not do anything by itself and is behind an IOMMU.


Companies run the world then, not governments!


Don't they own it already by taking advantage of TPM? Or well, at least control what software it can run, and whether your installation is "legit".


The TPM is a step in that direction but still misses a couple of puzzle pieces to truly control your machine. For example it may sign the boot measurements but has no introspection into what runs on the machine afterwards.


What about the Intel Management Engine that is more than ten years old? It is a full OS with maximum privileges.


ME was always sold as a sort of premium enterprise feature with no uses on the consumer front.

However, Intel dabbled with a much more sinister technology (SGX) that did exactly what it said on the tin: protecting data from the user of the machine. Unlike Pluton with MS's track record behind it, SGX was always horribly broken, required a license to attest victim machines, and only covered the Intel half of the PC market. The only commercial software I'm aware of that ever mandated SGX is some Blu-ray player for Windows.


Previously on HN

https://news.ycombinator.com/item?id=25191319 “Microsoft Pluton Hardware Security Coming to Our CPUs”: AMD, Intel, Qualcomm (156 comments)


Anytime Microsoft does something for "security" you know it's really about control and surveillance.


Yeah, definitely not the end user's security.


I don't want any hardware designed by Microsoft in my machine. The concept might be good, but I don't trust them not to ruin my experience when using Linux. This will force me towards using a smaller number of niche vendors (good, if they survive that long) or just switching to a Mac (bad).


If I recall, a good portion of the UEFI Secure Boot system is of Microsoft's design. Thankfully it is an optional system in most cases, but one that has definitely caused headaches for Linux users.


What were the headaches? That you had to sign your kernel and add pubkey to BIOS to use the feature?
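For reference, the signing-and-enrollment step described above is roughly the following with `sbsigntools` and `mokutil` (the key names and kernel path here are illustrative, not from any particular distro's docs):

```shell
# Generate a Machine Owner Key pair (illustrative file names)
openssl req -new -x509 -newkey rsa:2048 -keyout MOK.key -out MOK.crt \
    -nodes -days 3650 -subj "/CN=My Secure Boot key/"

# Sign the kernel image with the new key
sbsign --key MOK.key --cert MOK.crt \
    --output /boot/vmlinuz-custom.signed /boot/vmlinuz-custom

# Queue the public key for enrollment; shim's MOK manager
# prompts for the chosen password on the next boot
mokutil --import MOK.crt
```

This assumes a shim-based setup; enrolling a key directly into the firmware's `db` instead is also possible but varies by vendor.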


The usual Microsoft/Intel collusion/mono-culture. I am so tired of this duopoly. AMD just toes the same line to sell hardware.


Would KVM with virtual TPM isolate Win11 enough?
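It is at least mechanically possible today: QEMU can attach a software TPM 2.0 from `swtpm` to a guest. A minimal sketch (the OVMF firmware path and disk image name are illustrative):

```shell
# Start a software TPM 2.0 emulator for the guest
mkdir -p /tmp/mytpm
swtpm socket --tpmstate dir=/tmp/mytpm \
    --ctrl type=unixio,path=/tmp/mytpm/swtpm-sock --tpm2 &

# Boot the Windows 11 guest with UEFI firmware and the virtual TPM attached
qemu-system-x86_64 -machine q35,accel=kvm -m 8G -smp 4 \
    -bios /usr/share/OVMF/OVMF_CODE.fd \
    -chardev socket,id=chrtpm,path=/tmp/mytpm/swtpm-sock \
    -tpmdev emulator,id=tpm0,chardev=chrtpm \
    -device tpm-tis,tpmdev=tpm0 \
    -drive file=win11.qcow2,if=virtio
```

Whether that counts as "isolated enough" is a separate question: the guest sees a TPM, but remote attestation schemes could in principle distinguish a software TPM from a discrete or firmware one.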


Even if you ignore the fact that they'll find a way to shit on Linux [to push WSL], I simply do not trust Microsoft to get anything right in terms of security. [1]

[1] Being alive and security-aware for the last few decades.


I don't want Microsoft in my CPU!


Should've made some noise a year ago when this was first announced lol

A similar story unfolding in the early 00's was thwarted by the tech community shaming Microsoft into dropping the plans (it was named Palladium back then). I have a feeling people were much more vigilant and better informed back then.


It was simply a smaller field at the time. Easier to stay informed when there is just "less" all around you.


So it is like a next-gen TPM to lock down which OSes are blessed to boot?

I guess mainstream Linux distros will be okay, but probably not the homebrew stuff you see pop up on HN every now and again.

Apple does the same with the T2, so why the pitchforks?


Oh well, looks like I'll be going for an ARM machine.


We are past the stage where it can be solved by technological means. This issue is strictly political and economical.

Once this tech reaches a critical mass of adoption, you might have trouble participating in the modern world since your device is not "trusted". Much like how Android's SafetyNet prevents running banking apps if boot-time signatures do not match an approved list.


The Raspberry Pi foundation is working on adding a signed bootloader and trusted computing to their next chip. I'm looking for the source I found, and I thought it was [1] but don't see the text there anymore.

[1] https://www.raspberrypi.com/news/bullseye-bonus-1-8ghz-raspb...


Wayback machine does not show any such changes on a cursory glance.


Qualcomm's ARM machines will have this too.


Qualcomm said they are going to partner with Microsoft.

RISC-V is the future if you want no backdoors.


What's to stop RISC-V CPU manufacturers doing the same?


> AMD is refreshing its mobile processor lineup with the launch of new chips including high-power models aimed at high-performance gaming laptops and mobile workstations and more energy-efficient chips designed for thin and light laptops.

Is it just me or did “mobile” get redefined while I wasn’t looking? To me, a mobile device is a phone. This seems to imply that a mobile device is anything that isn’t a desktop. Above quote from another site btw.


In the context of PCs, mobile processor means laptop processor, for example: https://www.intel.com/content/www/us/en/products/docs/proces...


