“Microsoft Pluton Hardware Security Coming to Our CPUs”: AMD, Intel, Qualcomm (anandtech.com)
201 points by vanburen on Nov 23, 2020 | 156 comments



I worked extensively with Pluton when I was employed on Azure Sphere (an IoT platform marketed as highly secure, composed of a Linux-based OS, an ARM SoC, and a cloud service). I might be able to answer questions about this.

Here’s a blog post by the engineering lead on Azure Sphere that discusses Pluton: https://azure.microsoft.com/en-us/blog/anatomy-of-a-secured-...

Disclaimer: I still work at MSFT but in a different org.


The pressing concern for me: what does this mean for non-Windows operating systems running on Pluton-equipped systems? Will there be a possibility for non-Windows software to use Pluton's features?


I can only comment on the technical details I know of, not the business objectives of the parties involved.

From a technical standpoint, Azure Sphere's OS was built on Linux. As far as I know, there isn't anything Windows-specific about Pluton. Pluton was a separate (heavily modified) ARM M4 core that we interfaced with from the main A7 core via a secure mailbox channel, which was again OS-agnostic.


Frustrating that the announcement was made with so few technical details easily findable.

This kind of decision, to use an ARM core, seems pretty questionable. That's how things have always been done, but it feels like another UEFI/FAT32 situation, dragging in old, encumbering legacy baggage with big IP implications, when other options (RISC-V) are available.

It feels like this decision is being made literally one year too soon, fixing the old and archaic into place.


If this is intended to be a secure enclave, then it needs to be pretty much a black box with a minimal interface to the outside world. Changing what is inside the box should then be possible at any time. On the other hand, why does it even matter that you know the CPU instruction set that's used inside the box when you can't ever have direct access to it?


It means that once most CPUs have the chip, they'll execute order 66 and other OSes will be blocked.


A TPM integrated into the CPU makes sense (and I am puzzled why TPMs aren't a standard feature of all motherboards given the modest cost). But what about that diagram in the article with a link to the cloud? Will this thing phone home outside of the control of the OS?


In Azure Sphere, Pluton didn't do any direct network communication; that was all handled by the main core. Also, there was no cellular connectivity, so the whole system depended on user interaction to get online.

When the main core wanted to talk to the Azure Sphere cloud service (from Linux userland), it would go through a remote attestation process that involved Pluton. Pluton can securely track what software was booted on the main core (called "measured boot") and it basically sends a hash of that to the cloud to prove to the cloud what software is currently running.

So I imagine the chip-to-cloud thing they're talking about is this remote attestation protocol.
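
To make that flow concrete, here's a minimal conceptual sketch (hypothetical names, and a shared MAC key instead of the asymmetric device keys real attestation uses, just to keep it short): the device hashes what it booted, signs the measurement, and the service only trusts devices whose measurement matches a known-good software set.

    import hashlib
    import hmac

    # Hypothetical device secret; on real hardware this never leaves the security core.
    DEVICE_KEY = b"key-fused-into-silicon"

    # Cloud side: measurements (hashes of boot components) considered known-good.
    KNOWN_GOOD = {hashlib.sha256(b"bootloader-v1" + b"kernel-5.4" + b"rootfs-v42").hexdigest()}

    def device_attest(boot_components):
        # Device side: hash what was booted and MAC the result with the device key.
        measurement = hashlib.sha256(b"".join(boot_components)).hexdigest()
        tag = hmac.new(DEVICE_KEY, measurement.encode(), hashlib.sha256).hexdigest()
        return measurement, tag

    def cloud_verify(measurement, tag):
        # Cloud side: check the MAC and that the measurement is a known-good one.
        expected = hmac.new(DEVICE_KEY, measurement.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, tag) and measurement in KNOWN_GOOD

    m, t = device_attest([b"bootloader-v1", b"kernel-5.4", b"rootfs-v42"])
    print(cloud_verify(m, t))  # True only if the booted software matches a known-good set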

Also, it's possible the term "Pluton" has been expanded to refer to more than just the M4 chip we used in Azure Sphere.


" it basically sends a hash of that to the cloud to prove to the cloud what software is currently running."

oh dear


I guess I’m wondering how Pluton and SGX coexist...


From what I understand, Pluton is more in the vein of TPMs, so able to store and handle your cryptographic keys and to do measurements during your boot process. SGX is a hardware feature that allows an app to run securely (i.e. without any possible interference from other apps or even the OS) in a hardware-enforced enclave. Completely different tasks and use cases, which might overlap: maybe you can use keys held by Pluton from within an SGX enclave?


I think you should assume Pluton will be used instead of SGX, which is so broken nobody has any confidence in it any more.


fTPMs have been standard for a few years now. As I understand it, this Pluton thing is mostly about having a fully "hardware" TPM inside the SoC instead of a "firmware" one.


It's a mix. The keystore is apparently in hardware but there is Pluton firmware to manage it.


Yes, of course management firmware is everywhere. Having keys in special memory that is literally only connected to fixed-function crypto HW blocks is what makes something a "hardware" security thing.


Greetings!

- Was Pluton based on an RTOS, or does it run bare-metal on the M4?
- Is the architecture on the i.MX8-based Sphere the same as the one on the MT3620?
- Does the Security Subsystem running in the Cortex-A's secure world have any relationship with Pluton? Is the Security Subsystem running on top of Sphere's modified Linux kernel like the normal world is?

Thanks, cheers!


1. Bare-metal.
2. I only worked with the MT3620, so I cannot comment on others.
3. Pluton would boot the A7’s Secure World, which would then boot the A7’s normal world. Secure World and Pluton interfaced regularly, but they’re fundamentally different in code and purpose.

Hope that helps!


Will this be virtualisable so multiple VMs sharing a host will see separate, independent devices?

On desktops and laptops, will this device have a hardwired user-presence sensor, like Yubikeys do?

Would this device be performance-oriented enough to, for example, terminate SSL? I gather TPMs can, but only unhelpfully slowly [1]

Would it be performance-oriented enough to perform disk encryption? What about memory encryption?

[1] https://blog.habets.se/2012/02/Benchmarking-TPM-backend-SSL....


I’m pretty sure Azure Sphere used Pluton to do encryption for SSL. I don’t have any numbers, but one of the goals of Pluton was to accelerate crypto operations. But this was for a microcontroller context so I’m not sure about desktop/laptop class performance.

I don’t think Pluton was used for disk or memory encryption in Azure Sphere, but I believe the possibility was discussed.

I’m afraid I don’t have anything more than speculation for the rest.


1. Is this specific to Azure Sphere CPUs? Or are general-purpose Intel CPUs going to have this capability?

2. If the latter, given that "Every piece of software on an Azure Sphere device must be signed by Microsoft," what does the OS interface look like?


1. I assume this is not just for Azure Sphere since the goal when I was there was to be a low-cost, low-energy platform. Also the announcements make it sound like a general offering.

2. Pluton can check the signature of software before booting it on the A7 core.

Hope that helps!
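
To illustrate point 2 with a minimal sketch (illustrative only; Azure Sphere's actual image format and algorithms aren't something I can speak to in detail): the idea is that a public key is baked into the boot ROM/fuses, and control is only handed to an image whose signature verifies against it.

    # Sketch of verify-before-boot with an Ed25519 key pair (pip install cryptography).
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    signing_key = Ed25519PrivateKey.generate()   # held by the vendor's build/signing pipeline
    public_key = signing_key.public_key()        # baked into the boot ROM / fuses

    image = b"a7-firmware-image"
    signature = signing_key.sign(image)          # produced when the image is signed

    def boot(image, signature):
        try:
            public_key.verify(signature, image)  # refuse to run unsigned or modified code
        except InvalidSignature:
            return "halt"
        return "jump to image"

    print(boot(image, signature))                # "jump to image"
    print(boot(image + b"tampered", signature))  # "halt"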


Can you explain why an end-user would need this in practice?


So... this basically means swapping your CPU gets rid of anything you stored in its "TPM"? Or can it be backed up to the TPM of your mainboard and restored to the new one you install?


I think the whole idea here is that Pluton will be integrated inside of the same physical chip as the CPU. So physically swapping CPUs would swap your Pluton core too.

But the Pluton I know of didn't really have any writeable storage. It had some special ROM and fuses that it used internally for its private keys, but that's basically it.


Did all units share the same set of keys? Isn't that an attack vector?


A previous HN link is here -- https://news.ycombinator.com/item?id=25131431 -- which links to MS's original press release -- https://www.microsoft.com/security/blog/2020/11/17/meet-the-....

That article explicitly states that it was designed originally for the Xbox. I worry that this is going to be a very anti-consumer, anti-free-speech, DRM-heavy chip that MS wants to popularise as an alternative to the (still hated in some circles) TPM. Why else would they design it for the Xbox, of all things? Is it aimed at stopping speculative execution attacks on a cloud server, or providing Level 4 DRM to Widevine's as-yet-unannounced competitor?


So, the Trusted Platform Module itself isn't a DRM solution. It's a chip that hangs off the LPC/ISA bus and holds a crypto key generated from boot stage hashes that your BIOS, bootloader, and operating system provide to it. The idea is that all of those hashes together form a key that would change if any stage were tampered with, and that by encrypting things with the key you can prove that those particular things haven't changed.

It's not particularly practical to build a DRM scheme out of a Trusted Platform Module, notably because the key attestation the TPM provides audits a particular combination of boot stages, not a particular piece of hardware. DRM vendors don't care about you updating your firmware, but they do care about videos being locked to a particular authorized piece of hardware. If you had a TPM-based DRM, you'd deauthorize your video downloads by just updating your BIOS, while videos you passed from one person to another on the same OS version would play just fine.
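
For the curious, the "measurement" part is just hash chaining: each boot stage extends a Platform Configuration Register with a hash of the next stage, so the final value depends on every stage in order. A rough sketch (simplified; a real TPM keeps the PCRs inside the chip and exposes extend/quote/seal commands for them):

    import hashlib

    def extend(pcr, data):
        # PCR_new = H(PCR_old || H(data)); the register can only be extended, never set directly.
        return hashlib.sha256(pcr + hashlib.sha256(data).digest()).digest()

    pcr = b"\x00" * 32  # reset value at power-on
    for stage in (b"firmware", b"bootloader", b"kernel"):
        pcr = extend(pcr, stage)

    tampered = b"\x00" * 32
    for stage in (b"firmware", b"evil-bootloader", b"kernel"):
        tampered = extend(tampered, stage)

    print(pcr != tampered)  # True: changing any stage changes the final value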

I imagine Pluton is trying to be a competitor to Intel ME or AMD PSP, which are things you can use to isolate software running on shared hardware. For example, Intel ME provides hardware support for Intel Software Guard Extensions, which is used to isolate DRM from the host operating system. AMD has something similar with Secure Encrypted Virtualization, which uses the PSP to set up different memory-encrypted containers for each VM that higher security rings can't access. In this case, locking down PCs from arbitrary code, like an Xbox, isn't really on the menu. What they're looking to do is carve out space in Ring 3 that Ring 0 can't touch.


> It's not particularly practical to build a DRM scheme out of a Trusted Platform Module ... What they're looking to do is carve out space in Ring 3 that Ring 0 can't touch.

Which is precisely what you would want if you were building a DRM scheme - you just aren't being imaginative enough. It's always important to keep in mind that bad actors are typically just as smart and capable as you are.

User hostile practices across the board benefit greatly from the ability to attest to the precise combination of binaries that were booted. Locked down devices are built upon that foundation - no custom ROMs, no jailbreaks, walled garden app stores, and DRM.

Unfortunately, those capabilities are a fundamental building block for securing devices in general. The same technology that can be used by an abusive manufacturer, publisher, or government to secure a device against the user can also be used by the user to secure the device against others. The key difference is in who holds the keys for the root of trust.

(To that end, some modern secure boot implementations manage to get this bit right by allowing you to specify your own set of public keys before locking down the UEFI interface with a password.)


Right, and this is why people are concerned: it puts DRM in a black box that is "untouchable", which is kind of the holy grail of the obfuscation that DRM requires.


But that would also make your device even more uniquely identifiable which is a massive security flaw in my opinion.

edit: I think it is plainly incorrect to brush off fears about DRM deployment and device lockdown. This technology was specifically invented for it; there is evidence, and there are direct statements from manufacturers about this.


I see Pluton more as a "competitor" to Apple's Secure Enclave Processor and Google's Titan chip, and getting rid of the nightmare that TPM was/is.


I thought that specific implementations had issues in the past but that the concept of a TPM in general was fine?

The Intel ME and AMD PSP, on the other hand, are proper nightmares. For that matter, so is any other "security co-processor" that operates as an unauditable black box below ring 0 (presumably this applies to both Apple's and Google's solutions).


The problem with a TPM is that it's not an integrated chip; you can easily intercept messages going to and coming from the TPM.


Discrete TPMs have been going out of fashion; fTPM (firmware TPM, i.e. a soft TPM located in the ME/PSP) has been standard for a few years now.


Yeah, I've read the fTPM paper that made use of SGX, but it sounded like it had some limitations (needed fuses to prevent rollbacks, etc.).

From the article it also looks like Pluton will implement the TPM API, but I guess that's just to remain compatible.


Why does MS full-disk encryption require a TPM?


To solve a chicken-and-egg problem. In order to enter a password, you need to boot a full OS (with keyboard and display drivers, for example); but to decrypt the OS boot drive you need a password (or more precisely, some key material derived from the password). So a copy of that key material is stored in the TPM, and provided to software only if measured boot goes as usual (i.e. this is not a Linux system booting from a USB thumb drive).
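
A toy sketch of that "sealed key" idea (not the actual BitLocker/TPM protocol, just the shape of it): the volume key is bound to a boot measurement and only released when the current measurement matches.

    import hashlib

    class ToyTPM:
        def __init__(self):
            self.pcr = b"\x00" * 32   # measurement register, reset at power-on
            self.sealed = {}          # measurement -> secret, kept inside the "chip"

        def extend(self, data):
            self.pcr = hashlib.sha256(self.pcr + hashlib.sha256(data).digest()).digest()

        def seal(self, secret):
            self.sealed[self.pcr] = secret   # bind the secret to the current measurement

        def unseal(self):
            # Key material comes out only if the boot measurement matches the sealed-to value.
            return self.sealed.get(self.pcr)

    tpm = ToyTPM()
    for stage in (b"uefi", b"windows-bootmgr"):
        tpm.extend(stage)
    tpm.seal(b"volume-master-key")

    print(tpm.unseal())  # same boot chain: key released, no password prompt needed
    # Boot a thumb-drive Linux instead and the measurement differs, so unseal() returns None.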


It doesn't (it does heavily push you towards it, but you don't need it); you can edit a group policy to be able to store the key on a USB drive or to use a passphrase.


I believe the TPM is where the private encryption key is kept.


Boiling frog stuff... We complained about this 20 years ago; all these moves were known back then, yet here we are.

More to come - TPM required to connect to the Internet and access news sources without any ability to store information on our own devices. Followed by rewriting historical articles to properly "sanitize" content.


It's Fahrenheit 451 without actually having to burn the books. Commerce will turn them into toilet paper and soon there are no original works. Just digital content that can be changed to meet the narrative at the moment. Interesting times.


TCPA again.


User experience is everything, and without a good profitable business model FOSS can't afford the massive investment of time and effort required to bring a competitive user experience. Making stuff work is only maybe 20% of the work required to build a product... sometimes less.

I've been yelling about that for years and years and very few people seem to get it. Free as in freedom got conflated with free as in beer years ago, and FOSS today is not much more than "waaah gimme free stuff!" It's not a gift culture. It's a "take culture."

Anyone who suggests any change to FOSS culture to remedy this problem gets shouted down. Any license that tries to remedy it gets attacked as "not OSI compliant" and restricting peoples' rights.

Meanwhile commercial closed source and SaaS vendors have the resources to leave FOSS and open ecosystems in the dust in terms of features and user experience. While the license purists yell about "restrictive licenses" taking away rights, the gravity of the walled gardens becomes more and more powerful.

The choice is between free and freedom.


People complaining the loudest about "not OSI compliant" etc. are not complaining about the software being non-FOSS. They're complaining about non-FOSS software masquerading as/lying about being FOSS.

If the solution to making FOSS more competitive with non-FOSS is to make FOSS non-FOSS, well, that's not much of a solution.


> User experience is everything

It's very important, but it's not "everything".

> and without a good profitable business model FOSS can't afford the massive investment of time and effort required to bring a competitive user experience.

By that logic, most FOSS should not have existed at all.

> Making stuff work is only maybe 20% of the work required to build a product... sometimes less.

That's often true. It's certainly true for some of the software projects I maintain.

> Commercial closed source and SaaS vendors have the resources to leave FOSS and open ecosystems in the dust in terms of features and user experience

That's not possible even with infinite resources, because FOSS software is sometimes great, often good, and also often the only thing available.


User experience is definitely not everything. Plenty of successful "enterprise" software products have terrible user experiences.


There's a lot of open source software that provides a fantastic user experience to developers; perhaps the solution is somehow getting some users to work on your open source product when developers aren't your intended users.


Even as a developer the things I want just don't work reliably:

1. Bluetooth: audio especially, but all BT is flaky.

2. Low-latency audio: I have tried JACK on numerous machines and always find myself staring at high-latency buffers because the kernel audio driver can't perform any better, and then there's how often it just... goes silent without any trace in the logs.

3. Suspend and battery usage are, in general, still a decade behind the competition.


> 3. Suspend and battery usage are, in general, still a decade behind the competition.

For suspend there were some regressions a few years back, but I use it all the time now on both a Dell Latitude and a Thinkpad X without any issue.

Battery usage is way better with Linux than with Windows, at least on the Latitude, where I can easily compare with my Windows 10-using colleagues. It's not even close, and I also avoid the constant fan noise ;)

Now there's one thing to keep in mind: if you don't use a pre-installed Linux distro (which I don't, I use Debian stable) then you are the system integrator ;) No way around this.

But on well-supported models like the Latitude and Thinkpad, at least, this integration is very easy: I just install the "tlp" (The Laptop Project) package, and because I only use SSDs I aggressively idle the disk. I did this configuration years ago and simply reuse it. Done.


Driver issues are generally down (AFAIK) to proprietary hardware being shrouded in secrecy. Being forced to reverse engineer a particular component because the manufacturer won't share the spec sheet is a significant drain on the already comparatively limited resources of FOSS software development.


Those first two don't work very well on my proprietary systems either.

And, come to think of it, hibernation's been broken on my Windows install too lately.


> Why else would they design it for the xbox, of all things?

If you really want to know the answer, here's the lead engineer explaining it in detail: https://www.youtube.com/watch?v=quLa6kzzra0


He says in pretty much literally the opening sentence that it's for DRM: "we want to prevent the piracy of games". He then goes on to justify it by explaining that their business model involves making a loss on each Xbox sold, and that they want to ensure the CPU only runs Microsoft code, against the wishes of the Xbox owner. A later direct quote is "the fundamental difference between Windows security and Xbox security is that the owner _is_ the bad guy".

That's not something I want in my general-purpose computing device where I am the owner.


> That's not something I want in my general-purpose computing device where I am the owner.

Consoles aren't general computing devices, though.

Apple disagrees with your idea of ownership, too ;) and so do the customers who Pluton is targeted at - https://www.microsoft.com/en-us/windowsforbusiness/windows10...

The whole project isn't targeted at end-users. It's IoT, businesses, hospitals, government agencies, utility companies, etc.

We need to stop seeing ourselves (as private end users) as the centre of the world and start to acknowledge that there are hundreds of millions of PC devices out there that don't serve private end users. It's the security needs of these organisations that are addressed by this technology, not yours, not mine.

The unfortunate truth is that Windows is still the backbone of many government agencies, power plants, hospital IT, businesses and so on.

It's also a fact that most of these machines are not well managed, lack updates, aren't hardened or secured in any way, and are targeted by cyber criminals on a daily basis; sometimes with grim consequences. It gets even worse when you look at IoT and the mess that manufacturers get us into (default passwords, unsecured data transfer, ...).

I see this chip in the same area as Intel's vPro, TPM 2.0, AMD's ASP (in their Ryzen PRO line), and so on; not necessarily aimed at end users (aside from the occasional buzzword) and more aimed towards businesses and government users (as part of their Zero-Trust initiative).


> The whole project isn't targeted at end-users. It's IoT, businesses, hospitals, government agencies, utility companies, etc.

> It's the security needs of these organisations that are addressed by this technology, not yours, not mine.

It's perfectly fine to let a sysadmin lock down a computer to reduce what the end user can do.

None of these use cases or security benefits require taking power away from the sysadmin. And that's what the argument is about: not whether the end-user is losing control, but whether the sysadmin is losing control. With the obvious note that lots of home users are their own sysadmins.

> I see this chip in the same area as Intel's vPro, TPM 2.0, AMDs ASP (in their Ryzen PRO line), and so on; not necessarily aimed at end users (aside from the occasional buzzword) and more aimed towards businesses and government users (as part of their Zero-Trust initiative).

Those are basically fine, as long as they can be disabled when not needed.

But if I'm forced to give someone else special beyond-root access to my device for DRM purposes, that's not acceptable.


> None of these use cases or security benefits require taking power away from the sysadmin.

Yes, they do! That's the whole point of the product. Why would you even trust the sysadmin in the first place? The fact of the matter is that a lot of data leaks have been caused by insiders - either willingly or via social engineering.

This technology provides a method of closing this loophole and aims to enable users (not private people) to have a secure domain that not even someone with physical access to the system and all administrative privileges has access to.

Whether it works as advertised is another story of course, but the gist of it is that no one is to be trusted; especially not the sysadmin.

> With the obvious note that lots of home users are their own sysadmins.

Again - this is not primarily targeted at home users. Plus, the vast majority of home users don't even know what administering a system means. And TBH - why should they? "It just works!" has been a very successful mantra for this one company that sells iPods and such... This might be hard to grasp for some greybeards, but hardware security by design is worth more than security cameras, NDAs, background checks, and good work ethics.

> But if I'm forced to give someone else special beyond-root access to my device for DRM purposes, that's not acceptable.

And that's fine and you are free to not use these products then because they're not made for you anyway. This is not consumer level hardware (at least not yet).


It's fine to say don't buy such hardware. The concern is what happens if that's all AMD, Intel, and Qualcomm sell to people. Apple already does this with iPhones and tablets, and unless you find a boot ROM exploit, good luck running another OS on the device.

You also start running into problems where more software and content may require such hardware.


> Apple already does this with the iPhones and tablets, and unless you find a bootrom exploit good luck running an other OS on the device.

These devices are not general computing devices (according to Apple), so in their mind that's fine. It also makes no difference to the customer since alternatives exist.

The fact that pretty much all other products in the smartphone and tablet market are inferior in terms of hardware, quality and software doesn't matter.

> You also start running into problems where more software and content may require such hardware.

So? If anything, this opens a market for software and hardware that doesn't require it, don't you think? For every Steam and Epic Game Store there's a Good Old Games [1] is what I'm saying. Just another great reason to support and use FOSS, no?

[1] https://www.gog.com


> These devices are not general computing devices (according to Apple), so in their mind that's fine. It also makes no difference to the customer since alternatives exist.

Yeah, well, if that's all it takes, then we'll probably not have any more "general computing devices" being sold in a few years. (Where did I hear that before?)


This argument of "trust no one, not even the sysadmin you employ" is actually "trust no one except me and this black box I'd like to sell you". Even ignoring the externalities of this kind of push I don't really see the value.


> This technology provides a method of closing this loophole and aims to enable users (not private people) to have a secure domain that not even someone with physical access to the system and all administrative privileges has access to.

> Whether it works as advertised is another story of course, but the gist of it is that no one is to be trusted; especially not the sysadmin.

Who exactly is the user in this scenario? Who exactly sets the rules that the Pluton architecture should enforce here?

> And that's fine and you are free to not use these products then because they're not made for you anyway. This is not consumer level hardware (at least not yet).

I don't understand why you are so sure about this not being intended for consumer-level hardware. There are plenty of scenarios where locking consumers out of their own devices would be highly desirable from a business perspective - DRM being only one of them.


> Consoles aren't general computing devices, though.

Consoles are absolutely general computing devices. Microsoft just uses DRM to prevent you from running non approved software.


Indeed:

David Cutler was called back from retirement to get Windows 10 booting on the Xbox One X.


> Consoles are absolutely general computing devices.

Repeating a false statement doesn't make it true.

A general computation device is a device that manipulates data without detailed, step-by-step control by human hand and is designed to be used for many different types of problems.

A gaming console is strictly not designed to be used for many different types of problems. It's a piece of hardware designed to run a specific vendor-sanctioned class of video games and in some cases provide limited media playback capabilities.

It uses specially designed hardware for that purpose, which is different in many ways from general computer hardware (specialised SoCs, proprietary storage solutions, etc.).

Sure, it's perfectly possible to use a passenger jet as a demolition device for multi-storey buildings, but that doesn't mean that they're in the same device class as demolition equipment. The type of a device derives from its intended use, not its potential uses. That's why a nail gun isn't sold as a hunting weapon even though it ticks almost every box of being a firearm.


Without those pesky users, computing wouldn't be as successful. Windows isn't secure enough to use in government or IoT, in my opinion, aside from office software for clerks.

But if it is not aimed at end users, I am sure a simple switch will help. Somehow I doubt we will see it.


> "the fundamental difference between Windows security and Xbox security is that the owner _is_ the bad guy".

That's an amazing quote that should be preserved for posterity. ;-)


Is that on the newest Xbox? Saw articles earlier that you can simply switch to dev mode and run retroarch emulators already.


Dev mode apps run under a hypervisor (so do retail games), so there's still a security platform underneath everything protecting apps from each other (and the OS from the apps)


All apps on Xbox run as VMs in a modified Hyper-V environment, so that should be pretty straightforward.


Then don't buy stuff like that.

None of this stuff will change unless people vote with their wallets. Companies have the idea that nobody cares. I've actually heard "nobody cares about privacy and security" repeated as a mantra in multiple circles.


There are two meaningful options for CPU. If they both adopt the tech...


... they create a niche for a third ... provided there are enough people who care.

If nobody cares nobody cares.


I don't have infinite resources to care with.

Even if everyone is willing to spend an extra $100, a duopoly can ignore them and lose no money. That's not enough money to bootstrap a competitor desktop/laptop CPU.


You make it seem like anybody can make a competitive CPU and take it to market. You need billions of dollars of R&D to make a good CPU, and if everyone who can afford that R&D signs on to taking your freedom away, there is no alternative.


I personally don't. I haven't bought a single Sony product since their rootkit fiasco 15 years ago [1]. I don't buy products with DRM on them: I buy classical music DRM free from an excellent record label called Hyperion (many of my friends in the choral music circuit have been recorded by them); I don't buy DVDs; I legally format-shift media freely streamed on OTA TV broadcasts, and I don't buy DRM encumbered things online. My indie games come from itch.io and my "indie" games come from gog.com.

The trouble is, (a) doing this is a bit of a giant PITA, and (b) companies never know that you're not there. Sony doesn't know, and doesn't care, that I don't buy their products because of their aggressive pro-DRM stance. I am one consumer, many sigma away from mu, and slowly but surely I look like an antiquated relic: streaming has been so successful, and Widevine so ubiquitous, that it's almost impossible to join the "normal world" without being subjected to (unwanted) DRM.

[1] https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootk...


As time goes on, we don't have a lot of choices in a lot of these sorts of cases. There are dozens of hardware and software things I would love to replace if there were viable alternatives.

Speaking of which, I heard the Librem phone just started shipping.


Dave Cutler was even more explicit about what was intended with the design of the XBox.

If the user attempts to modify the system, it will brick itself.

https://youtu.be/SVgSLud50ss?t=3500


Stallman[1] and others[2] have talked about just this problem for over a decade now.

[1] https://www.gnu.org/philosophy/can-you-trust.en.html

[2] https://www.cl.cam.ac.uk/~rja14/tcpa-faq.html


How do you see it being anti-free-speech?


Virtually every technology that can be used to create a walled garden with moderation can be bent to limit people's ability to speak freely.

How long until someone has a device which can go to Netflix, social networking, etc. but doesn't have a web browser on it that can load arbitrary pages, and it's impossible to jailbreak?

Since we have no freedom of speech within FAANG properties, that would be a considerable restriction of speech...

I do always find it wryly amusing when people who identify as being on the left see the right as the only people who need defending with free speech laws, and then happily allow private industry to restrict speech since it doesn't impact them. The shoe could just as easily be on the other foot, and may well be again one day; principles matter.


> principles matter

To whom, though? Everybody is praising Apple for their (admittedly quite excellent) M1 hardware and no one seems to take issue with that either.

You cannot have truly open hardware as long as (software-) patents and IP exist, simple as that. Companies need to protect their investment, since the days of comparatively simple CPUs are over and a lot of "secret sauce" is actually software and licensed IP blocks.

Since patent holders are free to select who their licensee is, they'll always target the ones with the biggest margins (see for example [1]) so mainly consumer products and thus those won't be free (as in speech) anytime soon.

[1] https://www.bloomberg.com/news/articles/2020-10-20/nokia-see...


All of that hardware would be just fine if we had a Bill of Rights for the Internet that ensured free speech rights and the ability to host content (of any sort that isn't already strictly prohibited) are enshrined in law.

Pity all these laws end up spending most of their time enabling obscenity instead of political speech.


This is all probably true, but it doesn't change the point that the consequences of those perfectly reasonable business decisions will have massive effects on personal freedom.


> How long until someone has a device which can go to Netflix, social networking, etc. but doesn't have a web browser on it that can load arbitrary pages, and it's impossible to jailbreak?

Such a locked-down device wouldn't really be anything new, we already have Roku, Chromecast, Chromebook, and games consoles. They don't threaten the 'ordinary' PC market.


It's been an incremental war. For context, I've been posting about this for 20 years; during that time all the devices you say "we already have" were invented.

This has eaten away at the average person's computing freedom, ironically while providing them a ton more computing power and capability.

> They don't threaten the 'ordinary' PC market.

Not directly, but think about this:

* I know people who do not own a PC anymore, they just have tablet, phone, and TV. No need, apparently, but...

* This results in a ton of kids growing up without early exposure to general purpose computing

* This has been an increasing trend for quite some time now, and Pluton is just the next incarnation of Palladium, which tells us that Microsoft really does still see a completely captured market in their future. Linux plays a role, but in the embrace-and-extend sense; while WSL helps squeeze Linux out as a desktop OS down on the ground, Microsoft's ownership of Github cements it in the cloud.

I know, I know, old man yells at cloud, I don't expect this train to stop, I just want people to stop and think once in a while about where it might be going. It's OK to dream up worst-case scenarios and then strategize for how you might fend off that eventuality even if it's unlikely.


> ton of kids growing up without early exposure to general purpose computing

I agree that's a problem. It's possible to learn to code on an iPad, [0] but the system is generally closed to exploration.

[0] https://apps.apple.com/us/app/grasshopper-learn-to-code/id13...


I can't use my shoe to post on Hacker News.

Why is it a free speech issue for a device to exist that can go to Twitter but doesn't host your blog for you?


The issue is that the device may not be able to go to blogs hosted outside of the Trusted Ecosystem of Fact-Check Verified Not-Fake-News Websites, one day. Who knows?


Seems like a good way to perform surveillance; governments and agencies will be the clients. "You are our product" - Microsoft


Imagine it being capable of enforcing something like which executables you are able to load... Quite in the vein of Apple sending the executable's hash to some random server.


> Quite in the vein of Apple sending the executables hash to some random server

Isn't Microsoft already doing that on a default Windows installation?

Edit: Yes, SmartScreen, enabled by default, seems to send:

Hash, name, and signature for executables. (Also hashes of URLs you visit, though I guess only in Edge?)


Imagine it can only run MS Linux or Windows.


That's not how that works. At all.


Another commenter familiar with the tech said: Pluton can securely track what software was booted on the main core (called "measured boot") and it basically sends a hash of that to the cloud to prove to the cloud what software is currently running.

That sounds like most of what you need to build a system that can enforce what executables you're allowed to load and prevent you from attaching a debugger.


Pluton can securely track what software was booted on the main core as long as the previous component in the boot chain participates. If your OS doesn't participate, you don't get any measurements beyond that point. And that means there's no way for Pluton to block execution.


At which point the server will refuse to send you the protected portion of the software, or the decryption key for it. This blocks the execution.


You omitted the context there: the poster was talking about Azure Sphere - IoT devices that use Pluton for verification with remote services.

That's a different use case (chip-to-cloud). It also cannot prevent you from attaching a debugger when all you need to do is go offline.

In fact, the whole point is that you can run anything without compromising the security of the data in the secure enclave. That's what Zero-Trust is all about.


As a feature for cloud chips, it's great.

If it goes into client chips, and someone uses it for DRM, that's awful.

I guess we'll see?


The marketing for this chip is vague and confusing because the chip does absolutely nothing for you.

This chip is not here to protect you from compromised or malicious IoT devices, or to protect you from compromised or malicious cloud services.

This chip is here to protect the Microsoft cloud from compromised or malicious IoT devices. They would also like you to believe that the chip improves security in the cloud. In actuality it protects software running on your device from ... you. All this attestation stuff is great for DRM!

That poses a problem for marketing. They have to make it sound like it does something for you when it actually doesn't.

It's no surprise then that the marketing is basically a giant weasel word souffle with some buzzwords sprinkled on top, and a bit of name dropping.


A while ago (2006? Not sure) I was trying to buy a CD in a Tower Records store in SF. I found the disc I wanted, but it said it was "copy protected". I would have had no problem ripping it with abcde or cdparanoia or something, but refusing to support this, I asked the store employee: "Do you, by any chance, have another copy of this without copy protection?"

He looked at me, with complete disbelief, saying "But why wouldn't you want your copy to be protected?" I asked him "Do you know what kind of protection this is?" to which he replied, "yes, it's a copy less likely to break".

After this incident, I started asking a lot of people if they were aware of what's special about their "copy protected" discs - and the more technical people knew it was an attempt to restrict copying, but the rest thought it was probably a good thing (it says protection, and it's on the label, so it must be good, or some such reasoning).

It was at that point that I started religiously using RMS-style terms, like Digital Restrictions Management, Copy Restriction, etc., and I recommend everyone does the same.


After reading through @darzu's Pluton explanation on the site linked, I realised this may give you the impression that it's a security measure, but in reality Azure can now verify each chip (not just the computer anymore) and see if it runs authentic software (ding ding ding - Windows Licensing). The two-key-pair method described there means that each device can be verified by Azure to be running authentic software (the phone-home thing everyone is worried about). While most laptops and computers do not ship with keys anymore, and instead the hardware generates some kind of signature that is then verified by Windows activation, this feels like an easier method of doing that. I wonder if this also means Microsoft is aggressively going to make more new hardware (or some Microsoft-verified hardware kind of thing to set up standards) to directly compete with Apple Silicon and keep profits healthy by forcing more customers to pay for authentic software.


I stopped reading and started puking when I read “chip-to-cloud security”


Is Pluton specifically Windows related? Will this affect how easy it is to run Linux on hardware using Pluton?


In the embodiment I've seen, it's an ARM M-class core with some special crypto hardware and certain registers that allow the use of crypto keys without software running on the core ever seeing the key. There's a communication channel to the rest of the system, which is completely OS-agnostic.

In Azure Sphere, Windows is nowhere in sight. The device runs a Linux kernel.
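
As an illustration of that "use the key without ever seeing it" model, here's a toy sketch (a hypothetical interface, not Pluton's actual API): callers get back an opaque handle and ask the security core to sign with it, and the key bytes never cross the mailbox.

    import hashlib, hmac, os, secrets

    class SecurityCore:
        # Toy model of an isolated key store: callers get handles, never key material.
        def __init__(self):
            self._keys = {}                    # lives "inside" the secure core

        def generate_key(self):
            handle = os.urandom(8).hex()       # opaque reference handed back to the caller
            self._keys[handle] = secrets.token_bytes(32)
            return handle

        def sign(self, handle, message):
            # The operation happens inside the core; only the result comes back out.
            return hmac.new(self._keys[handle], message, hashlib.sha256).hexdigest()

    core = SecurityCore()
    h = core.generate_key()
    print(core.sign(h, b"firmware update v2"))  # caller never observes the key itself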


Is this replacing the Cortex A5 used for the PSB? https://www.servethehome.com/amd-psb-vendor-locks-epyc-cpus-...


If Qualcomm is implementing it, I'm sure it will support Linux on a technical level. Linux is important to AMD and Intel, but Linux-running devices represent nearly all of Qualcomm's non-embedded SoC market.

Whether vendors can use it to restrict such things, I don't think anyone can say right now, but I would guess and hope not. The TPM does not.


If Qualcomm is implementing it then it should be easy enough to break. Qualcomm's secure enclave software for Android has had an absolutely abysmal security track record. Apple gets all the press precisely because it's such an achievement (well, that and Apple is more well known). Qualcomm hacks have come out like every 6 months for nearly a decade, and nobody cares anymore.


Well, Apple's enclave is broken as well: https://arstechnica.com/information-technology/2020/10/apple...

Well, at least it needs physical access.


FWIW, I meant Apple secure enclave hacks get all the attention because they're more of an achievement, at least in terms of published hacks being more rare. I tried to keep track of published Qualcomm breaks--which usually don't require physical access as they involve classic software bugs--several years ago but gave up because they were too numerous yet not as widely publicized. I had plenty of fodder by then, though I try to take mental note of new breaks that [briefly] appear on HN or elsewhere.

I was keeping track of hacks for marketing material related to a security startup I was working on. The competition would have principally been smartphone-based authentication apps, both Android and iPhone.


A curated list of public TEE resources for learning how to reverse-engineer and achieve trusted code execution on ARM devices (includes all major ARM silicon vendors): https://github.com/enovella/TEE-reversing


Lol. Even Google had to put Titan M on its Snapdragon based Pixels.


I don't think so. Given how much share Linux has on Azure, it simply would not be a wise move.


Do the kids remember Microsoft's ghastly Palladium, from 2 decades ago?

> Known Elements of the Palladium System:

> The system purports to stop viruses by preventing the running of malicious programs.

> The system will store personal data within an encrypted folder.

> The system will depend on hardware that has either a digital signature or a tracking number.

> The system will filter spam.

> The system has a personal information sharing agent called "My Man."

> The system will incorporate Digital Rights Management technologies for media files of all types (music, documents, e-mail communications).

> Additionally, the system purports to transmit data within the computer via encrypted paths.

https://epic.org/privacy/consumer/microsoft/palladium.html


Booting Linux is really a policy question, not a technical one.


Is it a policy question without Pluton? I.e. will this allow hardware vendors additional means to prevent me from installing Linux?


> will this allow hardware vendors additional means to prevent me from installing Linux?

No. Pluton serves as an on-chip secure enclave for encryption keys and the like. This is unrelated to installing operating systems.


Will the final purchaser of the computer chip, i.e., the consumer, have r/w access to the contents of the enclave?

EDIT: s/access/r\/w & to the contents of/


I don't know. I can't even speculate without knowing what you mean by "access to the enclave".

If you mean access as in being able to arbitrarily read and write keys or data to/from it, then no, you won't be able to access it that way. After all that's the whole point - even physical access to the hardware won't enable you to extract information (keys, etc.) from it.

This means any data or firmware stored on the chip by Microsoft or any OEM (e.g. firmware encryption keys or device signatures) won't be accessible to consumers.

TPMs do define interfaces, though, that allow programs to access their capabilities, so there are ways to interact with the hardware (and those are documented as well, so you can write applications or operating systems that support it).

To get an idea what the interface looks like, you can check out the documentation of the Windows API: https://docs.microsoft.com/en-us/windows/win32/secprov/win32...

There's also a list of some TPM commands available on the same site: https://docs.microsoft.com/en-us/windows/win32/secprov/addbl...


Oh silly you, of course not.


Technically, hardware vendors have already been able to prevent this for years (Secure Boot without custom keys, including only Microsoft's Windows keys).


Isn't this basically fTPM (a software TPM implemented in the trusted execution environment of the CPU), which both AMD and Intel already offer?


It'll be built into the CPU, instead of having a separate chip, and seems to have secret-management functionality for user-specified keys, biometrics, etc.


>It'll be built into the CPU, instead of having a separate chip

so are the trusted execution environments used by fTPMs?

>seems to have secret-management functionality for user-specified keys

AFAIK TPMs already have that functionality. random search: https://github.com/tpm2-software/tpm2-tools/blob/master/man/...

>biometrics, etc.

AFAIK some fingerprint readers already use trusted execution environments to handle authentication, so from a feature point of view there isn't really anything new here.


> so are the trusted execution environments used by fTPMs?

AMD PSP for sure is. I think I might have heard something about Intel ME being on the PCH in some cases???

Anyway, it seems like the point is to make it a "more hardware" TPM rather than a firmware one – i.e. the key memory would only be accessible from fixed-function crypto hardware blocks.

Coincidentally, this might mean that the embedded TPM would still work after me_cleaner destroys most of the ME firmware :)


The huge difference, and the most tin-foil-hat-worthy part, is that all this functionality is extended to reach the cloud.

What's frightening is that - by design - the user will have little/no control or even awareness of what data is being sent or received.


That's most likely based on attestation.

Which means the OS still has full control over what it sends, but the "thing" can attest that what the OS sends is valid and not made up.


It seems to be explicit about being a hardware TPM which is included in the CPU instead of an extra chip, so I guess no.


Locking out other OSes isn't a main goal of Pluton (although technically it could); there are just too many issues (hey Infineon, Intel, and Qualcomm, I am looking at you) with existing dTPM and fTPM implementations.


> What the Pluton project from Microsoft and the agreement between AMD, Intel, and Qualcomm will do is build a TPM-equivalent directly into the silicon of every Windows-based PC of the future.

CPUs with security modules controlled by MS? Who will guarantee it won't be abused against non-MS systems and users?


How many Qualcomm CPUs run Windows?


Not many, but it's a growing market. Much of the Windows-on-ARM market is Qualcomm SoCs.


Not that I think Pluton will be a problem for Linux (as in, one that won't be present on Windows also) but AFAIK Qualcomm and Microsoft had partnered to not only run Windows on ARM, but to run x86 software on ARM Windows.


Wouldn't be surprised if this gets used to block programs coming from non-Western nations. Pompeo talked about creating a "Clean Network" to keep non-allied foreign nations' hardware and software out of it.


It's hard for me to get interested in any hardware or software security news with ransomware running amok and none of this addressing it.


Somehow I thought a Pluton should be a fundamental wealth particle, but apparently it's a thing from geology.


So wealth ought to be quantised and measured in plutons?


Call me sceptical, but I hope m$ is not pulling Apple tricks to lock computers to their OS. Is this open source? Will consumers be able to audit it down to the silicon level?


They already said it is OS-agnostic. MS in 2020 is a far cry from MS in 2010.

With regards to the auditing need, can you audit a CPU down to the silicon level today?


> MS 2020 is far away from MS 2010.

I haven't seen evidence of this. Their OS is at least as user hostile as before and they desperately seek developers.

If I have a specific technical problem I have to slay hundreds of sales people before I find someone with real expertise.

That they aren't as dominant as before is probably due to the fact that they have few developers and need to regain some. Financially, Office is probably their largest source of income, and sure, the standard corporate AD solutions are widespread. But their cloud tech seems to be restricted to very large companies, and I haven't seen much of it.


I am not sure you were ever looking for evidence in the first place.

WSL and .NET as open-source?

Their R&D budget in 2019 was twice that of 2010, while their sales and marketing budget increased by 30%.

Their Productivity and Business Processes (which includes Office, Dynamics, ERP and LinkedIn) accounted for 1/3 of global sales revenue in 2019.

Sure, there are some things that have stayed the same in the last 10 years, but of all the FAANGS I would argue that MS is the company that changed the most in the last 10 years.


You probably could to some degree, but ultimately, even if you did what ARM does and had a machine-readable specification and a way of verifying it on chip, you could still pull a VW-Thompson hybrid where the chip hijacks that process, i.e. detects when it is under audit.

I'm reminded of Christopher Domas' excellent talks on finding undocumented x86 instructions; if there are any backdoors in a modern processor, they'll probably be hidden behind some hyper-obscure (register, stack, etc.) state, even if they did use an undocumented instruction.


They could set a new standard. I think when it comes to security these things should be transparent. You know, documentation can say one thing and implementation another.


Exactly my point. You're asking to set a new standard that even today's CPUs do not adhere to, even though they process equally sensitive data and operations. Seemingly, because you are sceptical of MS.

And to your point about documentation vs implementation, it would not be difficult for anyone to state one thing in the documentation and produce another, unless perhaps you give access to the manufacturing facilities.


Apple allows you to boot any OS you want on their Apple Silicon Macs (as long as you have uploaded the key so it can verify the kernel you tell it to boot).


An unnecessary solution for a nonexistent problem.

I hope they lose their investment.

I also hope all their hordes of fanboys wake up to reality now. Yes, the people that "<3 open source" and "<3 Linux" and gave you VS Code for free will now own your CPU, and there is nothing you can do about it. And then, if they change their mind and don't want you to run Linux, you won't run Linux.


I wonder if it will be one of the inferior technologies that Microsoft forces even outside of its Windows world, like what happened with UEFI (no multithreading, uses PE as a format, ugly Microsoft C coding conventions, bloated), Secure Boot (designed to stop anything non-Windows rather than provide real security), UTF-16 (everyone except them and JavaScript uses UTF-8), and so on. The list is long.


> UTF-16 (everyone except them and JavaScript uses UTF-8)

It's like blaming America's analogue colour TV implementation (NTSC) when in fact PAL and SECAM hadn't been invented yet (and NTSC is partially responsible for PAL even existing).


You can't blame MS for UTF-16. They adopted it when it was UCS-2 and Unicode only had 64K code points. They are stuck on the older technology because they were early adopters.


UEFI originated from EFI, which was first developed by Intel for the Itanium platform. EFI saw widespread use on consumer devices, pioneered by Apple on Intel Macs. I don't know how far Microsoft's influence on EFI goes, given that Apple was also responsible for the widespread use of EFI on consumer devices.


You can't forget Java with UTF-16. I feel like Java is an even bigger culprit since the whole language assumes it and there are no real alternatives.


Not only was Java created when there was no real alternative, but modern implementations have used a more efficient representation internally since Java 9 (we are at 15 now); they just keep the outside interface as UTF-16 for backwards compatibility. We don't need a Python 3 experience.

https://openjdk.java.net/jeps/254


Well, it could be worse. Python did the whole Py3k own-goal purely for the purpose of forcing everyone onto UTF-16. (Facepalm, yes.)


> for the purpose of forcing everyone onto UTF-16.

Honestly, Py2 was a PITA when handling raw data; it could corrupt your data if you didn't know exactly what you were doing.

The goal was to separate (Unicode) text from binary data. It wasn't to force UTF-16, though. In fact, you should just assume that text variables are sequences of Unicode code points and not care whether they are UTF-16 or UTF-8 (and on Unix-like systems, text is definitely represented as UTF-8). If you are converting it into binary, at least you know what the encoding is: no "oh no, my Python code broke on Windows/Unix", because even Py2 already had the UTF-16/UTF-8 OS split.


The easy and sane path would have been to just mandate that str is encoded as UTF-8. (Indeed, that's what every other sane language seems to do.)

Instead they tried to split every function into a 'wide' (str) and a 'narrow' (bytes) version, like in Windows.

The whole idea of 'wide' strings is predicated on the idea that Unicode code points are only ever two bytes and that two bytes is all you ever need.

This obviously doesn't work in 2020, and the Python folks tried to roll back their broken-by-design code that immediately turned into technical debt right out of the gate, but the warts still exist. (Like having to choose whether you open files with 'w' or 'wb', etc. No other language does this stupid thing, I think.)


I wasn't following the Python 2-to-3 transition that closely, but I don't think Python ever forced anyone to use UTF-16. On Unix platforms (the only place I've used Python for 10 years) the strings are UTF-8 internally. On Windows maybe it's UTF-16, I don't know, but most programmers don't need to know or care. They just call bytes(my_string, 'utf-8') to encode it as bytes.
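
For anyone who hasn't touched this in a while, a quick illustration of the str/bytes split being argued about (and the 'w' vs 'wb' distinction mentioned upthread):

    text = "naïve café"                # str: a sequence of Unicode code points
    data = text.encode("utf-8")        # bytes: one explicit encoding of that text
    assert data.decode("utf-8") == text

    with open("note.txt", "w", encoding="utf-8") as f:   # text mode takes str
        f.write(text)
    with open("note.bin", "wb") as f:                    # binary mode takes bytes
        f.write(data)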


Python uses UCS-4.



