Azure, of course, is Microsoft's cloud offering, the place where Microsoft would like us all to buy our cloud-computing resources. The purpose of AIPA is to make Azure more appealing to companies that are concerned about attacks from patent trolls. According to Microsoft, the purpose is "to foster a community and business environment that values and protects innovation in the cloud" — not that Microsoft has ever been known to use software patents itself in ways that might inhibit innovation. There are three components to this program.
The first of those components is simple indemnification against patent attacks based on services that Microsoft offers itself. So, for example, if a company is using Azure HDInsight, otherwise known as Microsoft's version of Hadoop, Microsoft will indemnify that company against an attacker who claims to own a patent that reads on part of Hadoop. This indemnity extends only as far as Microsoft's own services; an independent installation of Hadoop would not qualify. Neither, as explained in the FAQ, would "a Linux distribution in a VM".
Thus far, Microsoft is simply protecting its customers from being sued for using its own products. This is a fairly normal practice in the industry and shouldn't raise too many eyebrows.
The second part of AIPA is perhaps more interesting; it is called "patent pick" and is only available to customers spending at least $1000/month on Azure. If such a customer is the target of a patent suit, and if that customer has "remained patent peaceful" against other Azure customers for at least two years, then that customer is allowed to pick one patent out of a list of 7,500 (apparently to be expanded to 10,000). Microsoft will, for a nominal fee, transfer that patent, which can then be used in a counterattack. For the curious, the list of available patents is available as a large Excel file.
Tech Insights dug through the list and concluded that it offers reasonably broad coverage and does not appear to be just a list of patents that Microsoft doesn't care much about anyway. So it is an interesting resource; whether it is a useful one is another question. Patent pick will only offer value to companies that otherwise have the resources to see a patent fight through to the end — a process that can cost millions of dollars even in the case of a successful outcome. There may well be companies out there with pockets that deep that nonetheless find themselves in a position where a single patent from Microsoft will change their fate, but it's not clear how many such companies exist. Even so, perhaps the knowledge that a potential target could pick one weapon from this arsenal will prove to be a deterrent in some cases.
Of course, the ability to file a patent-based countersuit will have little deterrent value against patent trolls. Microsoft is probably uninterested in solving the trolling problem, but it can promise not to make it worse — for $1000/month Azure customers, at least. The third component of AIPA is called a "springing license"; it says that, when Microsoft sells its patents to "non-practicing entities", those $1000/month customers get an automatic license to use the patented techniques. The publicly available materials do not say explicitly, but the wording on the site suggests that this license persists only as long as the customer continues to spend the minimum amount with Azure.
While Microsoft claims that it doesn't normally transfer patents to trolls, this offering could be said to create a sort of moral hazard for the company. If a patent or two were to, somehow, end up in the hands of a troll that started asserting them widely, any customer thinking of leaving Azure would have to weigh the increased risk of attack that would result from such a move.
For better or for worse, the current phase of the computing industry is focused on consolidation. Computing that was once done in organizations is moving to the data centers of a relatively small number of huge cloud providers. Those providers have an interesting problem, though: computing, storage, and bandwidth are essentially commodity services. If this market is too easy to enter, competition could drive prices down to the point where the business is barely profitable.
The situation changes, though, if there are significant barriers to new entrants in the field. This kind of patent policy could prove to be just such a barrier. In a world where patent trolls run amok, the only safe harbor may well be the tiny number of providers with the resources to shield their clients. Rational organizations would have to think hard before hosting their work anywhere else.
Thus, AIPA shows us what the shape of the software patent threat may be in the near future. We will still be able to develop free software as we see fit and, perhaps, even distribute it. But anybody who wants to run that software in any sort of significant way will, as they are now, have to face the threat of patent attacks. Mitigating that threat may well require running branded versions of free programs on the systems of a cloud provider that can provide a credible patent shield. That may be the new form of the patent tax, and it bodes ill for anybody who truly seeks to bring about "innovation in the cloud".
Containers are an elegant way to combine two Linux primitives, control groups and namespaces, with union filesystems to provide isolated structures that in many ways resemble virtual machines (VMs), though they don't have their own kernels. It is important to remember, however, that they are not actually VMs; no less an authority than Jessie Frazelle, who maintained Docker and now hacks on containers for Google when not speaking at KubeCon 2017, says exactly that in her blog. If you treat your containers like VMs, you're using them wrong, and things may not end well in production.
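The kernel exposes both primitives directly under /proc, so it is easy to see what a container actually is from the inside. The sketch below, in Python, lists a process's namespace memberships (two processes in the same namespace see the same inode) and its control-group membership; it assumes a Linux host with /proc mounted:

```python
import os

def namespaces(pid="self"):
    """Return the namespace symlinks under /proc/<pid>/ns.

    Each link reads like "pid:[4026531836]"; processes sharing a
    namespace see the same bracketed inode number.
    """
    ns_dir = f"/proc/{pid}/ns"
    return {name: os.readlink(os.path.join(ns_dir, name))
            for name in sorted(os.listdir(ns_dir))}

def cgroups(pid="self"):
    """Return the control-group membership lines from /proc/<pid>/cgroup."""
    with open(f"/proc/{pid}/cgroup") as f:
        return [line.strip() for line in f]

if __name__ == "__main__":
    for name, target in namespaces().items():
        print(f"{name:10} -> {target}")
    for line in cgroups():
        print(line)
```

Run inside a container and on the host, the inode numbers differ: that difference, plus the resource limits recorded in the cgroup lines, is most of what "being in a container" means.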
Tools like Docker and rkt automate the deployment of applications inside containers; Docker has the larger mindshare, but rkt is growing as well. I'll focus mostly on Docker here. It uses some elegant union-filesystem magic to give many containers what each sees as its own private copy of the same file space, without the host having to maintain many separate copies of the filesystem. It also prefers to allow only one process to run inside any given container. This last has given rise to the idea of the microservice: a single process, running inside its own container, doing exactly one thing well. Hook many microservices together, usually with some kind of persistent off-container data store, and you can build complex and clever applications.
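The one-process-per-container model shows up directly in how images are built. A minimal Dockerfile for a hypothetical microservice (the order-service binary and its name are invented for illustration) might look like this:

```dockerfile
# Build on a small base image; the union filesystem means this layer
# is shared by every container using the same base, not copied.
FROM alpine:3.6

# Copy in the one program this container exists to run.
COPY order-service /usr/local/bin/order-service

# One process per container: the service runs as PID 1, and the
# container lives exactly as long as that process does.
ENTRYPOINT ["/usr/local/bin/order-service"]
```

Each instruction adds a filesystem layer, and containers started from the resulting image share all of those layers read-only, writing only to a thin private layer on top.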
Docker exploded into the world in 2013, instantly becoming beloved of developers. As a systems administrator, I've felt the resource and time pain of maintaining lots of individual copies of a production environment so that developers could work in their own sandboxes. A tool that allows each of them to spin up many such copies using amazingly little memory and disk space to do so is indeed a big win. But I will put my cards on the table here and say that the ways developers have deployed containers into production are often not ideal from a reliability and maintainability standpoint. The deep nesting of containers, and the profusion of embedded distributions that results, has drawn criticism from more than one commentator.
Containers should be ephemeral. To be production-ready, they should be able to be rebuilt from clean, trusted sources at the slightest provocation, without any interruption to your production services. What containerization needs for this is infrastructure which understands this ephemerality and the idea of microservices, and holds tight to those concepts to allow rapid deployment of containerized microservices into production as part of an orderly development and testing chain. Once you build your business applications around this sort of architecture, you're doing what the attendees called "cloud-native computing".
Kubernetes, which the majority of speakers at KubeCon 2017 pronounced to rhyme with "goober gettys", is such a tool. It's not the only one, but it seems to be gaining a lot of ground, probably because it came out of Google. But it now belongs to the CNCF, which is part of the Linux Foundation, and which maintains a toolchain of cloud-native computing tools, principal amongst which is Kubernetes.
Finally, before I can describe some of the talks I heard at CNC/KubeCon, a few Kubernetes-specific concepts must briefly be mentioned. The node is the basic unit of computing power given to a Kubernetes deployment; it comprises a server with some memory, disk, and CPU, which is given over to Kubernetes to manage. The pod is the basic unit of microservice deployment, and comprises one or more dockerized containers running on the same node. A service is a set of similar pods, usually on different nodes, that together provide one tier of a multi-tier application. The Kubernetes master gets your microservices out to pods on its nodes, checks they're happy, and ensures that pods and services can all find each other so they can work together to provide your business application.
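Those concepts map onto a pair of short manifests. As a sketch (the application name, image, and ports here are hypothetical, and the API versions are those current in early 2017), a Deployment asks the master to keep some number of identical pods running on whatever nodes have capacity, and a Service gives that set of pods one stable name:

```yaml
# Ask the master to keep three replicas of this pod running.
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: order-service
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: order-service
    spec:
      containers:
      - name: order-service
        image: example.com/order-service:1.0   # hypothetical image
        ports:
        - containerPort: 8080
---
# Give the matching pods one stable name and IP, so other tiers can
# find them wherever the master happens to have placed them.
apiVersion: v1
kind: Service
metadata:
  name: order-service
spec:
  selector:
    app: order-service
  ports:
  - port: 80
    targetPort: 8080
```

Feeding both to kubectl is all it takes; scaling the tier later is a matter of changing the replica count and letting the master reconcile reality with the declaration.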
Hopefully, that seems simple. As a number of people at CNC/KubeCon were willing to say, the really painful bit for most people is making their business applications cloud-native. Once that pain, which can be considerable, is absorbed, the benefits can be substantial. The business application can run in-house or in any cloud, managed by Kubernetes, with very little transitional pain. Scaling becomes fairly painless; have Kubernetes deploy more pods, and if you're running in-house, feed more nodes to Kubernetes when it needs them.
Now let's see how this works out in practice.
[Thanks to the Linux Foundation, LWN's travel sponsor, for assistance in getting to Berlin for CNC and KubeCon.]
Page editor: Jonathan Corbet
Copyright © 2017, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds