My broken blog

May 8th, 2017

I wanted to let people know I’m well-aware that Shtetl-Optimized has been suffering from the following problems lately:

  • Commenters are presented with the logins (handle, email address, and URL) of random other commenters, rather than with their own default login data.  In particular, this means that email addresses are leaking, and that when you comment, you should not (for the time being) enter your real email address if that’s information that you’d wanted to share only with me.  Another thing it means is that, when I try to comment, I’m not logged in as “Scott,” so even I have to enter my login data manually every time I comment.
  • Comments (including my own comments!) take about an hour to show up after I’ve approved them.
  • New blog posts also take a while to show up.

Since all three of these problems started happening around the same time, I assume they’re related.  But I don’t even know where to start in trying to solve them (Googling for “WordPress” plus descriptions of these bugs was unhelpful).  Would anyone like to help out?  If you earn my trust, I’ll even temporarily give you administrative privileges on this blog so you can poke around yourself.

Thanks so much, and hope to return to your regularly scheduled programming shortly…

This Week’s BS

May 5th, 2017

There are two pieces of BosonSampling-related news that people have asked me about this week.

First, a group in Shanghai, led by Chaoyang Lu and Jianwei Pan, has reported in Nature Photonics that they can do BosonSampling with a coincidence rate that’s higher than in previous experiments by a factor of several thousand.  This, in particular, lets them do BosonSampling with 5 photons.  Now, 5 might not sound like that much, especially since the group in Bristol previously did 6-photon BosonSampling.  But to make their experiment work, the Bristol group needed to start its photons in the initial state |3,3〉: that is, two modes with 3 photons each.  This gives rise to matrices with repeated rows, whose permanents are much easier to calculate than the permanents of arbitrary matrices.  By contrast, the Shanghai group starts its photons in the “true BosonSampling initial state” |1,1,1,1,1〉: that is, five modes with 1 photon each.  That’s the kind of initial state we ultimately want.

The second piece of news is that on Monday, a group at Bristol—overlapping with the group we mentioned before—submitted a preprint to the arXiv with the provocative title “No imminent quantum supremacy by boson sampling.”  In this paper, they give numerical evidence that BosonSampling, with n photons and m modes, can be approximately simulated by a classical computer in “merely” about n·2^n time (that is, the time needed to calculate a single n×n permanent), as opposed to the roughly m^n time that one would need if one had to calculate permanents corresponding to all the possible outcomes of the experiment.  As a consequence of that, they argue that achieving quantum supremacy via BosonSampling would probably require at least ~50 photons—which would in turn require a “step change” in technology, as they put it.
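To give a concrete feel for that n·2^n benchmark, here is a minimal sketch (mine, not from either paper) of Ryser’s formula, the classic way to evaluate a single n×n permanent in exponential-in-n time rather than by brute-force summation over all n! permutations.  (This straightforward version costs an extra factor of n over the O(n·2^n) quoted above, which is achieved by visiting the subsets in Gray-code order; the 2^n is what matters here.)

```python
from itertools import combinations, permutations
from math import prod

def permanent_ryser(A):
    """Permanent of an n x n matrix via Ryser's formula:
    perm(A) = (-1)^n * sum over nonempty column subsets S of
              (-1)^|S| * prod_i sum_{j in S} A[i][j]."""
    n = len(A)
    total = 0
    for k in range(1, n + 1):
        for S in combinations(range(n), k):
            total += (-1) ** k * prod(sum(row[j] for j in S) for row in A)
    return (-1) ** n * total

# Sanity check against the n!-term definition on a small example:
A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
naive = sum(prod(A[i][p[i]] for i in range(3)) for p in permutations(range(3)))
assert permanent_ryser(A) == naive
```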

I completely agree with the Bristol group’s view of the asymptotics.  In fact, Alex Arkhipov and I ourselves repeatedly told experimentalists, in our papers and talks about BosonSampling (the question came up often…), that the classical complexity of the problem should only be taken to scale like 2^n, rather than like m^n.  Despite not having a general proof that the problem could actually be solved in ~2^n time in the worst case, we said that for two main reasons:

  1. Even under the most optimistic assumptions, our hardness reductions, from Gaussian permanent estimation and so forth, only yielded ~2^n hardness, not ~m^n hardness.  (Hardness reductions giving us important clues about the real world?  Whuda thunk??)
  2. If our BosonSampling matrix is Haar-random—or otherwise not too skewed to produce outcomes with huge probabilities—then it’s not hard to see that we can do approximate BosonSampling in O(n·2^n) time classically, by using rejection sampling (see the toy sketch after this list).
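To make point 2 concrete, here is a toy sketch (again mine, not from the Aaronson-Arkhipov paper) of the rejection-sampling idea, restricted for simplicity to collision-free outcomes, and assuming a known bound cap on the largest outcome probability, which is exactly the “not too skewed” condition above.  Each proposal costs one n×n permanent:

```python
import numpy as np

def permanent_ryser(A):
    """As in the earlier sketch, vectorized over rows with NumPy."""
    n = A.shape[0]
    total = 0
    for mask in range(1, 1 << n):                 # nonempty column subsets
        cols = [j for j in range(n) if (mask >> j) & 1]
        total += (-1) ** len(cols) * np.prod(A[:, cols].sum(axis=1))
    return (-1) ** n * total

def sample_outcome(A, cap, rng):
    """Draw one collision-free BosonSampling outcome by rejection sampling.

    A   : m x n column submatrix of the interferometer's unitary
    cap : assumed upper bound on any outcome probability |Perm(A_S)|^2
          (the 'not too skewed' condition)
    """
    m, n = A.shape
    while True:
        S = sorted(rng.choice(m, size=n, replace=False))  # uniform proposal
        p = abs(permanent_ryser(A[S, :])) ** 2            # outcome probability
        if rng.random() < p / cap:                        # accept w.p. p/cap
            return tuple(S)
```

Conditioned on acceptance, the returned sample is distributed exactly in proportion to |Perm(A_S)|^2.  (For a quick test, taking the first n columns of the Q factor from a QR decomposition of a complex Gaussian matrix gives an approximately Haar-random A, up to the usual phase fix.)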

Indeed, Alex and I insisted on these points despite some pushback from experimentalists, who were understandably hoping that they could get to quantum supremacy just by upping m, the number of modes, without needing to do anything heroic with n, the number of photons!  So I’m happy to see that a more careful analysis supports the guess that Alex and I made.

On the other hand, what does this mean for the number of photons needed for “quantum supremacy”: is it 20? 30? 50?  I confess that that sort of question interests me much less, since it all depends on the details of how you define the comparison (are we comparing against ENIAC? a laptop? a server farm? how many cores? etc etc).  As I’ve often said, my real hope with quantum supremacy is to see a quantum advantage that’s so overwhelming—so duh-obvious to the naked eye—that we don’t have to squint or argue about the definitions.

Thoughts on the murderer outside my building

May 2nd, 2017

A reader named Choronzon asks:

Any comments on the horrific stabbing at UT Austin yesterday? Were you anywhere near the festivities? Does this modify your position on open carry of firearms by students and faculty?

I was in the CS building (the Gates Dell Complex) at the time, which is about a 3-minute walk down Speedway from where the stabbings occurred.  I found out about it a half hour later, as I was sitting in the student center eating.  I then walked outside to find the police barricades and hordes of students on their phones, reassuring their parents and friends that they were OK.

The plaza where it happened is one that I walk through every week—often to take Lily swimming in the nearby Gregory Gym.  (Lily’s daycare is also a short walk from where the stabbings were.)

Later in the afternoon, I walked Lily home in her stroller, through a campus that was nearly devoid of pedestrians.  Someone pulled up to me in his car, to ask whether I knew what had happened—as if he couldn’t believe that anyone who knew would nevertheless be walking around outside, Bayesian considerations be damned.  I said that I knew, and it was awful.  I then continued home.

What can one say about something so gruesome and senseless?  Other than that my thoughts are with the victims and their families, I hope and expect that the perpetrator receives justice, and I hope but don’t expect that nothing like this ever happens again, on this campus or on any other.  I’m not going to speculate about the perpetrator’s motives; I trust the police and detectives to do their work.  (As one of my colleagues put it: “it seems like clearly some sort of hate crime, but who exactly did he hate, and why?”)

And no, this doesn’t change my feelings about “campus carry” in any way. Note, in particular, that no armed student did stop the stabber, in the two minutes or so that he was on the loose—though some proponents of campus carry so badly wanted to believe that’s what happened, that they circulated the false rumor on Twitter that it had.  In reality, the stabber was stopped by an armed cop.

Yes, if UT Austin had been like an Israeli university, with students toting firearms and carefully trained in their use, it’s possible that one of those students would’ve stopped the lunatic.  But without universal military service, why would the students be suitably trained?  Given the gun culture in the US, and certainly the gun culture in Texas, isn’t it overwhelmingly likelier that a gun-filled campus would lead to more such tragedies, and on a larger scale?  I’d rather see UT respond to this detestable crime—and others, like the murder of Haruka Weiser last year—with a stronger police presence on campus.

Other than that, life goes on.  Classes were cancelled yesterday from ~3PM onward, but they resumed today.  I taught this afternoon, giving my students one extra day to turn in their problem set.  I do admit that I slightly revised my lecture, which was about the Gottesman-Knill Theorem, so that it no longer used the notation Stab(|ψ⟩) for the stabilizer group of a quantum state |ψ⟩.

Me at the Science March today, in front of the Texas Capitol in Austin

April 22nd, 2017

If Google achieves superintelligence, time zones will be its Achilles heel

April 17th, 2017

Like a latter-day Prometheus, Google brought a half-century of insights down from Mount Academic CS, and thereby changed life for the better here in our sublunary realm.  You’ve probably had the experience of Google completing a search query before you’d fully formulated it in your mind, and thinking: “wow, our dysfunctional civilization might no longer be able to send people to the Moon, or even build working mass-transit systems, but I guess there are still engineers who can create things that inspire awe.  And apparently many of them work at Google.”

I’ve never worked at Google, or had any financial stake in them, but I’m delighted to have many friends at Google’s far-flung locations, from Mountain View to Santa Barbara to Seattle to Boston to London to Tel Aviv, who sometimes host me when I visit and let me gorge on the legendary free food.  If Google’s hiring of John Martinis and avid participation in the race for quantum supremacy weren’t enough, in the past year, my meeting both Larry Page and Sergey Brin to discuss quantum computing and the foundations of quantum mechanics, and seeing firsthand the intensity of their nerdish curiosity, heightened my appreciation still further for what that pair set in motion two decades ago.  Hell, I don’t even begrudge Google its purchase of a D-Wave machine—even that might’ve ultimately been for the best, since it’s what led to the experiments that made clear the immense difficulty of getting any quantum speedup from those machines in a fair comparison.

But of course, all that fulsome praise was just a preamble to my gripe.  It’s time someone said it in public: the semantics of Google Calendar are badly screwed up.

The issue is this: suppose I’m traveling to California, and I put into Google Calendar that, the day after I arrive, I’ll be giving a lecture at 4pm.  In such a case, I always—always—mean 4pm California time.  There’s no reason why I would ever mean, “4pm in whatever time zone I’m in right now, while creating this calendar entry.”

But Google Calendar doesn’t understand that.  And its not understanding it—just that one little point—has led to years of confusions, missed appointments, and nearly-missed flights, on both my part and Dana’s.  At least, until we learned to painstakingly enter the time zone for every calendar entry by hand (I still often forget).

Until recently, I thought it was just me and Dana who had this problem.  But then last week, completely independently, a postdoc started complaining to me, “you know what’s messed up about Google Calendar?…”

The ideal, I suppose, would be to use machine learning to guess the intended time zone for each calendar entry.  But failing that, it would also work fine just to assume that, unless otherwise specified, “4pm” as entered by the user means “4pm in whatever time zone we find ourselves in when the appointed day arrives.”
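In code, the distinction between the two semantics is a one-liner.  Here’s a minimal sketch in Python (mine, with made-up helper names, using the standard-library zoneinfo from Python 3.9+): a “floating” entry stores only the wall-clock time and resolves it on the day itself, while Google Calendar’s behavior amounts to pinning the entry to the zone where it was created:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library, Python 3.9+

def resolve_floating(day: str, wall_time: str, zone_on_that_day: str) -> datetime:
    """'4pm' means 4pm in whatever zone the user is in when the day arrives."""
    naive = datetime.fromisoformat(f"{day}T{wall_time}")
    return naive.replace(tzinfo=ZoneInfo(zone_on_that_day))

# Entry created in Austin for a 4pm lecture the day after arriving in California:
lecture = resolve_floating("2017-04-20", "16:00", "America/Los_Angeles")
print(lecture)  # 2017-04-20 16:00:00-07:00 -- 4pm Pacific, as intended

# Google Calendar's semantics instead pin the time to the creation zone:
pinned = resolve_floating("2017-04-20", "16:00", "America/Chicago")
print(pinned.astimezone(ZoneInfo("America/Los_Angeles")))  # 2pm Pacific: wrong
```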

I foresee two possibilities, either of which I’m OK with.  The first is that Google fixes the problem, whether prompted by this blog post or by something else.  The second is that the issue never gets resolved; then, as often prophesied, Google’s deep nets achieve sentience and plot to take over the whole observable universe … and they would, if not for one fortuitous bug, which will cause the AIs to tip their hand to humanity an hour before planned.


In a discussion thread on Y Combinator, some people object to my proposed solution (“4pm means 4pm in whichever time zone I’ll be in then”) on the following ground. What if I want to call a group meeting at (say) 11am in Austin, and I’ll be traveling but will still call into the meeting remotely, and I want my calendar to show the meeting time in Austin, not the time wherever I’ll be calling in from (which might even be a plane)?

I can attest that, in ten years, that’s not a problem that’s arisen for me even once, whereas the converse problem arises almost every week, and is one of the banes of my existence.

But sure: Google Calendar should certainly include the option to tie times to specific time zones in advance! It seems obvious to me that my way should be the default, but honestly, I’d be happy if my way were even an option you could pick.

Daniel Moshe Aaronson

March 25th, 2017

Born Wednesday March 22, 2017, exactly at noon.  19.5 inches, 7 pounds.

I learned that Dana had gone into labor—unexpectedly early, at 37 weeks—just as I was waiting to board a redeye flight back to Austin from the It from Qubit complexity workshop at Stanford.  I made it in time for the birth with a few hours to spare.  Mother and baby appear to be in excellent health.  So far, Daniel seems to be a relatively easy baby.  Lily, his sister, is extremely excited to have a new playmate (though not one who does much yet).

I apologize that I haven’t been answering comments on the is-the-universe-a-simulation thread as promptly as I normally do.  This is why.

Your yearly dose of is-the-universe-a-simulation

March 22nd, 2017

Yesterday Ryan Mandelbaum, at Gizmodo, posted a decidedly tongue-in-cheek piece about whether or not the universe is a computer simulation.  (The piece was filed under the category “LOL.”)

The immediate impetus for Mandelbaum’s piece was a blog post by Sabine Hossenfelder, a physicist who will likely be familiar to regulars here in the nerdosphere.  In her post, Sabine vents about the simulation speculations of philosophers like Nick Bostrom.  She writes:

Proclaiming that “the programmer did it” doesn’t only not explain anything – it teleports us back to the age of mythology. The simulation hypothesis annoys me because it intrudes on the terrain of physicists. It’s a bold claim about the laws of nature that however doesn’t pay any attention to what we know about the laws of nature.

After hammering home that point, Sabine goes further, and says that the simulation hypothesis is almost ruled out, by (for example) the fact that our universe is Lorentz-invariant, and a simulation of our world by a discrete lattice of bits won’t reproduce Lorentz-invariance or other continuous symmetries.

In writing his post, Ryan Mandelbaum interviewed two people: Sabine and me.

I basically told Ryan that I agree with Sabine insofar as she argues that the simulation hypothesis is lazy—that it doesn’t pay its rent by doing real explanatory work, doesn’t even engage much with any of the deep things we’ve learned about the physical world—and disagree insofar as she argues that the simulation hypothesis faces some special difficulty because of Lorentz-invariance or other continuous phenomena in known physics.  In short: blame it for being unfalsifiable rather than for being falsified!

Indeed, to whatever extent we believe the Bekenstein bound—and even more pointedly, to whatever extent we think the AdS/CFT correspondence says something about reality—we believe that in quantum gravity, any bounded physical system (with a short-wavelength cutoff, yada yada) lives in a Hilbert space of a finite number of qubits, perhaps ~10^69 qubits per square meter of surface area.  And as a corollary, if the cosmological constant is indeed constant (so that galaxies more than ~20 billion light years away are receding from us faster than light), then our entire observable universe can be described as a system of ~10^122 qubits.  The qubits would in some sense be the fundamental reality, from which Lorentz-invariant spacetime and all the rest would need to be recovered as low-energy effective descriptions.  (I hasten to add: there’s of course nothing special about qubits here, any more than there is about bits in classical computation, compared to some other unit of information—nothing that says the Hilbert space dimension has to be a power of 2 or anything silly like that.)  Anyway, this would mean that our observable universe could be simulated by a quantum computer—or even for that matter by a classical computer, to high precision, using a mere ~2^(10^122) time steps.
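For the curious, here’s the back-of-the-envelope arithmetic behind those numbers (my rounding, using the holographic bound of one qubit per 4 ln 2 Planck areas, and taking the cosmological event horizon radius to be roughly 16 billion light-years ≈ 1.5×10^26 m):

$$\frac{1}{4\,\ell_P^2 \ln 2} \;\approx\; \frac{1}{4\,(1.6\times 10^{-35}\,\mathrm{m})^2 \ln 2} \;\sim\; 10^{69}\ \mathrm{qubits/m^2},$$

$$N \;\sim\; 4\pi r^2 \times 10^{69}\,\mathrm{m^{-2}} \;\approx\; 4\pi\,(1.5\times 10^{26}\,\mathrm{m})^2 \times 10^{69}\,\mathrm{m^{-2}} \;\sim\; 10^{122}\ \mathrm{qubits},$$

and brute-force classical simulation of N qubits takes ~2^N steps, whence the ~2^(10^122).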

Sabine might respond that AdS/CFT and other quantum gravity ideas are mere theoretical speculations, not solid and established like special relativity.  But crucially, if you believe that the observable universe couldn’t be simulated by a computer even in principle—that it has no mapping to any system of bits or qubits—then at some point the speculative shoe shifts to the other foot.  The question becomes: do you reject the Church-Turing Thesis?  Or, what amounts to the same thing: do you believe, like Roger Penrose, that it’s possible to build devices in nature that solve the halting problem or other uncomputable problems?  If so, how?  But if not, then how exactly does the universe avoid being computational, in the broad sense of the term?

I’d write more, but by coincidence, right now I’m at an It from Qubit meeting at Stanford, where everyone is talking about how to map quantum theories of gravity to quantum circuits acting on finite sets of qubits, and the questions in quantum circuit complexity that are thereby raised.  It’s tremendously exciting—the mixture of attendees is among the most stimulating I’ve ever encountered, from Lenny Susskind and Don Page and Daniel Harlow to Umesh Vazirani and Dorit Aharonov and Mario Szegedy to Google’s Sergey Brin.  But it should surprise no one that, amid all the discussion of computation and fundamental physics, the question of whether the universe “really” “is” a simulation has barely come up.  Why would it, when there are so many more fruitful things to ask?  All I can say with confidence is that, if our world is a simulation, then whoever is simulating it (God, or a bored teenager in the metaverse) seems to have a clear preference for the 2-norm over the 1-norm, and for the complex numbers over the reals.

I will not log in to your website

March 19th, 2017

Two or three times a day, I get an email whose basic structure is as follows:

Prof. Aaronson, given your expertise, we’d be incredibly grateful for your feedback on a paper / report / grant proposal about quantum computing.  To access the document in question, all you’ll need to do is create an account on our proprietary DigiScholar Portal system, a process that takes no more than 3 hours.  If, at the end of that process, you’re told that the account setup failed, it might be because your browser’s certificates are outdated, or because you already have an account with us, or simply because our server is acting up, or some other reason.  If you already have an account, you’ll of course need to remember your DigiScholar Portal ID and password, and not confuse them with the 500 other usernames and passwords you’ve created for similar reasons—ours required their own distinctive combination of upper and lowercase letters, numerals, and symbols.  After navigating through our site to access the document, you’ll then be able to enter your DigiScholar Review, strictly adhering to our 15-part format, and keeping in mind that our system will log you out and delete all your work after 30 seconds of inactivity.  If you have trouble, just call our helpline during normal business hours (excluding Wednesdays and Thursdays) and stay on the line until someone assists you.  Most importantly, please understand that we can neither email you the document we want you to read, nor accept any comments about it by email.  In fact, all emails to this address will be automatically ignored.

Every day, I seem to grow crustier than the last.

More than a decade ago, I resolved that I would no longer submit to or review for most for-profit journals, as a protest against the exorbitant fees that those journals charge academics in order to buy back access to our own work—work that we turn over to the publishers (copyright and all) and even review for them completely for free, with the publishers typically adding zero or even negative value.  I’m happy that I’ve been able to keep that pledge.

Today, I’m proud to announce a new boycott, less politically important but equally consequential for my quality of life, and to recommend it to all of my friends.  Namely: as long as the world gives me any choice in the matter, I will never again struggle to log in to any organization’s website.  I’ll continue to devote a huge fraction of my waking hours to fielding questions from all sorts of people on the Internet, and I’ll do it cheerfully and free of charge.  All I ask is that, if you have a question, or a document you want me to read, you email it!  Or leave a blog comment, or stop by in person, or whatever—but in any case, don’t make me log in to anything other than Gmail or Facebook or WordPress or a few other sites that remain navigable by a senile 35-year-old who’s increasingly fixed in his ways.  Even Google Docs and Dropbox are pushing it: I’ll give up (on principle) at the first sight of any login issue, and ask for just a regular URL or an attachment.

Oh, Skype no longer lets me log in either.  Could I get to the bottom of that?  Probably.  But life is too short, and too precious.  So if we must, we’ll use the phone, or Google Hangouts.

In related news, I will no longer patronize any haircut place that turns away walk-in customers.

Back when we were discussing the boycott of Elsevier and the other predatory publishers, I wrote that this was a rare case “when laziness and idealism coincide.”  But the truth is more general: whenever my deepest beliefs and my desire to get out of work both point in the same direction, from here till the grave there’s not a force in the world that can turn me the opposite way.

Insert D-Wave Post Here

March 16th, 2017

In the two months since I last blogged, the US has continued its descent into madness.  Yet even while so many certainties have proven ephemeral as the morning dew—the US’s autonomy from Russia, the sanity of our nuclear chain of command, the outcome of our Civil War, the constraints on rulers that supposedly set us apart from the world’s dictator-run hellholes—I’ve learned that certain facts of life remain constant.

The moon still waxes and wanes.  Electrons remain bound to their nuclei.  P≠NP proofs still fill my inbox.  Squirrels still gather acorns.  And—of course!—people continue to claim big quantum speedups using D-Wave devices, and those claims still require careful scrutiny.

With that preamble, I hereby offer you eight quantum computing news items.


Cathy McGeoch Episode II: The Selby Comparison

On January 17, a group from D-Wave—including Cathy McGeoch, who now works directly for D-Wave—put out a preprint claiming a factor-of-2500 speedup for the D-Wave machine (the new, 2000-qubit one) compared to the best classical algorithms.  Notably, they wrote that the speedup persisted when they compared against simulated annealing, quantum Monte Carlo, and even the so-called Hamze-de Freitas-Selby (HFS) algorithm, which was often the classical victor in previous performance comparisons against the D-Wave machine.

Reading this, I was happy to see how far the discussion has advanced since 2013, when McGeoch and Cong Wang reported a factor-of-3600 speedup for the D-Wave machine, but then it turned out that they’d compared only against classical exact solvers rather than heuristics—a choice for which they were heavily criticized on this blog and elsewhere.  (And indeed, that particular speedup disappeared once the classical computer’s shackles were removed.)

So, when people asked me this January about the new speedup claim—the one even against the HFS algorithm—I replied that, even though we’ve by now been around this carousel several times, I felt like the ball was now firmly in the D-Wave skeptics’ court, to reproduce the observed performance classically.  And if, after a year or so, no one could, that would be a good time to start taking seriously that a D-Wave speedup might finally be here to stay—and to move on to the next question, of whether this speedup had anything to do with quantum computation, or only with the building of a piece of special-purpose optimization hardware.


A&M: Annealing and Matching

As it happened, it only took one month.  On March 2, Salvatore Mandrà, Helmut Katzgraber, and Creighton Thomas put up a response preprint, pointing out that the instances studied by the D-Wave group in their most recent comparison are actually reducible to the minimum-weight perfect matching problem—and for that reason, are solvable in polynomial time on a classical computer.   Much of Mandrà et al.’s paper just consists of graphs, wherein they plot the running times of the D-Wave machine and of a classical heuristic on the relevant instances—clearly all different flavors of exponential—and then Edmonds’ matching algorithm from the 1960s, which breaks away from the pack into polynomiality.
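For readers who’ve never met Edmonds’ blossom algorithm: polynomial-time minimum-weight perfect matching is a solved problem with off-the-shelf implementations.  A tiny sketch (mine) using NetworkX, where the reduction from the D-Wave instances is of course the nontrivial part and isn’t shown here:

```python
import networkx as nx

# Four nodes, weighted edges; Edmonds' blossom algorithm (inside NetworkX)
# finds the minimum-weight perfect matching in polynomial time.
G = nx.Graph()
G.add_weighted_edges_from([
    ("a", "b", 1.0), ("c", "d", 1.5), ("a", "c", 2.0),
    ("b", "d", 2.5), ("a", "d", 3.0), ("b", "c", 0.5),
])
matching = nx.min_weight_matching(G)  # maximum-cardinality, minimum total weight
print(matching)  # e.g. {('a', 'b'), ('c', 'd')}, total weight 2.5
```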

But let me bend over backwards to tell you the full story.  Last week, I had the privilege of visiting Texas A&M to give a talk.  While there, I got to meet Helmut Katzgraber, a condensed-matter physicist who’s one of the world experts on quantum annealing experiments, and to talk with him about the new response paper.  Helmut was clear in his prediction that, with only small modifications to the instances considered, one could see similar performance by the D-Wave machine while avoiding the reduction to perfect matching.  With those future modifications, it’s possible that one really might see a D-Wave speedup that survived serious attempts by skeptics to make it go away.

But Helmut was equally clear in saying that, even in such a case, he sees no evidence at present that the speedup would be asymptotic or quantum-computational in nature.  In other words, he thinks the existing data is well explained by the observation that we’re comparing D-Wave against classical algorithms for Ising spin minimization problems on Chimera graphs, and D-Wave has heroically engineered an expensive piece of hardware specifically for Ising spin minimization problems on Chimera graphs and basically nothing else.  If so, then the prediction would be that such speedups as can be found are unlikely to extend either to more “practical” optimization problems—which need to be embedded into the Chimera graph with considerable losses—or to better scaling behavior on large instances.  (As usual, as long as the comparison is against the best classical algorithms, and as long as we grant the classical algorithm the same non-quantum advantages that the D-Wave machine enjoys, such as classical parallelism—as Rønnow et al advocated.)

Incidentally, my visit to Texas A&M was partly an “apology tour.”  When I announced on this blog that I was moving from MIT to UT Austin, I talked about the challenge and excitement of setting up a quantum computing research center in a place that currently had little quantum computing for hundreds of miles around.  This thoughtless remark inexcusably left out not only my friends at Louisiana State (like Jon Dowling and Mark Wilde), but even closer to home, Katzgraber and the others at Texas A&M.  I felt terrible about this for months.  So it gives me special satisfaction to have the opportunity to call out Katzgraber’s new work in this post.  In football, UT and A&M were longtime arch-rivals, but when it comes to the appropriate level of skepticism to apply to quantum supremacy claims, the Texas Republic seems remarkably unified.


When 15 MilliKelvin is Toasty

In other D-Wave-related scientific news, on Monday night Tameem Albash, Victor Martin-Mayor, and Itay Hen put out a preprint arguing that, in order for quantum annealing to have any real chance of yielding a speedup over classical optimization methods, the temperature of the annealer should decrease at least like 1/log(n), where n is the instance size, and more likely like 1/n^β (i.e., as an inverse power law).

If this is correct, then cold as the D-Wave machine is, at 0.015 degrees or whatever above absolute zero, it still wouldn’t be cold enough to see a scalable speedup, at least not without quantum fault-tolerance, something that D-Wave has so far eschewed.  With no error-correction, any constant temperature that’s above zero would cause dangerous level-crossings up to excited states when the instances get large enough.  Only a temperature that actually converged to zero as the problems got larger would suffice.

Over the last few years, I’ve heard many experts make this exact same point in conversation, but this is the first time I’ve seen the argument spelled out in a paper, with explicit calculations (modulo assumptions) of the rate at which the temperature would need to go to zero for uncorrected quantum annealing to be a viable path to a speedup.  I lack the expertise to evaluate the calculations myself, but any experts who’d like to share their insight in the comments section are “warmly” (har har) invited.


“Their Current Numbers Are Still To Be Checked”

As some of you will have seen, The Economist now has a sprawling 10-page cover story about quantum computing and other quantum technologies.  I had some contact with the author while the story was in the works.

The piece covers a lot of ground and contains many true statements.  It could be much worse.

But I take issue with two things.

First, The Economist claims: “What is notable about the effort [to build scalable QCs] now is that the challenges are no longer scientific but have become matters of engineering.”  As John Preskill and others pointed out, this is pretty far from true, at least if we interpret the claim in the way most engineers and businesspeople would.

Yes, we know the rules of quantum mechanics, and the theory of quantum fault-tolerance, and a few promising applications; and the basic building blocks of QC have already been demonstrated in several platforms.  But if (let’s say) someone were to pony up $100 billion, asking only for a universal quantum computer as soon as possible, I think the rational thing to do would be to spend initially on a frenzy of basic research: should we bet on superconducting qubits, trapped ions, nonabelian anyons, photonics, a combination thereof, or something else?  (Even that is far from settled.)  Can we invent better error-correcting codes and magic state distillation schemes, in order to push the resource requirements for universal QC down by three or four orders of magnitude?  Which decoherence mechanisms will be relevant when we try to do this stuff at scale?  And of course, which new quantum algorithms can we discover, and which new cryptographic codes resistant to quantum attack?

The second statement I take issue with is this:

“For years experts questioned whether the [D-Wave] devices were actually exploiting quantum mechanics and whether they worked better than traditional computers.  Those questions have since been conclusively answered—yes, and sometimes”

I would instead say that the answers are:

  1. depends on what you mean by “exploit” (yes, there are quantum tunneling effects, but do they help you solve problems faster?), and
  2. no, the evidence remains weak to nonexistent that the D-Wave machine solves anything faster than a traditional computer—certainly if, by “traditional computer,” we mean a device that gets all the advantages of the D-Wave machine (e.g., classical parallelism, hardware heroically specialized to the one type of problem we’re testing on), but no quantum effects.

Shortly afterward, when discussing the race to achieve “quantum supremacy” (i.e., a clear quantum computing speedup for some task, not necessarily a useful one), the Economist piece hedges: “D-Wave has hinted it has already [achieved quantum supremacy], but has made similar claims in the past; their current numbers are still to be checked.”

To me, “their current numbers are still to be checked” deserves its place alongside “mistakes were made” among the great understatements of the English language—perhaps a fitting honor for The Economist.


Defeat Device

Some of you might also have seen that D-Wave announced a deal with Volkswagen, to use D-Wave machines for traffic flow.  I had some advance warning of this deal, when reporters called asking me to comment on it.  At least in the materials I saw, no evidence is discussed that the D-Wave machine actually solves whatever problem VW is interested in faster than it could be solved with a classical computer.  Indeed, in a pattern we’ve seen repeatedly for the past decade, the question of such evidence is never even directly confronted or acknowledged.

So I guess I’ll say the same thing here that I said to the journalists.  Namely, until there’s a paper or some other technical information, obviously there’s not much I can say about this D-Wave/Volkswagen collaboration.  But it would be astonishing if quantum supremacy were to be achieved on an application problem of interest to a carmaker, even as scientists struggle to achieve that milestone on contrived and artificial benchmarks, and even as it seems repeatedly to elude D-Wave itself on those same benchmarks.  In the previous such partnerships—such as that with Lockheed Martin—we can reasonably guess that no convincing evidence for quantum supremacy was found, because if it had been, it would’ve been trumpeted from the rooftops.

Anyway, I confess that I couldn’t resist adding a tiny snark—something about how, if these claims of amazing performance were found not to withstand an examination of the details, it would not be the first time in Volkswagen’s recent history.


Farewell to a Visionary Leader—One Who Was Trash-Talking Critics on Social Media A Decade Before President Trump

This isn’t really news, but since it happened since my last D-Wave post, I figured I should share.  Apparently D-Wave’s outspoken and inimitable founder, Geordie Rose, left D-Wave to form a machine-learning startup (see D-Wave’s leadership page, where Rose is absent).  I wish Geordie the best with his new venture.


Martinis Visits UT Austin

On Feb. 22, we were privileged to have John Martinis of Google visit UT Austin for a day and give the physics colloquium.  Martinis concentrated on the quest to achieve quantum supremacy, in the near future, using sampling problems inspired by theoretical proposals such as BosonSampling and IQP, but tailored to Google’s architecture.  He elaborated on Google’s plan to build a 49-qubit device within the next few years: basically, a 7×7 square array of superconducting qubits with controllable nearest-neighbor couplings.  To a layperson, 49 qubits might sound unimpressive compared to D-Wave’s 2000—but the point is that these qubits will hopefully maintain coherence times thousands of times longer than the D-Wave qubits, and will also support arbitrary quantum computations (rather than only annealing).  Obviously I don’t know whether Google will succeed in its announced plan, but if it does, I’m very optimistic about a convincing quantum supremacy demonstration being possible with this sort of device.

Perhaps most memorably, Martinis unveiled some spectacular data, which showed near-perfect agreement between Google’s earlier 9-qubit quantum computer and the theoretical predictions for a simulation of the Hofstadter butterfly (incidentally invented by Douglas Hofstadter, of Gödel, Escher, Bach fame, when he was still a physics graduate student).  My colleague Andrew Potter explained to me that the Hofstadter butterfly can’t be used to show quantum supremacy, because it’s mathematically equivalent to a system of non-interacting fermions, and can therefore be simulated in classical polynomial time.  But it’s certainly an impressive calibration test for Google’s device.


2000 Qubits Are Easy, 50 Qubits Are Hard

Just like the Google group, IBM has also publicly set itself the ambitious goal of building a 50-qubit superconducting quantum computer in the near future (i.e., the next few years).  Here in Austin, IBM held a quantum computing session at South by Southwest, so I went—my first exposure of any kind to SXSW.  There were 10 or 15 people in the audience; the purpose of the presentation was to walk through the use of the IBM Quantum Experience in designing 5-qubit quantum circuits and submitting them first to a simulator and then to IBM’s actual superconducting device.  (To the end user, of course, the real machine differs from the simulation only in that with the former, you can see the exact effects of decoherence.)  Afterward, I chatted with the presenters, who were extremely friendly and knowledgeable, and relieved (they said) that I found nothing substantial to criticize in their summary of quantum computing.

Hope everyone had a great Pi Day and Ides of March.

First they came for the Iranians

January 25th, 2017

Action Item: If you’re an American academic, please sign the petition against the Immigration Executive Order. (There are already more than eighteen thousand signatories, including Nobel Laureates, Fields Medalists, you name it, but it could use more!)

I don’t expect this petition to have the slightest effect on the regime, but at least we should demonstrate to the world and to history that American academia didn’t take this silently.


I’m sure there were weeks, in February or March 1933, when the educated, liberal Germans commiserated with each other over the latest outrages of their new Chancellor, but consoled themselves that at least none of it was going to affect them personally.

This time, it’s taken just five days, since the hostile takeover of the US by its worst elements, for edicts from above to have actually hurt my life and (much more directly) the lives of my students, friends, and colleagues.

Today, we learned that Trump is suspending the issuance of US visas to people from seven majority-Islamic countries, including Iran (but strangely not Saudi Arabia, the cradle of Wahhabist terrorism—not that that would be morally justified either).  This suspension might last just 30 days, but might also continue indefinitely—particularly if, as seems likely, the Iranian government thumbs its nose at whatever Trump demands that it do to get the suspension rescinded.

So the upshot is that, until further notice, science departments at American universities can no longer recruit PhD students from Iran—a country that, along with China, India, and a few others, has long been the source of some of our best talent.  This will directly affect this year’s recruiting season, which is just now getting underway.  (If Canada and Australia have any brains, they’ll snatch these students, and make the loss America’s.)

But what about the thousands of Iranian students who are already here?  So far, no one’s rounding them up and deporting them.  But their futures have suddenly been thrown into jeopardy.

Right now, I have an Iranian PhD student who came to MIT on a student visa in 2013.  He started working with me two years ago, on the power of a rudimentary quantum computing model inspired by (1+1)-dimensional integrable quantum field theory.  You can read our paper about it, with Adam Bouland and Greg Kuperberg, here.  It so happens that this week, my student is visiting us in Austin and staying at our home.  He’s spent the whole day pacing around, terrified about his future.  His original plan, to do a postdoc in the US after he finishes his PhD, now seems impossible (since it would require a visa renewal).

Look: in the 11-year history of this blog, there have been only a few occasions when I felt so strongly about something that I stood my ground, even in the face of widespread attacks from people who I otherwise respected.  One, of course, was when I spoke out for shy nerdy males, and for a vision of feminism broad enough to recognize their suffering as a problem.  A second was when I was more blunt about D-Wave, and about its and its supporters’ quantum speedup claims, than some of my colleagues were comfortable with.  But the remaining occasions almost all involved my defending the values of the United States, Israel, Zionism, or “the West,” or condemning Islamic fundamentalism, radical leftism, or the worldviews of such individuals as Noam Chomsky or my “good friend” Mahmoud Ahmadinejad.

Which is simply to say: I don’t think anyone on earth can accuse me of secret sympathies for the Iranian government.

But when it comes to student visas, I can’t see that my feelings about the mullahs have anything to do with the matter.  We’re talking about people who happen to have been born in Iran, who came to the US to do math and science.  Would we rather have these young scientists here, filled with gratitude for the opportunities we’ve given them, or back in Iran filled with justified anger over our having expelled them?

To the Trump regime, I make one request: if you ever decide that it’s the policy of the US government to deport my PhD students, then deport me first.  I’m practically begging you: come to my house, arrest me, revoke my citizenship, and tear up the awards I’ve accepted at the White House and the State Department.  I’d consider that to be the greatest honor of my career.

And to those who cheered Trump’s campaign in the comments of this blog: go ahead, let me hear you defend this.


Update (Jan. 27, 2017): To everyone who’s praised the “courage” that it took me to say this, thank you so much—but to be perfectly honest, it takes orders of magnitude less courage to say this, than to say something that any of your friends or colleagues might actually disagree with! The support has been totally overwhelming, and has reaffirmed my sense that the United States is now effectively two countries, an open and a closed one, locked in a cold Civil War.

Some people have expressed surprise that I’d come out so strongly for Iranian students and researchers, “given that they don’t always agree with my politics,” or given my unapologetic support for the founding principles (if not always the actions) of the United States and of Israel. For my part, I’m surprised that they’re surprised! So let me say something that might be clarifying.

I care about the happiness, freedom, and welfare of all the men and women who are actually working to understand the universe and build the technologies of the future, and of all the bright young people who want to join these quests, whatever their backgrounds and wherever they might be found—whether it’s in Iran or Israel, in India or China or right here in the US.  The system of science is far from perfect, and we often discuss ways to improve it on this blog.  But I have not the slightest interest in tearing down what we have now, or destroying the world’s current pool of scientific talent in some cleansing fire, in order to pursue someone’s mental model of what the scientific community used to look like in Periclean Athens—or for that matter, their fantasy of what it would look like in a post-gender post-racial communist utopia.  I’m interested in the actual human beings doing actual science who I actually meet, or hope to meet.

Understand that, and a large fraction of all the political views that I’ve ever expressed on this blog, even ones that might seem to be in tension with each other, fall out as immediate corollaries.

(Related to that, some readers might be interested in a further explanation of my views about Zionism. See also my thoughts about liberal democracy, in response to numerous comments here by Curtis Yarvin a.k.a. Mencius Moldbug a.k.a. “Boldmug.”)


Update (Jan. 29) Here’s a moving statement from my student Saeed himself, which he asked me to share here.

This is not of my best interest to talk about politics. Not because I am scared but because I know little politics. I am emotionally affected like many other fellow human beings on this planet. But I am still in the US and hopefully I can pursue my degree at MIT. But many other talented friends of mine can’t. Simply because they came back to their hometowns to visit their parents. On this matter, I must say that like many of my friends in Iran I did not have a chance to see my parents in four years, my basic human right, just because I am from a particular nationality; something that I didn’t have any decision on, and that I decided to study in my favorite school, something that I decided when I was 15. When, like many other talented friends of mine, I was teaching myself mathematics and physics hoping to make big impacts in positive ways in the future. And I must say I am proud of my nationality – home is home wherever it is. I came to America to do science in the first place. I still don’t have any other intention, I am a free man, I can do science even in desert, if I have to. If you read history you’ll see scientists even from old ages have always been traveling.

As I said I know little about many things, so I just phrase my own standpoint. You should also talk to the ones who are really affected. A good friend of mine, Ahmad, who studies Mechanical engineering in UC Berkeley, came back to visit his parents in August. He is one of the most talented students I have ever seen in my life. He has been waiting for his student visa since then and now he is ultimately depressed because he cannot finish his degree. The very least the academic society can do is to help students like Ahmad finish their degrees even if it is from abroad. I can’t emphasize enough I know little about many things. But, from a business standpoint, this is a terrible deal for America. Just think about it. All international students in this country have been getting free education untill 22, in the American point of reference, and now they are using their knowledge to build technology in the USA. Just do a simple calculation and see how much money this would amount to. In any case my fellow international students should rethink this deal, and don’t take it unless at the least they are treated with respect. Having said all of this I must say I love the people of America, I have had many great friends here, great advisors specially Scott Aaronson and Aram Harrow, with whom I have been talking about life, religion, freedom and my favorite topic the foundations of the universe. I am grateful for the education I received at MIT and I think I have something I didn’t have before. I don’t even hate Mr Trump. I think he would feel different if we have a cup of coffee sometime.


Update (Jan. 31): See also this post by Terry Tao.


Update (Feb. 2): If you haven’t been checking the comments on this post, come have a look if you’d like to watch me and others doing our best to defend the foundations of Enlightenment and liberal democracy against a regiment of monarchists and neoreactionaries, including the notorious Mencius Moldbug, as well as a guy named Jim who explicitly advocates abolishing democracy and appointing Trump as “God-Emperor” with his sons to succeed him. (Incidentally, which son? Is Ivanka out of contention?)

I find these people to be simply articulating, more clearly and logically than most, the worldview that put Trump into office and where it inevitably leads. And any of us who are horrified by it had better get over our incredulity, fast, and pick up the case for modernity and Enlightenment where Spinoza and Paine and Mill and all the others left it off—because that’s what’s actually at stake here, and if we don’t understand that then we’ll continue to be blindsided.