Alexander (“Sasha”) Razborov is a Russian-American mathematician, the Andrew MacLeish Distinguished Service Professor at the University of Chicago (Figure 1). Sasha graduated from the famous Moscow Specialized High School for Mathematics No. 2 in 1980; he received a gold medal at the 1979 International Mathematical Olympiad. His doctoral advisor at his alma mater, Moscow State University, was Sergei Ivanovich Adian.

Figure 1.

Alexander Razborov. (Courtesy of the archives of the Mathematisches Forschungsinstitut Oberwolfach.)

Sasha works in combinatorial group theory, discrete mathematics, and theoretical computer science. His accolades include the Nevanlinna Prize (1990), Gödel Prize (2007), and David P. Robbins Prize (2013).

This interview was conducted by e-mail in the fall of 2021, thus before the outbreak of war in Ukraine. The questions were prepared by Karen Parshall and Sergei Tabachnikov, the Intelligencer’s editors-in-chief.

Our journal is a natural place for interviews with pathbreaking mathematicians, and we hope to publish them on a regular basis.

Alexander Razborov: Once we agreed upon this interview, Karen and Sergei worked out a suggested list of questions for it. They were quite considerate about this and made it very clear that they were only recommendations and that I could alter them in any way I might like. (In fact, they even cited an example of a stellar mathematician who started off by discarding a similar list entirely.) But after pondering their suggestions a bit, I realized that theirs was a very good list; it actually contains many questions that I have tried to answer for myself over the course of many decades. I do not claim to have complete and comprehensive answers to them, and honestly, given their open-ended nature, I do not think anyone does. Let me share some thoughts nonetheless, in the hope that, as David Hume once wrote, “they shall at least be natural and entertaining.”

Karen and Sergei: Could you give us a sense of your area(s) of research: discrete mathematics, theoretical computer science ...?

A. R. I will try not to tax the reader with extended personal stories here, so let me use up my quota right here at the start. My research career actually began in a different field, combinatorial group theory, when I was an undergraduate at Moscow State University under the supervision of Sergei Ivanovich Adian. I am definitely not ashamed of that student work: apparently, it has turned out at least somewhat useful in the breakthrough solution of Tarski’s problem by Olga Kharlampovich, Alexei Myasnikov, and Zlil Sela and in some follow-up papers in geometric group theory. But then the cards were dealt differently. Around the same time, the mid 1980s, I was reading Chang and Keisler’s great monograph [1] in model theory (Figure 2), and question 1 in the section “Open Problems in Classical Model Theory” caught my imagination. In the book, it is called the “finite spectrum problem” and seems remarkably innocent-looking.

Figure 2.

C. C. Chang and H. J. Keisler. Model Theory.

Imagine that you have a finite set ϕ of axioms expressible in the standard, that is, first-order, mathematical language that is supposed to axiomatize a class of algebraic or discrete structures. For example, ϕ can say that the structure in question is an abelian (or nonabelian) group, a field, or a regular graph in which the degrees of all vertices are equal to 2021. Then its spectrum Sp(ϕ) is the set of all integers n for which there exists a finite structure satisfying all axioms of ϕ (in other words, a model of ϕ) with precisely n elements. For example, when ϕ corresponds to groups or abelian groups, Sp(ϕ) is trivial; it consists of all positive integers. For fields, it is the set of all integers that are a power of a prime. For nonabelian groups, it is slightly trickier, but most likely a good description of the spectrum should not be very difficult to obtain. Anyway, a set of integers is called a spectrum if it has the form Sp(ϕ) for some ϕ.

It is an easy exercise to see that the union and the intersection of two spectra are also spectra. A very natural question to ask, then, is whether the complement ℕ∖Sp(ϕ) of a spectrum is always a spectrum as well; this is precisely the finite spectrum problem. To get a better feeling for how nontrivial it is, the reader is advised to think of the case of finite fields. That is, find a finite (first-order, no quantifiers on subsets!) set of axioms that happens to have a model of size n if and only if n has at least two prime divisors.
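To make the definition concrete with a small computation (my own illustration, relying on the classification of finite fields rather than on any model-theoretic machinery): a finite field of size n exists if and only if n is a prime power, so the spectrum of the field axioms can be listed directly for small n.

```python
# Enumerate the spectrum of the field axioms for small n. By the
# classification of finite fields, Sp(fields) = {p^k : p prime, k >= 1},
# so checking "is n a prime power?" suffices.

def is_prime_power(n):
    """True iff n = p**k for some prime p and k >= 1."""
    if n < 2:
        return False
    p = 2
    while p * p <= n:
        if n % p == 0:
            while n % p == 0:
                n //= p
            return n == 1   # n was a power of its smallest prime factor p
        p += 1
    return True             # n itself is prime

field_spectrum = [n for n in range(1, 31) if is_prime_power(n)]
print(field_spectrum)
# [2, 3, 4, 5, 7, 8, 9, 11, 13, 16, 17, 19, 23, 25, 27, 29]
```

Note that 6, 10, 12, ... are missing: axiomatizing a class whose spectrum is exactly these complementary sizes is the nontrivial exercise posed above.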

I was very young and daring at that time and really could not understand how one could be so dumb as to be unable to solve this very natural and innocent question with a minimal investment of effort. Of course, I immediately embarked on the project. For better or worse (my career might have taken a completely different turn otherwise), the book did not mention the paper [4], which had appeared precisely between the first edition of [1] and its Russian translation that I was perusing. In that beautiful paper, Neil Jones and Alan Selman proved that a set of integers represented in unary is a spectrum if and only if it belongs to the complexity class NP. This immediately turns the finite spectrum problem into an even harder problem (presuming a negative answer) than “merely” the P =? NP question. In modern terminology, it is expressed as NE =? co-NE.

After rediscovering that equivalence by myself and connecting it to then-modern terminology (for which I would like to acknowledge the substantial help of books, such as Michael Rabin’s wonderful survey [9], and of generous people, such as Sergey Adian and Alexei Semenov), I immediately took up the task of proving P =? NP in its more difficult form above. Looking back, if being young and daring needs any excuses, I actually have one. At that time (and remember, I am talking about the mid 1980s), there was no Clay Institute with its list of Millennium Problems, and the words P and NP were relatively fresh and did not sound as haunting and daunting as they do today. They just described another interesting and obviously very important problem that would probably be solved soon, just as its older siblings in classical computability theory had been.

Anyway, that is how theoretical computer science and later discrete mathematics entered my professional life. They have remained there ever since. As time went by, a realization steadily grew in the community (of which I already considered myself a member) that P =? NP and several similar major questions in complexity theory would likely turn out to be much longer-term projects than we had thought. But simultaneously, the field was expanding and branching out. Increasingly beautiful and novel concepts, problems, and connections were enriching the body of the “theory” (the loving and charmingly modest way adepts often style theoretical computer science).

So much for personal reminiscences. Let me now focus on the questions at hand: how to describe theoretical computer science (TCS) and discrete mathematics, how to place them in the family of mathematical disciplines, and how to explain what makes their position unique.

A comprehensive definition of these disciplines is hardly possible, and I am not sure it would be very useful or instructive. The most naive formulations—that TCS is “some peculiar kind of mathematics allegedly somewhat related to computers” and discrete mathematics is “the kind of math you do if you are interested only in finite objects”—would be as good as any others. They would also be as inaccurate in many cases.

Somewhat provocatively, one good way of giving a sense of these areas might hinge precisely on explaining why exact definitions are so elusive.

Let me start with TCS. The first thing to realize is that this discipline, as we know it today, is subdivided into two different threads that are to a certain extent disjoint. To prevent possible objections, let me note that I have not made this up, and that the distinction between “Track A” and “Track B” is rather official. It is stipulated both in one of the main journals serving the area, Theoretical Computer Science, and in the flagship European conference in theory, the International Colloquium on Automata, Languages and Programming. “Track B” is called “automata, logic, semantics, and theory of programming,” which gives a very good description of an area particularly popular in Europe and extremely well connected to “practical” computer science. But as exciting, important, and interesting as those topics are, I do not consider myself an expert in them and accordingly will not say anything more about them. The rest of my remarks will be entirely devoted to “Track A,” officially called “algorithms, complexity, and games” but understood more broadly to include an extremely wide array of various scenarios that have descended from “classical” algorithms and complexity.

Perhaps one of the biggest changes that has occurred over the last several decades is the establishment of wholly new areas aimed at capturing many aspects of networking and distributed computing. (In contrast, “classical” algorithms and complexity are primarily about single-machine computations.) It is therefore truly remarkable that no matter how far the field ventures, it remains generally faithful to two unifying principles that are clearly traceable to its roots.

First, a paper in theoretical computer science is a mathematical paper, that is, a collection of definitions, axioms (usually called “models”), lemmas, theorems, etc., stated and proved according to the standards of rigor prevailing across all of mathematics. The structures studied in theory might not be immediately recognized by “core” mathematicians (although this is changing for the better as the field matures), but the rules of the game are pretty much the same. If a paper does not conform to mathematical standards of rigor, it well might be beautiful, thought-provoking, or influential for many fields, TCS included, but it is not a paper in theoretical computer science.

The second principle concerns general methodology, and this is very specific to what I called Track A above. Again, no matter how far we go, there is usually a task to be performed, most often (but not always!) somewhat related to computing. There are many different ways to perform it, and it should be easy to show that there is at least one. Some of these ways are better than others, and normally there is a quantitative characteristic of performance, a complexity measure. After all of this is properly formalized, there usually appear two kinds of questions: either find a good, as determined by the complexity measure, way of performing the task (upper bounds, algorithms) or show that it is impossible (lower bounds, complexity theory).

One more reason why it is difficult to capture the meaning of TCS in precise words is the fact that there are theoretical “enclaves” in many neighboring communities. Some prominent examples that immediately come to mind are machine learning, optimization, quantum computing, and mathematical economics. All of these fields are really huge, and most of the research conducted in them is not necessarily mathematical. Nonetheless, they contain sizable theoretical communities, and here I do mean people who would likely identify themselves as belonging to those fields rather than occasional “TCS raiders.” I believe that it is this openness to the outside world and quite blurred notion of a “border” that accounts for much of the vitality of the discipline.

So far, I have said little about connections to more “classical” areas of mathematics itself, and this naturally brings us to the second subject, discrete mathematics. This discipline (almost synonymous with combinatorics) can be very loosely understood as the part of mathematics that deals with discrete structures. It would be fair to say that its development over, say, the last fifty years has been less dramatic than that of TCS. This, however, is perhaps only because combinatorics is much older; arguably the first published paper on the subject dates back almost 300 years to Euler [2].

Combinatorial structures naturally arise in many other areas of mathematics—to name just two examples, Cayley graphs and Coxeter–Dynkin diagrams—and in this context, combinatorics has been thriving for centuries. But the rapid advent of theoretical computer science also required the very extensive study of problems and structures that do not have immediate applications in “core” mathematics, and that part has also been booming, hand in hand with the corresponding parts of the theory. As before, there is no strict border between the “math-related” and “TCS-motivated” parts of discrete mathematics, and that largely contributes to its prosperity and health.

Let me briefly remark that while discrete mathematics indeed makes a major connection between TCS and “core” mathematics, it is by no means the only gateway. As theoretical computer science expands, it increasingly uses methods and concepts from algebra, geometry, probability theory, analysis, etc., directly. What is equally important is that in recent years, we have started to see very remarkable cases of payback, that is, when solutions to outstanding mathematical problems have been, at the very least, inspired by ideas in TCS. This subject is too huge to discuss here meaningfully; let me instead recommend the recent book [13] by Avi Wigderson on the topic (actually, on many other topics barely touched on above as well).

I began my response to this question with a personal story, so in conclusion, let me add one more brief personal remark. My own research in discrete mathematics was, for a long time, somewhat meager and “utilitarian” in the sense that it was very directly connected to specific problems in the theory I was interested in at the moment. This, however, has gradually changed over the past fifteen years or so, and currently a very significant portion of my work is devoted to discrete mathematics as a beautiful independent discipline with its own internal logic. The turning point, as I see it now, was my paper on flag algebras [10], which I will also mention below in other contexts.

K. P. and S. T. The most recent Abel Prize winners—László Lovász and Avi Wigderson—are in this field. Could you describe, in nontechnical terms, their contribution?

A. R. In a sense, much of the preceding text can be also used to answer this. Profound changes affecting the disciplines I attempted to describe are never a result of blind forces. Even a collective effort of many people still requires visionaries to direct and channel the effort. Besides being excellent researchers themselves, such visionaries should possess leadership qualities, penetrate deeply into the subject and its relations to other disciplines, have very good taste, etc. Lovász and Wigderson are exactly leaders of this kind, and the way theoretical computer science and discrete mathematics look today is, to a large extent, the result of their research input and guidance over many decades (Figure 3).

Figure 3.

László Lovász and Avi Wigderson. (Photo by Gert-Martin Greuel courtesy of the archives of the Mathematisches Forschungsinstitut Oberwolfach.)

But if I understood your question correctly, you are rather asking about concrete contributions. Given the context provided in the previous paragraph, trying to recount, in this brief interview, even the most important of them is a futile task. Let me then select one example in each case. While my selection is completely subjective, I know firsthand that both laureates count their leading role in the establishment of these areas among their most important achievements.

As noted above, one characteristic feature of discrete mathematics is that it studies discrete, as opposed to continuous, objects. A very important project begun by Lovász and several of his collaborators has as its goal, to put it bluntly, to defy this piece of conventional wisdom. The starting point is that for many good purposes, it is very natural to consider graphs and other combinatorial objects as imperfect approximations of certain continuous structures, in analogy (due to Lovász) to decimal fractions as approximations to real numbers. The resulting theory of graph limits is beautiful and elegant and, what is very important, provides deep, numerous, and very welcome connections to many other areas of “pure” mathematics and even physics: algebra, analysis, probability theory, statistical mechanics, ergodic theory, to name a few. Lovász coauthored essentially all of the subject’s foundational papers, and for readers who want to learn more, I recommend his monograph [6].

In Wigderson’s case, I highlight the theory of pseudorandomness. Randomized algorithms are everywhere, but since genuine randomness is hard (if not impossible) to come by, they are always, in practice, cast in terms of so-called pseudorandom bit generators. The hope is that if pseudorandom bits are deterministically generated in a sufficiently clever way, then they will be “almost as good as random,” in the sense that they will trick all randomized algorithms from a certain class into performing almost as if the bits were truly random. One of the main, “officially stated,” goals of the theory of pseudorandomness is to lay a mathematical foundation in the form of explicit constructions and rigorous results. It has turned out, however, that this theory greatly transcends its original purpose and has beautiful and unexpected connections to many other areas such as complexity theory, additive combinatorics, and, somewhat surprisingly, algebraic geometry. Wigderson is the main player and one of the founders of this exciting area; among many other things, he (co)invented one of the most influential constructions (the Nisan–Wigderson generator) and proved perhaps the most remarkable connection to classical complexity theory (the Impagliazzo–Wigderson theorem). For more details on this fascinating subject, see [13].
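As a purely illustrative toy (my own sketch, not any construction from the literature): a generator “stretches” a short random seed into many deterministic bits. In real constructions such as Nisan–Wigderson, the output bits come from evaluating a provably hard predicate on overlapping subsets of seed positions chosen via a combinatorial design; here both ingredients are replaced by naive stand-ins.

```python
# Toy seed-stretching sketch (illustrative only). Each output bit is the
# parity of a 3-element subset of seed bits; the overlapping subsets play
# the role that combinatorial designs play in real constructions, and
# parity stands in for a provably hard predicate.
from itertools import combinations

def parity(bits):
    return sum(bits) % 2

def toy_prg(seed):
    n = len(seed)
    return [parity([seed[i] for i in s]) for s in combinations(range(n), 3)]

seed = [1, 0, 1, 1, 0]   # 5 "truly random" bits
out = toy_prg(seed)      # stretched to C(5,3) = 10 deterministic bits
print(len(seed), "->", len(out))   # 5 -> 10
```

The whole mathematical content of the real theory lies in proving that no algorithm from a given class can tell such output bits from truly random ones; the toy above, of course, proves nothing.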

K. P. and S. T. When in high school, you were a top olympiad problem solver. What role do you think the International Mathematical Olympiad (IMO), in particular, and various level competitions, in general, play in attracting new talent to mathematics? Do mathematical contests have any shortcomings in your view? How do olympiads compare with other forms of working with talented and motivated students such as specialized schools, math circles, summer camps, internet-based extracurricular activities, and so on?

A. R. When I was in the seventh grade, I appeared in a movie. Not exactly the best film produced in the USSR (or even in that year), and my role was not exactly the leading role (though my name was in the credits). To my deep surprise, after this truly remarkable experience was over, I received a phone call from the studio inviting me to audition for their next movie. Since by that time, I had been selected to represent Moscow at the All-Union Math Olympiad, I turned them down, and that marked the end of my career as an actor. I still wonder whether I made the right decision (Figure 4).

Figure 4.

Sasha Razborov, early years.

On a more serious note, this is an enormously complex question (thank you, Karen and Sergei!), one that does not have an easy or universally accepted answer. Let me at least categorize it a bit.

First, with all due respect, I think that the form of these activities—timed competition, extracurricular, boarding, virtual, etc.—is not the most relevant classifying feature. You can either train kids for participation in math competitions, that is, for solving problems specifically designed for that purpose and with the competitions themselves as the heart of the matter, or you can try to teach the students serious research-level mathematics in an age-appropriate form. The border is not strict, and there are mixed forms. For example, I know of quite a few olympiad problems that were either directly inspired by serious mathematics or that used its tools, often in disguise, in their solutions. (See below for an example.) There is also recreational mathematics out there, although it seems to have gained less traction in these circles, probably due to an unwillingness to associate with endeavors that expressly position themselves as “just a hobby.” Still, I believe this is a good classification in most cases.

I sense little controversy about the idea of teaching kids reasonably modern mathematics in an age-appropriate form. No age is too early for math, and once a student is willing to receive the ideas and ready to absorb them, only good will come of it, particularly when an able educator is available.

Competitions are a different story, and as many readers are aware, opinions here differ dramatically, and discussion is sometimes heated. This issue, in my view, again naturally splits in two. How useful are math competitions for attracting new, beautiful minds to mathematics? And how useful are these competitions for the intellectual development of mathematicians-to-be?

You expressly asked about the first aspect, attraction, and that seems simpler. We are talking here about a very early stage in life when kids want to be many things, often at once. Challenges, and particularly competitive challenges, are definitely a very effective and appealing way of exposing talented students to their first math beyond the curriculum. This net for catching young minds is effective. Besides, this is not only about recruiting new generations of mathematicians; it is also about outreach, something I believe we should take seriously and not only because the National Science Foundation (NSF) demands it. Exposure to mathematics in any form is definitely good for everyone; it is a very important service we can and should provide to society.

As for the efficiency of the net, let me only remark that any statistics here are necessarily misleading, for there is no way to know for sure how many IMO and national olympiad winners, for example, would have become professional mathematicians without this stimulus. Equally hard to know is how many talented children started out doing competitions and then walked away from mathematics because they thought this was what the profession is about. The latter looks more hypothetical; I am not aware of such cases.

What is debated most hotly is the intellectual aspect of the question: how useful are math competitions for the development of future mathematicians? The effort necessary to advance to the IMO or a similar level is enormous. It is definitely way more than an “ordinary” adult full-time job, and that is in addition to school. Is it a waste of time, particularly at an age when time is precious? And if we cannot ban IMOs, at least for now, then what can we do to help to convert this experience into skills appropriate and necessary for professional life, and not necessarily in mathematics?

Let me take this opportunity to advertise the book [12], a collection of articles written by former IMO winners (and now professional mathematicians) that meaningfully tries to address this transition. As for my own opinion, I am afraid I will disappoint the reader. I have nothing like a definite answer. The only thing I am pretty sure about is that we should avoid extremes on this issue (actually, as well as on almost any other). Let me instead offer a few random remarks.

For me, doing math olympiads was definitely a lot of fun. At the very least, they help build character, and I definitely do not regret my career in that world. Still, olympiads have been criticized on many levels: for their competitiveness, their time restrictions, the artificiality of their problems, and the nature of their solution methods, among others. The concept of competitiveness (not against nature, or whatever we mathematicians study, but against our colleagues) is definitely not entirely foreign to the profession. As for time restrictions, they may not be as extreme in “real life” as in the setting of an olympiad, but isn’t time management key in building a tenure case or in speeding up the preparation of a paper in order to have it ready by a conference deadline or for a grant application? As for the nature of the problems and their solution techniques, I can assure the reader, first, that in the olympiad community there is a very strong sense of “good” and “bad” problems that is not so dissimilar to the concept of “good” or “bad” mathematics, and second, that there are “toolkits,” just as in many areas of serious mathematics, with which many problems can be solved. The most inspiring problems, however, require new approaches. Perhaps the main difference between mathematics as encountered in the olympiads and mathematics encountered by mathematicians is a certain lack of continuity: with rare exceptions, olympiad problems do not build upon each other to form a coherent picture; they largely exist in isolation.

The only loose conclusion I would like to draw here is that perhaps a lot depends on the individual in question. After all, there are two equally respected and tightly interrelated styles of doing mathematics: “visionary” (building new structures, finding new connections, etc.) and solving difficult concrete problems posed by others. Perhaps a previous life in math competition circles is slightly more relevant to mathematicians whose research tends to be of the second kind.

Let me finish with the problem I alluded to above, but let me warn the reader that in my time (and I am talking about the community composed of several USSR IMO teams!), this problem was deemed a “coffin,” that is, to use the olympiad community’s slang, it is a problem in the range “from very difficult to impossibly difficult.” A bunch of football players are trying to split themselves into two mutually agreeable teams, and they start with an arbitrary distribution. Some pairs of them are friends. Assume for simplicity that every player has an odd number of friends. When the coach blows a whistle, every player moves to the opposite side of the field if that is where the majority of his friends are, and they keep doing this indefinitely. An important twist is that at each move, they move simultaneously. Prove that eventually, every player either always stays on one side of the field or always runs to the opposite side. Hint: look up (synchronous, symmetric) Hopfield networks.
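For readers who would like to experiment with the dynamics before proving anything, here is a small simulation (an empirical check only, not a proof; the complete graph on four players and the random initial split are arbitrary choices of mine):

```python
# Synchronous "move to where the majority of your friends are" dynamics.
# With every degree odd there are no ties, and the claim (a property of
# synchronous, symmetric Hopfield networks) is that the orbit eventually
# has period 1 or 2.
import random

def step(sides, friends):
    new = {}
    for v, fr in friends.items():
        opposite = sum(1 for u in fr if sides[u] != sides[v])
        # move iff a strict majority of friends are on the other side
        new[v] = 1 - sides[v] if 2 * opposite > len(fr) else sides[v]
    return new

def eventual_period(sides, friends, max_steps=1000):
    seen = {}
    state = tuple(sorted(sides.items()))
    for t in range(max_steps):
        if state in seen:
            return t - seen[state]   # length of the cycle that was reached
        seen[state] = t
        sides = step(dict(state), friends)
        state = tuple(sorted(sides.items()))
    return None

random.seed(0)
# complete graph on 4 players: everyone has 3 (an odd number of) friends
friends = {v: [u for u in range(4) if u != v] for v in range(4)}
sides = {v: random.randint(0, 1) for v in range(4)}
print(eventual_period(sides, friends))   # 1 or 2, per the problem's claim
```

A period of 1 means every player eventually stays put; a period of 2 means some players run back and forth forever, which is exactly the dichotomy the problem asks you to prove.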

K. P. and S. T. What do you think about computer-verified proofs and, more generally, about the future role of AI in mathematical research?

A. R. “Computer-verified proofs” can mean several different things. For instance, one can take an entirely human proof and work on formalizing it, perhaps with a bit of effort, in a compromise language in which it can be subjected to formal verification. There are many prominent efforts of this kind, with several promising approaches based on the theory of univalent foundations pioneered by the late Volodya Voevodsky (Figure 5).

Figure 5.

Vladimir Voevodsky (1966–2017). (Courtesy of the archives of the Mathematisches Forschungsinstitut Oberwolfach.)

But I somehow feel you are asking about something else that has already become a part of our professional lives, whether we like it or not. I mean computer-assisted (or computer-aided) proofs that as a result of this assistance, either cannot, in principle, be verified by hand or have a verification that is so boring and time-consuming that no one, including the authors themselves, will ever do it. Let me first address the purely pragmatic aspect of this: whether we, as a community, should recognize such things as legitimate proofs.

I may be a bit naive, because I was not involved in the actual verification of the most famous precedents, the four color theorem and the Kepler conjecture (although I was involved in debates in one of those cases). Still, it seems to me that a significant part of the controversy can be attributed to the usually implicit assumption that human mathematics is infallible (“human supremacy” as it will undoubtedly be called one day). We know too well from the history of our discipline that this is not the case. At the end of the nineteenth century, the four color conjecture, which had generated much interest, was considered solved for more than ten years with two independent erroneous proofs. I am sure that many readers could contribute similarly entertaining examples from their own areas, and I think the best we can hope for is to set more or less universally accepted norms of behavior (that is, standards of rigor) that would reduce the chance of an error while still keeping mathematical texts manageable.

With a bit of good will and humility, shouldn’t it be easy to extrapolate these standards into the territory of computers without lowering them? Say, I do not think that a piece of code (cf. “lemma”) written in a major programming language (cf. English/French/German/Russian) and accompanied by a deductive argument establishing its correctness (“the proof of a lemma”) is any more prone to errors than a conventional proof. This is particularly so when it can be executed using a reasonable amount of computing resources. Like any other code, it can be badly written, of course, but shall we honestly admit that awfully written conventional papers with great results are also not entirely unheard of in mathematics?

In fact, computer code might be less prone to error. Let me give one example from personal experience. The direction that stemmed from the paper [10] (flag algebras for extremal combinatorics) requires, both in the work of others and in my own, a lot of computation using semidefinite programming, which results in humanly unverifiable proofs. But it also often involves concrete manipulations with rational or algebraic functions, their derivatives, etc., that are tedious but certainly doable by hand. At some point, I realized that I preferred to delegate such things to Maple. It will never forget a term, a minus sign, or the coefficient 2 in the middle of a tiring computation. I am much less sure about myself.
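To illustrate the kind of delegation described (sympy is used here merely as a stand-in for Maple, and the function is an arbitrary example of mine):

```python
# Illustrative only: a computer algebra system differentiates and
# simplifies a rational function exactly. Unlike a tired human, it will
# not lose a minus sign or a stray factor of 2 along the way.
import sympy as sp

x = sp.symbols('x')
f = (x**2 - 1) / (x**2 + 1)
df = sp.simplify(sp.diff(f, x))   # mathematically equal to 4*x/(x**2 + 1)**2
print(df)
```

Attaching such a worksheet to a paper makes the tedious part of the argument mechanically reproducible by any reader.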

But I may be preaching to the choir, since according to my observations, this is quickly becoming, or has become, the norm. Invoking personal experience again, I have about five published papers that are accompanied by a Maple worksheet that is essential to the proofs; so far, no one has complained. One might attribute this to the fact that theoretical computer science and discrete mathematics are naturally more receptive to innovations of this sort, but there are also extended computing packages in combinatorial group theory, modular forms, and elliptic curves, and these are only in the fields of which I am aware.

Another popular objection to computer-assisted proofs is that they are “not instructive,” which is probably true in most cases. Let me just briefly remark, however, that the computer part usually does not constitute all, or even most, of the proof, and besides, in conventional mathematical papers, auxiliary statements (“lemmas”) also tend to be pretty boring when taken out of context.

The second part of the question is more interesting, since as far as I understand it, this is about the future role of AI in the “assistance” part and what the chances are that one day it will become dominant.

Given what has happened to chess, go, and Jeopardy!, one must be careful about predictions. Perhaps one difference between those games and mathematics, as well as our grounds for moderate optimism, is that our rules are more or less set only in terms of what makes a deductive argument a valid mathematical proof. The concept of “good” mathematics—what makes an interesting or influential paper, etc.—is evolving; it is much more subjective, and in general, these things are agreed upon much less. We definitely should hold the upper hand here, and if worst comes to worst, we can always declare the kind of mathematics in which we are beaten by AI “uninteresting.” In a sense, chess made a mistake (this actually started around the time of Wilhelm Steinitz, that is, at the end of the nineteenth century) in making the supreme goal to win the game rather than to produce, as Steinitz once wrote, “an evergreen in the laurel crown” of the player that would be remembered for centuries.

More seriously, as of now, this all seems to be more about the constructive, fruitful, and symbiotic relation between a human mathematician and a computer. This already takes different forms, and I expect it to evolve further to include new forms that are hard to anticipate. It remains to be seen whether AI will be able to solve, in the “plug-and-play” mode, a prominent open problem that strong human mathematicians have diligently attacked.

K. P. and S. T. Do you think that funding for the mathematical sciences is adequate? What, if anything, can be done to improve the attitude of society toward mathematics?

A. R. Funding means different things in different countries. In Russia, for example, there exists an extended network of purely research institutions, inherited from the former Soviet Union, in which the main obligation of the faculty is to conduct research. In the last several decades, however, this has been slowly shifting toward the Western model. For example, teaching is now strongly encouraged. Almost all of my colleagues at the Steklov Mathematical Institute, both in Moscow and in St. Petersburg, have secondary appointments in leading universities. The Steklov Institute has its own educational (albeit graduate-level) center, etc. As another example, several years ago, the Russian government approved a well-funded (at least by math standards) program of international mathematical centers; four of them are located in several major cities, and nine are regional.

The idea, as I see it (there were clashing views on the subject), is to blend the traditional format of a Soviet research institution with more Western forms such as thematic programming, short- and long-term visitors and visiting positions, extended PhD and postdoctoral programs, conferences and other large-scale events, etc. But despite these trends, it would still be fair to say that the main principle is sustained: many faculty members in Russia earn their salaries for doing research, and these days, those salaries can be substantially supplemented by grants. So the question of funding takes a very practical form: how many professional mathematicians does society need, and how many can it afford?

In the USA, teaching and administrative service factor prominently into salary considerations, and the overall number of mathematicians is largely regulated by this demand. At research-oriented institutions, however, research is also expected. “Funding” mainly means “external funding.” Access to it can improve professional life by, say, leading to new collaborations, and its lack can make it less productive or even miserable. But usually this is not a matter of life and death for mathematicians, since our research seldom depends on expensive equipment, at least not in a critical way.

I somehow feel it would be a bit inappropriate for me to offer extensive views on adequacy, since I am in a much better position than many of my colleagues. Subjectively, I am not very much of a “traveling scientist,” being more of an introvert. Objectively, I am blessed to be at a university in which the ability to advise students is strictly separated from having access to external funds (which is extremely good for students, since their intellectual pursuits can change during their course of study). My university views this policy as an integral part of its culture. Moreover, my department has a renowned extended postdoctoral program that is equally emancipated from the grants of individual faculty members. So let me venture to offer a general remark instead, without drawing any definite conclusions.

Mathematics is a very cheap science, in the sense that it is inexpensive. But this word has another, unfortunate, meaning (both in English and in Russian, in fact), and this connotation is totally unfair to us. After all, other scientific disciplines depend on and use the outcomes of our research directly and do so with great enthusiasm. It might be good for us to be somewhat aggressive in explaining to society at large and decision-makers in particular that the mathematics zealously used in myriad contexts today came out of thin air just about as much as money grows on trees. No one can responsibly predict what kind of abstract mathematics will suddenly become relevant to the sciences, engineering, or finance tomorrow; such examples abound. Significantly oversimplifying (I have not bothered to process the actual NSF figures), suppose there are two decisions to make: to fund fundamental mathematics at the level of 1% or 2% or to fund at the level of 50% or 70% a hot area that is constantly in the news and that, among other things, heavily uses mathematics. Question: which of these would be more appealing or exciting to a public (political) figure (with the other one then likely to fall between the cracks as insignificant)? The answer is clearly the second, but for long-term impact on society, the first decision is at least as important, even if it might require a mental note that the value after the dollar sign is not always the right measure of importance. Considering it from a slightly different angle, mathematics is an excellent investment for very little money.

As for the “attitude” part, again I may be naive, or simply lucky, but neither in the USA nor in Russia have I observed a deterioration in attitudes toward mathematics or noted any particular disrespect when people learn about my profession. Might the feeling that there has been some sort of deterioration in attitude not be relative? I mean, if you look, say, at the absolute number of news headlines and articles about mathematics, the number of column inches devoted to it, even movies or plays about mathematics and mathematicians, it seems to be steadily growing. Wasn’t Grigori Perelman as popular as a pop star for quite a while (and weren’t many people, as a result, trying to understand what a 3-manifold is)? Is it not simply the fact that mathematics’ exposure is growing much more slowly than that of the incredible amount of “other stuff” in this information age? (Whether this distributed attack on our intelligence is beneficial for society is beyond the scope of this interview.) Maybe this is what gives the impression that mathematics is less appreciated these days.

But all that said, we should keep doing good work in terms of outreach. This is our obligation, and it is also very good for the discipline.

K. P. and S. T. It has been said many times (for example, by I. M. Gelfand) that it should be possible to explain a good mathematical result to a proverbial person on the street. Can you give an example of one of your results that can be so explained?

A. R. This one is easy (I mean the choice of the topic, not the proof itself). Assume that you have $n = 10^8$ (or still better, $10^{100}$) people. They can befriend each other; there are $\binom{n}{2} \approx \frac{n^2}{2}$ possible bonds. If they then form committees of three, there are at most $\binom{n}{3} \approx \frac{n^3}{6}$ such committees. We require that a committee may be formed only if at least two of its members are friends. Assuming that you allow 30% of all $\binom{n}{2}$ possible friendships, what is the maximal percentage of committees that can be formed?

The answer is very close to 71.3%, and if your imaginary person on the street is already satisfied, let me offer several more details, since I do not want to create the impression that it is “yet another olympiad problem.”

1. If instead of maximizing the number of legitimate committees the goal is to minimize it, this becomes a special case of the classical Kruskal–Katona theorem.

2. As should be clear from my frivolous treatment of the parameter $n$, it is of little importance as long as it is large enough. One of the cornerstone ideas of the project on graph limits I wrote about before is that it is percentages (they may also be called fractions, densities, or probabilities) that matter, not absolute sizes of the object [6]. This idea is shared by the theory of flag algebras begun in [10], which is specifically tailored for problems in extremal combinatorics. The solution of the “proverbial person” problem in [11] was the first concrete application of that theory.

3. So as $n \to \infty$, we have a well-defined function $f(\nu)$ that is the maximal fraction of legitimate committees you can get by making a fraction $\nu$ of the bonds. The bound $f(\nu) \le \nu(3 - 2\nu)$ has been known since 1959 [3], and it is tight when $\nu = \frac12, \frac13, \ldots, \frac1n, \ldots$. The conjecture as to what happens in between these critical values was made (for the first time, as far as I know) by László Lovász and Miklós Simonovits in 1983 [7], and this is exactly what was proved in my paper of 2008 [11]. The function $f(\nu)$ is piecewise smooth, with singularities at the critical values. For instance, when $\nu \in [\frac14, \frac13]$, we have $f(\nu) = \frac14 + \frac{3\nu}{2} + \frac{(12\nu - 3)^{3/2}}{36}$. In particular, $f(\frac{3}{10}) = \frac{7}{10} + \frac{\sqrt{15}}{300} \approx 0.713$.

4. Extremal configurations, that is, constructions maximizing the overall number of legitimate committees, are also known, and they were proved to be the only ones in [5, 8]. Say, when $\nu \in (\frac14, \frac13)$, we have to divide the people into four groups, three of equal size and the fourth smaller, and then select all pairs within each group as bonds. The edge density uniquely determines the sizes of the groups. Interestingly, this is not the only optimal construction: it can be significantly deformed without changing the fractions of bonds/committees. This is perhaps what accounts for the difficulty of proofs in this area.

5. Finally, a warning to the reader who may choose to take a look at some of these papers. In the literature, the problem is usually represented in the dual form, that is, as minimizing the fraction of triangles when the fraction of edges is given. I flipped edges and nonedges in my presentation to make the figures slightly more presentable.
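For readers who like to check such formulas by machine, here is a minimal numerical sanity check (the function names are my own, not from the papers cited): it evaluates the closed-form expression for $f(\nu)$ at $\nu = 3/10$ and compares it with the committee fraction achieved by the four-clique construction sketched in item 4.

```python
import math

# Closed-form value of f on [1/4, 1/3], as given in item 3:
# f(nu) = 1/4 + (3/2) nu + (12 nu - 3)^(3/2) / 36.
def f_closed_form(nu):
    return 0.25 + 1.5 * nu + (12 * nu - 3) ** 1.5 / 36

# The construction from item 4: three cliques of relative size a and
# one of size b = 1 - 3a, with a chosen so that the bond density
# 3a^2 + b^2 equals nu (taking the root with b <= a).  The only
# illegitimate committees are triples meeting three distinct cliques,
# whose fraction is 6(a^3 + 3 a^2 b).
def f_construction(nu):
    a = (3 + math.sqrt(12 * nu - 3)) / 12
    b = 1 - 3 * a
    return 1 - 6 * (a ** 3 + 3 * a ** 2 * b)

print(f_closed_form(0.3))    # ~0.71291, i.e., 7/10 + sqrt(15)/300
print(f_construction(0.3))   # agrees: the construction is extremal
```

For any $\nu$ in $[\frac14, \frac13]$ the two values agree up to floating-point error, consistent with the clique partition being an extremal configuration.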

K. P. and S. T. You moved from Russia to the USA. Many other mathematicians made the same move, particularly shortly before or after the collapse of the Soviet Union. What effect do you think this brain drain had on mathematics in Russia?

A. R. This question looks rhetorical to me, since the impact is clearly negative. But on the positive side, it has been significantly offset by several gains, some of which I have hinted at above. Let me briefly elaborate.

Emigration, including scientific emigration, is something very different now from what it was in the “good old days.” I vividly remember a farewell party we threw in high school for a friend who was permanently leaving in the late 1970s. The overall mood was like sending to Mars a colonist who had neither the intention nor the technical means to return. It was odd to meet him some fifteen years later at the San Francisco airport (and then many times thereafter, both in the USA and in Moscow), virtually unchanged.

Those feelings are now a thing of the past. With mathematicians busily swarming across borders in all directions, the simple truth is that Russia is now a full-fledged member of the global mathematical community, which also includes, whether you like it or not, the international job market. You are definitely right in suggesting that so far at least, market forces have mostly worked against Russian institutions. But this is changing, albeit slowly. Several prominent mathematicians in my area have chosen to return to Russia after several years spent in the West. There are also examples of mathematical hires in Russia of those with no previous Russian connections: in my own, relatively small, area, a young researcher accepted just such a position in St. Petersburg in 2021, likely owing to the leading research group that has been built there (as well, of course, to the natural beauty of the city).

The number of visitors has also increased by (literally!) several orders of magnitude. They come, moreover, in capacities ranging from speaking at a conference to serving for several years as head of one of the so-called megagrants. They bring with them indispensable experience, perspective, and expertise.

Last, but definitely not least, the majority of scientists in the Russian “diaspora” feel sympathetic toward the country they once left and are ready to invest a significant amount of effort to become involved in various ways. What is important is that this effort is appreciated on the Russian side; the “traitors” mentality is also long gone. I myself have a part-time appointment at the Steklov Institute (which, in practice, means a summer job) and try to be helpful whenever I can. I am not the only one by far. There are extremely prominent mathematicians investing much more effort, for example in the organization of ICM22 in St. Petersburg.

In lieu of a conclusion, perhaps it is freedom of choice that actually matters. The world with the “one-way ticket” and the “it is us or them” mentality is one thing, and precisely thirty years after the unsuccessful August coup in Moscow, let us hope it is gone forever. In the world of communicating vessels (in the physics sense), most decisions are reversible; something agreeable always bubbles up, even if in forms previously unanticipated.

K. P. and S. T. A related question: do you find cultural differences between doing mathematics in Russia (as you remember it) and in the USA and, more generally, in the West?

A. R. Well, I am blessed to be able to remember last summer, which is the last time I was “doing mathematics” in my office at the Steklov Institute.

More seriously, as I tried to describe above, these days, the differences really appear to be very insignificant and constantly diminishing, as long as we are talking about intellectual aspects of the profession (funding and administrative practices are different matters). Your question is more interesting in its historical aspect, and perhaps this is what you actually meant.

Since this interview is full of truisms anyway, concluding with one more may not make matters much worse. Mathematics has existed as a more or less continuous human endeavor for more than 2500 years if we count from the time of Pythagoras and his school. That is a long time for a culture to evolve that is resilient to temporary (should I perhaps say temporal?) fluctuations in the structure of society. This time has been well spent. Returning to the matter at hand, I was amazed to realize that even given all the enormous differences between practically everything else, we were doing the same mathematics in the “good old days” as our Western colleagues, guided by the same basic principles. So, the differences were never enormous, and the only observations (not very original) I would like to contribute pertain to the famous notion of a “named Soviet scientific school.”

The concept of a recognized leader in an area of science, with many scientific descendants, determining, to a certain extent, the development of the discipline is certainly not a Russian invention. It is inherent in the nature of mathematics, again beginning with Pythagoras. I wrote about two good examples above. But in the West, and particularly in the USA, a “school” is seldom a geographic notion; it is usually scattered over many places on the globe. The general immobility in the Soviet Union, combined with the system of academic research institutes, led to the very interesting phenomenon of geographically compact schools, those where the “core” is in the same unit administratively headed by the leader. There are good examples in the West as well, particularly in Europe, but in the Soviet Union, it happened on a much larger scale. It has been much debated whether this system was good or not so good, and compelling arguments have been made on both sides. I personally think that it may very much depend on the context, on the qualities of the school leader, on his/her openness, on the vitality of the discipline, etc.

Let me conclude with a continuation of the personal story I opened with. I was in the school of my former advisor, Sergei Adian, whose own area of research was (he passed away in 2020) combinatorial group theory (Figure 6). I have indirect reasons to believe that he had strong hopes that I would continue the “family business.” Instead, as I noted above, I deserted it, and rather soon. In the almost forty years of intense communication and cooperation on all kinds of topics that followed, I do not remember a single instance in which Adian expressed even the slightest displeasure with my decision. Somewhat relevantly perhaps, he had made many personal friends in the USA and Germany at a time when this was considered exotic, to say the least. I am far from being sure that all school leaders in Soviet mathematics were that liberally minded.

Figure 6.

Sergei Adian (1931–2020), right, with his friend and colleague Frank Cannonito.