Excelentísimas e Ilustrísimas Autoridades,
Profesores y Profesoras, Señoras y Señores:
I am very sorry, but I must continue my speech in English. I hope this will not cause you any inconvenience.
I am deeply honored by the ceremony we have just performed, the awarding to me of the title of Doctor Honoris Causa by Miguel Hernandez University of Elche. I wish to thank my colleague, Professor Laureano Escudero, for conceiving this idea and initiating the process, and the members of the distinguished group that supported him and made this decision. Since the honor bestowed upon me is a recognition of my scientific and educational accomplishments, I would also like to express my gratitude to Carnegie Mellon University and its Graduate School of Industrial Administration, where I have been active for the last 35 years, for the wonderful environment that they have offered me and that has been crucial to my accomplishments. Finally, I would like to recognize the contribution to this work of my 50 coauthors, with many of whom I wrote several joint papers. They come from all over the world, and roughly half of them are my former students. Without them, this work would not have been possible.
I find it a fitting coincidence that this distinction is being conferred upon me by the University that bears the name of Miguel Hernandez, the poet who lost his life in what he perceived as the struggle for his country’s freedom. His fate has some echoes in my own life. I come from Northern Transylvania, the part of Romania which in 1940 was awarded by Hitler to Hungary. At the age of 19, in 1941, I joined the Hungarian communist underground to fight the Nazis. About the time when Hernandez died in his prison, I was organizing the distribution of clandestine antiwar leaflets in my home town. A couple of years later I was arrested, tortured and sentenced to 14 years in a fascist prison. Unlike Hernandez, I did not die in jail; I managed to escape, and after the war I became a high official in Communist Romania. But alas, my dreams of a social order based on justice and fairness were soon dispelled by the horrible reality of the totalitarian dictatorship that developed. I was soon arrested again, and spent two years in solitary confinement before being released as a consequence of Stalin’s death. I did not give up my earlier beliefs easily. For a while I tried to contribute as an economist to attempts at reforming the system. But in the end I became totally disillusioned and realized that the system was bankrupt beyond repair.
It was at that point, at the relatively advanced age of 37, that I embarked upon a change of career and returned to my old high-school hobby, mathematics. I was attracted to the new branch of applied mathematics called Operations Research, which had started during World War II and underwent rapid development throughout the fifties. From the beginning, the backbone of Operations Research was linear programming, the problem of optimizing a linear function subject to linear constraints. For a couple of years I worked on the kind of linear programs known as network and transportation problems. However, by 1963 I had become hung up on a different kind of problem, a so-called integer program, i.e. a linear program in which some or all of the variables are required to take integer values.
It is instructive to contemplate the circumstances that led me to this new problem. At the beginning I was not interested in integer programming, and the integrality condition on some variables did not strike me as crucial. But in 1963, while working at the Design Institute for Forestry and the Timber Industry in Bucharest, I came up against a forest management problem that opened my eyes to the practical importance of integer programming. I wanted to optimize the harvesting plan for an area of forests consisting of a set of plots with trees of different age and quality characteristics, subject to various constraints. As the profit from the operation was proportional to the amount of wood harvested, a linear programming formulation seemed to provide an adequate representation of the problem. However, while most of the costs related to the harvesting were roughly linear, there was a major cost item that was not linear at all. In order to gain access to the plots, one had to build a road network. Depending on which plots one intended to harvest, different segments of the road network had to be built, and these decisions were interrelated in a rather intricate fashion. If one wanted to cut any amount of wood from a certain plot, one had to build a road segment connecting that plot to one of its neighbors. The building of such a road segment in turn created the need to connect that neighboring plot to one of its other neighbors, etc. This led me to formulate the problem as a linear program in 0-1 variables, and at once opened my eyes to the richness of this toolkit: 0-1 variables could be used to formulate all kinds of logical conditions, whose presence in real world problems is pervasive, and more generally, they offered a universal way to deal with various types of discontinuities.
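The core of such a formulation can be sketched in a few lines. The notation here is illustrative, not the original model: $x_i = 1$ if plot $i$ is harvested, $y_e = 1$ if road segment $e$ is built, $p_i$ is the profit from plot $i$, $c_e$ the cost of segment $e$, and $N(i)$ the set of segments incident to plot $i$:

```latex
\max \; \sum_i p_i x_i \;-\; \sum_e c_e y_e
\quad \text{subject to} \quad
x_i \le \sum_{e \in N(i)} y_e \ \ \text{for all plots } i,
\qquad x_i, \, y_e \in \{0,1\}.
```

A full model would also have to chain the segments back to an existing road, so that every built segment is itself reachable; that is where the intricate interdependence described above enters.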
As there was no available computer code at the time to solve this type of problem, I devised my own approach, which I called the additive algorithm, because it involved only additions and comparisons. It became more widely known under the name of implicit enumeration. It was based on a series of logical tests exploring the implications of fixing certain variables at 0 or 1, and can therefore be viewed as the spiritual forerunner of what today’s computer scientists call constraint propagation, one of the main ingredients of constraint programming. It was easy to implement, it did not require a linear programming code, and it was able to solve more or less reliably problems with about 30 binary variables. It solved several versions of my forest management problem, which had about 40 variables.
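The flavor of the method can be conveyed by a short sketch in modern Python. This is a minimal reconstruction of the idea (fixing variables one by one, pruning by additions and comparisons), not the original algorithm with its full battery of logical tests; the instance data at the bottom is made up for illustration.

```python
def implicit_enumeration(c, A, b):
    """Minimize c.x subject to A.x >= b, x binary, assuming c >= 0
    (the additive algorithm's standard assumption). Uses only
    additions and comparisons: no linear programming is required."""
    n = len(c)
    best = [float("inf"), None]  # best cost found, best solution

    def search(k, x, cost):
        # Bound test: with c >= 0, the cost can only grow.
        if cost >= best[0]:
            return
        if k == n:  # all variables fixed: check feasibility
            if all(sum(ai * xi for ai, xi in zip(a, x)) >= bi
                   for a, bi in zip(A, b)):
                best[0], best[1] = cost, x[:]
            return
        # Feasibility test: can the free variables, at their most
        # favorable values, still satisfy every constraint?
        for a, bi in zip(A, b):
            fixed = sum(a[i] * x[i] for i in range(k))
            slack = sum(max(a[i], 0) for i in range(k, n))
            if fixed + slack < bi:
                return  # prune: this constraint cannot be met
        for v in (0, 1):  # branch on variable k
            x.append(v)
            search(k + 1, x, cost + c[k] * v)
            x.pop()

    search(0, [], 0)
    return best[0], best[1]

# Made-up instance: min 5x1 + 7x2 + 10x3 + 3x4  subject to
#   x1 + 3x2 - 5x3 +  x4 >= 2
#  2x1 - 6x2 + 3x3 + 2x4 >= 0
c = [5, 7, 10, 3]
A = [[1, 3, -5, 1], [2, -6, 3, 2]]
b = [2, 0]
print(implicit_enumeration(c, A, b))  # optimum: cost 8 at (1, 0, 0, 1)
```

On problems of the size mentioned above (a few dozen binary variables), this kind of enumeration with pruning tests is entirely practical even without any linear programming machinery.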
I was still in Romania in 1965 when my paper on the additive algorithm was published in Operations Research. A year later I managed to cross the Iron Curtain with my family, and by 1967 I was working at Carnegie Mellon University in Pittsburgh. In the meantime, that paper became highly popular: according to statistics published in 1982, it was the most frequently cited article published in Operations Research between 1954 and 1981.
The field that I thus joined, integer programming, had been created just a few years earlier. It arose from the need to model a variety of real world situations that involve discontinuities, either-or situations, etc. An amazing variety of activities and situations can be adequately modeled as linear programs (LP’s) or convex nonlinear programs (NLP’s). However, as the world that we inhabit is neither linear nor convex, the most common obstacle to the acceptability of these models is their inability to represent nonconvexities or discontinuities of different sorts. This is where integer programming comes into play: it is a universal tool for modeling discontinuities and nonconvexities of various kinds. Applications of integer programming abound in all spheres of decision making. Some typical real-world problem areas where integer programming is particularly useful as a modeling tool include facility (plant, warehouse, hospital, fire station) location; scheduling (of personnel, production, other activities); routing (of trucks, tankers, aircraft); design of communication (road, pipeline, telephone, computer) networks; capital budgeting; project selection; and analysis of capital development alternatives. Various problems in science (physics: the Ising spin glass problem; genetics: the sequencing of DNA segments) and medicine (optimizing tumor radiation patterns) have been successfully modeled as integer programs. In engineering (electrical, chemical, civil and mechanical) the sphere of applications is growing steadily. By far the most important special case of integer programming is the (pure or mixed) 0-1 programming problem, in which the integer-constrained variables are restricted to 0 or 1. This is so because a host of frequently occurring discontinuities, nonconvexities, logical alternatives, etc., can be modeled through the use of 0-1 variables.
There are essentially two ways to deal with integrality conditions. One is to intelligently enumerate a subset of the integer value assignments so as to implicitly examine all such assignments. This is best done, as proposed by Land and Doig, Dakin, and others, by using the linear programming relaxation to bound the optimum over each subproblem created by the successive value assignments — whence the name of branch and bound. The other approach, pioneered by Gomory, is to try to convexify the feasible set by identifying the convex hull of its integer points, or if that is too difficult, to approximate the convex hull by generating valid cutting planes, i.e. linear inequalities that cut off part of the LP relaxation but are satisfied by all feasible integer points.
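A one-line example conveys the idea of a cutting plane. Suppose the LP relaxation contains the valid inequality $2x_1 + 2x_2 \le 3$, with $x_1, x_2$ required to be integer (the data is hypothetical). Dividing by 2 and rounding down the right-hand side, which is legitimate for integer points, yields

```latex
x_1 + x_2 \;\le\; \left\lfloor \tfrac{3}{2} \right\rfloor \;=\; 1,
```

an inequality satisfied by every integer point of the relaxation but violated by fractional points such as $(x_1, x_2) = (\tfrac{3}{4}, \tfrac{3}{4})$, which it therefore cuts off.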
Around the late sixties and early seventies, implicit enumeration and branch and bound codes were successfully implemented and able to cope with small and sometimes medium-sized integer programs that arose in practice. These algorithms did not rely on any deep theoretical insights, nor did they offer much scope for theoretical research, and by their very nature they were of exponential complexity. However, they solved some reasonably sized practical problems. On the other hand, the cutting plane approach, which offered the prospect of reducing the apparently intractable integer programming problem to linear programming over the integer hull, attracted strong theoretical efforts which led to significant results concerning the properties of various classes of cutting planes, the characteristics of facet inducing inequalities, etc.; but all these results failed for a while to yield even moderately efficient algorithms and computer codes. Thus, while the bulk of theoretical work was focused on cutting plane theory, when it came to solving practical problems the only codes that were of any use were of the branch and bound type. This situation lasted until the late eighties, when cutting planes were finally found to be demonstrably useful in practice: combined with enumerative methods under the names of branch-and-cut or cut-and-branch (the two are not the same), by the mid-nineties they were shown to be able to solve many if not most of the problems on which branch and bound alone failed. This progress, which amounted to a decisive change in the state of the art, was made possible by the advent of more efficient linear programming codes and faster computers, but was primarily brought about by improved use of cutting planes: improvements in the cutting plane techniques themselves, and the combination of convexification with enumeration.
In this context, it was with considerable satisfaction that I experienced during the early nineties the success of the approach that became known as lift-and-project, and that had its roots in my work on disjunctive programming in the early seventies.
What is disjunctive programming and how did I come to it? As described above, I came to integer programming through the implicit enumeration/branch-and-bound entrance. However, I keenly felt the limitations of this approach, and already in the late sixties I turned to the study of convexification techniques. In particular, I explored the use of convex analysis techniques like polarity, maximal convex extensions, and projection. This led me to the concepts of intersection cuts and disjunctive cuts, and more generally to the study of optimization problems over unions of polyhedra, which I called disjunctive programming. Since polyhedra are convex, their union is in a way the simplest kind of non-convex structure that lends itself to fruitful study.
This is how I arrived at disjunctive programming as the study of optimization over unions of polyhedra. Zero-one programming is the main application of this approach and was the focus of my interest, but the scope of disjunctive programming is broader than that. There are two basic results on disjunctive programming that have immediate relevance to 0-1 programming, pure or mixed.
First, there is a compact representation of the convex hull of a union of polyhedra in a higher dimensional space. This representation involves a number of variables and constraints linear in the number of polyhedra. Projecting it back onto the space of the original variables one obtains the family of all valid inequalities for the disjunctive set. Alternatively, solving a linear program over the projection cone (after normalizing it) one can generate a member of the family that is strongest in a well-defined sense.
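In one standard notation (given here for bounded polyhedra $P_k = \{x : A^k x \le b^k\}$, $k = 1, \dots, q$; the symbols are illustrative), the higher dimensional representation reads

```latex
x = \sum_{k=1}^{q} x^k, \qquad
A^k x^k \le \lambda_k b^k \ \ (k = 1, \dots, q), \qquad
\sum_{k=1}^{q} \lambda_k = 1, \qquad \lambda \ge 0,
```

whose projection onto the $x$-space is $\mathrm{conv}\bigl(\bigcup_{k} P_k\bigr)$; the number of added variables and constraints is indeed linear in $q$.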
Second, an important family of disjunctive sets, called facial, can be convexified sequentially, i.e. by imposing the disjunctions one by one, each time deriving the convex hull of the points satisfying the current set of constraints. Zero-one programs are facial disjunctive programs, which means that they are sequentially convexifiable: the integer hull of a (pure or mixed) 0-1 program with p 0-1 variables can be obtained in p steps. General integer programs when viewed as disjunctive programs are not facial, and they are not sequentially convexifiable. Thus one of the first insights yielded by the disjunctive programming approach was to identify the main characteristic that distinguishes 0-1 programming (pure or mixed) from general integer programming.
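For a mixed 0-1 program with LP relaxation $P_0$ and 0-1 variables $x_1, \dots, x_p$, the sequential procedure can be written as

```latex
P_j = \mathrm{conv}\Bigl( \bigl(P_{j-1} \cap \{x : x_j = 0\}\bigr) \,\cup\, \bigl(P_{j-1} \cap \{x : x_j = 1\}\bigr) \Bigr),
\qquad j = 1, \dots, p,
```

with $P_p$ the integer hull. Faciality is what guarantees that the 0-1 conditions already imposed at earlier steps are not destroyed by the convexifications that follow.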
These results and others were laid out in a July 1974 technical report. As they were not backed up by computational results, they were not very well received at the time. With the exception of a few researchers like Jeroslow and Blair, who got interested and made their own contributions to disjunctive programming, the professional community at large remained rather skeptical. In fact my technical report, which I was unwilling to rewrite according to the taste of a referee, was not published at the time. Over the next 25 years I received quite a few requests from fellow researchers for Management Science Research Report No. 348, but it did not appear in print until the end of 1998, when it was published in Discrete Applied Mathematics as an invited paper with a nice foreword by two distinguished colleagues.
The Latin saying habent sua fata libelli (books have their own fate) apparently also applies to theories. While at its conception in the early-to-mid-seventies disjunctive programming got a cool reception, about 15 years later, when Ceria, Cornuéjols and I recast essentially the same results in a new framework which we called lift-and-project, the reaction was quite different. Our interest in returning to the ideas of disjunctive programming was prompted by the recent work of Lovász and Schrijver on matrix cones. We soon discovered that a streamlined version of the Lovász-Schrijver procedure was in fact isomorphic to the disjunctive programming procedure for generating the integer hull of a 0-1 program. Our work was focused on algorithmic aspects, and was accompanied by the development of an efficient computer code, MIPO (Mixed Integer Program Optimizer). We used MIPO to demonstrate that a particular version of disjunctive cuts, which we called lift-and-project cuts, derived in a subspace, then lifted and strengthened, when judiciously combined with branch and bound, could solve most of the mixed integer programs of MIPLIB that had been impervious to solution by branch and bound codes alone.
The success of lift-and-project triggered a revival of interest in cutting planes. It turned out that, when judiciously combined with enumeration, Gomory’s mixed integer cuts could also be successfully used in a 0-1 context, and since their implementation is straightforward, they made their way into the commercial codes in the late nineties.
Although the ideas of disjunctive programming and lift-and-project have their origin in the geometry of mixed 0-1 programs and the mathematics of convex analysis, much of my research throughout my career was inspired by real world problems and situations like the example given at the beginning of this talk. Thus, in the late sixties, job shop scheduling and machine sequencing problems triggered my interest in implicit enumeration on disjunctive graphs, and later led me and my students to develop the Shifting Bottleneck procedure which is widely used today as an efficient scheduling heuristic.
The idea of lift-and-project itself came to me from a practical situation. In 1980-81, while spending a sabbatical at the University of Köln, I was approached by a Dutch OR group that was trying to solve a driver scheduling problem for a municipal bus company. They were using a set covering model, but the resulting problem had more than 100,000 variables. Upon closer examination it turned out that there were morning duties and afternoon duties that could be scheduled separately, each of them giving rise to a set covering problem with a few hundred variables, i.e. of manageable size, provided one could impose the condition that morning and afternoon duties had to be compatible in order to be assignable to the same driver. These compatibility conditions could be expressed as a system of linear inequalities whose solutions define the perfectly matchable subgraphs of a bipartite graph. We derived this characterization with Bill Pulleyblank, who happened to be on sabbatical in Bonn at the time, by first guessing the required system of linear inequalities, and then proving the integrality of the resulting polyhedron by using what came to be known as extended formulation and projection. During the eighties and nineties this approach of extended formulation and projection was applied by us and others to many other situations.
Finally, a third example of how real world problems often lead to interesting theoretical developments concerns my work on the traveling salesman problem. In the mid-eighties I was approached by the OR group at LTV Steel about the scheduling of their hot rolling mill. Such a mill processes steel slabs into sheets by rolling, i.e. passing the hot slabs between two layers of rolls which flatten them. The order in which the slabs are processed affects both the quality of the product and the efficiency of the process. Scheduling the rolling mill consists of selecting the slabs to be included in the next “round,” and putting the selected items into an appropriate sequence. The two tasks, selecting and ordering the slabs, cannot be optimized independently, since a selection that ignores ordering criteria may result in a set for which the ordering problem has no solution. Red Martin and I formulated this as a Prize Collecting TSP. Given n cities and a table of intercity travel costs, a salesman who gets a prize for every city that he visits is asked to come up with a minimum cost tour of a subset of the cities, subject to the condition that he collect a prescribed amount of prize money. We developed several heuristics for solving this problem and implemented a software package, ROLL-A-ROUND, which has been in uninterrupted daily use at the Cleveland Works of LTV Steel since January 1991. This practical problem led to the study of the polyhedral structure of the Prize Collecting TSP, to the identification of facet defining inequalities, and later to a partial characterization of the cycle polytope of a directed graph. It also stirred my interest in the facial structure of the asymmetric traveling salesman polytope, and started the chain of events that led to my joint work with Matteo Fischetti on that subject.
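In an illustrative formulation (the notation is mine, not the original paper’s): with $y_i = 1$ if city $i$ is visited, $x_{ij} = 1$ if the tour travels from $i$ to $j$, travel costs $c_{ij}$, prizes $p_i$ and a prize quota $P$, the Prize Collecting TSP reads

```latex
\min \; \sum_{i,j} c_{ij} x_{ij}
\quad \text{subject to} \quad
\sum_j x_{ij} = \sum_j x_{ji} = y_i \ \ \forall i, \qquad
\sum_i p_i y_i \ge P,
```

together with subtour elimination constraints ensuring that the selected arcs form a single tour through the visited cities.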
One of the most satisfying experiences of my professional career has been to witness the rise in prestige of Mathematical Programming and Combinatorial Optimization among the branches of Mathematics. The International Congress of Mathematicians gathers about 4,000 mathematicians from all over the world every 4 years. When I started out in Mathematics around 1960, our specialty occupied a secondary, peripheral place among the mathematical disciplines, and was not represented at the International Congress either through invited speakers or through specialized sessions. This situation has radically changed over the intervening decades. At the last International Congress, which was held in Berlin in 1998, the chairman of the organizing committee was an integer programmer; the congress had five or six sections devoted to the various subdisciplines of mathematical programming, and several of my colleagues gave invited 45-minute lectures in these sections. Today Mathematical Programming and Combinatorial Optimization enjoys the full respect of the larger mathematical community, as an exciting young field that often attracts some of the best talent.
To finish on a personal note, I would like to cite Leo Rosten, a contemporary American writer whose piece of wisdom about the purpose of life is one of my favorites:
“The purpose of life is not to be happy. The purpose of life is to matter, to be productive, to have it make some difference that you lived at all. Happiness, in the ancient, noble sense, means self-fulfillment, and is given to those who use to the fullest whatever talents God or luck or fate bestowed upon them.”