This page provides access to the original research that has been the backbone of my academic career. These results appear in a variety of journals and conferences and throughout my books. The purpose of this page is to add commentary and context on problems that I most enjoyed solving.

  • Current research on cardinality estimation, potentially leading to a tough-to-beat new algorithm for the problem.
  • Research on sorting algorithms, including seminal results on several classic methods that answer open questions left by Knuth.
  • Original work on data structures that have now been in widespread use for decades.
  • The evolution of algorithm science, in my Algorithms books, which has helped programmers embrace classic algorithms in modern systems, with impact on the billions of devices that we use today.
  • The evolution of Analytic Combinatorics with Philippe Flajolet, a new field of mathematics with origins in computer science that is also useful in research in physics, biology, and other scientific fields.

This page is intended to add personal perspective on some of my published research. All of my books and papers are listed in the full CV on the ABOUT page. Also, some general commentary and context, particularly about algorithm science and analytic combinatorics, is covered on the IDEAS page.

Cardinality Estimation

My most recent research is in an area originated by my late friend Philippe Flajolet: a fundamental problem in data science known as cardinality estimation. The problem is simple: you have to process a stream of data items and then give an estimate of the number of distinct values. What makes the problem interesting is the constraints. You need to:

  • Process the stream in a single pass.
  • Spend as little time as possible on each item.
  • Use as little memory as possible.
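To see why these constraints have teeth, consider the obvious exact solution (a minimal Python sketch, not from the research itself): it satisfies the first two constraints but not the third, because its memory grows with the number of distinct values.

```python
def exact_cardinality(stream):
    """One pass, constant expected time per item -- but the memory used
    grows linearly with the number of distinct values, which is exactly
    the constraint that sketching algorithms are designed to meet."""
    seen = set()
    for item in stream:
        seen.add(item)
    return len(seen)

print(exact_cardinality(["a", "b", "a", "c", "b", "a"]))  # prints 3
```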

Flajolet and colleagues developed an algorithm called HyperLogLog that provides accurate answers within these constraints. For example, with a few hundred bits of memory, it can pass through a stream spending only a few machine cycles per item to get estimates accurate to within a few percentage points for streams with billions of items.
     Flajolet’s work is an outstanding example of algorithm science. With several colleagues, he developed algorithms, analyzed them, and experimented with them, iterating several times over many years. As with most successful algorithm science projects, this work has paid off. HyperLogLog is a simple algorithm that has been fully analyzed and validated and is now an industry standard, used by Google, Amazon, and many other major companies.

HyperLogLog. The algorithm uses a technique called stochastic averaging to divide the stream into M substreams, hashes the values, keeps track in each substream of the maximum position of the rightmost 1 bit in the hashed values, and then computes a statistic from those M numbers (lg lg N bits each). The genius of the method is a mathematical characterization of the bias in that statistic, which in the end yields an accurate estimate of the cardinality. The proof is the essence of the algorithm. As Flajolet put it, “without the proof, there is no algorithm.”
     To develop an invited talk about HyperLogLog, I spent an extensive amount of time in conversation with my colleague and Philippe’s last student Jérémie Lumbroso, who taught me everything I know about the topic. At one point, he mentioned in conversation that he “always thought there should be a method that uses only a few bits per substream.”
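In outline, the scheme can be sketched as follows (a minimal Python illustration in the HyperLogLog style, not Flajolet’s algorithm verbatim; the hash function, the number of substreams, and the bias-correction constant are all illustrative choices):

```python
import hashlib

def rho(w):
    """1-based position of the rightmost 1 bit of w (0 if w == 0)."""
    return (w & -w).bit_length()

def estimate_cardinality(stream, m=64):
    """Stochastic averaging: m substreams, one small register per substream
    holding the maximum rho value seen, then a bias-corrected statistic."""
    regs = [0] * m
    for item in stream:
        h = int.from_bytes(
            hashlib.blake2b(str(item).encode(), digest_size=8).digest(), "big")
        j = h % m                         # choose a substream
        regs[j] = max(regs[j], rho(h // m))
    alpha = 0.7213 / (1 + 1.079 / m)      # bias correction (from the analysis)
    return alpha * m * m / sum(2.0 ** -r for r in regs)
```

With 64 registers of a few bits each, this uses a few hundred bits of memory; the estimate is then typically within 10 to 15 percent, and larger register counts give the few-percent accuracy quoted above.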

HyperBitBit. Jérémie’s comment led me to a possible approach that is simple to describe and uses only two bits per substream. The idea is to keep an estimate of lg N and then, for each substream, maintain one bit that marks whether lg N might be increased by one and another bit that marks whether it might be increased by two. When more than half the substreams argue for increasing the estimate of lg N, the algorithm does so and resets the bits appropriately.
     In a style reminiscent of my implementations in Algorithms, I sketched some code to get an idea of how a solution might look. My only goal in writing this code was to have a simple and elegant implementation that I could use to communicate the idea as the last slide in my invited talk, which was already too long (and, in my mind, there is no need to study the idea unless there is an elegant implementation). From the response of the audience, I knew that I would need to at least compile and run the code before posting the slides for the talk.
     On the plane ride home, I decided to do just that. There were a few minor compile-time errors, but to my amazement, the code worked perfectly on the first test case I tried. Needless to say, I spent the rest of the trip double-checking that it worked well for the test cases I had been using for HyperLogLog and the other algorithms in the talk, and it did. As is typical, a constant for scaling needed to be determined empirically. Following Flajolet’s dictum, there is no algorithm without analysis telling us the value of that constant, so I knew that the real work lay ahead. But these tests were a critical first step—there would also be no algorithm if the experimental results were unfavorable!
     After hearing my talk in 2016 at AofA’16 in Krakow and again at Knuth’80 in Pitea, Svante Janson asked a few questions at lunch one day at AofA’18 in Uppsala. The next day, he handed me a piece of paper with a proof sketch that gives an expression for the constant I determined empirically on the plane ride. More work is needed to analyze the accuracy and to help determine some parameter settings, but we are on the right track.
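For the curious, the idea can be sketched as follows (an illustrative reconstruction with 64 substreams; the scaling constant 5.4 here stands in for the empirically determined value discussed above, and none of the parameter settings should be taken as the analyzed ones):

```python
import hashlib

def rho(w):
    """1-based position of the rightmost 1 bit of w (0 if w == 0)."""
    return (w & -w).bit_length()

def hyperbitbit(stream, c=5.4):
    """Two bits per substream: one 64-bit word votes for lgN + 1, the
    other for lgN + 2; a majority of votes bumps the running estimate."""
    lgN, sketch, sketch2 = 5, 0, 0
    for item in stream:
        h = int.from_bytes(
            hashlib.blake2b(str(item).encode(), digest_size=8).digest(), "big")
        j, r = h % 64, rho(h // 64)
        if r > lgN:
            sketch |= 1 << j            # this substream argues for lgN + 1
        if r > lgN + 1:
            sketch2 |= 1 << j           # ... and for lgN + 2
        if bin(sketch).count("1") > 31: # more than half: increase the estimate
            lgN, sketch, sketch2 = lgN + 1, sketch2, 0
    return 2.0 ** (lgN + c + bin(sketch).count("1") / 32.0)
```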

Cardinality Estimation. Slides for invited talk at AofA’16 (Flajolet Lecture), University of Utah, University of Southern California, Knuth’80 Symposium.
HyperBitBit: A Memory-Efficient Alternative to HyperLogLog (with J. Lumbroso and S. Janson), in preparation.


Sorting

My early research centered on open questions left by Knuth in The Art of Computer Programming, Volume 3: Sorting and Searching, which was published during my first year as a graduate student.


Quicksort

A reviewer for one of the several journal articles I wrote based on research in my thesis commented, “I hope that this is the last paper we see on Quicksort.” I have a feeling that the reviewer was Tony Hoare, who might have been justified in thinking that his original 1960 paper covered everything interesting about Quicksort. But hundreds of papers have been published on the algorithm since that time, and quite a bit of useful knowledge has been discovered. As the method of choice (still) in practical sorting applications, the algorithm is scrutinized carefully, and new variants are being discovered and tested to this day.

Equal keys. One contribution of my thesis was a study of the performance of Quicksort when equal keys are present, building on work about BSTs by Burge. It did not seem of much practical interest at the time, beyond two observations: quadratic running time can easily be avoided when large numbers of equal keys are present, simply by stopping the partitioning pointers on keys equal to the partitioning element; and Quicksort can be made linear in many cases by three-way partitioning (doing the extra work to put all keys equal to the partitioning element into position).
     Not all practitioners followed this advice—Jon Bentley and Doug McIlroy at Bell Labs discovered early on that the Unix implementation of Quicksort went quadratic with large numbers of equal keys, and decided that it was worthwhile to do three-way partitioning. I was working with them on the third edition of Algorithms at the time, and Bentley and I realized that three-way partitioning is within a constant factor of optimal for any set of key values (and conjectured that the constant is 1). We also realized that it was easy to adapt to sort records with keys spanning multiple fields, again the method of choice because typically applications have large numbers of equal keys in some fields.
     Eventually, in the fourth edition of Algorithms, we decided to recommend three-way partitioning using an ancient algorithm due to Dijkstra. More recently, a three-way partitioning algorithm using two partitioning elements has attracted some attention. I studied this one in my thesis, but judged it to be not worth the effort. Recent studies on modern machines show that it can be recommended over normal Quicksort, particularly because it also handles equal keys well.
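Three-way partitioning is easy to sketch (a Python rendering of Dijkstra-style partitioning, for illustration; the books give tuned implementations):

```python
def quicksort3(a, lo=0, hi=None):
    """Quicksort with three-way partitioning: keys equal to the
    partitioning element go directly into final position, so large
    numbers of equal keys make the sort faster, not quadratic."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return
    v = a[lo]
    lt, i, gt = lo, lo + 1, hi
    while i <= gt:
        if a[i] < v:
            a[lt], a[i] = a[i], a[lt]; lt += 1; i += 1
        elif a[i] > v:
            a[i], a[gt] = a[gt], a[i]; gt -= 1
        else:
            i += 1
    quicksort3(a, lo, lt - 1)     # keys less than v
    quicksort3(a, gt + 1, hi)     # keys greater than v; equal keys are done
```

On an array of all-equal keys, both recursive calls are empty, so the sort runs in linear time.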

Quicksort with Equal Keys. SIAM J. on Computing 6, 2, 1977.

Quicksort is Optimal. Slides for talk at KnuthFest, Stanford University, January, 2002.

Multiway Quicksort (with J. Bentley). Dr. Dobb's Journal, November, 1998.

Algorithms, Third Edition, in C, Parts 1–4: Fundamentals, Data Structures, Sorting, and Searching. Addison-Wesley, Reading, MA, 1998, pp. 327–329.

Algorithms, Fourth Edition (with K. Wayne). Addison-Wesley, Reading, MA, 2011, pp. 298–301.

Solved. In 2016, Sebastian Wild proved the Bentley-Sedgewick conjecture about asymptotic optimality.

Strings. But the real advantage of three-way partitioning comes when sorting with string keys. The simple string-sorting algorithm based on three-way partitioning that Bentley and I developed is the method of choice in many practical applications.
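The method can be sketched as follows (a Python rendering of the three-way string-sorting idea, for illustration; the paper gives a tuned C version):

```python
def sort3way_strings(a):
    """Three-way string quicksort: partition on the d-th character, so
    equal characters (and hence common prefixes) are examined only once."""
    def char_at(s, d):
        return ord(s[d]) if d < len(s) else -1   # -1 marks end of string

    def sort(lo, hi, d):
        if hi <= lo:
            return
        v = char_at(a[lo], d)
        lt, i, gt = lo, lo + 1, hi
        while i <= gt:
            c = char_at(a[i], d)
            if c < v:
                a[lt], a[i] = a[i], a[lt]; lt += 1; i += 1
            elif c > v:
                a[i], a[gt] = a[gt], a[i]; gt -= 1
            else:
                i += 1
        sort(lo, lt - 1, d)          # d-th character less than v
        if v >= 0:
            sort(lt, gt, d + 1)      # equal: move on to the next character
        sort(gt + 1, hi, d)          # d-th character greater than v

    sort(0, len(a) - 1, 0)
```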

Fast Algorithms for Sorting and Searching Strings (with J. Bentley). Proc. 8th Symposium on Discrete Algorithms, 1997.


Shellsort

One of my most original pieces of research involved a connection between a little-understood sorting algorithm and a classic problem in number theory known as the Frobenius problem.
     This broke new ground and led to several papers, two Ph.D. theses on Shellsort and variants, and a number of practical improvements. At the time, the problem of whether there exist O(N log N) sorting networks was still open (resolved in 1983 by Ajtai, Komlos, and Szemeredi), so this research seemed a possible path to finding practical O(N log N) sorting networks.
     Many later papers have studied this problem, and the central question about worst-case performance is resolved—there is no increment sequence for which the worst-case number of compares used by Shellsort is O(N log N)—but the question of whether there exists an increment sequence that would make Shellsort compete with Quicksort in practice still remains open.
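All of the open questions concern the increment sequence, which plugs into an otherwise fixed algorithm. A Python sketch using the 3x+1 increments (1, 4, 13, 40, ...) recommended in Algorithms:

```python
def shellsort(a):
    """Shellsort: h-sort the array for a decreasing sequence of
    increments h, ending with h = 1 (plain insertion sort). The
    choice of increment sequence is the whole game."""
    n = len(a)
    h = 1
    while h < n // 3:
        h = 3 * h + 1          # 1, 4, 13, 40, 121, ...
    while h >= 1:
        for i in range(h, n):  # insertion sort among items h apart
            j = i
            while j >= h and a[j] < a[j - h]:
                a[j], a[j - h] = a[j - h], a[j]
                j -= h
        h //= 3
    return a
```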

A New Upper Bound for Shellsort. Journal of Algorithms 7, 1986. Invited paper at 1982 IEEE International Conference on Information Theory.

Analysis of Shellsort and Related Algorithms. Invited paper at 1996 European Symposium on Algorithms.

Solved. Poonen developed a lower bound that proves that there is no sorting network in the Shellsort family for which the worst-case number of compares is O(N log N).

Solved. Goodrich developed randomized O(N log N) Las Vegas and Monte Carlo variations of Shellsort.

Open (theory). Find an increment sequence where the average number of compares used by the standard Shellsort algorithm to sort N elements is O(N log N).

Open (practice). Find an increment sequence for which Shellsort sorts random files of 1 billion elements with fewer than 50 billion compares on average.


Heapsort

Knuth left the analysis of Heapsort almost completely open. The worst-case performance is within a factor of two of optimal, which is good enough for most people. But settling for this result falls short of the high standard that he set of precise, if not exact, formulas describing the best, worst, and average cases of other leading sorting algorithms.

Best case. Tackling the average case seemed much too difficult, so my first idea was to try to understand what the best case might be. Maybe the average case could be trapped between the best and the worst. Which permutation of N items causes Heapsort to use the fewest compares? A simple argument based on Floyd’s variant of Heapsort shows that the number of compares cannot be above 1/2 N lg N, but could it be linear? If so, Floyd’s variant would be optimal.
     There are many fewer possible heaps than permutations, so I wrote a program to generate all possible heaps to look for the answer. I could see that there was a growing gap, but the size of the heaps I could generate was still too small to see a pattern.
     Fortunately, I spent a couple of summers consulting at IDA, where I learned to program the CRAY-1, by far the biggest and fastest computer available. I coded up my Heapsort program (in assembly language, I think) and was able to see the best case in some good-sized heaps, which gave the insight to come up with a construction proving that Floyd’s method is not asymptotically optimal in the worst case.
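The experiment is easy to replay in miniature today (a Python sketch, a far cry from CRAY-1 assembly; the helper names are mine, for illustration): instrument heapsort to count compares, then search every heap of a small size for the minimum.

```python
from itertools import permutations

compares = 0

def less(a, i, j):
    """Instrumented compare: count every comparison of two keys."""
    global compares
    compares += 1
    return a[i] < a[j]

def sink(a, k, n):
    """Restore max-heap order below position k (0-based indexing)."""
    while 2 * k + 1 < n:
        j = 2 * k + 1
        if j + 1 < n and less(a, j, j + 1):
            j += 1
        if not less(a, k, j):
            break
        a[k], a[j] = a[j], a[k]
        k = j

def heapsort_compares(a):
    """Sort a in place with heapsort; return the number of compares used."""
    global compares
    compares = 0
    n = len(a)
    for k in range(n // 2 - 1, -1, -1):
        sink(a, k, n)
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]
        sink(a, 0, end)
    return compares

def is_heap(a):
    return all(a[(k - 1) // 2] >= a[k] for k in range(1, len(a)))

def best_case(n):
    """Fewest compares over all heaps on n distinct keys (tiny n only!)."""
    return min(heapsort_compares(list(p))
               for p in permutations(range(n)) if is_heap(list(p)))
```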

Average case. Using standard techniques seemed hopeless from the start, but my CRAY-1 runs had shown that most heaps seem to require ~N lg N compares, so I wondered whether some sort of counting argument might work. My student Russel Schaffer found such an argument, an amazingly simple way to settle the basic facts about Heapsort.

The Analysis of Heapsort (with R. Schaffer). J. of Algorithms, 1993.

Algorithm Science

The heart of my career in research is work on algorithm science. For details and perspective about what this phrase means, see the essay on the IDEAS page here. My research in algorithm science is chronicled not in conference papers or journal articles, but in my series of books on algorithms. The story is best told through a commentary on the four editions of these books.

First edition

When I started working on Algorithms in the late 1970s, there was not much code to be found in print. Some algorithms were described in pseudo-code in books and papers, but implementations of many classic algorithms were rarely found. Of course, Knuth’s books provided code in MIX for many algorithms, but I wanted students to see that a broad variety of classic algorithms can be implemented elegantly and efficiently in a familiar programming language.
     At the time, Pascal was in favor as a teaching language and widely available, so I set out to implement everything in Pascal. Since my goal was elegant implementations, I first wrote the code, then made sure it would compile, then ran it on a small example or two. With batch processing and one run a day, this was a serious challenge, but it was also a significant contribution. After all, competing books and papers that used pseudo-code were exhibiting algorithms that had not been tested at all (and that practice continues to this day)!
     Why did I set out to write this book? I was committed to teaching large numbers of undergraduates about algorithms. Knuth’s books are terrific for reference, but not suitable as textbooks, did not cover many important algorithms, and still seemed a bit removed from real code. I felt that another leading textbook of the time, The Design and Analysis of Computer Algorithms by Aho, Hopcroft, and Ullman, was unsuitable because it was based on pseudo-code and on O-analyses that are not useful for predicting performance.
     By contrast, I wanted students to have hands-on experience with classic algorithms and to understand why they perform well on real computers. Looking back, this was a fork in the road that separated my work from that of many other authors. That separation persists today, with the popular book by Cormen, Leiserson, Rivest, and Stein following in the tradition of Aho, Hopcroft, and Ullman and my books going all in on real code and performance in practical situations.
     As I was working on Algorithms, the personal computer arrived in all its splendor. After visiting Xerox PARC in 1978 and 1979, I realized that my efforts in teaching algorithms were just in their infancy, for two reasons. First, my work was based on continually refining, improving, and testing implementations—removing the one-run-a-day constraint was like adding rocket fuel to the endeavor. Second, the personal computer enabled the possibility of visualizing algorithms in action—I was able to instrument my implementations with relative ease in Smalltalk on the Alto. Later, my student Marc Brown developed a system for algorithm animation that we used to teach algorithms in an innovative “classroom of the future” at Brown. That system was way ahead of its time—who nowadays allows students to experiment with dynamic visualizations of algorithms on their personal computers during a large live lecture?

Algorithms. Addison-Wesley, Reading, MA, 1983.

A System for Algorithm Animation (with M. Brown). Computer Graphics 18, 3, 1984.

A Prototype Environment for Algorithm Animation (with M. Brown). ACM SIGCSE Bulletin, 1984.

Second edition

Soon after I arrived at Princeton in 1985, my new Apple LaserWriter was delivered, and I knew from my experience with algorithm animation at Xerox PARC and Brown that I would be able to use PostScript to create quality visuals for the new edition of Algorithms. Moreover, I quickly adopted these goals:

  • The code would be debugged to the extent possible, not just compiled and run on a few cases.
  • The code appearing in the book would be the same code that was debugged.
  • When the text says, in essence, “this code produces the output illustrated here,” that had to be a true statement.

These goals were audacious in a world where few textbooks were exhibiting real code. Realizing them was a fun and exciting effort involving thousands of lines of PostScript code, fully debugged implementations of all the classic algorithms in the book, and scores of unique and persuasive algorithm visualizations. You can find details in my invited talk “Creating Algorithms.” The approach was way ahead of its time—indeed, I know of no similar system today.
     Before long, Pascal lost favor as a serious programming language and I did versions of the books in C, C++, and Modula-3. This experience forced me to seriously address the idea of language independence. There were plenty of bumps in the road. For example, radix sort, the method of choice in many practical situations, was a casualty of strong typing in Pascal—the cost of converting integers to 32-bit quantities was far higher than the cost of the sort. Another example is the “struct foo*” hack in C, a singularly unappealing way to achieve data abstraction. Others persist to this day. For example, the lack of generics, even in modern languages, is very frustrating: Do I really need to write different code for each type of data to implement a pushdown stack (or any other data structure)? And there will not be an “Algorithms in Python” as long as each object carries a dictionary, which incurs a three-order-of-magnitude performance penalty on any linked structure.
     Still, in the 1980s I was extremely pleased that my overall story did turn out to be language-independent: an effective way to learn classic algorithms is through elegant implementations, which enable every student to experience their effectiveness in realistic applications and provide a basis for any developer to put them to good use.
     The focus of the effort was creating the books, but I now realize that this experience was the genesis of my approach to algorithm science—I was working with real code on real computers, not pseudo-code, and the focus of statements about performance had to be properties of algorithms, not programming languages, computer systems, or hardware.

Algorithms, Second Edition. Addison-Wesley, Reading, MA, 1988, 672 pp. Translated into Japanese (1990), German (1991).

Algorithms in Modula-3. Addison-Wesley, Reading, MA, 1993, 672 pp.

Algorithms in C++. Addison-Wesley, Reading, MA, 1992, 672 pp. Translated into German (1993), Japanese (1994), Spanish (1996).

Algorithms in C. Addison-Wesley, Reading, MA, 1990, 672 pp. Translated into French (1991), German (1992), Italian (1993).

Third edition

Do you want to know how Prim’s MST algorithm compares to Kruskal’s algorithm? Or how splay trees compare to skip lists? For answers to questions like these, turn to Algorithms, Third Edition. I spent most of the 1990s researching these sorts of issues for the classic algorithms covered in early editions. The books are filled with tables like the one here, which summarize the results of experiments comparing a number of algorithms and variants in terms of relative running times. This was an enormous amount of work, but worthwhile because of the information content. For example, this table admits fifteen pairwise comparisons of algorithms and links to tables for other families of algorithms to admit hundreds of comparisons.
     Research results like these are rare in the computer science research literature. Why? First, there is little appeal in demonstrating that someone else’s new algorithm is not likely to be at all useful in practice. Second, there is little excitement in confirming that great classical algorithms do indeed perform as advertised. Still, the tables and mathematical models in Algorithms, Third Edition have proven extremely useful in guiding people when making decisions about implementing classic algorithms in real-world situations.
     Yes, the tables might look different when using a different programming language or a different computer, but they turned out to be remarkably robust over a decade with three programming languages and multiple computers. In an attempt at intellectual honesty, the last thing I would do when writing a section was to run the experiments and build the tables. With rare exceptions, they would validate the statements about algorithm performance made in the text. Those statements were based on mathematical models that had been validated many times in earlier editions of the book. The whole experience was a validation of the idea of algorithm science.
     One memorable surprise was for the Mergesort implementation in Algorithms in Java. I expected to significantly beat the system sort, as usual, but only managed to match its performance. It turned out that the Java system sort was using my code from Algorithms in C (!). That experience validated the idea that practitioners were paying attention to my work, even though many computer science researchers may not have been doing so.
     More generally, my book sales in the professional market are comparable to my book sales in the academic market. If a student has not worked with algorithm code as an undergraduate, they need to learn to do so as a developer. If a developer needs to choose between splay trees and hashing, a good first step is to refer to one of my books. As a result, my code (or code based on it) is found throughout our computational infrastructure. The timing was right.

Algorithms, Third Edition, in Java, Part 5: Graph Algorithms. Addison-Wesley, Reading, MA, 2003, 502 pp.

Algorithms, Third Edition, in Java, Parts 1–4: Fundamentals, Data Structures, Sorting, and Searching. Addison-Wesley, Reading, MA, 2002, 737 pp.

Algorithms, Third Edition, in C++, Part 5: Graph Algorithms. Addison-Wesley, Reading, MA, 2002, 496 pp.

Algorithms, Third Edition, in C, Part 5: Graph Algorithms. Addison-Wesley, Reading, MA, 2001, 472 pp.

Algorithms, Third Edition, in C++, Parts 1–4: Fundamentals, Data Structures, Sorting, and Searching. Addison-Wesley, Reading, MA, 1998, 716 pp.

Algorithms, Third Edition, in C, Parts 1–4: Fundamentals, Data Structures, Sorting, and Searching. Addison-Wesley, Reading, MA, 1998, 702 pp.

Fourth edition

More than a decade of research in algorithm science certainly paid dividends, but it turned the books more into reference works than textbooks. Peter Gordon, my longtime editor at Addison-Wesley, convinced me to reconsider the model of a textbook suitable for large numbers of undergraduates. At about the same time, I convinced Kevin Wayne to help, as he was providing an invaluable perspective from his experience teaching the material.
     Another step forward in book design was also an essential ingredient in the fourth edition, but that is not the topic of this essay.
     To make a realistic textbook, we had to substantially pare down the amount of material in the book and focus on clear coverage of the most important algorithms, without necessarily justifying them in terms of performance comparisons with many alternatives. More important, algorithm science became a specific focus at the beginning of the text. We also carefully delineated formal statements about performance, in terms of theorems, propositions, and hypotheses. Our goal was to make algorithm science foundational in giving a wide audience appreciation of classic algorithms and their effectiveness.

Algorithms, Fourth Edition (with K. Wayne). Addison-Wesley, Reading, MA, 2011, 955 pp.

VALIDATION OF THIS WORK comes in many forms. I have had the opportunity to visit large tech companies on many occasions and have been heartened to see copies of my books on many developers’ desks. One of the first books scanned for the Google Book project was Algorithms in C++. The books have sold over one million copies; another million people have watched the online lectures; and millions more have accessed the associated online content. Without algorithm science as a foundation, I believe that little of this would have been possible.

Data Structures

Red-black trees

In the spirit of attacking problems left open by Knuth, Leo Guibas and I started studying AVL trees at Xerox PARC in 1978. Could we characterize the average-case performance? Scouring the literature, we eventually realized that many known algorithms could be characterized within a simple framework requiring just one bit per node, a big improvement over standard implementations of AVL trees, which required storing the height in each node. (Actually, Marc Brown later pointed out that no bits per node were necessary, using a simple link-switching trick.)
     AVL trees are easy to understand, but we were driven by the idea that there might be something simpler. Most of all, we wanted to find something that we could analyze and/or a version that we could prove had the same asymptotic performance as perfectly balanced trees. The idea of “top-down” trees seemed to put us on the right track. Unlike previously known trees, these could admit parallel insertions with only local locking, a very important property in practice. It also seemed that this same property might lead to a way to analyze performance for random keys, but we never found it.
     We did enjoy studying the worst case and other properties of various trees and found plenty of interesting mathematical problems to resolve.
     While we were at it, we thought we should run experiments to learn which methods worked best in practice. It is hard to conceive of now, but I wrote and ran the code for these experiments with punched-card batch processing (one run a day)! They confirmed that the algorithms have similar performance and that it is reasonable to conjecture that they are asymptotically equivalent to perfectly balanced trees.
     Why red-black? Two reasons: Our offices at Xerox PARC had whiteboards (a luxury at the time) and it was easy for us to draw the trees in color to communicate with one another. Later, we were able to actually print color drawings of the trees using an experimental laser printer.
     I wrote about these trees in Algorithms. The simple code and appealing graphics made them one centerpiece of the book. They performed better than competing symbol-table implementations in my experiments for the third edition (as I knew they would) and they have been widely adopted.
     We included an algorithm for deletion in the original paper, but it wound up in a figure after the references, so many people missed it. I judged it a bit too complicated to include that code in my books until the fourth edition, when I took another look and settled on a version that can be implemented with a relatively small amount of code, even with deletion. And I insisted that we have a second color in the text in this edition, precisely because it meant that we could include drawings of red-black trees.
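For reference, the insertion code for the left-leaning variant is compact enough to sketch here (a Python rendering for illustration; the books give Java, and deletion, as noted, takes more care). Insertion reduces to three local transformations applied on the way back up the tree:

```python
RED, BLACK = True, False

class Node:
    def __init__(self, key, val):
        self.key, self.val = key, val
        self.left = self.right = None
        self.color = RED                 # new links are red

def is_red(h):
    return h is not None and h.color == RED

def rotate_left(h):
    x = h.right
    h.right, x.left = x.left, h
    x.color, h.color = h.color, RED
    return x

def rotate_right(h):
    x = h.left
    h.left, x.right = x.right, h
    x.color, h.color = h.color, RED
    return x

def flip_colors(h):
    h.color = RED
    h.left.color = h.right.color = BLACK

def insert(h, key, val):
    if h is None:
        return Node(key, val)
    if key < h.key:
        h.left = insert(h.left, key, val)
    elif key > h.key:
        h.right = insert(h.right, key, val)
    else:
        h.val = val
    # The three local transformations that maintain the invariant:
    if is_red(h.right) and not is_red(h.left):
        h = rotate_left(h)
    if is_red(h.left) and is_red(h.left.left):
        h = rotate_right(h)
    if is_red(h.left) and is_red(h.right):
        flip_colors(h)
    return h

def keys_inorder(h):
    return [] if h is None else keys_inorder(h.left) + [h.key] + keys_inorder(h.right)

root = None
for k in [5, 1, 9, 3, 7, 2, 8]:
    root = insert(root, k, str(k))
    root.color = BLACK                   # the root is always black
```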

A Dichromatic Framework for Balanced Trees (with L. Guibas). 19th Annual Symposium on Foundations of Computer Science, 1978. (Also in A Decade of Research—Xerox Palo Alto Research Center 1970–1980, ed. G. Laverdel and E.R. Barker.)

Algorithms, Fourth Edition (with K. Wayne). Addison-Wesley, Reading, MA, 2011, Section 3.3.

Left-Leaning Red-Black Trees. Invited talks in Maresias, Philadelphia, Wadern, and Gramado, 2008–2009.

Left-Leaning Red-Black Trees, posted here, 2008.

Pairing heaps

One of my first experiences with algorithm animation was visualizing MST algorithms in Smalltalk on the Alto at Xerox PARC in 1978. After years of working with punched cards and TTY interfaces, to see these algorithms in operation was a revelation. I was immediately fascinated by the behavior at the fringe of the MST and the opportunity to improve performance with a suitable data structure.
     When I returned to Brown and finally got a personal computer a few years later, I tried to illustrate how Fibonacci heaps work through animation. I realized almost immediately that this would be very difficult because much of the code would never be executed for any family of inputs that I could reasonably visualize (or that would arise in the MST application). I had much better luck with binomial queues, and on the basis of those visualizations started to think about simpler algorithms. The idea for pairing heaps followed immediately.
     I explained the data structure to Bob Tarjan on a visit to Bell Labs shortly afterwards and asked whether he thought it might match the performance of Fibonacci heaps. He showed the problem to Danny Sleator and Mike Fredman, and we wound up considering many different variants of the basic idea, but none that resolved the basic question. Decades of research by many people have improved our understanding of pairing heaps, but the fundamental questions remain open.
     I did not write about pairing heaps in Algorithms because it seemed to me that multiway heaps were going to be most effective in the most important practical applications (MST and graph-search algorithms). But maybe that decision needs to be rethought, at least for the booksite, as pairing heaps are now generally recognized as simple to implement, with excellent performance in practical applications. I was especially pleased to see pairing heaps recommended in 2008 as a “robust choice” by Mehlhorn and Sanders, algorithm engineers extraordinaire.
     As with balanced trees, I suspect that there may be some simple and provably effective data structure for priority queues that remains to be discovered.
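The basic structure is simple enough to sketch in a few lines (a Python illustration of insert and two-pass delete-the-minimum; the names and details here are illustrative, not a tuned implementation):

```python
class PNode:
    def __init__(self, key):
        self.key = key
        self.children = []

def meld(a, b):
    """Link two pairing heaps: the larger root becomes a child of the smaller."""
    if a is None:
        return b
    if b is None:
        return a
    if b.key < a.key:
        a, b = b, a
    a.children.append(b)
    return a

def insert(heap, key):
    return meld(heap, PNode(key))

def delete_min(heap):
    """Two-pass pairing: pair the children left to right, then meld the
    pairs right to left. Returns (minimum key, remaining heap)."""
    kids = heap.children
    paired = [meld(kids[i], kids[i + 1] if i + 1 < len(kids) else None)
              for i in range(0, len(kids), 2)]
    new = None
    for h in reversed(paired):
        new = meld(new, h)
    return heap.key, new

heap = None
for k in [5, 3, 8, 1, 9, 2]:
    heap = insert(heap, k)
out = []
while heap is not None:
    k, heap = delete_min(heap)
    out.append(k)                 # keys come out in ascending order
```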

Open. Do pairing heaps match the amortized worst-case performance of Fibonacci heaps?

Open. What is the asymptotic growth rate of the running time when pairing heaps are used in Prim’s algorithm to find the MST of N random points in the unit square?

Pairing Heaps: A New Form of Self-Adjusting Heap (with M. Fredman, D. Sleator, and R. E. Tarjan). Algorithmica 1, 1, 1986.

Ternary search trees

The duality between recursive sorting algorithms and recursive data structures (trees) made the discovery of ternary search trees with Jon Bentley almost immediate. This was an algorithm that practically wrote itself—beautifully simple and effective. The description in Algorithms, Fourth Edition gives full details.
     As mentioned above, the second edition of my book marked the emergence of algorithm science, and the relative performance of the many symbol-table algorithms and data structures that have been invented was at the top of the list. How do splay trees compare with skip lists and red-black trees? How do any of them compare with linear probing? Implementing and developing mathematical models for all these algorithms, their variants, and many others was a significant research effort for me in the mid 1990s. Beyond random integers, I used the words in Moby Dick as a test case, which eventually made it clear that strings were a widely used key data type that might be addressed separately.
     The idea that we could develop a symbol table that is faster than hashing (and one that also supports a host of other useful operations not available with hashing) was quite a surprise, but that is precisely what TSTs do in many applications.
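The structure itself is simple enough to sketch in a few lines. Here is a minimal Python rendition of the insert and search logic, in the spirit of the description in Algorithms, Fourth Edition (not the book's own code): each node holds one character, with left and right links for smaller and larger characters and a middle link for the next character of the key.

```python
class TSTNode:
    """A ternary search trie node: one character, three links."""
    def __init__(self, ch):
        self.ch = ch
        self.left = self.mid = self.right = None
        self.val = None   # value stored if a key ends at this node

def put(node, key, val, d=0):
    """Insert key[d:] below node; return the (possibly new) subtree root."""
    ch = key[d]
    if node is None:
        node = TSTNode(ch)
    if ch < node.ch:
        node.left = put(node.left, key, val, d)
    elif ch > node.ch:
        node.right = put(node.right, key, val, d)
    elif d < len(key) - 1:
        node.mid = put(node.mid, key, val, d + 1)
    else:
        node.val = val
    return node

def get(node, key, d=0):
    """Return the value associated with key, or None if it is absent."""
    if node is None:
        return None
    ch = key[d]
    if ch < node.ch:
        return get(node.left, key, d)
    if ch > node.ch:
        return get(node.right, key, d)
    if d < len(key) - 1:
        return get(node.mid, key, d + 1)
    return node.val
```

A search examines one character per middle-link step, which is why TSTs can beat hashing (which must examine every character of the key to compute the hash) on unsuccessful searches in particular.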

Algorithms, Fourth Edition (with K. Wayne). Addison-Wesley, Reading, MA, 2011, pp. 746–751.

Fast Algorithms for Sorting and Searching Strings (with J. Bentley). Proc. 8th Symposium on Discrete Algorithms, 1997.

Ternary Search Trees (with J. Bentley). Dr. Dobbs Journal, March, 1998.

Analytic Combinatorics

The story of the evolution of this area of research, mostly joint with Philippe Flajolet, is mainly told on the IDEAS page. This page provides a bit of additional perspective in the context of published work.

Early work

My first journal article outside my thesis research solved an open problem left by Knuth—analyzing Batcher’s odd-even merging networks. Soon afterwards, Philippe Flajolet and I solved a similar (but more difficult) problem. An important characteristic of these problems is that they involve recurrence relations that are difficult to solve analytically, yet admit a simple computer program that can compute the answer precisely. Because the analysis results in formulas involving the gamma function and the zeta function, it was hard to be convinced that it could be correct, but we could easily check that the analytic solution matched the computed solution. This kind of “experimental mathematics” would characterize my work (and Philippe’s) throughout our careers. This research was very satisfying, and it left us convinced of two things.
     First, the mathematics was fascinating, and the things we were learning were going to be applicable to many problems in the analysis of algorithms—the functions involved were inherent in divide-and-conquer algorithms, but they were far from trivial to characterize (some having an oscillating term of magnitude less than .000001). Real progress in the analysis of algorithms would have to lead down this path. And it seemed clear that there should be general theorems that could apply to many algorithms.
     Second, it was going to be a real challenge to get students involved in this research. The mathematics courses that were central to our training were necessarily dropping by the wayside in many curricula to make room for computer science courses. We were going to have to figure out what students need to know and write our own textbook.
     I gave a series of lectures in Brussels in 1981, where I tried to assimilate the methods I knew in the context of interesting applications, and published a monograph based on those lectures. Then I visited Philippe at INRIA; he quickly convinced me that there were general forces at work that could save us from many of the detailed calculations found in that monograph (and in much of the research literature). We arranged for me to visit INRIA in 1982-83, and things took off from there.
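The “experimental mathematics” style described above can be illustrated with a stand-in example (the mergesort comparison-count recurrence, which is much simpler than the odd-even merging recurrences but has the same divide-and-conquer flavor): a few lines of code compute the recurrence exactly, giving an independent check on the analytic solution.

```python
from functools import lru_cache
from math import ceil, log2

@lru_cache(maxsize=None)
def C(n):
    """Directly compute C(n) = C(floor(n/2)) + C(ceil(n/2)) + n - 1, C(1) = 0."""
    if n <= 1:
        return 0
    return C(n // 2) + C((n + 1) // 2) + n - 1

def closed_form(n):
    """The known exact solution: n*ceil(lg n) - 2^ceil(lg n) + 1."""
    k = ceil(log2(n))
    return n * k - 2 ** k + 1

# Check the analytic solution against the directly computed recurrence.
for n in range(1, 2000):
    assert C(n) == closed_form(n)
```

For the merging-network recurrences the analytic solutions involve gamma- and zeta-function terms and tiny periodic oscillations, so a check of this kind was the most convincing evidence available that the formulas were right.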

Data Movement in Odd-Even Merging. SIAM Journal on Computing 7, 2, 1978.

Digital Search Tree Analysis Revisited (with P. Flajolet). SIAM J. Computing 15, 2, 1986.

Mathematical Analysis of Algorithms, in Probability Theory and Computer Science. G. Louchard and G. LaTouche, eds., Academic Press (1983), 97 pp.

INRIA technical reports and The Book

As described in IDEAS, our planned book became two books (and many research papers by Philippe and his colleagues and coauthors). As with every good book, the content often reflects research results that are not published elsewhere.
     The first book (with me as the lead author) was primarily about the basic techniques of mathematical analysis needed for the analysis of algorithms, with motivating examples. One notable feature of the book was the development of an effective visualization for limiting distributions, created using a remarkably small amount of Postscript code. Another was the use of detailed summary tables, which are still useful for reference today.
     Early drafts of the second book (with Philippe as the lead author) appeared as INRIA technical reports. A considerable amount of ongoing original research by Philippe, his students, and coauthors stood behind these drafts and putting it all together into a cohesive story was a formidable task.
     Through the 1990s and into the 2000s, we regularly exchanged drafts of chapters and provided detailed comments on each other’s work. These exchanges were extremely productive, as our comments were voluminous, nearly every page filled with detailed markup. Together, we realized our plan to create a book (well, two) that we could use to prepare students for research in our field.
     While the research-report drafts are obsolete, they do illustrate our slow realization that we were working on something broader than the analysis of algorithms. Philippe’s research was replacing “folk theorems” in combinatorics with transfer theorems of amazing power and general applicability. Somewhere in the mid-1990s, inspired by Apostol’s book Analytic Number Theory, I suggested to Philippe that we adopt the title Analytic Combinatorics. That idea eventually gave purpose to the whole enterprise and started a relentless drive to completion.

The Average-case Analysis of Algorithms: Counting and Generating Functions (with P. Flajolet). INRIA Research Report No. 1888, April, 1993.

The Average-case Analysis of Algorithms: Complex Asymptotics and Generating Functions (with P. Flajolet). INRIA Research Report No. 2026, September, 1993.

The Average-case Analysis of Algorithms: Saddle Point Asymptotics (with P. Flajolet). INRIA Research Report No. 2376, October, 1994.

An Introduction to the Analysis of Algorithms (with P. Flajolet). Addison-Wesley, Reading, MA, 1996, 512 pp. Translated into French (1997). Second edition, 2013.

The Average-case Analysis of Algorithms: Multivariate Asymptotics and Limit Distributions, (with P. Flajolet). INRIA Research Report, August, 1997.

Analytic Combinatorics: Functional Equations, Rational and Algebraic functions (with P. Flajolet). INRIA Research Report, January, 2001.

Analytic Combinatorics: Symbolic Combinatorics (with P. Flajolet). INRIA Research Report, December 2002.

Analytic Combinatorics (with P. Flajolet). Cambridge University Press, 2009, 824 pp.

Mellin transforms

Not all of our joint research and certainly only a small fraction of Philippe’s research is covered in the book. The most important missing topic is the Mellin transform, which got us started working together when we first met. Indeed, based on several research papers, it was included in the proposed table of contents in all of the INRIA research reports except the last, and the 1996 INRIA research report was certainly intended for the book.
     Why is the Mellin transform only briefly mentioned in Analytic Combinatorics? It was a bit of a casualty of the shift from “analysis of algorithms” to “analytic combinatorics.” While Mellin transforms have many applications in the analysis of algorithms, Philippe knew that the 1996 draft chapter, even at 100 pages, was telling only part of the story, and felt that it might be a distraction from the powerful storyline defining analytic combinatorics that had emerged for the book.
     Despite Philippe’s untimely death, we can put together what he knew about the Mellin transform, which certainly could fill another volume, from his collected works. One of my goals for the future is to develop a lecture on this topic for my analytic combinatorics online course.
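For reference, the transform in question, together with the separation property for harmonic sums that makes it so effective in the analysis of algorithms:

```latex
% The Mellin transform of a function f defined on (0, infinity):
f^{*}(s) \;=\; \mathcal{M}[f](s) \;=\; \int_{0}^{\infty} f(x)\, x^{s-1}\, dx .

% Separation property: for a harmonic sum
%   F(x) = \sum_{k} \lambda_k \, f(\mu_k x),
% the transform factors into a "Dirichlet series" part and a "base function" part:
F^{*}(s) \;=\; \Bigl( \sum_{k} \lambda_k \, \mu_k^{-s} \Bigr) f^{*}(s) .
```

Sums of exactly this shape arise throughout divide-and-conquer analyses, and the poles of the factored transform translate directly into asymptotic terms, including the tiny periodic oscillations mentioned earlier.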

Some Uses of the Mellin Integral Transform in the Analysis of Algorithms (with P. Flajolet and M. Regnier). in NATO Conference, Springer-Verlag, Berlin, 1985.

Mellin Transforms and Asymptotics: Finite Differences and Rice’s Integrals (with P. Flajolet). Theoretical Computer Science A 144, 1995.

The Average-case Analysis of Algorithms: Mellin Transform Asymptotics (with P. Flajolet). INRIA Research Report No. 2956, August, 1996.