The University of Pennsylvania Law Review (June 2020) has published a nine-paper symposium on antitrust law, with contributions by a number of the leading economists and legal scholars in the field who tend to favor more aggressive pro-competition policy in this area.
Whatever your own leanings, it's a nice overview of many of the key issues. Here are snippets from three of the papers. Below, I'll list all the papers in the issue with links and abstracts.
C. Scott Hemphill and Tim Wu write about "Nascent Competitors," addressing the concern that large firms may seek to maintain their dominant market position by buying up the kinds of small firms that might have developed into future competitors. The article is perhaps of particular interest because Wu has just accepted a position with the Biden administration to join the National Economic Council, where he will focus on competition and technology policy. Hemphill and Wu write (footnotes omitted):
Nascent rivals play an important role in both the competitive process and the process of innovation. New firms with new technologies can challenge and even displace existing firms; sometimes, innovation by an unproven outsider is the only way to introduce new competition to an entrenched incumbent. That makes the treatment of nascent competitors core to the goals of the antitrust laws. As the D.C. Circuit has explained, “it would be inimical to the purpose of the Sherman Act to allow monopolists free rein to squash nascent, albeit unproven, competitors at will . . . .” Government enforcers have expressed interest in protecting nascent competition, particularly in the context of acquisitions made by leading online platforms.
However, enforcers face a dilemma. While nascent competitors often pose a uniquely potent threat to an entrenched incumbent, the firm’s eventual significance is uncertain, given the environment of rapid technological change in which such threats tend to arise. That uncertainty, along with a lack of present, direct competition, may make enforcers and courts hesitant or unwilling to prevent an incumbent from acquiring or excluding a nascent threat. A hesitant enforcer might insist on strong proof that the competitor, if left alone, probably would have grown into a full-fledged rival, yet in so doing, neglect an important category of anticompetitive behavior.
One main concern with a general rule blocking entrenched incumbents from buying smaller companies is that, for entrepreneurs, the chance of being bought out by a big firm is one of the primary incentives for starting a firm in the first place. More aggressive antitrust enforcement against such acquisitions could thus reduce the incentive to found new firms at all. Hemphill and Wu tackle the question head-on:
The acquisition of a nascent competitor raises several particularly challenging questions of policy and doctrine. First, acquisition can serve as an important exit for investors in a small company, and thereby attract capital necessary for innovation. Blocking or deterring too many acquisitions would be undesirable. However, the significance of this concern should not be exaggerated, for our proposed approach is very far from a general ban on the acquisition of unproven companies. We would discourage, at most, acquisition by the firm or firms most threatened by a nascent rival. Profitable acquisitions by others would be left alone, as would the acquisition of merely complementary or other nonthreatening firms. While wary of the potential for overenforcement, we believe that scrutiny of the most troubling acquisitions of unproven firms must be a key ingredient of a competition enforcement agenda that takes innovation seriously.
In another paper, William P. Rogerson and Howard Shelanski write about "Antitrust Enforcement, Regulation, and Digital Platforms." They raise the concern that the tools of antitrust may not be well-suited to some of the competition issues posed by big digital firms. For example, if Alphabet were forced to sell off Google, or some other subsidiaries, would competition really be improved? What would it even mean to, say, try to break Google's search engine into separate companies? When there are "network economies," in which many agents want to be on a given website because so many other players are already there, perhaps a relatively small number of firms is the natural outcome.
Thus, while certainly not ruling out traditional antitrust actions, Rogerson and Shelanski argue that there is a strong case for using regulation to achieve pro-competitive outcomes. They write:
[W]e discuss why certain forms of what we call “light handed procompetitive” (LHPC) regulation could increase levels of competition in markets served by digital platforms while helping to clarify the platforms’ obligations with respect to interrelated policy objectives, notably privacy and data security. Key categories of LHPC regulation could include interconnection/interoperability requirements (such as access to application programming interfaces (APIs)), limits on discrimination, both user-side and third-party-side data portability rules, and perhaps additional restrictions on certain business practices subject to rule of reason analysis under general antitrust statutes. These types of regulations would limit the ability of dominant digital platforms to leverage their market power into related markets or insulate their installed base from competition. In so doing, they would preserve incentives for innovation by firms in related markets, increase the competitive impact of existing competitors, and reduce barriers to entry for nascent firms.
The regulation we propose is “light handed” in that it largely avoids the burdens and difficulties of a regime—such as that found in public utility regulation—that regulates access terms and revenues based on firms’ costs, which the regulatory agency must in turn track and monitor. Although our proposed regulatory scheme would require a dominant digital platform to provide a baseline level of access (interconnection/interoperability) that the regulator determines is necessary to promote actual and potential competition, we believe that this could avoid most of the information and oversight costs of full-blown cost-based regulation ... The primary regulation applied to price or non-price access terms would be a nondiscrimination condition, which would require a dominant digital platform to offer the same terms to all users. Such regulation would not, like traditional rate regulation, attempt to tie the level or terms of access to a platform’s underlying costs, to regulate the company’s terms of service to end users, or to limit the incumbent platform’s profits or lines of business. Instead of imposing monopoly controls, LHPC regulation aims to protect and promote competitive access to the marketplace as the means of governing firms’ behavior. In other words, its primary goal is to increase the viability and incentives of actual and potential competitors. As we will discuss, the Federal Communications Commission’s (FCC) successful use of similar sorts of requirements on various telecommunications providers provides one model for this type of regulation.
Nancy L. Rose and Jonathan Sallet tackle a more traditional antitrust question in "The Dichotomous Treatment of Efficiencies in Horizontal Mergers: Too Much? Too Little? Getting it Right." A "horizontal" merger is one between two firms selling the same product. This is in contrast to a "vertical" merger, where one firm merges with a supplier, or a merger where the two firms sell different products. When two firms selling the same product propose a merger, they often argue that the two firms will be more efficient together, and thus able to provide a lower-cost product to consumers. Rose and Sallet offer this example:
Here is a stylized example of the role that efficiencies might play in an antitrust review. Imagine two paper manufacturers, each with a single factory that produces several kinds of paper, and suppose their marginal costs decline with longer production runs of a single type of paper. They wish to merge, which by definition eliminates a competitor. They justify the merger on the ground that after they combine their operations, they will increase the specialization in each plant, enabling longer runs and lower marginal costs, and thus incentivizing them to lower prices to their customers and expand output. If the cost reduction were sufficiently large, such efficiencies could offset the merger’s otherwise expected tendency to increase prices.
In this situation, the antitrust authorities need to evaluate whether these potential efficiencies exist and are likely to benefit consumers. Or is the talk of "efficiencies" instead a way for top corporate managers to build their empires while eliminating some competition? Rose and Sallet argue, based on the empirical evidence of what has happened after past mergers, that antitrust enforcers have been too willing to believe in the possibility of efficiencies that don't seem to materialize. They write:
As empirically-trained economists focused further on what data revealed about the relationship between mergers and efficiencies, the results cast considerable doubt on post-merger benefits. As discussed at length by Professor Hovenkamp, “the empirical evidence is not unanimous, however, it strongly suggests that current merger policy tends to underestimate harm, overestimate efficiencies, or some combination of the two.” The business literature is even more skeptical. As the consulting firm McKinsey & Company reported in 2010: “Most mergers are doomed from the beginning. Anyone who has researched merger success rates knows that roughly 70 percent of mergers fail.”
Here's the full set of papers from the June 2020 issue of the University of Pennsylvania Law Review, with links and abstracts:
"Framing the Chicago School of Antitrust Analysis," by Herbert Hovenkamp & Fiona Scott Morton
The Chicago School of antitrust has benefitted from a great deal of law office history, written by admiring advocates rather than more dispassionate observers. This essay attempts a more neutral examination of the ideology, political impulses, and economics that produced the School and that account for its durability. The origins of the Chicago School lie in a strong commitment to libertarianism and nonintervention. Economic models of perfect competition best suited these goals. The early strength of the Chicago School was that it provided simple, convincing answers to everything that was wrong with antitrust policy in the 1960s, when antitrust was characterized by over-enforcement, poor quality economics or none at all, and many internal contradictions. The Chicago School’s greatest weakness is that it did not keep up. Its leading advocates either spurned or ignored important developments in economics that gave a better accounting of an economy that was increasingly characterized by significant product differentiation, rapid innovation, networking, and strategic behavior. The Chicago School’s protest that newer models of the economy lacked testability lost its credibility as industrial economics experienced an empirical renaissance, nearly all of it based on models of imperfect competition. What kept Chicago alive was the financial support of firms and others who stood to profit from less intervention. Properly designed antitrust enforcement is a public good. Its beneficiaries—consumers—are individually small, numerous, scattered, and diverse. Those who stand to profit from nonintervention were fewer in number, individually much more powerful, and much more united in their message. As a result, the Chicago School went from being a model of enlightened economic policy to an economically outdated but nevertheless powerful tool of regulatory capture.
"Nascent Competitors," by C. Scott Hemphill & Tim Wu
A nascent competitor is a firm whose prospective innovation represents a serious threat to an incumbent. Protecting such competition is a critical mission for antitrust law, given the outsized role of unproven outsiders as innovators and the uniquely potent threat they often pose to powerful entrenched firms. In this Article, we identify nascent competition as a distinct analytical category and outline a program of antitrust enforcement to protect it. We make the case for enforcement even where the ultimate competitive significance of the target is uncertain, and explain why a contrary view is mistaken as a matter of policy and precedent. Depending on the facts, troubling conduct can be scrutinized under ordinary merger law or as unlawful maintenance of monopoly, an approach that has several advantages. In distinguishing harmful from harmless acquisitions, certain evidence takes on heightened importance. Evidence of an acquirer’s anticompetitive plan, as revealed through internal communications or subsequent conduct, is particularly probative. After-the-fact scrutiny is sometimes necessary as new evidence comes to light. Finally, our suggested approach poses little risk of dampening desirable investment in startups, as it is confined to acquisitions by those firms most threatened by nascent rivals.
"Antitrust Enforcement, Regulation, and Digital Platforms," by William P. Rogerson & Howard Shelanski
There is a growing concern over concentration and market power in a broad range of industrial sectors in the United States, particularly in markets served by digital platforms. At the same time, reports and studies around the world have called for increased competition enforcement against digital platforms, both by conventional antitrust authorities and through increased use of regulatory tools. This Article examines how, despite the challenges of implementing effective rules, regulatory approaches could help to address certain concerns about digital platforms by complementing traditional antitrust enforcement. We explain why introducing light-handed, industry-specific regulation could increase competition and reduce barriers to entry in markets served by digital platforms while better preserving the benefits they bring to consumers.
"The Dichotomous Treatment of Efficiencies in Horizontal Mergers: Too Much? Too Little? Getting it Right," by Nancy L. Rose and Jonathan Sallet
The extent to which horizontal mergers deliver competitive benefits that offset any potential for competitive harm is a critical issue of antitrust enforcement. This Article evaluates economic analyses of merger efficiencies and concludes that a substantial body of work casts doubt on their presumptive existence and magnitude. That has two significant implications. First, the current methods used by the federal antitrust agencies to determine whether to investigate a horizontal merger likely rest on an overly-optimistic view of the existence of cognizable efficiencies, which we believe has the effect of justifying market-concentration thresholds that are likely too lax. Second, criticisms of the current treatment of efficiencies as too demanding—for example, that antitrust agencies and reviewing courts require too much of merging parties in demonstrating the existence of efficiencies—are misplaced, in part because they fail to recognize that full-blown merger investigations and subsequent litigation are focused on the mergers that are most likely to cause harm.
"Oligopoly Coordination, Economic Analysis, and the Prophylactic Role of Horizontal Merger Enforcement," by Jonathan B. Baker and Joseph Farrell
For decades, the major United States airlines have raised passenger fares through coordinated fare-setting when their route networks overlap, according to the United States Department of Justice. Through its review of company documents and testimony, the Justice Department found that when major airlines have overlapping route networks, they respond to rivals’ price changes across multiple routes and thereby discourage competition from their rivals. A recent empirical study reached a similar conclusion: It found that fares have increased for this reason on more than 1000 routes nationwide and even that American and Delta, two airlines with substantial route overlaps, have come close to cooperating perfectly on routes they both serve.
"The Role of Antitrust in Preventing Patent Holdup," by Carl Shapiro and Mark A. Lemley
Patent holdup has proven one of the most controversial topics in innovation policy, in part because companies with a vested interest in denying its existence have spent tens of millions of dollars trying to debunk it. Notwithstanding a barrage of political and academic attacks, both the general theory of holdup and its practical application in patent law remain valid and pose significant concerns for patent policy. Patent and antitrust law have made significant strides in the past fifteen years in limiting the problem of patent holdup. But those advances are currently under threat from the Antitrust Division of the Department of Justice, which has reversed prior policies and broken with the Federal Trade Commission to downplay the significance of patent holdup while undermining private efforts to prevent it. Ironically, the effect of the Antitrust Division’s actions is to create a greater role for antitrust law in stopping patent holdup. We offer some suggestions for moving in the right direction.
"Competition Law as Common Law: American Express and the Evolution of Antitrust," by Michael L. Katz & A. Douglas Melamed
We explore the implications of the widely accepted understanding that competition law is common—or “judge-made”—law. Specifically, we ask how the rule of reason in antitrust law should be shaped and implemented, not just to guide correct application of existing law to the facts of a case, but also to enable courts to participate constructively in the common law-like evolution of antitrust law in the light of changes in economic learning and business and judicial experience. We explore these issues in the context of a recently decided case, Ohio v. American Express, and conclude that the Supreme Court not only made several substantive errors but also did not apply the rule of reason in a way that enabled an effective common law-like evolution of antitrust law.
"Probability, Presumptions and Evidentiary Burdens in Antitrust Analysis: Revitalizing the Rule of Reason for Exclusionary Conduct," by Andrew I. Gavil & Steven C. Salop
The conservative critique of antitrust law has been highly influential. It has facilitated a transformation of antitrust standards of conduct since the 1970s and led to increasingly more permissive standards of conduct. While these changes have taken many forms, all were influenced by the view that competition law was over-deterrent. Critics relied heavily on the assumption that the durability and costs of false positive errors far exceeded the costs of false negatives. Many of the assumptions that guided this retrenchment of antitrust rules were mistaken and advances in law and economic analysis have rendered them anachronistic, particularly with respect to exclusionary conduct. Continued reliance on what are now exaggerated fears of “false positives,” and failure adequately to consider the harm from “false negatives,” has led courts to impose excessive burdens of proof on plaintiffs that belie both sound economic analysis and well-established procedural norms. The result is not better antitrust standards, but instead an unwarranted bias towards non-intervention that creates a tendency toward false negatives, particularly in modern markets characterized by economies of scale and network effects. In this article, we explain how these erroneous assumptions about markets, institutions, and conduct have distorted the antitrust decision-making process and produced an excessive risk of false negatives in exclusionary conduct cases involving firms attempting to achieve, maintain, or enhance dominance or substantial market power. To redress this imbalance, we integrate modern economic analysis and decision theory with the foundational conventions of antitrust law, which has long relied on probability, presumptions, and reasonable inferences to provide effective means for evaluating competitive effects and resolving antitrust claims.
"The Post-Chicago Antitrust Revolution: A Retrospective," by Christopher S. Yoo
A symposium examining the contributions of the post-Chicago School provides an appropriate opportunity to offer some thoughts on both the past and the future of antitrust. This afterword reviews the excellent papers presented with an eye toward appreciating the contributions and limitations of both the Chicago School, in terms of promoting the consumer welfare standard and embracing price theory as the preferred mode of economic analysis, and the post-Chicago School, with its emphasis on game theory and firm-level strategic conduct. It then explores two emerging trends, specifically neo-Brandeisian advocacy for abandoning consumer welfare as the sole goal of antitrust and the increasing emphasis on empirical analyses.
Timothy Taylor is an American economist. He is managing editor of the Journal of Economic Perspectives, a quarterly academic journal produced at Macalester College and published by the American Economic Association. Taylor received his Bachelor of Arts degree from Haverford College and a master's degree in economics from Stanford University. At Stanford, he was winner of the award for excellent teaching in a large class (more than 30 students) given by the Associated Students of Stanford University. At the University of Minnesota, he was named a Distinguished Lecturer by the Department of Economics and voted Teacher of the Year by the master's degree students at the Hubert H. Humphrey Institute of Public Affairs. Taylor has been a guest speaker for groups of teachers of high school economics, visiting diplomats from eastern Europe, talk-radio shows, and community groups. From 1989 to 1997, Taylor wrote an economics opinion column for the San Jose Mercury-News. He has published multiple lectures on economics through The Teaching Company. With Rudolph Penner and Isabel Sawhill, he is co-author of Updating America's Social Contract (2000), whose first chapter provided an early radical centrist perspective, "An Agenda for the Radical Middle". Taylor is also the author of The Instant Economist: Everything You Need to Know About How the Economy Works, published by the Penguin Group in 2012. The fourth edition of Taylor's Principles of Economics textbook was published by Textbook Media in 2017.