Improving Regulatory Targeting: Lessons from OSHA

Many government agencies with enforcement power face a common problem.

They only have the resources to visit or audit a tiny fraction of the possibilities, so they need to pick and choose their targets. How should they make that choice?

Consider the Occupational Safety and Health Administration, which is responsible for monitoring workplace safety and setting rules about it. OSHA has jurisdiction over about 8 million workplaces, but (in cooperation with state-level agencies) it has the resources to actually visit fewer than 1 percent of them. How to choose which ones? Matthew S. Johnson, David I. Levine, and Michael W. Toffel discuss their research on this topic in “Making Workplaces Safer Through Machine Learning” (Regulatory Review, Penn Program on Regulation, February 26, 2024; for the underlying research paper, see “Improving Regulatory Effectiveness Through Better Targeting: Evidence from OSHA,” published in the American Economic Journal: Applied Economics, October 2023, 15:4, pp. 30-67; for an ungated preprint version, see here).

One insight is that it’s useful for regulatory purposes if the inspection process has a degree of randomness, because then firms need to be just a little on their toes. As it turns out, a random OSHA process also allows researchers to look at workplace safety records in the aftermath of an OSHA inspection. The largest OSHA inspection program from 1999-2014 was called Site-Specific Targeting. The idea was to develop a list of firms that had the highest injury rates two years earlier, and randomly select a group of them for visits. It’s then possible to compare the aftermath of an OSHA regulatory visit for firms that (randomly) got one to the firms (remember, with similar high injury rates) that didn’t get one. The authors write: “We find that randomly assigned OSHA inspections reduced serious injuries at inspected establishments by an average of 9 percent, which equates to 2.4 fewer injuries, over the five-year post-period. Each inspection thus yields a social benefit of roughly $125,000, which is roughly 35 times OSHA’s cost of conducting an inspection.”

But might it be possible, holding fixed the limited resources of OSHA, to do better? For example, what if instead of looking at injury rates from two years ago, one looked at the average injury rate over the previous four years, to single out firms with sustained higher rates of workplace injury? Or what if we used a machine-learning model to predict which firms are likely to have the most injuries, or which firms could have the biggest safety gains, and focused on those firms? The authors write:

We find that OSHA could have averted many more injuries had it targeted inspections using any of these alternative criteria. If OSHA had assigned to those establishments with the highest historical injuries the same number of inspections that it assigned in the SST program, it would have averted 1.9 times as many injuries as the SST program actually did. If OSHA had instead assigned the same number of inspections to those establishments with the highest predicted injuries or to those with the highest estimated treatment effects, it would have averted 2.1 or 2.2 times as many injuries as the SST program, respectively.

A few thoughts here:

1) I was surprised that the simple rule of looking back over four years of injury rates, rather than just looking at injury rates from two years ago, had such substantial gains. The reason is that injury rates in any given year can bounce around a lot. For example, imagine a firm that has one bad episode every 20 years, but quickly corrects the situation. In that bad year, it could turn up on the OSHA high-priority list, but the OSHA inspection won’t do much. A firm that is poorly ranked for accidents over four years is more likely to have a real problem.

2) Going beyond the simple change of looking at four years of injury rates, to a more sophisticated and hard-to-explain machine-learning approach, yields only modest additional gains. It may be that the machine-learning analysis is most useful for showing whether large gains are possible through better regulatory targeting; if so, regulators might then try to capture most of those gains with a simple rule they can explain, rather than black-box machine-learning rules they can’t easily explain.

3) One concern is that these new methods of targeting would leave out the randomization factor: firms would be able to predict that they were more likely to receive a visit from OSHA. It’s not clear that this is a terrible thing: firms that have poor workplace safety records over a period of several years should be concerned about a visit from regulators. But it may be wise to keep a random element in who gets visited.
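The intuition behind point 1) can be checked with a quick simulation. The sketch below is mine, not the authors’ model: each firm gets a stable underlying injury rate, yearly counts are noisy draws around that rate, and we see how many of the truly worst firms each targeting rule would catch. All the numbers are illustrative assumptions.

```python
import random

random.seed(0)

# Illustrative setup (not the authors' model): each firm has a stable
# underlying injury rate; any single year's observed count is noisy.
N_FIRMS = 10_000
true_rates = [random.uniform(0.5, 5.0) for _ in range(N_FIRMS)]

def yearly_injuries(rate):
    # Crude noise model (can go slightly negative; fine for illustration)
    return random.gauss(rate, 2.0)

# Rule A: rank firms on a single year's injuries (like "two years ago")
single_year = [yearly_injuries(r) for r in true_rates]

# Rule B: rank firms on a four-year average, which smooths out the noise
four_year = [sum(yearly_injuries(r) for _ in range(4)) / 4 for r in true_rates]

def top_k(scores, k=100):
    # Indices of the k highest-scoring firms
    return set(sorted(range(len(scores)), key=lambda i: -scores[i])[:k])

truly_worst = top_k(true_rates)  # the 100 firms with genuinely high risk
hit_a = len(top_k(single_year) & truly_worst)
hit_b = len(top_k(four_year) & truly_worst)
print(f"single-year rule finds {hit_a} of the 100 riskiest firms")
print(f"four-year average finds {hit_b}")
```

Averaging four noisy years cuts the noise standard deviation in half, so the four-year rule catches noticeably more of the genuinely high-risk firms, which is the same logic as the one-bad-episode-in-20-years example above.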

Finally, it feels to me as if regulators, who are always under political pressure, sometimes see their role as akin to law enforcement: that is, they have an incentive to show that they are going after those who are probably in the wrong. But as this OSHA example shows, going after employers who had a really bad workplace event two years ago may not lead to as big a gain in workplace safety as going after employers who have worse records over a sustained time.

I wrote last year about a similar issue that arises in IRS audits. It turns out that when the IRS is deciding who to audit, it puts a lot of weight on whether it will be easy to prove wrongdoing. Thus, it tends to do a lot of auditing of low-income folks receiving the Earned Income Tax Credit, where the computers show that it should be straightforward to prove wrongdoing. But of course, there isn’t a lot of money to be gained from auditing those with low incomes. Consider the situation where the IRS audits 10 people who all had more than $10 million in income last year. Perhaps nine of those audits find nothing wrong, but the 10th results in collecting an extra $500,000. If the IRS auditors are focused on a high conviction rate, they make one choice; if they are focused on a strategy which brings in the most revenue, they will chase bigger fish.
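The back-of-the-envelope arithmetic in that example can be made explicit. In the sketch below, the $500,000 figure comes from the text; the EITC-side numbers are purely hypothetical, chosen only to illustrate a high hit rate paired with low revenue per audit.

```python
# High-income audits: from the example in the text
high_income_audits = 10
high_income_hits = 1
high_income_revenue = 500_000  # one audit in ten collects this much

# Hypothetical EITC-style audits: easy to "win," little revenue at stake
# (these numbers are illustrative assumptions, not IRS data)
eitc_audits = 10
eitc_hits = 8
eitc_revenue_per_hit = 3_000

def summarize(name, audits, hits, total_revenue):
    # Two ways to score an audit strategy: hit rate vs. revenue per audit
    print(f"{name}: hit rate {hits / audits:.0%}, "
          f"revenue per audit ${total_revenue / audits:,.0f}")

summarize("high-income", high_income_audits, high_income_hits,
          high_income_revenue)
summarize("EITC-style", eitc_audits, eitc_hits,
          eitc_hits * eitc_revenue_per_hit)
```

On the hit-rate metric the EITC-style strategy looks far better; on revenue per audit the high-income strategy wins handily, which is the whole point about chasing bigger fish.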

My point is not that the choice of regulatory priorities should be turned over to machine learning! Instead, the point is that machine learning tools can help evaluate whether the existing rules are being set appropriately, and how well those rules work relative to alternatives.

Timothy Taylor

Global Economy Expert

Timothy Taylor is an American economist. He is managing editor of the Journal of Economic Perspectives, a quarterly academic journal produced at Macalester College and published by the American Economic Association. Taylor received his Bachelor of Arts degree from Haverford College and a master's degree in economics from Stanford University. At Stanford, he was winner of the award for excellent teaching in a large class (more than 30 students) given by the Associated Students of Stanford University. At Minnesota, he was named a Distinguished Lecturer by the Department of Economics and voted Teacher of the Year by the master's degree students at the Hubert H. Humphrey Institute of Public Affairs. Taylor has been a guest speaker for groups of teachers of high school economics, visiting diplomats from eastern Europe, talk-radio shows, and community groups. From 1989 to 1997, Professor Taylor wrote an economics opinion column for the San Jose Mercury-News. He has published multiple lectures on economics through The Teaching Company. With Rudolph Penner and Isabel Sawhill, he is co-author of Updating America's Social Contract (2000), whose first chapter provided an early radical centrist perspective, "An Agenda for the Radical Middle". Taylor is also the author of The Instant Economist: Everything You Need to Know About How the Economy Works, published by the Penguin Group in 2012. The fourth edition of Taylor's Principles of Economics textbook was published by Textbook Media in 2017.
