What Kermit got right on carbon credits
It's Not Easy Bein' Green
Disclosure: My employer, FARM-TRACE, works to quantify carbon impacts. All opinions are my own.
This year has been a turning point on climate change. It seems that every company I hear advertise on a podcast is (or plans to be) carbon neutral or even carbon negative. I don’t know if it was the terrible seasons of wildfires blanketing large swaths of the country in a haze of smoke last summer, the Ted Cruz freeze in Houston, or the most hurricanes on record. Regardless, the vast majority of Americans - including almost three-quarters of Republicans - think climate change is a significant problem for future generations.1 That’s far more Republicans than think the 2020 presidential election was legitimate.
It’s safe to say that concerns about climate change are mainstream. The argument is no longer about whether climate change is a problem but about how much time we have.
Carbon emissions are the global focus for stymieing global warming trends. The magical thing about carbon as a pollutant is that it doesn’t matter where it’s produced or captured: 1 ton of carbon put into the atmosphere from coal can, in theory, be balanced out by 1 ton of carbon stored in trees grown somewhere else. This is what makes carbon tradeable: the polluters can pay the people capturing carbon, such as people planting trees. My Economics 101 textbook discussed hypothetical dreams of trading for the right to pollute. Now there’s a thriving set of marketplaces that make it a reality.
These trades work great when there's a good measurement of the amount of carbon on both sides of the equation. That’s a problem, as the polluters have a strong incentive to underreport their emissions so that they pay less for them. The people capturing carbon are incentivized to overreport to get additional compensation. You need accurate measurement on both sides to minimize any party’s ability to distort what’s actually happening.
And good measurement isn’t enough on its own. You also need a counterfactual - what would have happened without the intervention. For additive technologies, that’s pretty simple. The company I work for pays farmers to plant carbon-capturing trees alongside their cacao and coffee trees. It’s relatively easy to run a randomized controlled trial to see how many trees are grown (and carbon captured) with the program versus farms that do not have a program. It’s reasonable to assume that one farmer planting a bunch of trees doesn’t really affect the behavior of other farmers that much. So randomization can give us a good alternative universe.
There’s another class of technologies that try to prevent deforestation. In theory, they’re way more cost-effective: keeping a tree up is easier than growing a sapling into a carbon-storing tree, which takes ten or more years. In practice, it’s really hard to measure the effectiveness of deforestation approaches. If you protect one area of the forest, you may be just playing whack-a-mole; loggers will just cut down trees elsewhere.
This is a microcosm of a broader set of statistical issues. A randomized trial only works when the treatment group doesn’t affect the control. This is called the stable unit treatment value assumption (SUTVA). It seems simple enough, but it’s violated whenever an intervention in one place induces a reaction in another. In the forest protection example, protecting one area may incentivize the loggers to cut down the control areas at a higher rate. This would bias your estimates and make the intervention appear more effective than it actually is.
The choice of comparison areas of forest is critical and can be gamed to choose favorable places. But even if you don’t cheat, there’s an innate paradox. Areas closer to the protected forests are likely the best control groups; however, the closer the areas are to the protected areas, the larger the potential spillover effects from the protected areas.
In the extreme, the spillovers may just be shifting where trees are cut down rather than reducing total deforestation. Yet your randomized controlled trial results would make it look like you’re having an incredible impact. But that impact is just a mirage. To be fair, shifting may be preferable under some circumstances. If you’re protecting particularly sensitive or valuable ecosystems, it may be preferable to clear-cut in other places. You also may make it more costly for would-be loggers or farmers to cut down trees if you’re increasing transportation costs, which may reduce the amount of forest logged. These are real possibilities, but they’re challenging to quantify.
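To make the mirage concrete, here’s a toy simulation (all numbers hypothetical, not from any real study). It assumes loggers have a fixed demand for timber, so protecting treated plots just displaces cutting into the control plots. The naive treated-versus-control comparison then reports a large "effect" even though total deforestation hasn’t changed at all:

```python
import random

random.seed(42)

# Hypothetical setup: 100 forest plots, half randomly assigned to protection,
# and loggers who will cut a fixed total of 500 trees no matter what.
N_PLOTS = 100
TOTAL_DEMAND = 500

plots = list(range(N_PLOTS))
random.shuffle(plots)
treated = plots[: N_PLOTS // 2]   # protected plots: no cutting allowed
control = plots[N_PLOTS // 2:]    # unprotected plots

# Without any protection, cutting would spread evenly over all plots.
baseline_per_plot = TOTAL_DEMAND / N_PLOTS  # 5 trees per plot

# With protection, the same demand is concentrated on the control plots
# (a pure-displacement SUTVA violation).
cut = {p: 0.0 for p in plots}
for p in control:
    cut[p] = TOTAL_DEMAND / len(control)    # 10 trees per control plot

treated_mean = sum(cut[p] for p in treated) / len(treated)
control_mean = sum(cut[p] for p in control) / len(control)

# The naive estimate compares treated vs. control plot means.
naive_effect = control_mean - treated_mean

# The true effect compares total deforestation with vs. without the program.
true_effect = baseline_per_plot - sum(cut.values()) / N_PLOTS

print(f"naive estimated effect: {naive_effect:.1f} trees saved per plot")
print(f"true effect on total deforestation: {true_effect:.1f} trees per plot")
```

Under these assumptions the naive estimate shows 10 trees saved per protected plot, while the true effect on total deforestation is exactly zero - the cutting simply moved next door. Real displacement is rarely this total, which is why the honest answer is to discount, not discard, such estimates.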
This isn’t to say that land protection isn’t worth it at all; it very well could be more cost-effective than other approaches. I’d just want to see how SUTVA concerns are being addressed. And my guess is that whatever effectiveness a study claims, I’d discount at least some of that impact due to SUTVA concerns that probably weren’t (or can’t be) addressed.
These issues aren’t unique to land protection - they affect almost any study. SUTVA violations can also work in the opposite direction, by obscuring effective interventions. The AIDS patients enrolled in the first clinical trials for AZT famously pooled their pills to ensure that each person got some of the promising medicine. One can’t blame the individuals for wanting to do this to obtain potentially life-saving drugs, but it clouded the results from the early trials.2 Fortunately for our COVID trials, injections are rather harder to share.
The bottom line is that whenever you’re reading a study (or designing one), it’s always worth asking yourself “how might the treatment affect the control group’s behavior?” If there’s a SUTVA violation, it’s worth having a bit more of a skeptical eye.