Almost all the choices we make in life — like trying a new restaurant, deciding to go to school, or moving across the country for a new job — are made with some level of uncertainty. We might get food poisoning, take on a bunch of student loan debt and end up hating school, or discover we hate the new job and don’t like living in a place where we don’t know anybody, respectively.
How do we end up making these decisions that have the possibility of such horrible outcomes?
Subconsciously, we measure the likelihood, or probability, of failure. We create a little equation in our heads that assigns a probability to the good outcome and the bad outcome. If it’s more likely that I’ll fail if I go to college, I simply don’t go.
The thing is, most of these decisions only affect the individual. If I take out a bunch of loans, go to school, and then drop out because “it wasn’t for me,” I’m the only one who is negatively impacted. Sure, my mom might be annoyed, but in truth, no one else is screwed by my decision to take the plunge, go to college, and fail.
But, what about decisions that do impact a lot of people (virtually everyone) — such as climate change policies, healthcare, FDA approval of drugs, education standards, or taxes? How can we possibly know the likelihood of success or failure of these policies when there are so many variables and so much at stake?
The economist Trygve Haavelmo quietly cleared it up for us. The shy Norwegian economist won the Nobel Prize in 1989 “for his clarification of the probability theory foundations of econometrics and his analyses of simultaneous economic structures.” That, by the way, is what we all kind of do in our heads; he just laid it out on paper.
We’ve talked about econometrics in previous posts. In fact, the “father of econometrics” Ragnar Frisch was Haavelmo’s professor and advisor for much of his college career. Much of econometrics involves a long string of equations with different variables to predict some kind of output or outcome. But, before Haavelmo, these equations were rather rigid and did not have the “probabilistic structure needed to assess stochastic variability of test statistics and forecast.” Basically, they didn’t incorporate probabilities into econometric equations.
By adding the probability component into these equations, now econometricians could give a more accurate range of success and failure.
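To make the contrast concrete, here is a minimal sketch (not Haavelmo’s actual model — the equation, coefficients, and error size are all invented for illustration) of the difference between a rigid, deterministic equation and one with a random disturbance term that yields a range of outcomes:

```python
import random
import statistics

# Toy model: forecast GDP growth (%) from an interest-rate change.
# All coefficients are hypothetical and chosen only for illustration.

def deterministic_forecast(rate_change):
    # Pre-Haavelmo style: one rigid equation, one point answer.
    return 2.0 - 0.5 * rate_change

def probabilistic_forecast(rate_change, n_draws=10_000, seed=42):
    # Haavelmo-style: the same equation plus a random disturbance term,
    # so the model produces a distribution of outcomes, not one number.
    rng = random.Random(seed)
    return [2.0 - 0.5 * rate_change + rng.gauss(0, 0.8)
            for _ in range(n_draws)]

point = deterministic_forecast(1.0)
draws = probabilistic_forecast(1.0)
cuts = statistics.quantiles(draws, n=20)   # 5%, 10%, ..., 95% cut points
low, high = cuts[0], cuts[-1]
print(f"point estimate: {point:.1f}% growth")
print(f"90% range: roughly {low:.1f}% to {high:.1f}%")
```

The second function gives a policymaker what the first cannot: an honest band of likely outcomes rather than false precision.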
Not only did this contribution revolutionize econometrics but it gave economists a louder voice to influence public policy. Now, policymakers could rely on economics to make these big decisions mentioned above.
Today, the Congressional Budget Office (CBO) — the main cause of all the ruckus in the healthcare debate — uses this type of econometrics to come up with long-term cost estimates for different policies proposed by Congress. For example, to predict how the economy and government programs will fare in the future, it uses “microsimulation,” which is essentially what Haavelmo brought to the world of econometrics. Taken from the CBO’s Long-Term Model report:
“Microsimulation, however, can generate not only averages but also distributional outcomes: It can show, for example, how a particular policy change might affect individuals with low earnings differently from individuals with high earnings. Microsimulation also allows analysts to quantify the effects of different policies or assumptions on specific populations.”
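The idea in that quote can be sketched in a few lines. This is a toy microsimulation, not the CBO’s model: the population, the hypothetical tax policy, and every number in it are invented, but it shows how simulating individuals (rather than one aggregate) reveals distributional effects:

```python
import random
import statistics

# Toy microsimulation: simulate individual earners, apply a hypothetical
# payroll-tax change, and compare low earners vs high earners.
random.seed(0)

# A simulated population of 10,000 workers with skewed (lognormal) earnings.
population = [random.lognormvariate(10.5, 0.6) for _ in range(10_000)]

def tax_owed(earnings, extra_rate=0.02, exemption=20_000):
    # Hypothetical policy: a 2% extra tax on earnings above $20,000.
    return max(0.0, earnings - exemption) * extra_rate

# A single aggregate average hides the distributional story...
avg_tax = statistics.mean(tax_owed(e) for e in population)

# ...but simulating individuals lets us split the effect by group.
median_earnings = statistics.median(population)
low_rates = [tax_owed(e) / e for e in population if e <= median_earnings]
high_rates = [tax_owed(e) / e for e in population if e > median_earnings]
print(f"average new tax per worker: ${avg_tax:,.0f}")
print(f"effective rate, low earners:  {statistics.mean(low_rates):.2%}")
print(f"effective rate, high earners: {statistics.mean(high_rates):.2%}")
```

Because the exemption shields a larger share of a low earner’s income, the simulation shows exactly the kind of low-earner vs high-earner difference the CBO quote describes.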
OK, back to why this matters.
Not only do we get a more accurate look at the success and failure of a policy, but we also get a more complete picture of how particular policies will affect different parts of the economy. Vernon Smith, another Nobel laureate in economics, talks about how the economy is ecological: everything is interconnected.
As a simple illustration, if a country tries to put a hefty carbon tax on producers, these producers will move their production sites to other countries. People lose their jobs and goods become more expensive because now they must be imported from another country. Due to the increase in prices and loss of jobs, people stop consuming as much, which in turn slows down economic growth, which then creates a recession. As a result, the government is forced to print more money, save businesses from failing, and spend more on different government programs to pick the economy back up.
Not that this would necessarily happen, but with Haavelmo’s model, this possibility would be captured in the probabilities.
To be sure, he didn’t just “add probabilities” to econometric models. He did serious statistical and economic work in clarifying how we keep different variables “independent” of each other. He also worked on business cycles and theories of capital and investment, and showed the limitations in all of these areas. Like us, econometric models make mistakes all the time.
Like Milton Friedman, he gave economics a voice in the realm of policymaking and big decisions. But unlike Friedman, Haavelmo did it quietly, staying out of the limelight as much as possible. Most importantly, he sharpened the tools that allow governments, companies, and even individuals to make impactful decisions a little better.