Banks in developing countries often won't lend to the poor, who typically have no credit history, or will lend only at prohibitively high rates, leaving many people unable to break out of the cycle of poverty.
Natalia Rigol is a PhD candidate in economics at MIT with an innovative idea. Is it possible, she wonders, to use community information to create an informal credit rating that could help banks or microfinance institutions decide whom to lend to? Rigol ran a pilot project asking this question in India this summer, and she is now launching a much larger study of some 1,500 small-business owners in poor communities in India.
Tell us a little bit about your background and how you were inspired to become an economist.
I am originally from Cuba, so I lived in Cuba until I was 9 and did the beginning of my schooling there. At the age of 9, I moved to Russia and lived there for two years, and then I was in the Czech Republic for two years. I came to the U.S. when I was 13 and attended middle school and high school in Florida. I did my undergrad at Harvard and then came to MIT for my PhD, where I’ve been for five years. As an undergrad, I started working with a mentor—economist Rohini Pande—at Harvard. She’s the one who got me hooked on microfinance and gender issues, which are the things I focus on now.
What’s it like working in India?
The poverty issues in India are extremely striking. India’s a great place [to do research] because it’s a place where a lot of countries are headed. People think of China as being this exemplary country, but India looks a lot more like what poor countries are going to look like soon, in terms of really big income inequality. It’s a place where you can think about poverty issues and really learn.
Tell us about your current project.
One big problem in financing the poor is that you don’t have much information about them. If you think about finance in developed countries, in places like America, you can go to American Express and American Express is going to have reliable information about Natalia Rigol—what her savings look like, what her credit score looks like. A company that’s going to make a loan to Natalia Rigol has a lot of information. But in developing countries there’s nothing like that. In India, they’re only now getting social security numbers for people. A bank doesn’t have much information about poor people. If a bank doesn’t have information about poor people, one way to get a loan is to put up collateral. But of course poor people don’t have that. It’s very difficult for banks to differentiate between Natalia and Emily. We look the same to them. In the end, the bank decides to charge a high interest rate, because it’s taking a risk. The question I’m interested in is this: Is there some tool we can develop that can help banks differentiate between Natalia and Emily?
How might that work?
I’ve been thinking about using information that’s available in communities. Especially in a place like India, people live in social networks. It’s not like the U.S. where you live in a house and may not know your neighbors. The project is trying to understand if people have information about one another that a lending institution would find useful in differentiating between Natalia and Emily. I go to a community and ask people to talk to me about Natalia and Emily and tell me different types of information about Natalia and Emily—questions about, for example, work ethic, intelligence, business sense. Who is going to be the most productive? Who is going to grow her business the most? It seems that communities know who’s highly capable.
How does the information-collecting process work?
We first conduct a private interview with each household in their home. Here we collect a ton of information about the person's household, business and personal ability. We will use some of this data to validate whether community members know things about one another, since it is collected before anyone knows that they will be ranking their peers. We then invite five-member groups [of friends and neighbors] into a hall where they play our "ranking game." Depending on the randomization, they do this in the presence of other people or alone, and they are told whether their information will be used to allocate grants and whether they will receive incentives. At the end of this game, we conduct a lottery to select the grant winners. We then conduct follow-up interviews to measure changes in business and household wealth, and use this data to validate whether community members could predict business growth.
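The cross-randomization described above varies three things: whether the ranking is done publicly or alone, whether respondents are told their answers will allocate grants, and whether they are paid incentives. A minimal sketch of how such an assignment might be implemented is below; the 2×2×2 cell structure, the function name `assign_arms`, and the balanced round-robin scheme are illustrative assumptions, not the study's actual code.

```python
import itertools
import random

def assign_arms(group_ids, seed=0):
    """Randomly assign five-member groups to treatment cells.

    Assumed 2x2x2 design inferred from the interview:
    {public, alone} x {grants-linked, not} x {incentivized, not}.
    Shuffling then assigning round-robin keeps the cells balanced.
    """
    cells = list(itertools.product(
        ("public", "alone"),
        ("grants", "no_grants"),
        ("incentives", "no_incentives"),
    ))
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    shuffled = list(group_ids)
    rng.shuffle(shuffled)
    # Walk the shuffled groups, cycling through the 8 cells.
    return {g: cells[i % len(cells)] for i, g in enumerate(shuffled)}
```

For example, `assign_arms(range(16))` would place exactly two groups in each of the eight cells, with the cell membership itself randomized by the shuffle.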
What questions do you ask?
At the first interview, we ask for information on the labor activities of all household members, very detailed information about all household businesses, psychometric questions with business owners, and a lot of questions about wealth, health and general well-being.
How do you make sure people tell you the truth about their friends and neighbors?
If you go to a community and ask questions, and people know that the information is going to be used to allocate relatively large grants, it’s possible they’re going to lie. We have lots of pilot data that suggests that people do, in fact, lie if they have an incentive to lie. I want to know how to get people to tell us the truth.
The most direct way to do this is to give people [financial] incentives for their answers. We offer a higher incentive for telling the truth. We use a peer elicitation payment rule, Bayesian Truth Serum, developed by Drazen Prelec here at MIT. The way the rule works is that we ask people their first order beliefs—to rank people from highest to lowest profits—and their second order beliefs—how many people in the community would say that Emily would be ranked the highest? How many would say she would be ranked the second highest, and so on? We pay people based on their first and second order beliefs. Paying for second order beliefs is easy: we see how many people they guessed would rank Emily number one, and then we see how many people did, in fact, rank Emily number one. Paying for first order beliefs is the hard part. The rule works by paying higher amounts to people who give answers that are "surprisingly common," meaning that the first order belief is more common in the population than people predicted it would be via second order beliefs. Prelec has proven that this incentive payment rule is truthful—people are better off telling the truth about what they know than lying. There are also some lab experiments with students that confirm the properties of this rule.
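The "surprisingly common" idea can be made concrete with a small numerical sketch of Prelec's scoring rule. In the standard formulation, each respondent's payment combines an information score (the log ratio of an answer's actual frequency to the geometric mean of everyone's predicted frequency for it) and a prediction score (a penalty for predictions that diverge from the actual frequencies). The function name `bts_scores` and the toy data are illustrative, not from the study.

```python
import math

def bts_scores(answers, predictions, alpha=1.0):
    """Compute Bayesian Truth Serum scores for each respondent.

    answers:     list of option indices, one chosen answer per respondent.
    predictions: list of probability vectors; predictions[i][k] is
                 respondent i's guess of the population frequency of option k.
    alpha:       weight on the prediction score.
    """
    n = len(answers)
    m = len(predictions[0])
    eps = 1e-9  # guard against log(0)

    # Actual answer frequencies x̄_k across the population.
    freq = [sum(1 for a in answers if a == k) / n for k in range(m)]
    # Geometric mean of predicted frequencies ȳ_k.
    geo = [math.exp(sum(math.log(p[k] + eps) for p in predictions) / n)
           for k in range(m)]

    scores = []
    for a, p in zip(answers, predictions):
        # Information score: rewards answers that are more common
        # than the population predicted ("surprisingly common").
        info = math.log((freq[a] + eps) / (geo[a] + eps))
        # Prediction score: penalizes inaccurate frequency guesses.
        pred = sum(freq[k] * math.log((p[k] + eps) / (freq[k] + eps))
                   for k in range(m))
        scores.append(info + alpha * pred)
    return scores
```

With answers `[0, 0, 1]` and predictions `[[0.5, 0.5], [0.7, 0.3], [0.6, 0.4]]`, option 0's actual frequency (2/3) exceeds the geometric mean of its predicted frequencies (about 0.59), so the respondents who chose option 0 earn higher scores than the one who chose option 1: the answer was surprisingly common.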
How much are the grants? And how can these kinds of grants or microloans help people in an impoverished community?
The grants are $100, which is a massive amount of money for this population: about 30 percent of a business owner's capital. Other studies find that microentrepreneurs are really productive. You give them $100 and their profits increase by 50 percent two or three years down the line and continue to be higher. In terms of impacts, people’s consumption increases and their health improves. With $100, your husband can get whatever operation he needs and get back to work; without that $100, you’re literally in abject poverty.
What are your plans for the future of this project?
We’re doing a baseline survey, and we’ll be done by December or January. Then we’ll randomly allocate grants to measure whether communities were able to predict outcomes or not. We’ll probably track people for one to two years to see the evolution of their businesses and household incomes, and see how community information predicts that. We are working with a microfinance institution, which is very interested in this project. The next step, if it ends up working, would be to see how they could integrate this into their operations.