CCTs and Crime

The connection between poverty and crime is both well-established and notoriously difficult to disentangle. We know that high-crime areas tend to be poorer than low-crime areas, and yet we don’t usually profess that crime causes poverty, although a certain blogger/writer team of economist and journalist is quick to remind you that crime doesn’t pay. We might expect poverty to cause crime for a number of reasons–idleness leads to thrill-seeking, social norms make stealing appear common or acceptable, parents may be unable to feed their children without stealing–but separating one effect from the other is incredibly difficult.

In a careful and very well executed new paper, three economists at PUC-Rio find that conditional cash transfers–a directed attempt to put more money in the hands of low-income families while requiring that their kids attend school and not work during school hours–do in fact lower crime. The authors exploit the expansion of the program–to pay benefits to families with older children–to causally identify the effect of additional income on crime.
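For intuition, the identification strategy can be caricatured as a simple 2×2 difference-in-differences comparison: areas with many newly eligible teenagers versus areas with few, before and after the expansion. The sketch below is mine, not the authors’–the numbers are invented and the real paper exploits far richer variation–but the core arithmetic looks like this.

```python
# A toy 2x2 difference-in-differences, with made-up crime rates.
# "Treated" areas have many households gaining benefits from the
# expansion; "control" areas have few.

crimes = {
    # (group, period): average crimes per 1,000 residents (invented)
    ("treated", "pre"): 42.0,
    ("treated", "post"): 36.5,
    ("control", "pre"): 30.0,
    ("control", "post"): 29.0,
}

def did_estimate(c):
    """Classic 2x2 difference-in-differences estimator: the change in
    treated areas minus the change in control areas, which nets out
    any citywide trend common to both groups."""
    treated_change = c[("treated", "post")] - c[("treated", "pre")]
    control_change = c[("control", "post")] - c[("control", "pre")]
    return treated_change - control_change

effect = did_estimate(crimes)
print(f"Estimated effect of expansion on crime: {effect:+.1f} per 1,000")
# → -4.5: treated areas fell 5.5, but 1.0 of that is the common trend.
```

The subtraction of the control group’s change is what lets the authors attribute the remaining drop to the expansion rather than to a general decline in crime.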

The authors find that expanding the Bolsa Família program to include 16- and 17-year-olds had a dramatic, causal effect on crime rates.

My primary question about the paper has to do with the expansion itself. Because the program had already been in place for some time, many families lost income when their children turned 16 and thus aged out of eligibility; many of these same families would regain benefits with the expansion. So, did crime increase as these children aged out? Surely there’s some variation across schools in the average age and distribution of children in the program, so we should be able to at least speculate on whether something about turning 16 or 17 makes one particularly prone to criminal behavior, or whether leaving the program leads to more criminal behavior. Perhaps we can’t identify it causally in the same way, but it’s an important dimension, I think.

The second problem I have is stylistic: a clear link to a number in a table, with words such as “the program expansion led to an average X% decrease in crime,” would have made for easier reading.

h/t: @franciscome

Cited: Laura Chioda, João MP de Mello and Rodrigo R. Soares. “Conditional Cash Transfer Programs: Bolsa Família and Crime in Urban Brazil.” PUC Working Paper No. 559.


Testing, incentives, and low-achieving students, redux

Last week, a few kind words from a friend turned into an extended conversation about testing structures and incentives for teachers to help low-achieving students. Mark’s organization is unique and very cool because it targets the lowest achievers, the students Mark posited are least likely to benefit from the incentives standardized testing creates to maximize the pass rate. Brett Keller responded with a link to a discussion of an article from the Review of Economics and Statistics that basically confirmed Mark’s thinking.

Below is a quick summary of a long, dense paper and lessons learned. In short, Mark, yes, research backs up your intuition. From “Left Behind by Design: Proficiency Counts and Test-based Accountability” by Derek Neal and Diane Whitmore Schanzenbach:

The use of proficiency counts as performance measures provides strong incentives for schools to focus on students who are near the proficiency standard but weak incentives to devote extra attention to students who are already proficient or have little chance of becoming proficient in the near term.

Students who might just need a little extra push to get to the passing mark will receive any extra teaching effort the testing system encourages, and may even draw effort that would otherwise have gone to students at the ends of the distribution. It seems this problem alone should unite parents of the highest and lowest achievers in protest: low-achieving students are left behind, and high-ability students make no gains either. This system benefits no one except the marginal passers and ensures that low-achieving students never have an opportunity to catch up.

Continually raising the standards only worsens the distributional problem. In their model, an increase in the proficiency standard necessarily increases the number of high-ability students receiving extra attention, thus decreasing the number of low-achieving students receiving it.
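The incentive logic is easy to see in a toy model. The sketch below is my own illustration, not Neal and Schanzenbach’s specification: the cutoff, the tutoring “boost,” and the students are all invented. A school judged only on proficiency counts, with only a few tutoring slots to give out, will spend those slots exclusively on students just below the bar.

```python
# Toy model of proficiency-count incentives (all numbers invented).
# A school cares only about how many students score at or above CUTOFF
# and has SLOTS tutoring slots; a slot raises a score by TUTORING_BOOST.

CUTOFF = 60
SLOTS = 3
TUTORING_BOOST = 8  # assumed score gain from one tutoring slot

baseline_scores = {
    "far_below_1": 25, "far_below_2": 30,              # can't be flipped
    "marginal_1": 54, "marginal_2": 57, "marginal_3": 59,  # near the cutoff
    "already_prof": 75, "high_achiever": 92,           # pass regardless
}

def allocate_slots(scores, cutoff, slots, boost):
    """Give slots only to students whose status flips from fail to pass,
    i.e. those currently below the cutoff but within reach of it."""
    flippable = [s for s, x in scores.items() if x < cutoff <= x + boost]
    # Closest to the cutoff first: cheapest flips get helped first.
    flippable.sort(key=lambda s: scores[s], reverse=True)
    return flippable[:slots]

helped = allocate_slots(baseline_scores, CUTOFF, SLOTS, TUTORING_BOOST)
print(helped)  # → ['marginal_3', 'marginal_2', 'marginal_1']
```

The far-below and already-proficient students receive nothing, exactly the lopsided distribution of effort the paper describes; raising `CUTOFF` simply shifts the favored band upward.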

The study was also repeated with low-stakes testing, where the individual student may have had something to gain by passing (not going to summer school), but the school had little to gain. The lopsided distribution of effort didn’t appear in these cases.

Derek Neal and Diane Whitmore Schanzenbach. 2010. “Left Behind by Design: Proficiency Counts and Test-based Accountability.” Review of Economics and Statistics 92(2): 263–283.

Tests, incentives, and low-achieving students

In the midst of my paper-reading/grading marathon over the weekend, I expressed some frustration on Twitter and got some pretty wonderful responses from friends. In particular, one friend who runs a non-profit in DC sent me an immediate gchat: “I believe in you; you can do it.” It managed to snap me out of it and put a smile on my face, and then morphed into a discussion about the quality of students’ writing. Mark’s contention was that writing skills have in fact declined over time, largely because composition, grammar, and spelling aren’t emphasized any longer in school curricula. It’s not tested, so it’s not taught. I confessed my inability to make a claim about the decline, given my limited tenure as a teacher and lack of good comparisons. I think I’m a pretty good writer.

This resulted in Mark calling me arrogant, so I had to laugh a little when Mark’s recent blog post for Reach, Inc. had an arrogance-related title, but he also brings up another really important point regarding incentives and testing in schools.

It is true that incentives are not aligned to support the work we do. If a student comes to Reach reading in the 5th percentile, he or she can make 2-3 years of reading growth and still be labeled a failure on standardized tests. This means, in an environment with limited resources, it actually doesn’t make sense for a school to invest in that child’s learning. The incentives push schools to focus on those students that can go from failing to passing.

I’ll admit that I’m only cursorily familiar with the practices and rewards of the public school system and testing, but I am pretty sure that we haven’t gotten it right yet. A system that rewards or punishes based on the mean, the median, or a dichotomous pass/fail–and ignores distribution and progress–is necessarily going to leave a lot of students behind. As Mark suggests, it makes it near impossible for individual students to catch up, not only because it’s hard work, but because there’s little immediate reward for stakeholders to do the pushing. It works the same way with writing: there’s not a good way to test writing, so we don’t test it, and thus it’s not emphasized in school, leading to worse outcomes in writing.

Mark’s work reminded me of a paper I saw presented at CU this winter. In an RCT in Togo (or Benin? The researcher was from one of those countries and did the work in the other), an experiment was set up to see how different incentive schemes could reward cooperation in studying for standardized tests, and how that affected outcomes for students in different parts of the ability distribution. The results make cooperation look pretty good. I, of course, cannot remember the job candidate’s name or the title of the paper, but I’m going to find it. Don’t worry.

Student thoughts on recent Gettysburg economics events

As the semester goes on, my Methods students have more and more tools with which to analyze current events in economics, and ideas they encounter in their classes. A few students put together some thoughts on their blogs about recent visitors including Nate on George DeMartino and Andy on Hanushek.

I’m happy to see my students talking about what they’re seeing, but it’s also a reminder that I may need to talk a little bit more about dummy variables before the semester is up.

My post on Hanushek and Reschovsky is here. Sadly, I didn’t make it to DeMartino.

More on Education and TFA

A week or so ago, Matthew diCarlo of the Shanker Institute published a post on the Shanker Blog exploring the link between teacher performance and the much-lauded, much-criticized, and thus controversial program Teach for America. TFA, as it is known, puts high-achieving, service-oriented college grads into classrooms in high-need areas all over the country for a period of two years. It’s an extremely competitive program. My senior year of college, I watched several close friends navigate the process and succeed, while another close friend did not get a spot. Ironically, the friend who instead entered the education system as an emergency teacher taught for several more years than the TFAers.

Matt diCarlo provides a quick-and-dirty review of the literature, which comes to rest on this:

Yet, at least by the standard of test-based productivity, TFA teachers really don’t do better, on average, than their peers, and when there are demonstrated differences, they are often relatively small and concentrated in math (the latter, by the way, might suggest the role of unobserved differences in content knowledge). Now, again, there is some variation in the findings, and the number and scope of these analyses are limited – we’re nowhere near some kind of research consensus on these comparisons of test-based productivity, to say nothing of other sorts of student outcomes.

The assertion, and indeed the whole post, is filled with caveats, conditions, and couching, which tells me that Matt is likely a reasonable person and certainly an economist. It also underscores how difficult it is to analyze teacher performance with standardized tests, something Dana Goldstein explores a bit today.

Both Matt diCarlo and a linked post at Modeled Behavior suggest that “talent,” at least as measured by the private sector, isn’t a good indication of teacher effectiveness. While that’s interesting, I’m left curious: what is?

What makes a good teacher? At any level? I’m curious because–among other reasons–I think I’m a pretty good teacher. I would imagine that most of us like to think we’re good at our jobs. If the skills that make me a good (or average, or mediocre, or bad) teacher aren’t the same ones that would help me in other markets, what are they? And perhaps more importantly, why are we asking the private sector, which hasn’t enumerated the qualities of a good teacher and doesn’t reward them, what entails good teaching? And shouldn’t we figure this out before we go about firing “bad” teachers as a means of trying to improve student outcomes?

h/t @ModeledBehavior

Public Education Finance

Last week, my department hosted two prominent economists who do research on public education finance to speak to students, faculty, and local teachers about how we’re going to finance public schools and improve US student outcomes in the coming decades. By international standards, US school performance lags behind other countries in math and science in particular, a gap widely heralded as bringing about the eventual demise of our economic and geopolitical advantage.

This is certainly not my area of expertise, so I’m speaking a bit off the cuff here, but I thought I’d summarize a bit.

Andrew Reschovsky, who is a professor in the policy school at the University of Wisconsin-Madison, asserts that we’re not paying teachers enough. His argument wasn’t entirely clear to students, it seemed, but his ultimate prescription is to bring more money to the problem.

Eric Hanushek, of the Hoover Institution at Stanford, presented an argument that many of my students found much more compelling. Firing the bottom 5-6% of teachers from each school and replacing them with average teachers, he says, would raise math and reading scores dramatically. And if we could only get to Canada’s level, it would add trillions to our GDP.

One teacher rightly asked: where do you expect to get these teachers, particularly when you’re cutting their salaries left and right? Hanushek replied that there are a lot of unemployed teachers, but mostly ignored the distributional problem. There are a lot of unemployed teachers in Michigan, where salaries are high and applicants far outnumber openings. There are lots of openings in places like Arizona, where salaries are low. It’s as much a problem of getting people to move to Arizona as it is of replacing the teachers who get fired.

Neither side got much into the question of how we measure student outcomes (for more on this, see Dana Goldstein, who is also moderating an event on the same topic at the New America Foundation tomorrow evening in NYC). Though Hanushek was fairly convinced that some measure of teacher value-added, derived through rigorous testing, was in order, by his own admission principals and colleagues all seem to know who the bad teachers are. In that sense, amending the system to allow teachers to evaluate each other might lead to more efficient outcomes than administering tests that hamper teachers’ ability to teach and can be racially or culturally biased. We know testing is problematic, and yet, right now, it seems to be all we have.
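For readers unfamiliar with the term, a bare-bones caricature of a value-added measure might look like the sketch below. This is entirely my own illustration with invented numbers–real value-added models condition on prior achievement, student background, and much else–but it shows the basic idea of ranking teachers by their students’ test-score gains.

```python
# Caricature of a value-added measure: score each teacher by their
# students' average test-score gain relative to the school-wide
# average gain. All data invented for illustration.

gains_by_teacher = {
    "teacher_a": [5, 7, 6, 8],
    "teacher_b": [2, 1, 3, 2],
    "teacher_c": [4, 5, 4, 3],
}

def value_added(gains):
    """Each teacher's mean student gain minus the school-wide mean gain,
    so positive values mean above-average and negative below-average."""
    all_gains = [g for student_gains in gains.values() for g in student_gains]
    school_avg = sum(all_gains) / len(all_gains)
    return {
        teacher: sum(g) / len(g) - school_avg
        for teacher, g in gains.items()
    }

va = value_added(gains_by_teacher)
worst = min(va, key=va.get)
print(worst)  # → teacher_b: who a pure test-based measure would flag
```

Note how much rides on the test itself: if the exam is noisy or culturally biased, that bias flows straight through to who gets labeled the “bad” teacher.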

As a teaching moment, I wanted to highlight how two people, coming from rather different sides of the aisle, could use the exact same information to come up with very different policy prescriptions. I’ve heard some students remark lately that econometrics seems like a science without answers. But I think the better description is that there are many answers, and we’re tasked with finding the best ones. I also had a long discussion with a colleague and my students about how teacher quality over the past fifty years has likely changed dramatically as more opportunities for women opened up in different fields.

I find it all really fascinating.

I was really hoping to outsource this post by linking to my students’ blogs, but none of them wrote about it (though many have some interesting thoughts about Moneyball). Even for extra credit. Guess it’s going to be a required assignment next year. I will update here if I notice any posts about it in the next week.

On E-Universities

Megan McArdle tackles the future of society and universities in a recent article at The Atlantic. In response to a post on the future of universities by Stephen Gordon at the Boston Globe, she enumerates her predictions for how societies will change if universities change to a totally online model.

Both McArdle and Gordon place great emphasis on cost, and perhaps not wrongly. Gordon claims that because employers can hire an MITx-credentialed student more cheaply than a regular university grad (no student loans to repay), the MITx model will win. McArdle says that the resulting economies of scale will push us all toward the cheaper option, and she thinks that’s good. But there are a couple of assumptions implicit in the analysis that I find incredibly disturbing. And not just because it would likely put me out of a job.

The first is that it’s valuable to have everyone learn the same thing. I find this horrifying. Yes, it would be useful if everyone used the same computer programming language, but if they did, then things wouldn’t progress. Standards become entrenched, like the QWERTY keyboard, which we all know is inefficient and yet learn and use anyway. I think it’s great that most economists use Stata, but I also think it’s great that some use SAS, so that if I needed something done in SAS–which handles large datasets much better, while Stata is perhaps simpler to learn–I could get it done.

I want to know people who have read different books and studied different thinkers and learned different ways of studying or learning about the world. I think life would be incredibly boring otherwise.

Secondly, though McArdle mentions it, I think both authors severely underestimate the networking effect of college. McArdle says that we’ll need to find a different way to essentially make friends, but I think it’s more than that.

People I know from college represent not only many of my close friends, but also collaborators, colleagues, coauthors, references, providers of services, and directors of charities I support. If I wanted to go into investment banking or consulting or medicine or some other field, I have a list of people I would call for advice and to let them know what I was hoping to find, work-wise. I’d imagine that at least one Duke alum, if not many, would aid in my career change or become a client down the line.

This is not unique to Duke. If I’d gone to CU or Stanford or UVA or Metropolitan State, those networks would still be important–and important to my employer, not just to me. I think employers recognize this. Education signalling is not just about quality (regardless of noise levels); there’s also an assumption that who you know might matter at some point.

Besides, what the heck are journalists going to cover if researchers aren’t putting out papers and books?

Why we educate women

The World Bank’s Development Impact blog has recently been hosting guest posts from job market candidates in economics, and a few days ago Berk Ozler, a regular contributor, synthesized some of the lessons from their papers and from one by Rob Jensen (forthcoming in the QJE). After briefly noting that some are working papers, and certainly subject to change, Ozler concludes that we’ve been going about increasing women’s educational attainment in the developing world in the wrong way. Backward, he calls it. Instead of making it easier for women to go to school by providing school uniforms or scholarships or meals, we should be concentrating on changing women’s opportunities to work. If women see the possibility of work, higher wages, or more openings, they will likely demand more education for themselves or for their female children.

From a purely incentive-based approach, it makes perfect sense. If female children are likely to bring in earnings, particularly if they might be comparable to or even higher than their brothers, then parents have an incentive to educate female children. Higher earnings perhaps mean better marriage matches, but most certainly mean better insurance for parents as they age. Women with their own incomes can choose to take care of their parents.

From a feminist perspective, however, it’s a bit problematic. Such analysis implicitly values waged work over non-waged work, a problem inherent in many economics questions, most apparent in how we measure GDP. We know that increasing women’s education levels is valuable in and of itself, regardless of whether those women go on to work. More education for women means later marriage, lower fertility, reduced HIV/AIDS transmission, reduced FGM, and more.

It’s reasonable to think that regardless of how we set up the incentives–either by showcasing opportunity or by reducing the immediate costs of schooling–all of these things will happen. And certainly job creation and the encouragement of seeking new opportunities to work are desirable. But if we choose to focus all of our resources on showcasing opportunity (particularly when it may set up unrealistic or very difficult-to-achieve expectations; note that I haven’t read the Jensen paper yet), then we reinforce the idea that “women’s work,” or work in the home, is worth less than waged work.

In a world where a woman becomes educated in hopes of finding work, but doesn’t, how does that affect her ability to make household decisions? To leave an abusive spouse? To educate her own children, male and female, equally? Jensen’s paper seems to imply that the very promise of women’s wages is enough to change bargaining power, but I wonder if that will stick. Does failure to find work–for whatever reason–affect women’s status when work is understood to be the sole goal of attaining more schooling?

Treating students differently

Education research seems to be teeming lately with the idea of “stereotype threat,” whereby women in particular don’t do as well on tests not because they are incapable but because they are faced with prejudice. If people think I’m going to do poorly, why work hard? Or so goes the logic.

This article from the Daily Beast, which outlines much of the research on such ideas of late, struck me for its mention of how students are treated differently by their teachers.

In a study published last year, psychologist Howard Glasser at Bryn Mawr College examined teacher-student interaction in sex-segregated science classes. As it turned out, teachers behaved differently toward boys and girls in a way that gave boys an advantage in scientific thinking. While boys were encouraged to engage in back-and-forth questioning with the teacher and fellow students, girls had many fewer such experiences. They didn’t learn to argue in the same way as boys, and argument is key to scientific thinking. Glasser suggests that sex-segregated classrooms can construct differences between the sexes by giving them unequal experiences. Ominously, such differences can impact kids’ choices about future courses and careers.

I don’t teach single-sex classes, but in my principles classes, I’ve noticed that the men seem to ask questions–and answer questions–in a way that encourages debate. While women are perfectly willing to raise their hands when they have the right answer, they’re less likely to disagree with me or ask a question that seems to critically engage the subject matter.

Thankfully, this seems to diminish a little in upper division classes, where I see both men and women engaging the ideas and critiquing what is set before them. So at least anecdotally, I’d argue that all is not lost by middle school. But that doesn’t mean we shouldn’t work harder to get women to engage critically at every level.