Building Capacity for Maximum Impact
Gara LaMarche delivers the keynote address at the Better Business Bureau of Metropolitan New York Symposium, Baruch College School of Public Affairs.
It’s a little odd that I should be standing here this morning to share some thoughts about the role of evaluation in foundations and the non-profit organizations they support. The fact is, I don’t have much of a history as an evaluation guy. I don’t remember the subject coming up in the twenty or so years I spent on the staff of human rights organizations, and for eleven years I was a senior official of the Open Society Institute, the foundation established by George Soros, who was a well-known evaluation skeptic. When I was named President of Atlantic Philanthropies last year – a foundation that is more serious about evaluation than almost any other – I doubt that a rousing chorus of cheers went up in the offices of the American Evaluation Association.
My approach to evaluation over the years was pretty close to that of my friend James Piereson, the director of the now-closed John M. Olin Foundation, a conservative philanthropy at the opposite end of the political spectrum from me, which systematically built the Federalist Society, the law and economics movement and other institutions that have shaped the political and judicial landscape over the last thirty years – and not, in my view, in a good way. When I had a public conversation with Jim Piereson at OSI as part of our tenth anniversary symposium series in 2006, he said that Olin never did anything to evaluate the success of its grantees, or thought very much about it. He said he could see the impact of Olin’s philanthropy just by turning on the television set. Indeed.
And yet here I am. You’ve asked me to start off this symposium with my thoughts on how to maximize the impact of charities and non-profits, how to evaluate progress against mission, and how to make the most of lessons learned along the way.
And actually, I am enthusiastic about addressing these questions, because I have had the opportunity to learn a lot and reflect on what I’ve learned in my first months at Atlantic. Atlantic is one of a relatively small number of foundations – Annie E. Casey, the California Endowment, and the Robert Wood Johnson Foundation are a few others – with an in-house staff and budget devoted to what we call “Strategic Learning and Evaluation.” It has the unfortunate Atlantic acronym “SLAE,” which suggests a violent approach to the assessment of grantees, but in fact nothing could be further from the truth.
Atlantic has what you might call a holistic approach to grantmaking that, whatever refinements or improvements I might bring about in the coming years, is very much on the right track. First, we do most of the things that critics of foundations are always saying foundations should do. We make substantial grants, over a multi-year period, and generally for operating support rather than program-restricted funding. This gives grantees the tools to truly build their capacity, enough running room not to be constantly obsessed with fundraising, and support that is flexible enough for them to adjust to their needs as they perceive them and as they evolve.
But even more interesting, Atlantic not only puts the program officer – we call them programme executives, with a double M-E at the end, because we’re a Bermuda, not a U.S. foundation, and we like to be different – at the center of the decision, as she or he should be, but adds the benefits of a team of staff from our finance, communications, SLAE and other departments. These essential sources of expertise in reviewing budgets, assessing public education strategies, and strengthening organizational development and assessment mechanisms are involved in key grants from the get-go, not well after the fact, as is too often the case in philanthropy. And they often work closely with grantees and applicants as well as our own program staff.
I think that this interdisciplinary approach to grantmaking – not just as the grant is being considered, but during the full life of the grant – is distinctive and worth studying and replicating. But of course it’s a path that is only open at that scale to fairly large foundations, not to small, lightly-staffed ones or most individual donors.
Evaluation in some form, however, is something every organization of any size has to think about and plan for. Indeed, it pervades virtually every aspect of our personal and political lives. How do you know you’ve succeeded in a goal? Whether it’s winning a primary or caucus, reaching customers, losing weight, or teaching kids to read, we accept the fact that you don’t know how you are doing if you don’t have some means of measurement. Why should what foundations and charities do be any different? Looked at that way, not to have an understanding of evaluation from the start of a venture seems reckless – a form of organizational malpractice.
Before getting more deeply into this, however, I ought to say that I am ambivalent, as Atlantic’s staff knows, about the very word “evaluation.” The dictionary tells us that to evaluate is:
- to determine or set the value or amount of; appraise: as in to evaluate property, or
- to judge or determine the significance, worth, or quality of; assess: as in to evaluate the results of an experiment.
That’s not quite the way I would want to look at evaluation, so while we all strive to come up with a different word that doesn’t conjure up a process like Antiques Roadshow or The Apprentice, let me try to redefine this one. For a grantmaker to evaluate its grants – or, to use the more fashionable word of the moment, its “investments” – and for a non-profit to evaluate its own impact, either in total organization terms or in terms of a particular project or initiative, is really, most simply, to learn. That’s why the first two letters of Atlantic’s SLAE acronym stand for “strategic learning.” Foundations, and the organizations they support, should strive always to be learning centers, rooted in a clear mission but constantly reflective about practice.
Many things go into that learning, of which what we usually call evaluation is but one part. One’s own professional judgment – informed by all the observation, experience and data you can draw upon – is in my view the most significant part. An organization that, at the board and staff levels, provides the opportunity for regular, honest discussions about practice and impact – where the board and senior management set a tone that rewards candor and self-examination and treats setbacks or even failures not as things to be hidden or punished but as things to be studied and built upon – is the core upon which any other kind of evaluation must rest.
I want to talk next about the different kinds of evaluation efforts that Atlantic and its grantees engage in, to unpack the umbrella term a bit and give a sense of the diverse approaches available. It’s not so much a matter of choosing one as your organizational brand as it is determining which is the right approach for which initiative at which time. Then I want to set out a little list of what I have come to think of as do’s and don’ts about evaluation. It’s highly personal and somewhat arbitrary, but will give you a sense of what I think I have learned along the way.
Finally I want to conclude with a few bits of advice to donors, particularly families and individuals of wealth who are considering starting or expanding their philanthropy, but unsure of how to get the most bang for their buck. It’s with this audience in mind that I have begun to rethink some of my formerly breezy attitude toward evaluation. Newer, often younger donors successful in the business world want answers about impact, and while traditional philanthropy needs to help them understand the ways in which social initiatives are often quite different from profit-making ventures, we also need to be responsive to these quite legitimate concerns, and to raise our own game.
I’m indebted to my colleague Jackie Williams Kaye of SLAE, who worked in an evaluation job at Edna McConnell Clark before coming to us, for walking me through her taxonomy of the different evaluation approaches that Atlantic uses. The core question for us in all cases, as I suggested a moment ago, is “what is useful to learn.” So for a number of Atlantic’s direct service grantees – such as the Experience Corps, funded by our ageing program, where retired teachers and civil servants tutor inner-city school kids in reading and math, or Citizen Schools, a terrific afterschool provider supported by our disadvantaged youth program – we employ a fairly comprehensive and integrated evaluation system. These kinds of programs combine an internal evaluation system focused on quality with an external evaluation focused on effectiveness. An integrated system designed for continuous improvement provides a range of analysis that lets staff look at trends as well as a number of targeted questions.
For example, when participation patterns among kids enrolled in the Children’s Aid Society Carrera program were tracked, Carrera learned that boys, and in particular older boys, were more challenging to engage than girls. The program rethought its strategy for reaching and keeping boys involved, beginning with engaging them at earlier ages. The program also learned that the attendance of older participants dropped substantially because they needed jobs after school and on weekends. Carrera responded by creating paid internships and integrating outside employment into the jobs component of the program. Instead of losing these kids because they couldn’t attend, the program figured out how to stay connected to them. Having good data about why kids weren’t attending allowed them to do this.
By tracking retention rates, Citizen Schools learned early on that they lost kids after the first semester. They explored possible reasons and discovered that starting second semester programming at the beginning of February was too late. They moved the second semester start date to soon after the holidays and implemented more intentional outreach to ensure that participants stayed engaged.
Another approach is to use an “embedded” outside evaluator – someone trusted by the grantee and the funder who stays with the initiative over a period of time and provides regular reports that can affect the course of the work in real time. In 2004, Atlantic funded a two-year campaign in five states to restore voting rights for people with felony convictions, and soon recommended adding an evaluation and learning component, provided by Wolf Keens and Associates.
A key question for the evaluation was the coordination among the national groups involved and between the national and state groups. States play a key role in setting policies that affect disadvantaged people and in informing federal policy. Lessons from this work are of interest across Atlantic’s programs in the US, since the learning from this project not only enhances the Right to Vote Campaign but could potentially strengthen other efforts involving state campaigns.
Bill Keens, the evaluator, worked closely with the many participants in the Right to Vote Campaign, attending meetings and providing regular feedback. The detailed documentation and his work overall have helped all partners understand the greater effort and learn from both mistakes and successes. For instance, Keens found that the campaign’s “original design” – and honesty requires me to make it clear that the design was funder-driven – “ensured that problems embedded in the structure would be hard to overcome. In the absence of a decision-making hierarchy or more adaptive culture,” Keens wrote, “those involved had no real means of operational revision. Groups that may have worked shoulder to shoulder in other contexts sometimes found themselves at odds, with both money and reputations at stake and no adjudicating authority to which to turn.” Hard words for the funder to hear, and less likely to be expressed directly by the groups funded, no matter how trusting a relationship had been developed. It turns out that when you entice groups into a partnership based not on their organic desire to collaborate but on the prospect of money, it doesn’t work out so well. But the JEHT Foundation, the prime mover of the Right to Vote Campaign, along with Atlantic, OSI and other key backers, learned from these lessons and have handled subsequent multi-organization, multi-state campaigns differently.
An ongoing assessment of another Atlantic-funded human rights campaign – the vitally important one to enact comprehensive immigration reform – faulted the Comprehensive Campaign for Immigration Reform for “miss[ing] the boat on the change in climate which happened in [a] relatively short period of time and had enormous impact on the policy and legislative debate. In effect, CCIR was not enough attuned to the country’s debate on discomfort with change in their neighborhoods, which aided the anti-immigrants to hijack that discomfort and push legislators to vote out of fear.”
The important thing about findings like these is not that our grantees, in the heat of intense political battles, are sometimes caught unprepared or outgunned. The important thing is that, as in any campaign – military, political or legislative – they learn from these experiences to do better the next time. And as we gear up to stay the course on immigration reform, despite the changed political climate, Atlantic’s grantees are doing just that.
Case studies are another form of evaluation that is particularly useful in advocacy campaigns, of which Atlantic – which because of its tax status has an unusual capacity to support legislative initiatives – is a leading funder. These include our Ageing program’s North Carolina Campaign for better jobs for direct care workers caring for older adults, our Rights program’s support of indigent defense reform in Texas and other states, and our Youth program’s efforts to promote access to integrated services for middle-school youth.
In policy change work there is often less documented information than is available in other fields about which strategies are more successful than others, or about what strategies seem to work best in particular contexts or situations. Given this gap, we decided that case studies of “successful” policy efforts could provide useful models for others doing this type of work, and indeed buttress the case for other funders to join us in public policy advocacy, which is often seen as too edgy – or too soft – for many donors to feel comfortable with. Our thinking was that building a library of case studies could provide a kind of “checklist” that could be used to assess strategy and serve as a growing resource for advocates to draw on.
So what kinds of things have we learned? For example, one campaign element that has proven very important is the ability to engage “strange bedfellows”. In death penalty work, it can be abolition advocates from law enforcement working with abolition advocates who are family members of victims. In North Carolina, the coalition working for legislative policy to improve long term care for older adults included a consumer group and long term care associations. These groups had encountered one another previously only in adversarial roles. After their successful collaboration on the Better Jobs Better Care project, they started working together on another policy effort where they had a common interest despite their different perspectives and priorities. Our hope is that multiple case studies can provide useful information on how these campaigns managed the unusual alliances. By developing cases across areas, we should be able to pull out strategies that always seem useful as well as those that work well within a particular policy area but perhaps not in others. Because the case studies focus on state campaigns, we also hope to identify state contexts that influence how policy efforts play out.
These three approaches – evaluation integrated into the work of an organization, an “embedded” outside evaluator working closely with an organization or an initiative over an extended period and providing contemporaneous feedback, and retrospective case studies that draw out lessons that can benefit not only the subjects of the studies but others engaged in similar work – do not constitute an exhaustive list. Atlantic also supports, in many instances, intensive data collection – for example, in our efforts to reduce smoking in Viet Nam – aimed at improving quality and enabling a successful program to be increased in scale and reach. And while Atlantic-supported evaluations mostly focus on our grantees’ programmatic work, we also support them in strengthening their own organizational capacity. This is particularly apt when NGOs are well positioned – in terms of their knowledge and experience – to do the work itself, but are trying to develop a stronger infrastructure in order to expand their efforts, make others more aware of their work or operate more efficiently. Sometimes organizations are considering whether a function or role should be performed by hiring new staff or by using external consultants. In these cases, data and assessment can inform decisions about the organization itself. We’re doing this, for example, for a number of groups doing essential work to combat the Bush Administration’s many incursions on human rights and civil liberties since September 11, 2001.
Finally, I should note one other thing that is interesting and important about Atlantic’s approach to evaluation. We recognize that work in certain fields – often, we play a role in building a field, as in the civic engagement of older adults – requires assessments not just of individual grants or initiatives, but sometimes of groupings of grantees working toward some common objective. In another semi-violent metaphor, reminding me of ordnance, we call these “cluster” evaluations. We try to build communities of grantees in these clusters and provide them with regular opportunities for convening and exchange, and our external evaluators meet with Atlantic teams and grantees biannually.
So that’s a short walk through some of the different approaches Atlantic takes to helping its grantees assess their impact and improve performance. One of the other distinctive things about Atlantic is that we are going out of business in eight or nine years, heeding the desire of our donor and board to take our $4 billion in assets and bring it down to zero. That makes it particularly important for us to share the learning we are gathering. To that end, we are about to launch a series of publications of varying length and depth to make these lessons accessible to a broad audience, and to beef up our website to make the detailed evaluation reports available to anyone who wants them.
Now I want to set forth a few things to keep in mind about evaluation, at least the way I look at it from a funder’s perspective.
First and foremost, evaluation should be based on a shared understanding of what is important to measure and learn. Ideally the organization seeking funding should be asked to set forth what it thinks constitutes success, in stages or in total, how it plans to measure it, and what it needs to get it done. Evaluation is a learning tool for the organization and the funder, not a stick to beat grantees with. And this holds true for almost all measurement efforts, from No Child Left Behind to your assistant’s performance review. If evaluation is coupled with punishment, fear will overwhelm learning.
Second, doing this right takes money. Hats off to organizations that do this with their own dollars because they understand its importance. When groups have to choose between providing services and devoting resources to assessment, they often, understandably, give the latter short shrift. Funders should recognize and support their grantees in their efforts to learn. Evaluation should not be, as Peter York put it in A Funder’s Guide to Evaluation in 2005, “an unfunded mandate.” Or, as Alana Conner Snibbe wrote in the Stanford Social Innovation Review in 2006, “something that funders want to be seen doing, but not what they value being done.”
Third, evaluation should measure only what is important. Data should never be collected for its own sake. The “metrics” obsession that has overtaken some funders in recent years has not always recognized this. More is not necessarily better. Funders should never commit the cardinal sin of making grantees jump through a series of hoops – distracting them from the actual work of advancing their core mission and costing valuable staff time – for unnecessary paperwork and reporting on trivial things. And there is nothing more demoralizing, from the grantee’s perspective, than doing all this only to have it ignored or filed away without being engaged in any way.
Fourth, make sure that whoever is conducting an evaluation understands the context in which they are working. This is particularly important when experts in one field, such as youth services, try to work in another, like social justice advocacy. In recent months I read two evaluation documents about legal advocacy efforts supported by Atlantic, and perhaps because my own career in public interest law organizations makes me in this realm The Man Who Knew Too Much, I was struck that one of them, written by people without much experience in the field, had a gee-whiz aspect to it that few people more knowledgeable about comparable efforts would have succumbed to. The other, written by two of South Africa’s leading human rights lawyers about Atlantic-funded efforts to build the public interest law infrastructure there, was a pleasure to read because the authors combined deep experience in the field with sufficient critical distance and a knack for conveying issues that are often seen as technical to non-technical readers. Among other things, the report “cautioned against too strongly separating one area of public interest litigation from another … while it was possible to have organizations focusing on children’s rights, health rights or gay and lesbian rights … it was essential that general public interest organizations were also funded and supported in order to enable them to operate across a wide range of issues.” That kind of lesson is almost universally useful.
Fifth, whether you’re a program officer in a foundation or an executive in a non-profit, don’t use evaluation to outsource your own judgment. Use it to inform your judgment, and then stand behind it. It’s again worth stressing that evaluation is a tool in the quest for impact and effectiveness in everything from helping kids make a successful transition from middle school to stopping the Bush Administration’s embrace of torture. It is not like a Magic 8 ball that will tell you what to do.
Sixth, and I address this both to funders and the organizations they support, have a reasonable sense of humility about cause and effect. For organizations working for social or policy change, understand that no significant change was brought about by one organization working alone. For funders, understand that the tendency of organizations to claim disproportionate credit for some policy advance has a lot to do with their need to impress funders who are too often more concerned with seeing the grantee quoted in the newspaper than with understanding the group’s actual role and impact, which may be more effective behind the scenes. Funders, as well, are too often quick to claim credit for things that others had a hand in. And it is often not possible, particularly when the funding that is provided is not nearly enough to make the critical difference, to attribute something to the impact of one grant by one funder. When a neighborhood turns around or climate change is reversed, your $50,000 grant will have played a role, but not likely a pivotal role that can be isolated with scientific precision.
Finally, I propose a three-year moratorium on logic models, theories of change and the like that use geometric shapes and arrows, particularly when arranged in a circular or oval form. I’ve never seen one of these that is not absurdly reductionist. I just threw that in to upset people, particularly those among my own staff and consultants. But if it results in a world with fewer PowerPoint slides, I feel I will have accomplished something important in my time on Earth.
I promised to close with a few words for donors who may wish to step up their philanthropy but who have been concerned about making an impact. The first thing I want to say is so obvious it can often be forgotten: start with what you believe. If you have a passion about the death penalty or art programs for kids or the isolation of older people, or whatever it is, find a way to advance it, and worry about what you care about first and how you measure it second.
The next thing is that there is a shortage of capital for social initiatives – proven youth mentoring programs and senior volunteer programs and legal assistance clinics – but no shortage of sound due diligence. In the youth arena alone, foundations like Edna McConnell Clark and Atlantic have a wealth of expertise and knowledge about what works, constantly refined by the kinds of sophisticated approaches to evaluation I’ve described above, and there is little need to recreate or replicate it. Better to marry your capital to due diligence that you can trust.
The third thing is to remember that the lessons of business, the experience of the private sector, have much to teach non-profits but many limitations as well. Many emerging philanthropists who have been successful in business too quickly assume that all non-profits need to do is become more like businesses in order to succeed and thrive. I had in my office the other day a social justice advocate who seems to be a little late in jumping on this bandwagon, and who was touting the benefits of this approach as applied to health, reminding me that a business can’t be successful unless it turns a profit. I would argue that most enduring successful business ventures must also have social value, but it’s also true that you can be successful in making money, at least for a while, by riding roughshod over community values – look at Wal-Mart’s impact on small business in small towns and rural communities, or the rapacious gains of certain extractive industries. Social investments, on the other hand, can’t be measured only in dollars and cents, and the bottom line has many components.
Finally, please think about engagement in policy and advocacy work as well as service organizations. All the private funding for afterschool and arts programs, for community development and re-entry initiatives, for clinics legal and medical – all of it is dwarfed by the potential of government, democratically controlled and directed, unlike largely unaccountable foundations, to invest in broad community needs. Advocacy can help shape the flow of those investments, modeling programs and practices that government can learn from but not using scarce philanthropic dollars to assume an obligation that is properly a public responsibility. Funding advocacy takes staying power and a tolerance for failure amid gains over the long haul. As Susan Hoechstetter and Marcia Egbert have written in The Evaluation Exchange, a useful publication of the Harvard Family Research Project, when it comes to advocacy, “flexibility is a strength, and ‘failure’ to reach a big goal may actually produce important incremental gains.”
I usually like to end my talks with some kind of odd pithy quote, but since I am relatively new to the topic of evaluation, I had a hard time coming up with one. So I did what any careful researcher would do: I went straight to Google. And I found this gem by none other than Friedrich Nietzsche, who wrote in 1883:
“Evaluation is creation: hear it, you creators! Evaluating is itself the most valuable treasure of all that we value. It is only through evaluation that value exists: and without evaluation the nut of existence would be hollow. Hear it, you creators!”
There you have it – the evaluator as artist. Now I think I hear a cheer going up in the offices of the American Evaluation Association!
Thank you.