That moral feeling
Kennisland's visiting scholar Sarah Schulman takes on a question highly relevant to KL: how do we know that what we do actually adds value? In Kennisland's case: do we really make the Netherlands smarter through our work? Do we do enough good to justify the money spent on it? And what is doing good, anyway, and who decides? Questions that certainly need to be asked from time to time.
Luann firmly pounded on the window. An hour before, her boss had handed her the notification. Three brothers – ages 6, 7, and 9 – hadn’t shown up to school in over two weeks. When their teachers last saw them, they were wearing dirty clothes and had been sent home with head lice. The door opened a crack. Luann introduced herself as a social worker from child protection. The mum reluctantly widened the door. What Luann saw disgusted her. Dirty diapers piled high, food smeared on the floor, a strong stench. She felt compelled to act. And the more information she gathered, the more her resolve grew. This was a ‘clear-cut’ case of neglect. Of a mum without the capacity to keep her kids safe. The kids would have to be removed.
As soon as Rebecca received the call from the hospital, she knew what she had to do. Her dad had fallen, again, only this time he’d broken his hip. When her dad got out of surgery, Rebecca would move him into a nursing home. A nice one. That didn’t smell too bad. Where the people were friendly. And he could play bingo. He loved bingo. Most importantly, he’d be safe. The house was just too big for him, and the 20 hours of care he received a week were no longer enough. Dad’s insistence on living on his own had put him in harm’s way. Rebecca could not continue to allow that.
Neither Rebecca nor Luann felt they had a choice. They had to intervene. And make a ‘wrong’ situation a little more ‘right.’
When we classify a situation as ‘wrong’ versus ‘right’, we gain a moral impetus to act. We apply what we believe to be a universal value set to others. To our mum and dad. To vulnerable kids. To neighbors. Even to strangers. This is the basis of the welfare state.
Not all situations trigger a moral reaction. Many trigger an aesthetic reaction. Or a prudential reaction. When a friend you’re shopping with professes to love the skinny jeans you hate, you might call her taste into question, but you’re not likely to treat her preference as universally wrong. Similarly, when your son or daughter neglects to study for their maths exam, you’re likely to view their behavior as unwise, but not morally repugnant.
But, how do we determine what is right/good and wrong/bad? And how do we change our moral standards? Can we? When should we?
These are questions I’ve skirted in most of my blog posts. Questions that I will, by no means, answer fully here. They are darn hard questions. That remain mysterious to philosophers, moral psychologists, and neurobiologists. That ‘we’ social innovators often ignore. If anything, the methods we use and the solutions we create tend to look at social challenges from an aesthetic or prudential view. If only social workers worked smarter – more collaboratively and without all the paperwork – they’d make better decisions. If only our nursing homes were re-designed, then older people and their families would have a better experience….
Only ‘better’ is a by-product of a particular set of moral beliefs. Social workers, nursing homes, and most of our modern welfare machinery are predicated on moral beliefs about harm and fairness. That people are vulnerable and need protection. These beliefs are so deeply embedded that, often, we’re not aware of them. Not aware that they are biasing us to take certain actions over others. Or cognizant that there are viable alternatives.
And herein lies a big difference between radical and incremental social innovation. At least for me. Radical innovation goes deep. It illuminates our generally hidden, taken-for-granted moral beliefs. And it does at least one of two things:
First, it helps us to re-classify social situations. From moral to prudential. From prudential to moral. From moral to aesthetic. From aesthetic to moral. Back in the 1950s, smoking was seen as an aesthetic choice. An issue of taste. Now we view smoking from a moral lens. It causes harm – and so ‘we’ are justified to legislate for social good.
Second, radical innovation enables us to re-classify the behaviors we deem to be harmful, fair, just, etc. To redefine what our values actually look like, and to re-prioritize them. Restorative justice programs, for example, re-classify forgiveness as just. And place this version of ‘justice’ over other moral values like ‘loyalty.’
So, how do we bring about such reclassifications?
There are a few different theories about where our moral values come from, and therefore how we go about changing what we believe.
Jonathan Haidt’s Social Intuitionist Model posits that our morals come from emotional reactions – like disgust and empathy. He differentiates between moral intuition and moral reasoning, and argues that we reason after the fact. He notes that “people have nearly instant implicit reactions to scenes or stories of moral violations; that affective reactions are usually good predictors of moral judgements and behaviors; and that manipulating emotional reactions…can alter moral judgements” (p. 998). Remember social worker Luann? She felt disgust, and subsequent information only confirmed her gut reaction.
Paul Bloom puts more stock in our ability to reason – offering up what might be called the Reflective Equilibrium model. He writes, “Emotional responses alone cannot explain one of the most interesting aspects of human nature: that morals evolve. The extent of the average person’s sympathies has grown substantially and continues to do so. Contemporary readers…have different beliefs about the rights of women, racial minorities and homosexuals compared with readers in the late 1800s…Rational deliberation and debate have played a large part in this development.”
If you buy into the Social Intuitionist Model, then, changing our moral beliefs requires changing our emotional responses. If you buy into the Reflective Equilibrium model, then, changing our moral beliefs also requires increasing our ability to reason. It’s not either/or.
Design-led approaches to social innovation tend to appeal to our emotional selves whilst analytic-led approaches tend to appeal to our rational selves. Over the past few years we tried not to take an either/or approach, but to blend design and analytic methods. So we used ethnographic stories to spark empathy, and numbers to spark deliberation.
Only I’m not sure we really changed a whole lot of people’s underlying beliefs. Or more importantly, built their capacity (or our capacity) to continuously identify, question, and reclassify their beliefs. Sure, our interventions often got people feeling or thinking something different. In the moment. But not all that long after. How would we have enabled Rebecca and her dad to articulate their different values and reclassify what was harmful and fair to whom? Could that have led to different decisions? To looking at shared living arrangements, for instance? And how could we have enabled Luann to recognize her emotional biases, try out different reasoning, and explore alternatives to removal? Wealthy folks often have nannies and cleaners…
Research offers us a few clues for how to build awareness, and bring about deeper moral shifts.
Anthony Appiah argues for increased and continued exposure to new social groups, and new moral codes.
Wheatley and Haidt (2005) show that reconditioning people’s emotional responses through hypnosis shifts their moral judgements. When people feel disgusted, their moral judgements are more severe.
Pastötter et al. (2013) discovered that inducing a positive mood influences moral judgements. Using music and autobiographical priming (having people think of a nice memory), the researchers were able to shift how people made decisions.
Rudman, Ashmore and Gary (2001) found that giving people the time, space, and structure to measure their biases, document their day-to-day judgements, forge social connections, and engage in heated debate can alter their implicit and explicit prejudices. Implicit prejudices consist of automatic associations (e.g. between Blacks and criminality) that are unavailable to introspection – meaning people don’t realize they hold these beliefs. Explicit prejudices are the ones we can name (sometimes reluctantly).
Students joined either a seminar course on prejudice and conflict, or a more generic research methods course. Every week, for 3 months, students who enrolled in the prejudice and conflict seminar learned about intergroup conflict, kept a journal documenting instances of their own biases, got to know Black students and faculty members, engaged in heated conversations, and used an assessment tool to measure their implicit biases. Students in the seminar scored significantly lower on implicit prejudices compared with students in the generic course.
This finding is striking when compared with the results of most ‘diversity education’ classes. Most multicultural workshops have proven to be ineffective. Interventions that focus on ‘color-blind’ strategies, where people are asked to suppress their category-based judgements about people and situations – are particularly ineffective. It seems suppression leads to rebound. It’s better to be explicit.
All of these studies have their limitations. Most were conducted in laboratory settings, or in universities. The participants were open to exploring their beliefs. Still, they give us some starting points for prototyping interventions that just might scratch beneath the murky moral surface.
Starting points like…
- Measuring implicit beliefs
- Emotional reconditioning
- Direct contact with groups who hold different moral views
- Positive mood inducing experiences (e.g. music)
- Explicit conversation and debate
- Dosage: weekly exposure, prolonged exposure (at least 3 months)
Some of these starting points are rather different to the change mechanisms I’ve helped to prototype in past projects. A big premise of interventions like Family by Family and Weavers is getting to know people a lot like you. These are people who come from a similar background and can validate your feelings. They may model different behaviors, but not necessarily different moral values. If anything, people select peers who hold similar views. And when folks do hold different values, they’ve often found conversation to be stifled, tricky, unnatural.
This is one of the conundrums of bottom-up moral change. It can be deeply uncomfortable. And confronting. Plenty of people will not engage. That’s why we sometimes need top-down moral change. We need strong leadership to re-classify social issues and behaviors. To say gay marriage is a moral right. To say assault-gun ownership isn’t about harm reduction, or about individual fairness. The question is: when, and who decides?
Social innovation can’t just be about technical solutions and better designed stuff. It’s got to also be about values, politics, and leadership.