If you’re like most people, chances are you ignore most of the buttons on your digital camera most of the time. Sure, you could make careful decisions about aperture size, shutter speed, focal point, and ISO each time you took a picture. But you’ve probably got a pretty smart little machine on your hands, so no one could blame you for leaving your camera on automatic mode and letting it make quick and dirty choices. And if you’re dealing with a particularly tricky circumstance—low light, or a moving subject—why, then you can always switch over to doing all that manually.
Our brains, says neuroscientist Joshua Greene, have two similar modes of thinking. In the automatic mode we use most of the time, behavior is dictated by gut-level instincts that don’t cost much in the way of processing power. In the more resource-consuming manual mode, we make decisions based on slower, higher-level cognitive operations.
Greene is a trained philosopher who’s most interested in how these two modes of thinking affect our moral and ethical judgments. What does the gut say about dilemmas of right and wrong? Do things change when we listen to our cerebral cortex?
In 2012, working with David Rand and Martin Nowak, Greene conducted a clever little series of experiments designed to challenge the idea that people are wired to automatically do what’s best for themselves rather than serve the greater good.
The trio used a familiar psychological setup called “the public goods game.” In the game, each participant is given a sum of money and asked to donate as much as they want to a shared pool. Whatever’s in the pool is doubled and divided equally among the whole group. A “me-centered” player will be stingy, because they’ll benefit from everyone else’s generosity without giving up their personal pot. “Us-centered” players give more, because that maximizes the return for everyone, including themselves.
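The tension between those two strategies is easy to see in the payoff arithmetic. Here’s a minimal sketch of the game’s payoffs, assuming the 2× multiplier described above; the endowment size and contribution values are illustrative, not taken from the actual study:

```python
def payoffs(endowment, contributions, multiplier=2.0):
    """Payoff for each player in a public goods game.

    Each player keeps (endowment - contribution) and receives an
    equal share of the multiplied common pool.
    """
    pool = sum(contributions) * multiplier
    share = pool / len(contributions)
    return [endowment - c + share for c in contributions]

# A lone "me-centered" free rider among generous players comes out ahead:
print(payoffs(10, [0, 10, 10, 10]))   # [25.0, 15.0, 15.0, 15.0]

# But if everyone cooperates, every player does better than the exploited ones:
print(payoffs(10, [10, 10, 10, 10]))  # [20.0, 20.0, 20.0, 20.0]
```

The numbers make the dilemma concrete: with four players and a 2× multiplier, each dollar you contribute returns only fifty cents to you personally, so the narrowly self-interested move is always to give nothing, even though a group of all-givers ends up richer than a group of all-keepers.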
Greene didn’t just want to know how people would behave under these circumstances. He, Rand, and Nowak wanted to see if their decisions changed depending on the “mode” their brains were in. Somewhat counter-intuitively, it turned out that forcing people to make swift choices made them put more money into the pool. So did having them think of a time when intuition had led them in the right direction. But when people remembered benefiting from careful reasoning or reflection, they kept tighter hold of their cash.
What this suggests, Greene explains, is that most people, under most circumstances, have a deep instinct to behave cooperatively with other members of the group to which they belong. It’s only by slowing down and considering our options on a more cognitive level that our brains prod our guts and remind us to take care of our selfish needs. It’s not hard to see why this kind of behavior might have evolved: Human societies wouldn’t have made it very far if they had failed to solve what economists call “the tragedy of the commons”—the problem of how to divide shared resources without depleting them.
Unfortunately, the cooperative instinct doesn’t extend very far—quite literally. When we have to choose between our own happiness and the well-being of people across the globe, or even in another state, that same automatic thinking kicks in. It blurts out what it’s said for millennia: Do you really want to help someone from another tribe?
That, of course, is a question science alone cannot answer. But if the values we live by now tell us the answer is yes, Greene’s work suggests we ought to try switching our brains into manual mode.