I don’t get to do much philosophy these days,
so it’s nice to have a space to jot these thoughts down. With that said, what I want to say here feels more philosophy-adjacent than anything else.
This is the first in a series of posts exploring an idea that treats moral practices as ways of solving coordination problems for groups of agents. Before I go on to say what I mean by that, it might be wise to say what I don’t mean.
I’m not interested in making anthropological claims about the ways that moral practices developed in societies across the world. Neither am I interested in making any claims of the sort “moral practice X allows a group to solve coordination problem Y, therefore it is good/right/justified,” etc. Finally, I’m not trying to give a reductive analysis of any particular moral practices; I’m not trying to say that moral practice X just is a way to solve coordination problem Y, in the sense that someone might say water just is H2O.
Instead, I want to examine a particularly interesting story about groups of agents. It probably helps to imagine these agents as human, though I see no reason why they would have to be. I find myself imagining them as living in a kind of nascent society as well (post-apocalyptic or neolithic, depending on the day) though this is not necessary either. In order to examine this story, of course, I’ll first have to tell it.
The story goes something like this.
Imagine us as members of some group of agents (again, neolithic or post-apocalyptic, dealer’s choice). For our group, there will be problems that require a cooperative solution. There might be more than one such solution, but if we want to overcome this hurdle, we’re going to need to work together.
Now, if we have time to get together and talk things over, we’d likely be able to work out a solution, make a plan, and carry it out together. But imagine that our group has figured out how to sustain our population in this dangerous and unpredictable world, and that this problem keeps cropping up, sometimes often, sometimes not for generations. The solutions to this problem are not obvious, and they need to be acted on quickly. They might even require great sacrifice. A group that can prepare itself to take on a crisis like this will, all else being equal, do better than a group that cannot.
Say we develop a way to reliably influence each other to take certain courses of action when the right words are spoken. Such a practice allows us to get on the same page, and our group is able to confront this problem each time it arises. Perhaps this allows us to avoid some existential threat, or perhaps it just allows us to collect a few more resources than the other groups we interact with. Of course, such a practice might be abused by some in our group to manipulate others and assume a position of power for themselves. But groups in which such abuse occurs will be at a disadvantage compared to groups in which it does not. And so these practices experience a kind of selection effect, as the groups that employ them and experience success presumably keep on using them, with the occasional tweak here and there.
This, then, is the story of moral practices that captivates me. I’m not sure I completely buy it, but it’s been one of those ideas I keep coming back to, year after year. The angle I want to explore is that certain high-profile types of views in moral philosophy correspond to ‘powers’ that a group might have, much in the way that a superhero or a video-game character might have the power to fly, heal quickly, or see things far away.
A group with the Utilitarian power can endure sacrifice on the part of some of its members for the success of the whole. A group with the Kantian power can confer a special status to its members, fostering group resilience and cohesion. A group with the Aristotelian power can produce highly effective individuals, who deploy specialized skillsets to achieve amazing feats.
On the one hand, I think these ideas are very silly, and in a way they’re meant to be. On the other hand, I’ve always found it silly that Utilitarian, Kantian, and Aristotelian philosophers find so little common ground in contemporary philosophical discourse. So even if I’m a bit spotty on the details, I hope to capture something worthwhile here: the idea that the moral practices we have allow groups to do certain things. No particular moral practice has any special status over the others. They are all just good enough to have gotten the groups that used them to where they are today.
P.S. I apologize for dropping so much jargon at the tail-end of this post. If you swallowed your frustration and made it to this point in the piece, thank you; I’ll do my best to explain things a bit more carefully the next time I take up this topic.