This is the second part of our interview with Joshua Knobe, who currently holds the position of Associate Professor in the Program in Cognitive Science and the Department of Philosophy at Yale University. Knobe was interviewed by three doctoral students, Michael Schapira, Jon Lawhead, and Tim Ignaffo. You can read part one of their discussion here.
Ignaffo: You have an effect named after you: the Knobe effect. How did this come about?
Knobe: When I was an undergraduate, I was interested in questions about how people determine whether someone did something intentionally or unintentionally. We just thought of it as a question of psychology, and didn’t realize that this had some sort of implication for philosophy. So we did a bunch of studies about how people decide on whether something is intentional or unintentional, and then we sent them to a psychology journal and they published them. But meanwhile, I was reading a bunch of work in philosophy, especially the work of Friedrich Nietzsche. He had these ideas that our whole way of understanding the world is colored by our moral views, so our whole perspective on the way the world works is in some way reflected in our values. As stupid as this might seem, I never connected these ideas from Nietzsche to these experiments we were doing and publishing in psychology journals.
We had the idea that maybe people are injecting these moral notions into questions about how to understand whether we behave intentionally or not, so we ran a study in which we randomly selected people and gave them one of two cases that differed just in their moral status, so they’d be the same in all other respects.
In one case, participants were given a story where they supposed that the vice president of a company goes to the Chairman of the Board and says, “Okay, we’ve got this new policy. It’s going to make huge amounts of money for our company, but it’s also going to harm the environment.” The Chairman of the Board says, “I don’t care at all about the environment. All I care about is making as much money as possible.” So they implement the policy, and it does harm the environment. We asked the participants, “Did the Chairman of the Board harm the environment intentionally?”
The other case was exactly the same, but we just changed the word “harm” to “help.” The VP goes to the Chairman of the Board and says, “I’ve got this new policy, it’s going to make huge amounts of money, and it’s going to help the environment.” The Chairman of the Board responds, “Look, I know it’s going to help the environment, but I don’t care about that. All I care about is making as much money as we possibly can, so let’s implement the policy.” So they implement the policy and, sure enough, it helps the environment.
The only thing that is essentially different is the moral status of the act. In the first case, people said the Chairman of the Board acted intentionally, and in the second case, they said he acted unintentionally. It seems as though our moral judgments are somehow affecting our understanding of these issues.
Ignaffo: I was actually wondering about your use of questionnaires in studies like these. Have you found that there are certain parks, or times of day, or even states of weather when you find it’s easier to get people to participate in these questionnaires? Did these things ever factor into your studies?
When I was a graduate student, I spent many hours just wandering through these public parks in Manhattan trying to get people to answer questionnaires. In general, I found that in the parks that were more downtown, people were happier to do these studies than in the ones that were more uptown. In one particular case, we ended up doing a study in which we encouraged people to enter a certain moral framework: we told people a story about someone from the Arab world who had a very anti-American attitude, and you could answer the questions we were asking in different ways. Depending on your own psychology, you could either try to really enter the mindset of this person from the Arab world, or you could just answer from your own moral perspective. After we gathered all the data, we did an analysis where we looked at the difference between people from Central Park and Washington Square Park, and there actually was a statistically significant difference. People in Washington Square Park were more likely to give the answer where you were abandoning your own moral perspective. So maybe location does really make a difference.
There have been a number of studies that show different effects of mood in how people respond to questions. For example, one question we’ve been very interested in is the question of whether people think it’s possible to truly be free or to be morally responsible if the world is completely determined. One theory that’s been developed is that people’s answers depend on how they think about the question. You can think about the question in a kind of abstract, theoretical, or reflective way, or in a more emotional, engaged way. One study looked at that very theory just by altering the font in which the question was written. So if you give the question in a really easy to read font, like Arial, people just breeze right through and give an intuitive answer. If you give it in a difficult to read font, then people stop and think. People are more likely to say that you can’t be morally responsible in a deterministic universe if the questions are written in a difficult font.
Schapira: It seems like you are very open to experimenting with different methods and even the form of how you present your ideas. For example, in the book you edited with Shaun Nichols, you two wrote a manifesto for Experimental Philosophy. How did you arrive at the need, or the desire, to write a manifesto? And was it a playful gesture?
Well, the Pixies might have played a central role here. Clearly, there is something a little bit preposterous in writing a manifesto for this kind of work. But I think there is a general sense that we don’t want to keep doing the same thing, but always sort of change and do different things.
Shortly after writing that, I started collaborating with someone doing linguistics — very formal work, filled with Greek letters. I hope we continue to constantly change in that way. I feel like a secret to not settling into something is that there are always young people who are trying to push you in different ways. Hopefully our students will be able to stop us from becoming calcified and ossified in our approaches, and force us to go into different directions.
Lawhead: All three of us have taught philosophy at the pre-collegiate level. It seems important for kids to be exposed to philosophical questions in a way that makes it obvious that these questions are driven by considerations that are eminently practical. Maybe your multidisciplinary background gives you some unique insights on how philosophy might be presented to people who haven’t been inculcated into the priesthood of “dead white guys.” Do you have any thoughts about how this looks at the pre-collegiate level?
The obvious way to answer that question would be to say that I couldn’t answer it from the armchair. Rather, we should try a bunch of different approaches and see which ones actually work. So I feel like you’d be the one who’s actually done that. You’ve tried a bunch of different approaches, so you have some empirical data now. What seems to have worked?
Lawhead: That’s a good question. The first response I would give would be to say that kids are naturally curious. Part of the problem that I’ve seen is that a lot of pre-college instruction, especially, and college, to some extent, is structured so that kids come out of high school having lost a lot of that curiosity. This is one of the reasons that I was attracted to the Brooklyn Free School, and enjoyed that setup. Philosophy at its best encourages the kind of curious thinking that is very much endemic to children, and doesn’t seem to be endemic to a lot of adults, so the thing that seems to me to work the best to encourage children to think philosophically is to let it come organically. To present interesting problems that are really obviously relevant.
It takes very little motivation to see why a question like, “How is your cultural upbringing relevant to your decision about what is morally right or wrong?” is important. On the other hand, with something like “the problem of universals,” it takes much more motivation for someone to see why it’s a problem that ought to be tackled. But then I’m tempted to say something like, “Well, that just speaks to the fact that we should give these kids these sorts of questions in other classes, and philosophy ought not to be its own discipline. Philosophical thinking should be injected in all of these other classes that kids are exposed to. So rather than have pre-collegiate philosophy classes, get teachers in physics, chemistry, math, and all these other disciplines to encourage that kind of philosophical thinking.” But that comes with its own challenges too.
Ignaffo: Coming back to some of the studies that have been done in Experimental Philosophy, could you describe how you looked at our views of moral culpability?
We were curious about the age-old question of free will. For thousands of years, philosophers have been wondering, “If something is completely determined, or is completely caused by prior events, then can we still be held morally responsible for it?” There have been people who have thought that the answer is clearly no, that we can’t be held morally responsible. Others have thought the clear answer is yes: even if the world were completely deterministic, it doesn’t matter at all. You can still be held responsible for your actions all the same.
We thought that there are good reasons that might be in our minds, pulling us in different directions. There is something in our minds pulling us towards the view that the answer is “no,” and there is something else pulling us to the view that the answer is “yes.” In particular, we thought that maybe the more you think about it as this abstract, theoretical, philosophical question, the more you will answer no. And the more you engage the question emotionally, the more you will answer yes.
We tried to randomly assign people to different conditions that would hopefully elicit these different modes of thought. In one condition, we described this completely deterministic universe, Universe A, and then asked the abstract question, “Can anyone in Universe A be held responsible for anything that they do?” And people almost always answered, “No, absolutely not.” In the other condition we described this totally deterministic universe, Universe A, and the participants were told to imagine this one guy in that universe, Bill. Bill falls in love with his secretary, so he decides to leave his wife and his family. He sets up a secret device in his basement and burns them all to death. Is that one guy in Universe A morally responsible for what he did? And there, people overwhelmingly said, “Yes, he is.”
So participants are seeing on the one hand that no one can be held morally responsible for anything they do in this universe, but on the other hand that this one concrete guy, Bill, is morally responsible. This seems to suggest that depending on how you are thinking about the question — which part of the mind that you are using — you can offer radically different responses.
Lawhead: Have you ever pointed this out to people who were participating in the experiment? Have you ever given them both versions of it at different times and said, “Look, there’s a tension in your answers,” and explored how they explained that or resolved that within themselves?
Yeah, there have been a bunch of different studies trying to use this method. So one thing we did with this free will question was tell people, “We just did this experiment: some people, we gave this question and they overwhelmingly said yes. Some people, we gave this question and they overwhelmingly said no. What do you think about this? It can’t be both because they contradict each other, so what’s the right answer?” And here, the results were just 50/50.
There was also a study using the question that I raised earlier, about the Chairman of the Board who helps or hurts the environment. Two philosophers gave people both versions, in one order or the other, to see what they’d do when they gave them both. What they found was that there was a massive gender difference in how people responded. Women showed a desire to be consistent, and as a result showed a massive order effect: if you give them the harm version first, they say it’s intentional, and if you give them the help version first, they say it’s unintentional, and then with the harm version they will say it’s unintentional, too. Men are far less consistent: they just say, “If I contradict myself, I contradict myself. I contain multitudes.”
Schapira: How interested are you in how your data gets interpreted? For example, someone may point to this study and say, “Look, this proves what I’ve been saying about gender differences.”
In philosophy, there is a long tradition of what some people call “view xers.” View xers are people who have some kind of view, view x, and then associate themselves with that view. “I’m the guy who has view x.” When some new evidence comes along that supports whatever that view is, they say, “Look, that proves that I was right. View x is correct.” So you could imagine certain philosophers developing their own views y and z and saying, “I’m the man with view y.” I’d hope that we wouldn’t be like that. One thing that’s been really notable about people within Experimental Philosophy is that they don’t develop some theory that they very much associate with and tenaciously hold no matter how the evidence turns out. Instead, there has been this surprising willingness for people to change their minds in light of new data.
Lawhead: Does that seem to have been working out so far?
So far it seems to have held. Maybe it’s some kind of thing that has to do with selective group membership. There might be this sense of community that says, “We are people who don’t do that.”
Some of the recent work we’ve done is on pornography. One view that people have sometimes had about these questions is that people can intuitively think of things either as physical objects or as psychological beings. I can view someone as a genuine human being, with a mind and emotions and beliefs. Or I can look at something, like this cup, as a mere physical object. Some people thought that maybe when you see someone in a pornographic image, you start to think of them more like a physical object, and less like a genuine human being.
But we thought that the idea that there is something like this capacity to see someone as a mind, and therefore think of him or her psychologically, is a mistake from the beginning. Instead, we found that there were two different psychological processes. A process whereby you think of someone as having states like beliefs, desires, plans, intentions, and then separately a process whereby you think of them as having emotions, sensations, feelings, and so forth. And these things seem to be independent. You can think of some objects as having one or the other or neither or both. So we conducted a series of studies looking at what happens to your perception of someone when you see that person in a pornographic image.
What we found is that in keeping with these traditional theories you decrease your tendency to think of that person as having intentions and goals and so forth, but you actually increase your tendency to think of them as having emotions and sensations. So if people are seeing you too much as a machine and someone who has no emotions, just this driving force to resolve questions in the philosophy of education, all you have to do is take off your clothes.
Schapira: Where do these questions come from? Do they come from your students, or are they sort of picked out of the ether?
One of the things that has been central in Experimental Philosophy has been the influence of young people. If you look at a lot of areas of academic research, they are driven by very senior figures — but Experimental Philosophy is not like that at all. With this work on pornography that I was just talking about, the entire project was completely driven start to finish by someone who, at that point, was a graduate student. Many of the other projects we do have a similar trajectory. Every week I have a meeting — sort of an Experimental Philosophy lab meeting — mostly filled with graduate students in psychology, but also philosophy students and undergraduates, and we just try to think about ideas. All the presentations are from students. It seems like a great percentage of the new insights are coming from these very young people — many of them undergraduates.
Schapira: How formal are they? Are they on campus, in a department? Are there couches or are they around a table?
I have to admit that we meet on campus, but it would be cooler if we met in the forest.
Ignaffo: How pedagogical do you see Experimental Philosophy as being? What would be your mission statement?
There are many things where people think that the only way to know about them is to go out and study them. Like, if you think about how the planets move, everyone will think that we just have to go out and study that. But when people think about their own intuitions, sort of ordinary folk psychology, they think, “Well, I don’t need to study that. There are no empirical mysteries to studying that, because I’m just an ordinary person.”
I feel like one of the main lessons of our research is that people are just drastically mistaken about how their own intuitions work. If you had just asked me, “How do you decide whether someone does something on purpose?” I would have had a certain view, but my view may have been completely wrong about how I, myself, was doing these things.
Lawhead: Has anybody in this discipline been called upon to integrate this into policy decisions?
There are people who are not experimental philosophers who have drawn on Experimental Philosophy. But experimental philosophers themselves seem to have this stance of uncertainty; there is this culture in the field of resisting saying, “The answer is, the U.S. government should do this!” There is, rather, an ethos of saying, “This is a difficult question, we don’t really know how this works, and we’re studying the phenomenon.”
When I talk with people outside of experimental philosophy, sometimes our level of uncertainty is seen as preposterous. At one point, a journalist was interviewing me about the events in the Gulf with BP, and he asked, “In light of your research, what does that show about what the policy should be?” And of course there is the temptation to say, “The answer is X!” But I said, “I don’t really know very much about politics.” To my great embarrassment, he then published that in his article. It said, “We asked Associate Professor Knobe, of Cognitive Science at Yale University, and he said ‘I don’t know very much about politics.’” I hope that, despite the continuing embarrassment that we suffer from not saying that we know the answers to those questions, we can continue acting like that.