Calls for increased transparency and oversight are common in the public realm. Our guest today, the philosopher C. Thi Nguyen, argues that transparency can actually erode important parts of community life. He claims that while transparency might root out corruption, it also has a sort of chilling effect on the very work people are required to be transparent about.
Send questions or comments to email@example.com.
I’m trying something new with the shownotes for this episode. Instead of a list of links, I’m providing the transcript (which can be downloaded in Word doc form here) and hyperlinking parts of the discussion that need some background information.
Transparency is Surveillance: C. Thi Nguyen
Christiane Wisehart, host and producer: I’m Christiane Wisehart. And this is Examining Ethics, brought to you by The Janet Prindle Institute for Ethics at DePauw University.
[music: Blue Dot Sessions, Lina My Queen]
Christiane: Calls for increased transparency and oversight are common in the public realm. My guest today, the philosopher C. Thi Nguyen, argues that transparency can actually erode important parts of community life.
C. Thi Nguyen: Transparency undermines expertise in a significant way, because it confines experts to using the kinds of reasons that non-experts can understand. But if you think about it for a second, you should realize that a lot of expert reasoning is, by its nature, incomprehensible to the public.
Christiane: Stay tuned for our discussion on today’s episode of Examining Ethics.
[music fades out]
Christiane: I work at a university, and our faculty-staff email listserv is often filled with requests for increased transparency. And outside of academia, there’s a general sense that transparency is a necessary and important part of public life. So it really never occurred to me to even question the good of transparency until I read C. Thi Nguyen’s article, “Transparency is Surveillance.” In it, he argues that while transparency might root out corruption, it also has a sort of chilling effect on the very work people are required to be transparent about.
Christiane: So, what’s your take on transparency?
C. Thi Nguyen: So, in general, I’m interested in what you might call public transparency. So, public transparency is any kind of transparency from some institution or group or experts for the public in general and I’m particularly interested in the case in which it’s done for oversight. So, you think that some politician, some group of experts might be wasting public funds or going off mission or corrupt in some way. So, you ask them to justify themselves and justify their actions. The central idea is those cases in which politicians, people using public funds, people working on some kind of technology make information available for the purposes of oversight, for the purposes of accountability, for the purposes of people monitoring, whether they are corrupt or doing good.
And that is definitionally surveillance. I think we forget to think about it that way. And we forget that surveillance has downsides, too, downsides we know about well under that description.
Christiane: So, you write that “transparency pressures experts to act within the range of public understanding,” and as somebody who’s pro-accessibility and pro-inclusivity, I’m wondering: what’s wrong with that? What’s wrong with making things accessible to the public?
C. Thi Nguyen: I mean, there’s a danger here. I’m going to try to make this seem reasonable. But I understand that when I first explain this paper, I sound like an elitist asshole. Because isn’t accessibility to the public all for the good? Usually it seems that way. I mean, when I’m part of the public, I’m always like, “Yeah, yeah. More accessibility. I want that data. I want to be able to have oversight.”
But when I’m the person overseeing, it often seems grueling. So, a lot of the idea for this paper came from my personal life as someone working in my philosophy department in a role known as assessment officer. For the purposes of public transparency, university departments have to do something called assessment.
And it’s reporting success about student education in quick, quantifiable terms that are readily comprehensible, in my case, by the Utah state legislature. And so, what we end up doing is things like trying to justify student education in terms of, oh, how many of our students get jobs? How many of our students make money? How many of our students can do well on some publicly available standard like the LSAT? But of course, that’s not really the value of a philosophical education.
I mean, what I really want to say is, what’s important is making students more reflective, making students more curious, having students explore the intellectual landscape. But that’s incredibly hard to translate into terms that are readily comprehensible and readily accessible by everyone in the world.
Here’s a basic thought. Excluding people who are vaccine deniers and COVID denialists, we all believe that there’s such a thing as expertise, particularly scientific expertise. And we understand that science and scientific expertise runs outside of our understanding. There are significant aspects of that expertise that we can’t understand.
So, what we have now looks to me like a very perverse situation: we understand that the experts are doing something that involves an expertise we don’t have, and we want them to be doing that precisely because of that expertise. But then we ask them to justify themselves in terms that we can understand, in non-expert terms. So, Onora O’Neill is one of my favorite philosophers. She wrote a lot about bioethics and trust and autonomy. And she has this passage from the BBC Reith Lectures on trust, where she says people think that trust and transparency go together, but actually they’re in tension, because transparency asks you to constantly explain every action to outsiders.
And we know that in that case, since not all the reasons that a particular person uses are understandable to a quick outside public, then in many cases, experts will have to conceal their reasons or invent false reasons. So, she thought that transparency encourages deception.
So, the point of my work has been to expand this and build it out. I call this the epistemic intrusion argument. So, the argument goes, a lot of expert reasons are not available to non-experts, and there are two forms of this.
So, sometimes experts reason in a way that requires some kind of formal training to understand. So, if I write down a math proof or if I write down some computer code, most people without training can’t even follow that. But there’s something more extreme that I think everyone that works in the philosophy of expertise and the psychology of expertise and the sociology of expertise understands, which is a lot of expertise doesn’t even come in the forms of explicit reasoning, but it comes in what we want to call tacit knowledge, a kind of instant, intuitive, immediate perceptual, synthetic grasping of things.
There’s plenty of evidence, for example, that a trained pediatrician can just walk into a room, like a waiting room, and instantly just spot, “That kid’s really sick. We should see that kid immediately.” And what’s going on is it has not yet reached the point of reasoning. It’s that with an enormous amount of experience and an enormous amount of training, various subtle features just pop out: the kid just looks sick.
So, experts reason with all kinds of reasons that are non-accessible to non-experts. And then, transparency asks experts to justify themselves to non-experts. But a lot of those reasons don’t translate. So, I think the deeper worry for me is that if experts actually think to themselves, “No, I can’t act unless I justify to the public,” then experts won’t be able to use a lot of the resources of their expertise. Transparency undermines expertise in a significant way, because it confines experts to using the kinds of reasons that non-experts can understand. But if you think about it for a second, you should realize that a lot of expert reasoning is, by its nature, incomprehensible to the public.
To me, there’s this deep tension. So, the point isn’t, get rid of transparency, it’s that transparency is necessary because we know that experts can get corrupt. We know that experts can get biased. At the same time, we know that experts reason in a way that the rest of us cannot. And so, I think there’s basically these two forces. One force is saying, you need to trust experts, so let experts do things that you can’t understand. And there’s another force that says, “No, you can’t trust experts because they might be corrupt.”
And so, my basic thought is that these two forces are in conflict, they’re in tension. We can’t go for all one or all the other. If you go all trust, you open yourself up for corruption and bias. If you go all transparency, you eliminate expertise. In every case you have to choose. In every case, there’s a sacrifice on either side.
Christiane: And it reminds me of an ongoing kind of discussion that I’ve had with an academic friend of mine. And I’m all accessibility, accessibility, accessibility. My job is translating complicated ideas for the public. But her thought is, that’s great, but there are literally some ideas that I cannot talk about and that I cannot reason through without this specialized language. Like, the language is the idea sometimes. Is there a way to translate that, or is that just part of the tension that you’re talking about?
C. Thi Nguyen: Part of what I want to say is basically not everything is translatable. One of my favorite philosophers, Elijah Millgram, puts it this way: the most important thing about the epistemic moment that we’re in is the radical hyperspecialization of human knowledge. Not everyone can master every field. There are so many scientific fields. There are so many economic fields. There are so many humanities fields. There are so many specialties that, I mean, I’m a philosopher. I’ve spent, by this age, what, 25 years studying philosophy.
And it’s not just that I don’t understand statistics or things from other fields. Two thirds of philosophy is completely incomprehensible to me. The specializations are so intense. My wife is a chemist; she does organic chemistry. And I asked her, “How much of inorganic chemistry do you understand?” She was like, “Nothing. I understand nothing. Those papers are incomprehensible to me.”
So, our specializations are so intense that each of us is an expert in one little thing, but we’re inexpert in everything else. I’m worried about this case in which every expert field is put under the oversight of the public in general, which includes me and you. The claim here isn’t the public is stupid. The claim is that by our nature, we have to be inexpert in the current era of the vast, overwhelming size of human knowledge.
And so, if we’re asked to do some kind of oversight, then that oversight has to be put into terms that we can understand with no training. And insofar as expert actions are guided by, you want to say language, but really all kinds of things, perception, understanding, particular experiences, then those kinds of justifications aren’t going to be available to the public.
So, if you have to put yourself in publicly available terms, you’re going to lose all that stuff. I mean, one way to put it is, whenever we imagine transparency, I think we typically imagine a corrupt, biased expert or politician and then this good-hearted, thoughtful public. But so many times it’s the reverse. So many times, what you have is a small, sincere, good-hearted group put under the eyes of a rushed, incomprehending public.
I mean, there are so many interesting examples of this. And a lot of the examples involve how, in order to get public oversight, a lot of the evaluative mechanisms need to be put in terms of quick metrics and rankings. And the reason they need to be put that way is because if the public has to oversee everything, they can do it really quickly. There are a lot of people who worry that if you give money to charity, a lot of the charities are wasting it. So, what arises is a number of charity oversight groups, and the most famous one is Charity Navigator. So, Charity Navigator is an oversight nonprofit that oversees other nonprofits. And they rank other charities on which ones are better and worse. And for a really long time, the main thing that they used as a ranking method was throughput. So, throughput is how much of the money that’s donated goes through the nonprofit and gets out on the other side.
So, to me, I’m moderately well-educated. I’m like, “Hey, this seems great. This is a great metric. So, let’s use this.” So, it turns out if you do more investigation, everyone in the nonprofit sector thinks this is a terrible metric because what throughput makes people do is compete to lower internal costs. At a first pass, really, really wasteful nonprofits will get shredded by this metric.
But later, once you get mostly well-functioning nonprofits, fighting to lower internal costs is a terrible metric to rank nonprofits on, because nonprofits, to function well, do need to spend some money internally. They do need internal employees. They do need to hire internal experts. They do need to do some internal administration. Here’s an example of how you keep getting metrics that look good to outsiders. We often revert to these hypersimplified metrics because they look good to us.
Christiane: You write that transparency can actually threaten community life. And it can threaten work that requires experience and mutuality and shared understanding. Could you just say a little bit more about why transparency is harmful on that kind of side of things?
C. Thi Nguyen: So, the paper has two arguments. The first is the one I’ve talked about so far, the epistemic intrusion argument. That one is about experts. The second is what I call the intimacy argument. So, here’s the idea. Different communities can have something that we can call intimate understandings. An intimate understanding is some kind of understanding that requires long participation in that community to understand, that requires having some kind of shared experience.
So, in academia, in my part of philosophy, we call it standpoint epistemology. But it’s particularly the idea that people in a particular community, especially an oppressed community, have kinds of understandings that are available only if you’ve lived life under this particular set of circumstances, under this kind of oppression, facing these particular problems.
So, this is a thought that comes to us from feminism and from studies of racial oppression. In some sense, living in this era, anybody who has a basically progressive slant on their understanding of the world believes things like: women understand the bite of sexual harassment more than men do. Minorities understand what it’s like to not be racially privileged in a way that whoever’s in the racial majority doesn’t.
This is baked into the progressive worldview, that people in oppressed groups have special kinds of understanding. And I think one of the things transparency asks is to put reasons and to put justifications in terms that anybody can understand. And that prevents using this kind of intimate understanding. So, I watched in a previous university job an on-campus LGBTQ group have to justify decisions to the larger scale bureaucracy in the university. This was again, under the name of transparency.
And in this case, the group was trying to justify a certain kind of safe space. Only queer people, self-identified queer people, could come in, and non-queer people were excluded from the space. To everyone who has experience in the LGBTQ community, it’s obvious that you need something like this. But when this action and its justification filtered up to the large-scale university administration, who by the way are largely non-queer, white, non-oppressed groups in Utah, what they saw was something like: no, that’s exclusionary. How could you have something exclusionary as a safe space?
The reason that you have this LGBTQ group run by people who are queer is because we need people who are appropriate representatives of the community running this thing. The reason you do that is because people who live in that community and who live in that circumstance understand something that the rest of us don’t.
If you ask for the justifications before the group can act, then you’re robbing the group of the ability to act on their special understandings that come from their particular background. So, the worry again is that transparency shreds intimacy. It prevents groups from acting on their intimate understanding in so far as their actions are constrained by justification to some large public that doesn’t share that understanding.
Christiane: I had this experience of reading your article and I was on board. I am on board. But you get to this certain point and I’m sure listeners maybe are at this point now, too, where it’s like, and what do we do? You know what I mean? It’s such a frustrating tension that you’re describing. And if you don’t mind, I want you to elaborate on some of your concluding statements in the piece, because I think they’re so powerful.
And so, you write, “We need a profound division of cognitive labor and a division of moral labor and a division of valuing labor. And to promote the full flowering of that division, we need to trust each other. And we certainly need to check on each other, but the check is there as a safeguard in the process.”
C. Thi Nguyen: One thought that you might have in response to what I’m saying, something like: Maybe we shouldn’t oversee every single little decision, but surely we can oversee the overall targets. Maybe we shouldn’t oversee every decision the doctors make, but we can oversee overall health outcomes. Or maybe we shouldn’t oversee every little budgetary decision the department makes, but we can look at how well the students are educated overall.
To do that, you have to think expert understanding and intimate understanding are only over what philosophers call instrumental reasoning: the decisions you make on the way to get to the target, but not reasoning about what’s really important, value reasoning. You might think, look, experts understand how to get to the target, but we can all understand the target. We can all understand the value. And I think that’s actually wrong.
I take a lot of inspiration from a philosopher named Tal Brewer. One of Tal Brewer’s core ideas is that actually, when you enter into activity, the value is actually really complicated. You don’t see it. It takes a long time to see the value that’s on offer. So, I did not exercise for much of my life.
And my external understanding of the value of exercise was, “Oh, you need to get in your miles to make your heart good and burn some calories.” And it’s not until you spend years that you realize, no, there’s this community life. There’s the thrill of competition. So, before I started exercising, I spent my entire life at a computer, and I did not think there could be anything like the joy of inhabiting a body. It took me five years of rock climbing to be like, “Oh no, no, this is wonderful.”
So there are a lot of activities where I think the more time you spend with them, the more you see the value. And I think a lot of the time, this is true of spending time with the arts, spending time learning the humanistic disciplines, like philosophy and anthropology. But I think it also applies to a lot of scientific disciplines. I think the more time you spend studying ecology, the more you see the value of complex ecosystems in a way that you might not on the outside.
So, in this case, it’s not just that experts understand techniques better than the rest of us. It’s that an expert has a better grasp of the value available in the domain of their expertise. And so, here’s an idea. Sometimes it takes expertise to really understand the value. And if that’s true in an area, then the public in general, if they set the target and they set the values, then they’ll set the target and values in a blunt and unsubtle and inexpert way. So, I don’t think this is necessarily true of all domains. Some domains have what I want to call a litmus test. There’s a simple target that we can all understand. Bridge building, it has a litmus test. The public can tell what bridges are good because they don’t fall down.
But I think a lot of the other domains are ones in which part of expertise is grasping the value in the domain. And so, all the kinds of values that take expertise to understand are going to be stripped out of the system when you put it in a strict regime of transparency that asks all the targets to be justified to or even set by the public. And what this looks like, again, for those of us in the humanities, is a legislature saying something like, “Your department’s going to be rated on the salaries of your students.” And you’re like, “Wait, wait. That’s not the value in philosophical education.”
And then they’re like, “Well, justify it.” And I’m like, “Look, I can’t. I can’t do it in quick and easy terms that show up on an institutional measure.” So, this problem from before, where inexperts don’t understand how much they’re missing, can apply to the targets, too. You might think, like I did, that the point of exercise is weight loss. That’s the only point. Or you might think that the point of education is just to get a high-paying job.
But that understanding of the target is missing something profound. So, this is part of the thought, this idea that we need a division of valuing and a division of moral knowledge. There’s this deep thought that a lot of us have that values are public and that morality is public. But I think instead, the right vision of the world is one in which there are different groups with different moral sensitivities. There are people in this world who have sensitivity to how to take care of queer people in an oppressive community. I don’t have as much of that sensitivity. I have to trust people who have lived through that, because I didn’t live through that.
Then there are other people who have a profound understanding of what it’s like to live as a woman in a patriarchal society. And I have knowledge of what it’s like to live as an Asian American in a world which is subtly and interestingly racist in its own way; the form of racism against Asians in America is very different from the kind of racism against Black people in America.
We have to trust each other in some really significant sense to let each other’s special sensitivities and special understandings of value illuminate the world. Another way to put it is, I think we all think that the scientists need a division of labor just because there’s too much empirical knowledge to know. But I also think that our complicated moral life involves all kinds of special moral sensitivities and all kinds of special value sensitivities.
And we can’t imagine that one person can properly understand all of that stuff. We have to trust each other’s deeply informed moral sensitivities. But we can’t trust perfectly. Here’s a core thought. So, one of my favorite philosophers from the 20th century is Annette Baier. She starts the entire modern conversation in philosophy about trust.
The way she puts it is that the essence of trust is that it makes you vulnerable to someone else. You’re entrusting them with something. And she thought that in many cases, children are entrusting their parents with their lives. I’m entrusting my health to my doctor. And she thought that the essential part of entrusting was vulnerability. You made yourself vulnerable to somebody else. And I think you can take this idea up into our modern life. What it is to trust an expert is to make yourself vulnerable to them.
There’s a reason we need to do it, which is that individually, we can’t understand everything that’s required to take care of ourselves and our lives in this vastly complicated, hyper-scientific society. So, trusting each other makes us vulnerable. We also have to manage that vulnerability. Definitely bad actors, definitely biased people can take advantage of our vulnerability intentionally or unintentionally. So, we have to manage it.
But the thing I’m trying to say is that the process of managing vulnerability also undercuts the positive sides of trust. When you try to secure your vulnerability and make experts and make people with intimate understandings act within your understanding, then you are securing yourself against vulnerability and abuse, but you are also undercutting the ability for people to act with understandings that are beyond what you have. So, that’s the basic tension.
I think it’s kind of heavy medicine. And if you take too much of it, if you go all transparency, then you’ve eliminated trust and vulnerability from the system, and you’re safe in a way, but you’ve also eliminated the goods that come from trust. And if you go all trust, then you might get all these goods of stitching together expertises, but you’re going to get tons of unmonitored corruption and bias in the system. So, you need some kind of compromise. I mean, that’s the painful answer. The thing people want is: what’s the thing I can do that maximizes both trust and transparency? And I think the answer is there’s nothing. Those are in tension. The best we can do is still going to end up with some kind of compromise, and you just have to choose.
[music: Blue Dot Sessions, Lina My Queen]
Christiane: If you want to know more about C. Thi Nguyen’s work, or some of the things we mentioned in today’s episode, check out our shownotes page at examiningethics.org.
Examining Ethics is hosted by The Janet Prindle Institute for Ethics at DePauw University. Christiane Wisehart wrote and produced the show. Our logo was created by Evie Brosius. Our music is by Blue Dot Sessions and can be found online at sessions.blue. Examining Ethics is made possible by the generous support of DePauw Alumni, friends of the Prindle Institute, and you the listeners. Thank you for your support. The views expressed here are the opinions of the individual speakers alone. They do not represent the position of DePauw University or the Prindle Institute for Ethics.
“Illusion of a cycle” by Flickr user spolit.exile.
To contact us, email firstname.lastname@example.org.