Ethical Frameworks for Public Interest Technologists at MIT: An Interview with Caspar Hare

Caspar Hare is a Professor of Philosophy and Associate Dean for Social and Ethical Responsibilities of Computing (SERC) in the MIT Schwarzman College of Computing.

Question 1: What technologies are you working with, or have you worked with?

My background is in Philosophy, where I am interested in traditional ethics questions around multidimensional decision-making and the tradeoffs involved in reconciling competing values. Practically, these tensions emerge when we confront the challenges of AI and its emergence as a dominant, socially transformative technology. Looking back through history, the radical shifts in labor practices and economic structures created by the introduction of fossil fuel technologies in the 19th and 20th centuries were difficult to measure as they unfolded, yet they required foresight and critical engagement for us to understand the role they have played in transforming the world today. Likewise, anticipating the scope of social transformation precipitated by AI requires proactivity across disciplines and sectors. As Associate Dean for the Social and Ethical Responsibilities of Computing (SERC) within the Schwarzman College of Computing, I engage with a diverse set of researchers working on technology issues ranging from privacy to free speech to the impacts of advanced AI systems, such as large language models and deep neural networks.

There's a lot of interest in large language models and agency. Because we may soon be moving toward a time in which large language models don't just say things but also do things, the question is: what priorities are these models going to pursue? So there's interest in how to structure goals and directions within artificial intelligence that are, broadly speaking, aligned with our goals as researchers, scientists, and citizens. We have groups within SERC who are interested in privacy, digital intrusion, and the question of what privacy rights exist when it's increasingly easy to monitor what people are doing. In other words, AI-driven surveillance technologies are becoming more and more powerful, and there is a need to reconcile old-fashioned privacy rights and monitoring practices with changes happening on the technology side. So the questions are: how can we maintain privacy in a situation like this, and how can we maintain free speech rights in situations like these? These are some of the many questions that are of interest to me and to the people involved with SERC.

Question 2: How do you take account of MIT’s obligation to pursue the public interest in the work that you do? 

Transformative technologies in the years to come will likely be driven by now-young researchers and by businesses founded by MIT graduates. With a large percentage of MIT undergraduates earning a degree in Computer Science (Course 6), often alongside an additional major, there is a need to ensure students understand that what they design or engineer takes place in a social context. We must help students develop a set of questions about what they build: Why am I doing this? What is the effect? Why are these goals being set? What is the impact of my pursuing this task? And we must give them tools to think about those questions in a rigorous way.

To that end, we are prototyping new classes, including “Ethics of Computing,” a Common Ground course between Philosophy (Course 24) and Computer Science (Course 6) that will be cross-listed as 24C.401/6C.401. Our hope is that in this course, students will learn to raise these questions and acquire the tools to think about the public interest in a rigorous way. This will be one of several courses with this general character, and the thought is that, in the future, people who major in Course 6 and other departments will come out of MIT with the tools to think about the social implications of what they're doing. This is tremendously important, because what they're doing has the potential to change entire parts of society.

To address MIT more broadly, there is tremendous public interest in technology, and an institution like MIT is well-situated to be a thought leader on it, notably given the privatization of many breakthrough technologies in recent years. Unlike the publicly funded innovation that drove the invention of the internet, recent developments have been driven by industry. There is therefore an interesting question of what role a non-profit institution like MIT can play in pushing public interest technology forward, and, at the very least, what role it can play as a trusted source of information on these technologies: one that the public or state lawmakers could appeal to in order to keep abreast of what the heck is going on, and that could inform those in a position to support public policy development. There are people at MIT who are well-placed to be a public source of information, given that they are extremely well-informed about critical topics of our time and quite removed from traditional private profit incentives.

Question 3: What more could you and others do to help MIT meet its social obligation to pursue public interest technology?

There’s a question of whether MIT could play an intermediary or informative role to meet an obligation to pursue public interest technology through individual faculty, or whether there could be a more organized institutional apparatus designed to address public interest technologies. I would also say that MIT has at least 1,000 faculty, all wonderful, brilliant, opinionated people with very different perspectives that may not be reconciled through a single institutional structure. And yet people, notably students, will face the inherent risks of changing technologies and those technologies’ social implications. The real public good MIT can do to meet an obligation to pursue public interest technology is to give graduates across the Institute, and non-MIT community members, rigorous frameworks for thinking about the kinds of questions, extending beyond just the technologies they use, that people and society are going to have to face over the course of the next decades.



Caspar Hare is a Professor of Philosophy in the Department of Linguistics and Philosophy. Along with Nikos Trichakis, Hare is an Associate Dean for Social and Ethical Responsibilities of Computing (SERC) in the MIT Schwarzman College of Computing. Hare and Trichakis work together to create multidisciplinary connections on campus and to weave social, ethical, and policy considerations into the teaching, research, and implementation of computing.

A member of the MIT faculty since 2003, Hare works mainly in ethics, metaphysics, and epistemology. The general theme of his recent work has been to bring ideas about practical rationality and metaphysics to bear on issues in normative ethics and epistemology. He is the author of two books: “On Myself, and Other, Less Important Subjects” (Princeton University Press, 2009), about the metaphysics of perspective, and “The Limits of Kindness” (Oxford University Press, 2013), about normative ethics.