Building Together: An Interview with Nigel Jacob
Nigel Jacob is a visiting scholar at MIT's Leventhal Center for Advanced Urbanism where he is exploring the civic uses of Generative AI.
I work at the intersection of design, tech, and public policy, exploring how we can deploy these tools to engage communities more deeply and build a better world. My recent work has explored the use of Generative AI (GenAI) for civic engagement at the Leventhal Center, where we published a review of emerging GenAI projects with officials at the City of Boston. I felt there was more to explore, so my work at RethinkAI takes off from there. With GenAI in community engagement, we need to ask the hard questions about governance and communities. How can we rethink, shift, or dismantle existing power dynamics so that communities have true decision-making power? How can these emerging technologies transform our current political structures to enable community governance?
A lot of claims are being made about the potential of Generative AI right now. But RethinkAI is centered on a key question: can generative AI be used to make civic life better? Let’s figure out how. So we're doing these experiments, one in Boston, one in New York, and one in San Jose, and they're very tactical. The one I'm most interested in involves creating hyper-local large language models. Imagine if you could train GenAI on all the data that pertains to a neighborhood, as determined by that community. So the community presents the master plan they have developed, rooted in demographic data from the state, city, universities, etc. But the community determines what's in that corpus, and you would use that data to train a local generative AI model. That local model could be used to advocate for the community.
When you think about the way neighborhood development happens in this country, it is often the case that a powerful actor like a developer or City Hall wants to build, for instance, a mall in your neighborhood. The moment of community consultation never works very well. It kind of depends on who happens to be at the community meeting and who reads the right documents. But the developer always has a very clear motive: they're trying to get to a site, and so the community consultation is largely perfunctory, something they have to do by law. At the very end of the development process, developers can use politicians to push their project through anyway. The community can often get over-consulted as well, leading to exhaustion. There's a dynamic where the community often loses control of the conversation, and they lack influence in the development process. The idea was: what if you took a neighborhood such as Fields Corner in Dorchester, and you trained up an AI that’s built specifically by and for that neighborhood? So that if, for instance, Mayor Wu wants to put a mall in Fields Corner, rather than going through the clunky and ineffective process of an initial informative community meeting, they would consult the AI model built for that neighborhood. The model would be trained on all the data from previous community meetings, data approved by the community, etc. They could get a good initial sense of what the community attitude might be toward the project, so that the first community meeting would happen with a deeper level of detail and insight, starting from a place of knowing how the community sees the future of their neighborhood. When the developer or Mayor talks to the community, it’s with a different level of sophistication, as these actors can meet the community where they are. We see these hyper-local AI models as agents of community advocacy.
My critique of civic tech, and I think this extends to public interest tech, is that we should let the context determine the method. Sometimes we just need to do a better job of governing or engaging: hold a better meeting, do a better job facilitating, and so on. And sometimes the solution might involve technology. One of the concerns we have with RethinkAI’s hyper-local language model is that we never want to be in a situation where we have a technology in search of a problem for it to solve. At the same time, generative AI technology is out there, and we want to see if we can build something that is of use to the community, furthers the goals of community groups, helps them gain power in negotiations, and lets them assert what they want. I see civic technology or civic innovation not only as deploying technology to address civic challenges, but also as a way of working. Civic technology is a process. While I like the idea of orienting technologies toward benefiting a multi-dimensional public, I would love to see technologists expand their methods to be more community-driven.
One challenge comes to mind from when I was leading the Mayor’s Office of New Urban Mechanics at the City of Boston. There was a moment when tech had suddenly discovered mobility, when Uber and Lyft came onto the scene. There was a company out of Baltimore that wanted to create a social network out of public parking, and they wanted a list of all the metered parking spots around the city so that they could map them. The idea was that before you park in Boston, you would select your location and search their social network to see if a member had left an empty spot. By overlaying the social network on top of Boston’s public parking infrastructure, the hope was that it would lead to more efficient usage. This was a fascinating idea, but we had to discuss the fact that parking is a public asset. What about people who are not on your app? How will they be notified of open spaces? What does it mean when you’re staking private claim on public assets? We have to bear in mind the politics of public interest technology, because as soon as we are talking about companies doing something with public assets, that implies neoliberal privatization. Although people aren’t using the language of privatization, when you put something like a social network onto a public asset, you’re essentially privatizing it. Technologists often don’t see it that way; they say, “oh, we’re making public resources more efficient.” But they’re establishing their platform as the way to access a public asset, and often, it’s not accessible to diverse populations: non-English speakers, disabled people, etc. At the Office of New Urban Mechanics, we knew that technology is not values-neutral. It has an orientation. And when it is coupled with a startup that's making money in some way, it means it's a privatization play in one way or another, a virtual transfer of assets.
A lot of political philosophy explores the relationship between public assets and technology. The economist Elinor Ostrom was interested in edge cases. In the years after "The Tragedy of the Commons" came out, she explored numerous counterexamples where historically public resources were being managed by community groups. The tragedy only happens in an unmanaged commons, and that is not the way the commons have usually worked. There's always an explicit agreement among the community members as to what the rules of the road are. I think this is theory and history that we all need to know if we’re doing this work: public assets are not free and unmanaged for us to throw self-driving cars onto or something.
I think the gist of it is that we have to expand what technology means, certainly. You just don't show up and start building a particle accelerator, right? You have to be open to learning from community and context as you build. We have to be intentional about not making things worse with any kind of technology.
In the case of generative AI, I want to consider how community groups are thinking about AI. Are they concerned this is just another technology their community won’t have access to? There is an opportunity to help communities think through how they can ensure that these new technologies make life better in their neighborhoods.
I think of MIT as a leader in many fields, and as a community member of Greater Boston, I've been generally pretty disappointed in all of the area universities. Having watched MIT do the work, there’s a lot happening in far-flung parts of the world, but there is a lot of need for people who live here in Boston. There’s a lot of poverty here, and this is an old American city with deeply entrenched problems. We could use help for people who are trying to rethink these issues, and MIT has helped around the edges, but I would love to see them get more deeply immersed. This is not an easy thing, and I’m very aware that getting universities to work on public issues requires changes both in the university and in the community. In collaborations with local government and community groups, we need to change the expectations we have of each other as partners. Do we understand each other’s motivations? I think MIT is a leader in this space, but it needs to find a way to be more community-oriented in Cambridge and the broader region as well.
This would involve building capacity both on the community side and the MIT side, and a process of negotiating interests. MIT should define its community: are we only thinking about Cambridge, or about broader regional economic impacts? Thinking regionally might be interesting given MIT’s outsized economic impact on the region, and would provide a more even playing field than just engaging at the municipal level with the City of Cambridge. We also need to consider longer-term, sustained strategic impact. Akin to a teaching hospital, what if we created an institution dedicated to the application of urban technology to address issues in the local region? We need to think in multi-disciplinary ways, and to break down silos that limit our impact.
There is also an opportunity for MIT to think beyond a series of one-off community projects. One-off projects are often great, but if you just have a class implement something, it might be done in a way that can’t be scaled or expanded by the community group. It’s a solutionist way of thinking, which is problematic. I think there are more systematic ways of thinking. We should be thinking strategically about how technology writ large is being studied at MIT, and how we can use it to make life better in Cambridge. The approach to how MIT engages the community needs to be rethought, and local government and community organizations in Cambridge should collaborate in building MIT’s vision over time. We need to build together.
Nigel Jacob is a designer-experimentalist of civic systems. He works at the edges of systems, where community overlaps with institutions, to bring value to both. He co-founded and ran for 12 years the Mayor’s Office of New Urban Mechanics in Boston City Hall, a first-of-its-kind civic innovation lab embedded in local government. His work in new urban mechanics has been written about in several books and magazine articles, and he has received numerous awards. Currently, Nigel is a visiting scholar at MIT's Leventhal Center for Advanced Urbanism, where he is exploring the civic uses of Generative AI.