The Potential and Pitfalls of Gen AI for Civic Engagement
Cities globally have begun experimenting with generative artificial intelligence (Gen AI) for civic engagement. Civic engagement is essential to a well-functioning government, encompassing the many interactions through which the public and the city share information and make decisions. Using Gen AI for civic engagement holds much potential, including the deployment of chatbots, language translation, synthesis of complex technical documents, visioning and co-creation of design ideas, data visualization, and simulation and scenario planning.
However, with these opportunities come numerous concerns. Issues of accuracy, algorithmic bias, private sector involvement, digital equity, and environmental costs all pose risks that must be carefully navigated. Without proper oversight, inaccurate results might be mistaken for fact, and the biases inherent in large language models (LLMs) might reinforce dominant narratives, thereby eroding public trust.
Given these risks, we argue that the successful integration of Gen AI for civic engagement must be powered by people who use their judgment to validate outputs, mitigate potential errors, contextualize results, and build trust between the government and the community. This people-centered approach requires developing methods to involve communities in decisions about how AI tools should shape city-resident interactions and the design of guidelines for how Gen AI can be used responsibly and ethically.
Bridging the Participation Gap with Technology
At the heart of community engagement is a transfer of information that bridges the inherent gap between those who hold power and professional expertise and those who do not. Historically, traditional participation methods like public hearings and review processes have often failed to achieve genuine public involvement. The public has come to demand more active roles in urban development, not just as passive recipients of information but as co-creators and decision-makers.
Technological tools have played an important role in facilitating this shift towards more collaborative design processes. Research has found that civic engagement technology platforms have allowed for greater contributions from citizens by enabling individuals to provide input directly to their government, increasing the flow of information between the public and the government. However, early work has shown that these platforms can also exclude residents, for instance through digital divide barriers, reducing their effectiveness.
Trust is central to successful civic engagement, shaped by how much uncertainty a situation carries and the risk involved. When a government introduces new technology, it negotiates two parallel concerns: the smoothness of the transaction, and the alignment of values and transparency of decision-making. Creating value alignment is far more difficult but arguably more essential for civic engagement, requiring transparency about what data is used, how it is analyzed, and how it informs decisions.
Opportunities and Risks of Gen AI in Civic Engagement
One of the most common use cases for Gen AI in civic engagement is conversational interfaces, which can simplify the information exchange between the public and the government. These largely manifest as chatbots, which have evolved from preset question-and-answer protocols to more open-ended and human-like interactions.
For example, Gen AI-driven chatbots may use LLMs that are fine-tuned on use case-specific contextual data, such as zoning regulations or permitting rules. These tools have demonstrated the potential to serve as knowledge banks, recording and disseminating information in human-like ways. The City of Helsinki has leveraged such models to provide information on urban services like parking and housing.
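The core idea behind grounding a civic chatbot in vetted city text can be illustrated with a minimal sketch. This is not any city's actual system: the knowledge base entries, overlap scoring, and fallback message are all hypothetical stand-ins for a production retrieval pipeline, but they show the key design choice of answering only from curated official content rather than letting a model respond freely.

```python
# Minimal sketch of a grounded civic chatbot: match the resident's question
# against a small curated knowledge base and answer only from vetted city
# text, falling back to a referral when nothing matches. All knowledge-base
# entries here are hypothetical examples, not real regulations.

KNOWLEDGE_BASE = {
    "What is the maximum building height in a residential zone?":
        "In R-1 districts the height limit is 35 feet (hypothetical example).",
    "Do I need a permit to build a fence?":
        "Fences over 6 feet require a building permit (hypothetical example).",
    "How do I apply for a parking permit?":
        "Resident parking permits are issued by the transportation office (hypothetical example).",
}

def tokenize(text: str) -> set[str]:
    return set(text.lower().replace("?", "").split())

def answer(question: str, min_overlap: int = 2) -> str:
    """Return the vetted answer whose stored question shares the most words
    with the user's query, or a safe fallback if nothing matches well."""
    q_tokens = tokenize(question)
    best_answer, best_score = None, 0
    for kb_question, kb_answer in KNOWLEDGE_BASE.items():
        score = len(q_tokens & tokenize(kb_question))
        if score > best_score:
            best_answer, best_score = kb_answer, score
    if best_score < min_overlap:
        return "I don't have vetted information on that; please contact city staff."
    return best_answer
```

A real deployment would replace the word-overlap scoring with embedding-based retrieval and a fine-tuned model, but the safety property is the same: low-confidence matches route the resident to a human rather than to a guess.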
However, not all chatbot deployments have been successful. The New York City chatbot built to help small business owners access city services was embroiled in controversy just five months after release for producing inaccurate, biased, and sometimes illegal suggestions. This high-profile misstep contributed to a general mistrust of using chatbots to deliver essential information to the public.
Concerns about the ability to trust the results of Gen AI-driven chatbots were also brought up by City of Boston officials, who worried about the complexity and nuance of human interaction being lost, as well as the potential for biases and inaccuracies in training data and model outputs. Recent research has focused on developing methods for evaluating and interpreting the degree of confidence with which LLMs generate results, but human interpretation and oversight remain crucial for building trust.
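One simple way to operationalize that oversight is a confidence gate. The sketch below assumes access to per-token log probabilities, which some LLM APIs can return alongside a response; the scoring heuristic, threshold, and example values are illustrative, not a standard, and a deployed system would calibrate the threshold against human-reviewed samples.

```python
import math

# Sketch of a human-in-the-loop confidence gate. Given per-token log
# probabilities for a generated answer, score the answer by its
# geometric-mean token probability and route low-confidence responses
# to a human reviewer instead of sending them to a resident.
# The 0.7 threshold is an illustrative assumption, not a standard.

def confidence_score(token_logprobs: list[float]) -> float:
    """Geometric-mean token probability: exp(mean of log probabilities)."""
    return math.exp(sum(token_logprobs) / len(token_logprobs))

def route(token_logprobs: list[float], threshold: float = 0.7) -> str:
    """Decide whether an answer can be sent automatically or needs review."""
    return "auto-reply" if confidence_score(token_logprobs) >= threshold else "human-review"
```

Token-level probability is only a proxy for correctness (a model can be confidently wrong), which is why the text above treats such scores as a triage aid for human reviewers rather than a substitute for them.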
Another key opportunity is using Gen AI to overcome language barriers, providing real-time translation for civic engagement activities. This ensures all community members can participate, regardless of their language proficiency. The City of Boston saw this as a way to better support their 311 system, which requires multilingual assistance. However, the fact that most LLMs have been trained largely on English-language data raises concerns about equity and representation.
Gen AI has also been proposed as a useful mechanism to synthesize and analyze complex community data, and communicate this information back to the public in an accessible way. For example, LLMs have been used to summarize notes and recorded discussions from community meetings, making that data more interpretable. The City of Boston is exploring using Gen AI to tag content across their website, improving search functionality.
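The summarization task itself predates LLMs, and a classical extractive sketch helps make the mechanism concrete: score each sentence of the meeting notes by how many frequent content words it contains, then keep the top-scoring sentences in their original order. This is a lightweight stand-in for LLM summarization, shown here only to illustrate the pipeline shape; the stopword list is a minimal assumption.

```python
import re
from collections import Counter

# Frequency-based extractive summarization: a simple, inspectable stand-in
# for LLM summarization of community meeting notes. Sentences containing
# more of the document's frequent content words are kept.

STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "was", "that", "for", "on", "were"}

def summarize(text: str, n_sentences: int = 2) -> list[str]:
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]
    freq = Counter(words)

    def score(sentence: str) -> float:
        tokens = [w for w in re.findall(r"[a-z']+", sentence.lower()) if w not in STOPWORDS]
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    ranked = sorted(sentences, key=score, reverse=True)[:n_sentences]
    # Preserve the original order so the summary reads naturally.
    return [s for s in sentences if s in ranked]
```

An LLM-based summary is more fluent and can paraphrase, but it is also harder to audit; extractive methods like this one return verbatim sentences whose provenance is trivially checkable, a trade-off worth weighing for official records.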
Visual Gen AI tools, such as text-to-image models, offer considerable potential for fostering civic engagement in urban design. By allowing users to imagine and articulate visions, these AI-generated images can provoke reflection and spark new ideas, enriching the design discourse. This addresses the inherent gap in design expertise between professionals and the public, enabling residents to express their urban visions and stimulate meaningful dialogue.
However, the disparity between user expectations and the AI-generated results can itself be enlightening, prompting creative thinking about how to make these systems more accurate through better input data and prompt engineering. At the same time, the lack of diverse sociocultural representation in the models often produces results that do not reflect the community, with potentially marginalizing effects.
LLMs can also make data more accessible by allowing anyone to analyze and visualize open data, playing an important role in creating government transparency. But greater data access and algorithmic application increases the potential for inaccurate results due to biases in the data or algorithms. Issues such as inconsistent results and unclear data sources underscore the need for reliable and transparent methods, as well as expert oversight to validate the analysis.
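Even without an LLM in the loop, the underlying task (aggregating open data and rendering it legibly) is straightforward to sketch, which is useful for validating what an AI assistant produces. The sample below uses a hypothetical 311 service-request extract; a real analysis would read the city's published CSV and an LLM might generate code like this on a resident's behalf.

```python
import csv
import io
from collections import Counter

# Sketch of open-data summarization for transparency: aggregate a
# (hypothetical) 311 service-request extract by category and render a
# plain-text bar chart. Useful as a ground-truth check against
# AI-generated analyses of the same data.

SAMPLE_CSV = """\
request_id,category
1,Pothole
2,Streetlight
3,Pothole
4,Graffiti
5,Pothole
6,Streetlight
"""

def category_counts(csv_text: str) -> Counter:
    """Count service requests per category from a CSV export."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return Counter(row["category"] for row in reader)

def text_bar_chart(counts: Counter) -> str:
    """Render counts as a simple text bar chart, most frequent first."""
    width = max(len(cat) for cat in counts)
    lines = [f"{cat.ljust(width)} | {'#' * n} {n}"
             for cat, n in counts.most_common()]
    return "\n".join(lines)
```

Keeping a deterministic, auditable pipeline like this alongside AI-generated analyses is one concrete form of the expert oversight the text calls for: when the two disagree, the discrepancy flags a problem with the data, the prompt, or the model.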
Navigating Private Sector Involvement
There are serious concerns in the literature about using private sector algorithms within the public sector, echoing issues raised during our workshops with the City of Boston. Most of the current concerns focus on the bias of Gen AI training data, accuracy of results, transparency, and how the private sector might use the data inputted by the public.
Government officials are worried about uploading sensitive data to these private sector tools, as they are unsure how the data might be used. Further, there are questions about whether using private sector tools amounts to endorsing specific companies, potentially breaking ethics codes around favoritism. Copyright infringement was also mentioned as a challenge, as existing regulations do not account for Gen AI and modern digital technologies.
In response, participants suggested that LLMs could be designed for the government, or more collaboratively with the public, using training data already in the public domain to ensure transparency and enhance trust. There is a growing movement to build open-source Gen AI models for governments, providing greater customization, portability, and model transparency while mitigating risks.
However, hosting and maintaining their own local, open-source LLMs may be a way off for most governments. In the meantime, cities are already grounding commercially available LLMs with retrieval-augmented generation (RAG), which allows the models to cite reliable sources and reduces inaccurate results.
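The retrieval step of RAG, the part that makes citation possible, can be sketched in a few lines. The documents, sources, and word-overlap scoring below are hypothetical simplifications (production systems use embedding similarity over real municipal corpora), and the generation step is left as a prompt rather than an actual model call.

```python
# Minimal sketch of the retrieval step in retrieval-augmented generation
# (RAG): find the official passages most relevant to a question and pass
# them, labeled with their sources, to the generator so the answer can
# cite them. Documents and citations here are hypothetical examples.

DOCUMENTS = [
    {"source": "Zoning Code §4.2",
     "text": "accessory dwelling units are permitted in residential districts"},
    {"source": "Parking Rules §1.1",
     "text": "overnight street parking requires a resident permit"},
    {"source": "Permit Guide p.3",
     "text": "building permits for decks are processed within ten business days"},
]

def retrieve(question: str, k: int = 1) -> list[dict]:
    """Rank documents by word overlap with the question (embedding
    similarity would be used in practice) and return the top k."""
    q = set(question.lower().split())
    scored = sorted(DOCUMENTS,
                    key=lambda d: len(q & set(d["text"].split())),
                    reverse=True)
    return scored[:k]

def build_prompt(question: str) -> str:
    """Assemble a grounded prompt instructing the model to cite sources."""
    context = "\n".join(f"[{d['source']}] {d['text']}" for d in retrieve(question))
    return ("Answer using only the sources below and cite them.\n"
            f"{context}\nQuestion: {question}")
```

Because the model is constrained to labeled passages, a resident (or a reviewing official) can follow each citation back to the governing document, which is precisely the transparency benefit the text attributes to RAG.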
Bridging the Digital Divide
The use of Gen AI for civic engagement raises concerns about the digital divide, tech/data literacy, and the challenges for the government in achieving digital equity. The term “digital equity” has become more important to municipal governments due to recent legislation, with a focus on barriers to technology access and education programs.
While research has focused on Gen AI and digital equity as it relates to labor and education, there is little research on bridging the digital divide specifically for using Gen AI in civic engagement. Historically, digital equity efforts have focused on device access and internet connectivity, with limited training on Gen AI tools. Governments must develop digital equity plans that include educational programs to make Gen AI more accessible and help the public understand how to interpret and critique the results.
Environmental Costs of Gen AI
The environmental cost of using Gen AI systems is another critical issue that must be addressed. AI systems, including Gen AI, are known to come with significant energy, carbon, and water costs. While work has started on developing more energy-efficient hardware and algorithms, there is much more to be done. Understanding the true environmental impact is essential for making informed decisions about the cost-benefit trade-offs of using Gen AI for civic engagement and other government applications.
Conclusion: A People-Powered Approach to Gen AI
Gen AI can enable new mechanisms for community engagement, allowing residents to transform information in more relevant and actionable ways. However, the risks of these tools, including uncertain accuracy, bias, privacy concerns, and dependence on private sector involvement, can further misalign the community and the city government, creating mistrust.
The successful integration of Gen AI for civic engagement must therefore remain powered by people: practitioners and residents who validate outputs, catch errors, contextualize results, and sustain trust between the government and the community. This requires methods for involving communities in decisions about how AI tools shape city-resident interactions, along with guidelines for responsible and ethical use.
Future research must interrogate the issues that cause mistrust in Gen AI, while also developing governance structures, such as guidelines for the use of Gen AI in civic engagement, which outline the risks and opportunities of this new technology. Bridging the digital divide through educational programs and addressing the environmental costs of Gen AI will also be critical for its responsible deployment.
By empowering people to collaborate with Gen AI in civic engagement, we can harness the potential of these transformative technologies while mitigating the risks, ultimately strengthening the trust and alignment between governments and the communities they serve.
Recommended Guidelines for Responsible Use of Gen AI in Civic Engagement
- Transparency on Potential Harm: Civic engagement strategies employing Gen AI must clearly describe the potential risks associated with the underlying models, including biases, reliance on proprietary algorithms, environmental impacts, and the potential for misinformation. There should be a focus on weighing the potential harm with the potential benefits.
- Identification of Use Cases and Associated Biases: Guidelines should outline potential use cases of Gen AI in civic engagement, such as chatbots, visual interpretation, translation tools, and synthesis of meeting minutes, while also addressing the biases inherent in these applications. City officials should be encouraged to highlight inaccurate results and use those as opportunities to discuss alternative strategies that focus on local community values.
- Encouragement of Creative Applications: City guidelines should encourage innovative approaches to leveraging Gen AI in civic engagement beyond conventional case studies. Creative solutions should seek to show how other forms of knowledge and community-centered local knowledge can be included in models while also protecting the community’s privacy.
- Teaching the Community to Use Gen AI: There is a clear digital divide among those who use Gen AI tools. Therefore, it is important to create a plan to educate the public on how to ethically and responsibly use Gen AI, including knowledge about prompt engineering and updating training data.
- Developing Open-Source Alternatives: Explore the feasibility and potential benefits of building open-source generative AI models for municipal use, providing greater transparency, customization, and control over the technology.
- Evaluating Reliability and Effectiveness: Conduct rigorous testing and evaluation of Gen AI-generated outputs for municipal services, including translation, information dissemination, and data analysis, to better understand the accuracy and limitations of these tools.
- Assessing Environmental Impact: Develop methods for obtaining the necessary data to quantify the true environmental cost of using Gen AI, including electricity and water usage, to inform decision-making and mitigation strategies.
- Fostering Community Involvement: Engage community members in the development, adoption, use, and regulation of Gen AI tools, ensuring their values, needs, and perspectives are represented in the design and implementation of these technologies.
By implementing these guidelines, cities can harness the transformative potential of Gen AI for civic engagement while mitigating the risks, building trust, and strengthening the alignment between governments and the communities they serve.