The Rise and Risks of the “Large-Scale Era” of AI
The launch of ChatGPT in late 2022 unleashed a veritable revolution in artificial intelligence (AI). Within weeks, the unprecedented industrial scaling of generative AI (GenAI) technologies had transformed the digital landscape, with large tech firms rapidly integrating these powerful language models into their core products and services. The ability of ChatGPT and similar systems to generate human-like text, images, and even code in response to natural language prompts ushered in a new era of AI’s impact on society, upending established norms and practices across a wide range of industries.
This explosive growth of GenAI applications, however, has also exposed a troubling gap in the international AI policy and governance landscape. Despite decades of effort by stakeholders across industry, academia, government, and civil society to develop ethical frameworks, standards, and governance mechanisms for responsible AI development, the meteoric rise of large-scale foundation models (FMs) and GenAI systems has left policymakers and regulators largely unprepared. As the risks and harms associated with the hasty industrialization of these technologies have become increasingly evident, the lack of effective and binding governance measures has become a pressing concern.
At the heart of this crisis lies a fundamental disconnect between growing public criticism of the societal threats posed by GenAI and the absence of practicable regulatory interventions to address those threats. As the Future of Life Institute’s open letter warned, the “out-of-control race to develop and deploy ever more powerful digital minds” has outpaced the development of “planning and management” – a failure that has triggered widespread calls for immediate action.
In this article, we will explore the key drivers behind this international AI governance crisis, tracing how the unprecedented scaling and centralization of AI capabilities have given rise to a new order and scale of risks and harms. We will then examine the initial policy and governance responses, highlighting both their promising steps and their shortcomings in effectively addressing the full scope of the challenges presented by the GenAI revolution.
The Ingredients of Future Shock
The rapid industrialization of GenAI did not catch the AI policy and governance community entirely off guard. For years, stakeholders had been building a foundation of standards, policies, and governance mechanisms to ensure the responsible development and deployment of AI systems. From regional data protection frameworks and cybersecurity conventions to national AI strategies and ethical guidelines, a robust conceptual basis had been established to confront the expanding risk surface presented by the growing digitalization and datafication of society.
However, the convergence of several key factors ultimately triggered a profound sense of future shock within the international AI policy and governance ecosystem:
- Enforcement Gaps in Existing Digital and Data Regulation: Despite the proliferation of relevant laws and regulations, significant gaps have persisted in their real-world implementation and enforcement. Disparities between legal protections and prevalent patterns of unimpeded bad behavior have undermined the efficacy of measures designed to safeguard digital rights and data privacy. Regulatory capacity deficits have further exacerbated these enforcement challenges, allowing well-resourced corporate actors to outpace the ability of public authorities to respond to emerging risks.
- Democratic Deficits in AI Standards Development: The translation of ethical principles and frameworks into practicable standards, governance mechanisms, and binding regulations has proven to be an arduous task. Challenges in reaching consensus on the meaning and operationalization of key normative concepts, such as fairness and trustworthiness, have hindered the development of codified rules. Moreover, the predominance of industry actors in standards development processes has raised concerns about the legitimacy and inclusivity of these initiatives, leading to accusations of “ethics washing” and the subordination of public interest to private sector priorities.
- Evasionary Tactics of “Ethics Washing” and Deregulation: The pervasive use of voluntary, self-imposed AI ethics frameworks and codes of conduct by technology companies has allowed for the strategic avoidance of enforceable laws and regulations. Combined with the strategic alignment between corporate interests and the deregulatory impulses of some policymakers and government officials, these evasionary tactics have perpetuated a landscape of non- or self-regulation, undermining the establishment of robust governance measures.
- Unprecedented Scaling of Data, Models, and Compute: The rapid industrialization of GenAI has been driven by the exponential growth in the scale of training data, model size, and computational power. This scaling has given rise to a range of model-intrinsic risks, including data poisoning, privacy violations, and discriminatory biases, which have far outpaced the capacity of existing AI explainability and interpretability techniques to address them.
- Centralization of Power and Influence: The steering and momentum of the GenAI revolution have been largely concentrated in the hands of a few large tech corporations, which control the essential data, compute, and skills infrastructures required for developing these advanced AI systems. This centralization of techno-scientific and market power has enabled corporate actors to exert disproportionate influence over the direction and pace of AI development, while also intensifying corresponding dynamics of geopolitical power centralization among the big tech-hosting nation-states of the Global North.
The convergence of these factors – enforcement gaps, democratic deficits, evasionary tactics, scaling challenges, and power centralization – has resulted in a profound disconnect between the strengthening public outcry over the hazards of GenAI and the absence of effective regulatory and governance mechanisms to address them. This disconnect lies at the heart of the international AI policy and governance crisis that has unfolded in the wake of the GenAI revolution.
Responding to the Crisis: Promising Steps and Persistent Shortcomings
In the months following the initial shockwaves of the ChatGPT launch, a flurry of international policy and governance initiatives emerged, signaling a growing recognition of the urgent need to confront the risks and harms associated with the rapid industrialization of GenAI.
Initiatives such as the UK’s AI Safety Summit, the G7’s Hiroshima AI Process, and the Partnership on AI’s Guidance for Safe Foundation Models have been hailed by some as important steps toward international collaboration and the establishment of much-needed governance frameworks. These efforts have generated joint commitments to address frontier AI risks, initiated processes for risk evaluation and mitigation, and led to the formation of national AI Safety Institutes – all of which have been viewed as progress toward the kind of international regimes and institutions that AI policy researchers have argued are necessary to address the cross-border nature of FM and GenAI risks, harms, and infrastructure.
However, critics have also pointed out that many of these first-wave policy and governance activities have been ineffective and diversionary, serving to deflect attention away from the concrete societal harms inflicted by large-scale AI and further entrenching the dominance of Western and Northern perspectives and interests.
From this critical perspective, the focus on “frontier AI” and speculative doomsday scenarios about AI takeover has allowed corporate actors and their allied policymakers to marginalize the pressing need for robust regulatory controls that would confront the material realities of global AI value chains and constrain irresponsible corporate behavior. By narrowing the governance discussion to technical issues of “AI safety,” these initiatives have affirmed the status quo of non- or self-regulation, maintaining the legitimacy of unregulated black-box systems being released into the public domain without adequate safeguards.
Moreover, critics argue that the uneven pitching of these international AI policy and governance discussions – centered around the views, positions, and interests of a handful of prominent geopolitical and private sector actors from the Global North – has had significant agenda-determining consequences. Crucial issues affecting members of the Global Majority, such as labor exploitation, widening digital divides, data sovereignty, and the disproportionate environmental impacts of large-scale AI, have been largely absent from or deprioritized within these discussions.
This marginalization of the voices and concerns of those outside the dominant Western technological imaginary has not only led to the exclusion of important perspectives but has also enabled the perpetuation of implicit colonial logics. Concepts like “frontier AI” and “existential risk” narratives, for example, have been criticized for invoking themes of conquest, violence, and the devaluation of the Global South, reinforcing harmful dynamics between powerful Western AI producers and those most likely to experience the harms of these technologies.
In the face of these persistent shortcomings, the failure to effectively address the full range of risks and harms emerging from GenAI, the inability to establish binding governance mechanisms, and the sidelining of marginalized stakeholders have left the international AI policy and governance community with more open questions than answers. As the GenAI revolution continues to unfold, the need for a more comprehensive, inclusive, and contextually responsive approach to AI policy and governance has become increasingly urgent.
Towards a Transversal AI Policy and Governance Agenda
The international AI policy and governance crisis triggered by the GenAI revolution has exposed deep-seated issues that go beyond the immediate technical and regulatory challenges posed by these technologies. Underlying patterns of global inequality, structural discrimination, and legacies of colonial power dynamics have shaped the landscape in which AI innovation and governance efforts are unfolding.
Addressing this crisis, therefore, demands a reorientation of policy and governance thinking that prioritizes the scrutiny of how longer-term sociohistorical forces have cascading effects across FM and GenAI innovation lifecycles. It requires the mobilization of more “transversal” dialogues that disrupt the assumed core-periphery relationships in international AI policy, creating spaces for diverse voices and perspectives to shape the agenda.
A truly transversal approach would eschew the notion of a dominant “center” of the AI policy and governance discussion (situated in the Global North) that must accommodate marginalized “peripheral” voices. Instead, it would create a multitude of peripheries, elevating the unique contexts and concerns of all conversation partners as equally important. This would enable the kind of inclusive and meaningful policy dialogues that are equipped to interrogate, tackle, and repair the full range of risks and harms emerging from GenAI, while also confronting the underlying patterns of inequity that frame these policy processes and determine the distribution of hazards and opportunities within the global AI innovation ecosystem.
Some promising signs of this shift towards greater transversality can already be seen in the recent efforts of UNESCO to co-organize regional AI policy summits with stakeholders from the Global South, as well as its launch of a Global AI Ethics and Governance Observatory to build a participatory commons for sharing diverse knowledge and experiences.
As the international AI policy and governance community continues to grapple with the challenges posed by the GenAI revolution, it is clear that a fundamental rebalancing of power dynamics and a more inclusive, contextually responsive approach will be essential. Only by embracing a truly transversal dialogue can we hope to forge comprehensive, equitable, and effective governance solutions that safeguard the public interest in the face of this technological transformation.
Conclusion
The rapid industrialization of generative AI has undoubtedly triggered a profound sense of future shock within the international AI policy and governance ecosystem. The disconnect between the strengthening public outcry over the hazards of GenAI and the absence of practicable regulatory and governance mechanisms to address these hazards lies at the heart of the current crisis.
This crisis has been driven by a convergence of factors, including enforcement gaps in existing digital and data regulation, democratic deficits in AI standards development, evasionary tactics of “ethics washing” and deregulation, the unprecedented scaling of data, models, and compute, and the centralization of power and influence in the hands of a few large tech corporations.
While initial policy and governance initiatives have shown some promising steps toward international collaboration and the establishment of governance frameworks, they have also been criticized for their ineffectiveness, diversionary nature, and the perpetuation of Global North dominance and colonial logics. The failure to effectively address the full range of risks and harms, the inability to establish binding governance mechanisms, and the sidelining of marginalized stakeholders have left the international AI policy and governance community with more open questions than answers.
Moving forward, addressing this crisis will require a fundamental reorientation of policy and governance thinking, one that prioritizes the scrutiny of longer-term sociohistorical forces and mobilizes more “transversal” dialogues that disrupt assumed core-periphery relationships. Only by embracing a truly inclusive, contextually responsive, and equitable approach can the international community hope to forge comprehensive governance solutions that safeguard the public interest in the face of the GenAI revolution.
The Joint Action for Water blog is committed to fostering this kind of transversal dialogue and advocating for governance frameworks that prioritize the well-being of communities, the environment, and the global public good. As the impacts of emerging technologies like GenAI continue to unfold, we will continue to amplify diverse voices, share best practices, and work towards a more just and sustainable future.