As artificial intelligence (AI) becomes more deeply embedded in daily life, the question of how to regulate it has shifted from theoretical debate to urgent policy consideration. One significant concern at the intersection of AI regulation and democracy is the influence of large AI platforms, whose impact extends beyond technological innovation into the realm of political and societal power.
A recent panel discussion featuring Eugene Volokh, a Senior Fellow at the Hoover Institution, and Nate Persily, Professor of Law at Stanford, highlighted how AI tools are already reshaping public discourse, notably in how people gather information on political matters. Both scholars addressed a growing concern that AI platforms—because they are designed and operated by powerful, private companies—could unintentionally or intentionally influence voter behavior.
AI’s Influence on Voter Decision-Making
Consider the role that generative AI tools such as ChatGPT now play. Instead of seeking out multiple sources, individuals might increasingly rely on AI to provide summaries or direct answers to complex questions, such as those related to political candidates or ballot measures. This shift, though efficient, raises questions about who controls the narratives shaping public opinion.
Volokh underscored the concern that AI tools are not neutral; they are designed by people, and those people may work for companies that hold tremendous economic and political power. When AI is relied upon to summarize political positions or policy decisions, its inherent biases—whether in its training data or its design—can have significant implications for democracy.
The Problem of Bias and Monopoly Power
Both Volokh and Persily acknowledged the bias problem inherent in AI systems. Persily gave a pertinent example: when asked to generate an image of a nurse, many AI models would default to depicting a woman, given that roughly 80% of nurses in the United States are women. But this raises the question of what constitutes an "unbiased" AI response. Should AI reflect societal realities, even if those realities are skewed, or should it strive for an ideal of equality by representing genders equally in such examples?
This discussion opens the door to the larger question of monopoly power. If only a few large platforms control AI tools, the biases of those platforms could disproportionately influence public opinion, especially in politically sensitive areas. Volokh pointed to recent studies showing how certain search engines have refused to provide answers that support unpopular political views, such as opposition to transgender athletes in women's sports. This selective response, whether intentional or not, demonstrates how powerful platforms can subtly influence public debates.
The Future of AI Regulation
While regulating bias in AI is conceptually difficult, since what counts as an unbiased output is itself contested, both panelists agreed that competition could be a key part of the solution. Persily suggested that fostering an open-source AI ecosystem, in which smaller platforms compete with giants like Google and OpenAI, might help mitigate concerns about monopoly power. Open-source models would democratize access to AI tools, allowing greater diversity in the viewpoints they represent.
However, the benefits of democratizing AI must be balanced against its potential harms. For example, the proliferation of open-source AI models has been linked to increased production of illegal content, such as child sexual abuse material. Addressing such harmful outcomes will require regulatory frameworks that ensure safety while still fostering innovation.
A Legal and Ethical Frontier
The conversation also touched on legal ramifications, particularly regarding defamation and copyright. Persily pointed out that generative AI has already sparked lawsuits over false claims it has made about individuals, such as linking a person's name to crimes they never committed. As AI continues to evolve, courts will be tasked with determining who bears liability for the mistakes these systems make.
Meanwhile, the question of intellectual property looms large. As AI models are trained on vast amounts of data, much of which may be copyrighted, the legal battle over the rights of content creators versus AI developers is likely to intensify. These issues, while rooted in law, have broader implications for how AI will be integrated into society and regulated by governments.
As AI continues to shape public discourse and decision-making, the question of how to regulate these tools remains a pressing issue. Volokh and Persily’s discussion highlighted that there is no easy answer to balancing the benefits of AI with the risks it poses to democracy. However, ensuring that AI is subject to competition, third-party audits, and appropriate legal oversight will be key steps in navigating this complex landscape.
AI is here to stay, and its impact on democracy is already unfolding. The decisions made today about how to regulate these powerful tools will reverberate for years to come. The challenge will be to ensure that AI enhances, rather than undermines, the democratic process.