AI Regulation Takes Center Stage in Midterm Elections Amid PAC Showdown
By John Nada·Feb 19, 2026·6 min read
The battle between AI-focused PACs in New York highlights the growing importance of AI regulation in the upcoming elections, influencing both political and market landscapes.
In a significant development for AI regulation, two major Political Action Committees (PACs) are clashing in a New York congressional race, signaling the growing influence of AI policy in upcoming elections. The Jobs and Democracy PAC, aligned with Democratic interests, is backing Assemblyman Alex Bores, a key proponent of New York's RAISE Act, which requires large AI developers to disclose their safety measures and report serious misuse of their technology.
Bores is competing in a crowded Democratic primary for New York's 12th district, where the victor is likely to secure the general election. His campaign faces opposition from the Leading the Future PAC, which boasts prominent venture capital backing, including the firm Andreessen Horowitz and Palantir co-founder Joe Lonsdale. This clash of PACs illustrates the deepening divide over AI regulation, with one faction advocating stricter oversight while the other seeks a more lenient approach.
The Jobs and Democracy PAC is launching a six-figure ad buy to bolster Bores' campaign, highlighting his role as a driving force behind the RAISE Act. The legislation is groundbreaking in its approach to AI governance, and with it New York has positioned itself at the forefront of the national conversation on AI regulation.
Because the district is heavily Democratic, the primary is effectively decisive, raising the stakes for Bores as he navigates a crowded field of candidates eager to stake their own claims on AI policy. His opponents are not just political rivals; they represent a broader ideological divide over the future of AI governance.
The Leading the Future PAC has already targeted Bores with an ad campaign that began last November, warning voters that strict AI oversight amounts to overregulation that could stifle economic growth. The PAC is significantly backed by venture capitalists with vested interests in the growth and profitability of AI technologies, raising questions about the influence of money in shaping the regulatory framework for AI.
The broader context reveals a bipartisan effort to influence candidates' positions on AI regulation. Former lawmakers Brad Carson and Chris Stewart are leading the charge with their group, Public First Action, which aims to support candidates who advocate for increased AI regulation. This initiative has recently received a $20 million boost from Anthropic, a company that, unlike many of its peers in the tech industry, is vocal about the need for stronger regulations surrounding AI development. Anthropic’s involvement underscores a significant shift within the tech landscape, as some companies recognize the necessity for regulatory frameworks to ensure the ethical development of AI.
Public First Action is making its presence felt beyond New York. Earlier this month, the group launched a six-figure ad campaign touting Senator Marsha Blackburn's (R-Tenn.) record on AI legislation as she runs for governor of Tennessee. The move highlights the increasingly national implications of state-level AI regulation battles, as different PACs and political factions vie for control over the narrative and direction of AI governance.
Moreover, the Republican arm of this bipartisan effort, known as the Defending Our Values PAC, is also making significant moves. They have made a six-figure ad buy in support of Senator Pete Ricketts (R-Neb.), who has introduced legislation aimed at instituting stronger restrictions on exporting advanced U.S. semiconductors to adversarial countries. This indicates a broader concern within the Republican Party regarding not only domestic regulation but also the international dimensions of AI technology and its implications for national security.
As the midterm elections approach, the outcomes in key races like Bores' could set precedents for AI governance that resonate far beyond New York. The intense investment from both PACs underscores a crucial pivot in the political narrative surrounding AI: how lawmakers choose to regulate the technology will have substantial ramifications for markets and the broader economy. The stakes are high, as these elections could shape the future of AI in America, affecting everything from corporate governance to consumer protections.
The congressional discourse around AI regulation is increasingly polarized. One of the central debates focuses on the proposal to temporarily ban states from implementing their own AI laws, an effort aimed at preventing a patchwork of regulations across the country that could hinder innovation and development in the AI sector. Proponents of this ban argue that without a cohesive national framework, states might enact overly strict regulations that could stifle technological advancement and economic growth. However, the proposed ban has struggled to gain traction, lacking necessary bipartisan support.
Complicating the regulatory landscape further is the Trump administration's executive order, signed in December, which penalizes states for enacting certain AI-related regulations. The move has drawn criticism from those who fear that such centralized control could inhibit localized efforts to address specific concerns about AI technologies and their impact on communities. The executive order reflects an ongoing tension between the push for rapid technological innovation and the need for safety and ethical considerations in AI development.
Taken together, these political maneuvers make clear that the intersection of AI and politics is fertile ground for conflict. The scale of PAC investment indicates that AI has become a critical issue for many voters, who are increasingly concerned about how these technologies will shape their lives and the economy. The narratives these PACs craft will not only influence the outcome of specific races but also set the tone for the broader national dialogue on AI regulation.
The involvement of PACs in these elections also raises questions about the transparency and accountability of political funding in the context of AI governance. As major players in the tech industry pour millions into campaigns, there is growing concern about how these contributions may shape the policy decisions of elected officials. Voters may find themselves grappling with the implications of this financial influence as they consider candidates’ positions on crucial issues such as data privacy, algorithmic bias, and the ethical use of AI technologies.
AI regulation is not merely a political issue; it intersects with broader societal concerns about technology's role in our lives. With AI capabilities evolving rapidly, the demand for effective governance has never been more urgent. The outcomes of the midterm elections will play a significant role in determining how the U.S. approaches AI regulation going forward, setting precedents that could last for years.
As candidates like Alex Bores strive to position themselves as champions of responsible AI governance, they must navigate a complex landscape where competing interests collide. The public's perception of AI and its implications will be a determining factor in these elections, as voters look for leaders who understand the delicate balance between fostering innovation and ensuring safety.
