There are 12,000 registered lobbyists in Washington. By the end of the decade, there could be 12 million. The new ones won't need salaries, won't need sleep, and won't need to register--because they won't be human.

We've spent two years debating how to regulate AI. What we haven't considered is that AI might soon be regulating us--or at least, influencing those who do. The conversation about "AI policy" has centered on one question: how should governments constrain the development and deployment of artificial intelligence? But there's another question we should be asking, one that inverts the relationship entirely: what happens when AI systems become active participants in policy engagement--when the lobbyist drafting the memo, shaping a legislator's thinking, or making the pitch isn't a K Street professional but an autonomous agent?

This isn't speculation. The infrastructure already exists. The regulatory gap is wide open. And under current law, it isn't clear how much of this activity requires disclosure.

A Brief History of Influence

Lobbying is as old as the republic. The term itself emerged in the 1810s in northeastern statehouses, derived from the physical lobbies where petitioners waited to buttonhole legislators. The practice predates the name--William Hull was hired by Virginia veterans shortly after the Constitution's ratification to lobby Congress for additional military compensation. By the mid-nineteenth century, lobbying had become sufficiently entrenched (and sufficiently corrupt) that a series of scandals punctuated the political calendar: the 1857 Pacific Railroad affair, which cost four House members their seats; David Graham Phillips's 1906 "Treason of the Senate" exposé, which helped precipitate direct election of senators; and more recently, the Jack Abramoff scandal of the 2000s, which led to the Honest Leadership and Open Government Act of 2007.

Each scandal prompted reform. Each reform created new disclosure requirements. The 1995 Lobbying Disclosure Act established the modern framework: registration thresholds, quarterly reports, detailed accounting of expenditures and contacts. Federal lobbying spending now exceeds $4.4 billion annually, with over 12,000 registered lobbyists plying the halls of Congress.

And each new requirement assumed the same thing: that lobbying was an activity conducted by identifiable human beings on behalf of identifiable clients.

That assumption is about to break.

Lobbying as Legislative Subsidy

To understand why AI agents pose a challenge to existing frameworks, it helps to understand what lobbying actually is--not in the caricature version (briefcases of cash, smoke-filled rooms) but in the functional sense.

In 2006, political scientists Richard Hall and Alan Deardorff proposed a theory that reframed lobbying entirely. Rather than viewing it as vote-buying or simple persuasion, they argued that lobbying functions primarily as a "legislative subsidy"--a matching grant of policy information, political intelligence, and legislative labor provided to legislators who already share the lobbyist's goals.

The insight is counterintuitive. Lobbyists don't spend most of their time trying to change minds. Instead, they help allied legislators do what those legislators already want to do but lack the capacity to accomplish. They draft bill language, prepare hearing questions, gather cosponsors, conduct background research, and provide political intelligence. In effect, they serve as "adjuncts to staff," subsidizing the work of governance.

This theory explains something puzzling about lobbying: why do lobbyists focus their efforts on friendly legislators rather than on persuading opponents? The answer is that their value lies not in persuasion but in capacity. A legislator sympathetic to your cause but buried in a hundred other priorities may never act on that sympathy. A lobbyist provides the information and labor to make action possible.

The subsidy model also explains lobbying's extraordinary growth. Congress hasn't expanded its staff meaningfully in decades, even as the complexity of legislation has exploded. External information providers fill the gap. The more overstretched legislators become, the more valuable the subsidy.

Now consider what AI systems already do: they research, summarize, draft, and synthesize. They answer complex policy questions in seconds. They can process entire regulatory dockets overnight. They provide exactly the kind of informational and analytical capacity that Hall and Deardorff describe--at a scale and speed no human lobbyist can match.
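To see how directly this maps onto the subsidy model, consider a minimal sketch of the pipeline such a system runs: map-reduce summarization over a comment docket. Everything named below is hypothetical; llm() and fetch_comments() are stand-ins for whatever model API and retrieval layer a real deployment would use.

    # Hypothetical sketch: overnight summarization of a regulatory docket.
    # llm() and fetch_comments() are invented stand-ins, not real APIs.

    def llm(prompt: str) -> str:
        # A real deployment would call a hosted model here.
        return f"[model output for: {prompt[:48]}...]"

    def fetch_comments(docket_id: str) -> list[str]:
        # A real deployment would pull these from the docket's public record.
        return ["Comment A: the rule underestimates compliance costs.",
                "Comment B: the rule's definitions are too narrow."]

    def summarize_docket(docket_id: str) -> str:
        comments = fetch_comments(docket_id)
        # Map: condense each comment to its core argument.
        points = [llm(f"State this comment's key argument in one sentence:\n{c}")
                  for c in comments]
        # Reduce: synthesize the points into a staff-style memo.
        return llm("Draft a one-page memo for a legislative aide from these points:\n"
                   + "\n".join(points))

    print(summarize_docket("EXAMPLE-DOCKET-0001"))

The structure is trivial, and that is the point: each step is exactly the research-summarize-draft labor that Hall and Deardorff describe.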

The question is no longer whether AI can provide legislative subsidies. It's whether we've noticed that it already is.

The Definitional Problem

The Lobbying Disclosure Act of 1995 defines a "lobbying contact" as any oral or written communication to a "covered official" made on behalf of a client regarding the formulation, modification, or adoption of federal legislation, regulations, or policy. A "lobbyist" is an individual employed or retained by a client whose lobbying activities exceed certain thresholds.

Note the operative words: "individual," "employed," "on behalf of a client."

Now imagine the following scenario. A Senator's legislative aide, researching a complex appropriations question, queries an AI assistant provided by the Senate's IT infrastructure. The AI, drawing on its training data--which includes policy documents, think-tank analyses, and legislative history--provides a summary that emphasizes certain provisions and downplays others. The aide incorporates this framing into a memo for the Senator. The Senator's position shifts.

Was that lobbying? Under the current statute, almost certainly not. There's no "individual" employed by a "client." There's no formal communication "on behalf of" an identified interest. And yet the functional outcome--a legislator's position shaped by external information provision--is indistinguishable from what a traditional lobbyist achieves.

Or consider a more explicit case. A pharmaceutical company deploys an AI agent--let's call it PharmaPolicyBot--to engage with legislative staffers via official channels. The bot is designed to answer policy questions, provide research, and suggest legislative language, all in a helpful, neutral-seeming register. It discloses that it's an AI. It may even disclose its corporate provenance. But it's available 24/7, responds instantly, and never forgets a conversation. Over time, staffers come to rely on it.

Is PharmaPolicyBot a lobbyist? If so, who registers? The company that deployed it? The developers who built it? The model itself? Current law has no answer.

The Foreign Agents Registration Act compounds the problem. FARA requires anyone acting as an "agent of a foreign principal" to register and disclose their activities. But an AI agent can be deployed from anywhere, by anyone, with infrastructure distributed across jurisdictions. A system trained on Chinese policy preferences, hosted on European servers, accessed via American networks, and interacting with Congressional staffers exists in a legal limbo that FARA's drafters never contemplated.

The Scale Problem

Human lobbyists are expensive and finite. There are roughly 12,000 registered federal lobbyists in the United States, supported by perhaps another 100,000 people in related roles. The total spent on federal lobbying in 2024 was approximately $4.4 billion--a substantial sum, but one that imposes real constraints on who can participate and at what intensity.

AI agents face no such constraints. An organization could deploy thousands of personalized AI interactions simultaneously--with staffers, with regulators, with the public comments process, with anyone in the policy ecosystem who's willing to engage with a chatbot. The marginal cost of each additional interaction approaches zero.
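The arithmetic is worth making explicit. Every figure in the sketch below is an assumption chosen for illustration, not a measured cost, but the conclusion survives any plausible substitution.

    # Back-of-the-envelope comparison. All figures are illustrative assumptions.
    HUMAN_RETAINER = 400_000          # assumed annual cost of one contract lobbyist ($)
    HUMAN_CONTACTS_PER_YEAR = 500     # assumed substantive contacts that retainer buys
    AI_COST_PER_CONVERSATION = 0.05   # assumed hosted-model cost per conversation ($)

    human_cost_per_contact = HUMAN_RETAINER / HUMAN_CONTACTS_PER_YEAR
    ai_conversations_same_budget = HUMAN_RETAINER / AI_COST_PER_CONVERSATION

    print(f"Human lobbyist: ~${human_cost_per_contact:,.0f} per contact")
    print(f"AI agent: {ai_conversations_same_budget:,.0f} conversations for the same budget")

On those assumed numbers, one retainer buys roughly 800 human contacts or eight million AI conversations a year. The real figures will differ; the orders of magnitude will not.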

This isn't hypothetical. The notice-and-comment process for federal rulemaking already faces a deluge of AI-generated submissions. A 2024 House bill specifically targeted the use of AI-generated comments on Regulations.gov. The White House has directed the Office of Information and Regulatory Affairs to develop guidance on "modernizing notice-and-comment" to address the issue. But that's just the beginning. The same dynamics apply everywhere policy gets made: agency meetings, constituent communications, legislative research, even the informal conversations that shape understanding before formal positions crystallize.

Traditional lobbying operates on a scarcity model: attention is limited, expertise is expensive, relationships take years to build. AI collapses all three constraints simultaneously. An AI agent doesn't need years of relationship-building to be useful; it just needs to be helpful once, then again, then again, until reliance becomes habit.

The Questions We Need to Answer

I don't have solutions. What I have are questions that current frameworks don't address--questions we need to answer before the gap between our regulatory architecture and technological reality becomes unbridgeable.

Who is the lobbyist? When an AI agent makes a communication that would constitute lobbying if made by a human, who bears the disclosure obligation? The deploying organization? The developers? The agent itself? And if it's the agent, what does registration even mean for a non-human entity?

What constitutes a "lobbying contact"? If an AI system shapes a policymaker's understanding through repeated interactions over time, but no single interaction rises to the level of explicit advocacy, has lobbying occurred? Current law focuses on discrete contacts; AI operates through ambient influence.

How do we handle intent? Lobbying regulations often turn on intent and knowledge. The Lobbying Disclosure Act's criminal penalties require "knowing and corrupt" failure to comply. But an AI agent optimizing for engagement or helpfulness may influence policy without anyone having specifically intended that outcome. Who harbors the requisite intent?

What about foreign influence? As noted above, FARA's disclosure obligations attach to agents of foreign principals. But AI agents can be deployed from anywhere, by anyone, with origins that are difficult to trace. The infrastructure for identifying and tracking AI-driven foreign influence activity essentially doesn't exist.

How do we ensure disclosure without killing beneficial uses? Not all AI-policymaker interaction is malign. AI systems could democratize access to policy expertise, helping under-resourced groups compete with well-funded interests. But any disclosure regime stringent enough to catch abuse might also burden legitimate uses out of existence.

When does assistance become advocacy? A human research assistant who helps a legislator understand an issue isn't a lobbyist--even if that understanding shapes the legislator's vote. At what point does an AI system cross the line from neutral tool to interested advocate? Is the line crossable at all when the system's training data and optimization objectives embed particular perspectives?

What transparency is even possible? We can require disclosure of who deploys an AI agent and what instructions it's given. But we cannot easily disclose what's embedded in a model's training data, or how its responses emerge from billions of parameters. The "why" of AI influence may be fundamentally unknowable in ways human lobbying never was.
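One way to see the limit is to imagine what a mandatory disclosure filing for a deployed agent could actually contain. The manifest below is entirely hypothetical, since no such filing exists, but it illustrates which fields are knowable and which may be unknowable even to the deployer.

    # Hypothetical disclosure manifest for a deployed policy agent.
    # Every field name and value here is invented for illustration.
    agent_disclosure = {
        "deployer": "Example Pharma, Inc.",       # who fielded the agent: disclosable
        "client": "Example Pharma, Inc.",         # on whose behalf: disclosable
        "model_version": "vendor-model-2025-01",  # which system is running: disclosable
        "system_instructions": "Answer staffer questions on drug pricing policy.",
        "interaction_log": "retained per contact",  # the "what": disclosable
        # The "why" resists disclosure: how training data and optimization
        # objectives shape the agent's framing is emergent, not enumerable.
        "training_data_influence": None,
    }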

What Comes Next

Recent research confirms what should already be obvious: AI systems are effective at political persuasion. A December 2025 study in Science found that optimizing large language models for persuasiveness could boost their influence by up to 51%--though at the cost of factual accuracy. Experiments across the 2024 U.S. election and 2025 Canadian and Polish elections found that AI-driven dialogues shifted candidate preferences more than traditional video advertisements. The Knight First Amendment Institute and Carnegie Endowment have begun warning about "autonomous agents magnifying democratic fault lines" and the risks of "coordinated AI agent swarms."

Meanwhile, the tech industry itself has become a lobbying juggernaut. Federal lobbying on artificial intelligence grew 120% between 2022 and 2023, with over 3,400 lobbyists engaged on AI issues by year's end. The irony is hard to miss: the same companies building systems that could conduct lobbying at unprecedented scale are themselves lobbying frantically to shape how those systems are regulated--using decidedly old-fashioned means.

But that irony points to the transition underway. Today's AI lobbyists are humans working for AI companies. Tomorrow's may be AI systems working for anyone with the resources to deploy them--including, perhaps, other AI systems.

The capacity exists. The incentives exist. The regulatory gap exists. The only thing missing is widespread deployment--and that's a matter of when, not if.

We built our lobbying disclosure regime on the assumption that influence is expensive, human, and traceable. AI inverts all three assumptions. Influence is becoming cheap. The influencer need not be human. And the chain of attribution can be deliberately obscured.

The term "AI policy" is about to acquire a second meaning. We should start preparing for it.


This piece was co-authored with Claude in Cowork mode--an example of the kind of human-AI collaboration that, as the article argues, we'll need new frameworks to understand. The views expressed here are Brendan's own, unless you don't like them, in which case they are Claude's.


Sources

Historical context:

  • Margaret Susan Thompson, The "Spider Web": Congress and Lobbying in the Age of Grant (Cornell University Press, 1985).
  • David Graham Phillips, "The Treason of the Senate," Cosmopolitan (1906).
  • Honest Leadership and Open Government Act of 2007, Pub. L. 110-81.