Opinion: Regulating AI use in political ads is an imperative
By Thomas R. Bundy III & Andrew D. Herman
Thomas R. Bundy III co-founded the law firm Lawrence & Bundy, where he focuses on defense-side commercial litigation and internal investigations. Andrew D. Herman is chair of Lawrence & Bundy’s political law group, where he focuses on elections and other political activity.
Shortly after his inauguration, Maryland Gov. Wes Moore (D) visited a research institute addressing artificial intelligence, machine learning, and virtual and augmented reality. He touted the project as “a perfect example of how Maryland can become more economically competitive by creating opportunities through innovative partnerships.” As the state embraces the promise of AI, however, it must also address the risks presented by the technology.
For example, AI is a major element in the current Hollywood strikes. SAG-AFTRA’s president, Fran Drescher, summarized the concern: “If we don’t stand tall right now, we are all going to be in trouble, we are all going to be in jeopardy of being replaced by machines.”
Other public figures who rely on visual media for promotion, politicians chief among them, will confront this issue as well. But unlike Hollywood talent, politicians can address the threat unilaterally through legislation. A recent editorial in The Washington Post summarized the problem: “Get ready for lots of literally unbelievable campaign ads. AI could wreak havoc on elections.” Accordingly, Maryland’s elected officials should move decisively on this issue.
AI’s threat to political discourse is real. Candidates for the Republican presidential nomination have already shared AI-enabled parodies mocking their opponents, and the Republican National Committee recently aired a fake video depicting a future hellscape under President Biden. Some of these ads disclosed the use of AI; others did not.
And things could get worse. As the elections draw closer, the temptation to fabricate more extreme ads may prove irresistible. After all, if an AI-enabled deception is effective, it is far easier to ask forgiveness afterward, especially if no specific legal constraints exist. The wide latitude courts currently grant political speech hampers effective responses to these tactics. Victory in a defamation suit months after an election will provide little recompense for a losing candidate smeared by an AI invention.
Further, the last decade has seen a raft of foreign attempts to interfere in domestic elections through social media and other venues. It is not hard to envision foreign actors deploying AI in 2024 to sow chaos and discredit American candidates and officeholders.
The best solution would, of course, be a federal law imposing nationwide standards for the use of AI in political discourse, penalizing violations, and authorizing victims to remove clear violations expeditiously. In May, Sen. Amy Klobuchar (D-Minn.) and Rep. Yvette Clarke (D-N.Y.) introduced bills in their respective chambers. The REAL Political Advertisements Act would require full disclosure of AI-generated content in political ads. Other, more restrictive proposals, including a bill establishing criminal penalties for the creation of “fake electronic media that appears realistic,” have fizzled in Congress. Capitol Hill’s current dysfunction makes it unlikely that Congress will impose effective reforms soon.
The chance for regulation in the executive branch is slightly better. In June, the regulator with authority to address this issue, the Federal Election Commission, deadlocked on proposed regulations on political ads using AI. The FEC tried again this August, seeking public comment on a request for a rulemaking specifying that using false AI-generated content, or “deepfakes,” in campaign ads violates the federal prohibition on fraudulent misrepresentation of campaign authority. Although he voted to publish this request, Commissioner Allen Dickerson said that AI remains an issue for Congress, identifying “serious First Amendment concerns lurking in the background of this effort.”
Things are more promising in the states, as California, Minnesota, Texas, and Washington have all enacted restrictions on AI use since 2019. While these laws vary in scope, they present a variety of options for Maryland to emulate.
Existing state laws establish the pillars of a sound AI policy that can survive First Amendment scrutiny from the federal courts, especially a skeptical Supreme Court. An effective Maryland law should build on the elements those statutes share.
Whether AI technology will improve the world or create a “Skynet” dystopia is unclear. That it will profoundly affect how candidates run for office and how political actors compete in the marketplace of ideas is not. With elections approaching, Maryland’s elected officials should address this issue expeditiously.