Legal experts say the case raises serious constitutional questions about the law
Elon Musk’s social media platform, X, sued Minnesota in federal court on April 23, 2025, challenging a 2023 law that criminalizes AI-generated deepfakes intended to influence elections. The suit claims the law violates the First Amendment by curbing free speech and risks censoring political discourse.
As deepfakes threaten democracy, the case pits constitutional protections against efforts to combat misinformation, raising the stakes for Americans who value open debate and fair elections.

A Law to Stop Election Meddling – or Silence Speech?
Minnesota’s law, passed with bipartisan support, makes it a crime to knowingly share a deepfake within 90 days of an election if it’s intended to harm a candidate or sway voters. Penalties include fines and up to seven years in prison.
X argues the law’s vague wording forces platforms to over-censor content to avoid liability, chilling protected speech like satire or commentary.
“This system will inevitably result in the censorship of wide swaths of valuable political speech,” X’s complaint states, citing a deepfake depicting a fictional arrest of Donald Trump as an example of content at risk.
X’s suit also argues the law conflicts with Section 230 of the Communications Decency Act of 1996, which shields platforms from liability for user-generated content. Minnesota Attorney General Keith Ellison’s office is reviewing the case, while the law’s author, Democratic state Senator Erin Maye Quade, calls the lawsuit “misguided,” arguing the statute protects elections from deliberate fraud. Legal experts, however, see constitutional flaws, with some predicting the law’s demise in court.

First Amendment vs. Election Integrity
The First Amendment fiercely guards political speech. In United States v. Alvarez (2012), the Supreme Court held that even false statements are protected unless they cause specific harm, such as fraud. X’s lawsuit, filed in federal court in Minnesota, argues the deepfake law overreaches by punishing speech based on its perceived falsity, a judgment the government cannot make without risking censorship.
“The government is not free to punish speech solely because it is false,” said Colorado attorney J. Kirk McGill, per Business Insider.
The law’s application to platforms, not just individuals, amplifies those concerns. University of Minnesota law professor Alan Rozenshtein told MPR News that criminal liability for platforms could lead to “overt censorship” of anything resembling a deepfake, including harmless humor. That outcome would cut against the First Amendment’s core purpose, articulated in cases like Texas v. Johnson (1989): preventing the government from dictating truth in public discourse. For Americans, it could mean less vibrant online debate, especially during elections.

Balancing Speech and Democracy
The lawsuit raises pivotal constitutional issues:
- Does the law infringe on protected speech? The First Amendment shields political expression, including satire, unless it incites imminent lawless action (Brandenburg v. Ohio, 1969). Minnesota’s law, lacking clear exemptions for parody, risks criminalizing protected content, undermining free discourse.
- Is the law too vague? The Fourteenth Amendment’s due process clause requires laws to be clear. X claims the statute’s “unintelligible” terms leave platforms guessing, encouraging preemptive censorship to avoid jail time, a violation of fair notice principles.
- Does Section 230 preempt the law? Article VI’s supremacy clause prioritizes federal law. Section 230 protects platforms from liability for user content, but Minnesota’s law imposes criminal penalties, creating a conflict that could nullify the state statute.
These questions test whether Minnesota can regulate deepfakes without trampling constitutional rights. A federal court blocked a similar California law on First Amendment grounds in 2024, suggesting X’s case has legs.

Everyday Americans: Elections, Trust, and Online Freedom
The deepfake law affects millions who rely on social media for political information. Minnesota’s intent, curbing election misinformation such as a fake video of a candidate making inflammatory remarks, resonates with the 62% of Americans who, per Pew Research, worry about AI-driven falsehoods. Yet X’s suit highlights the cost: platforms may remove legitimate posts to avoid prosecution, limiting what voters see. For consumers, that could mean a less open internet, where humor and critique are stifled.
Economically, the law’s fallout hits content creators and businesses. A Minnesota influencer and a GOP lawmaker have already challenged the law, fearing it criminalizes parody; their case, appealed after a judge denied an injunction in January 2025, underscores the chilling effect. Small businesses that rely on social media ads could lose reach if platforms over-censor, hurting local economies. Public trust also suffers: per Gallup, 54% of voters say they will doubt election integrity if laws curb speech too broadly.

The Road Ahead: A Test for Free Speech
X’s lawsuit, which asks the court to declare the law unconstitutional and block its enforcement, follows that failed January 2025 challenge, though the ruling did not reach the law’s merits. Legal experts like Rozenshtein predict a win for X, citing the law’s “significant First Amendment problems,” akin to those of the blocked California statute. Minnesota defends the law as narrowly tailored, with carve-outs for satire, but X counters that vague enforcement standards risk blanket censorship.
For Americans, the outcome will shape online speech and election safeguards. A ruling against the law could embolden deepfake creators but protect free expression. A win for Minnesota might curb misinformation but set a precedent for government overreach. As the case unfolds, it’s a stark reminder: the Constitution’s free speech guarantee is both a shield and a battleground in the digital age.
