Title? Typo? Cryptic code? Equation?
Really it is a combination of three of the four. In other words, it is not a typo (you can look elsewhere in this piece for those).
One key to understanding the reference above is the mathematical logic symbol ⊢, known as the “turnstile . . . because of its resemblance to a typical turnstile if viewed from above. It is also referred to as tee.” It means “yields” or “entails.” Here, the familiar initials “IP” to the right of the turnstile, followed by question marks, indicate “intellectual property” (that “umbrella term for a set of intangible assets or assets that are not physical in nature”) and the questions it raises. The “IP” to the left of the turnstile is not tautological: it stands for “interesting politics,” leaving the tail of the title to be a logic-language shorthand for “Interesting Politics Yield Intellectual Property Questions.”
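For readers who enjoy the formal side of the wordplay, the standard textbook readings of the turnstile and of its semantic cousin, the double turnstile (whose negated form appears in a heading below), can be sketched as follows, where Γ is a set of premises and M ranges over interpretations:

```latex
\begin{align*}
\Gamma \vdash \varphi \quad &\text{``$\varphi$ is derivable from $\Gamma$'' (syntactic entailment)}\\
\Gamma \vDash \varphi \quad &\text{iff every $M$ satisfying $\Gamma$ also satisfies $\varphi$ (semantic entailment)}\\
\Gamma \nvDash \varphi \quad &\text{iff some $M$ satisfies $\Gamma$ but not $\varphi$}
\end{align*}
```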
And how true that is these days. For instance, we await a US Supreme Court decision in Vidal v. Elster, a case argued November 1, 2023, which this author has written about before (here and here), concerning whether the refusal to register a trademark (Trump Too Small) under 15 U.S.C. § 1052(c) when the mark contains criticism of a government official or public figure violates the Free Speech Clause of the First Amendment. But since we have already covered that topic, and can say more once the Vidal decision comes down, let’s see what else is out there.
We have a great mix of other issues as the political season heats up. The mix ranges from concerns about “deep fakes,” like the faux Biden robocall, to shallower AI-driven authorized messages, like those we are seeing in connection with India’s election. It also includes the recurring dust-ups between musical artists and politicians over songs played at campaign rallies. Beyond that, we have a copyright fight between a former Congressman and a late-night talk show host. So let’s jump into these issues now.
Deepfakes or Deep Fakes ⊭ Always Something Produced By Adversaries To Do Damage
The dictionary definition of “deepfake” (sometimes written as two words) is “a video of a person in which their face or body has been digitally altered so that they appear to be someone else, typically used maliciously or to spread false information.” While some forms of fakery have existed for almost as long as photographs and movies themselves, concern over deepfakes has taken on added emphasis in the age of mass media and artificial intelligence:
deepfake technology takes its name from deep learning, which is a form of AI. In deepfake AI, deep learning algorithms that teach themselves how to solve problems with large data sets, are used to swap faces in videos, images, and other digital content to make the fake appear real.
Deepfake content is created by using two algorithms that compete with one another. One is called a generator and the other one is called a discriminator. The generator creates the fake digital content and asks the discriminator to find out if the content is real or artificial. Each time the discriminator correctly identifies the content as real or fake, it passes on that information to the generator so as to improve the next deepfake.
[All You Need to Know About Deepfake AI, Nov. 21, 2022]
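The generator/discriminator tug-of-war described above is what machine-learning researchers call a generative adversarial network (GAN). As an illustration only, here is a toy one-dimensional sketch of that feedback loop in Python; it is not any particular deepfake tool, and every name and number in it is invented for the example. The “real” data are just numbers drawn from a target distribution, the generator is a two-parameter linear map, and the discriminator is a simple logistic scorer:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: the distribution the generator tries to imitate.
REAL_MEAN, REAL_STD = 4.0, 1.25

# Generator G(z) = a*z + b maps random noise z into fake samples.
a, b = 1.0, 0.0
# Discriminator D(x) = sigmoid(w*x + c) scores how "real" a sample looks.
w, c = 0.1, 0.0

lr, batch, steps = 0.05, 64, 2000
for _ in range(steps):
    # --- discriminator step: learn to tell real from fake ---
    x_real = rng.normal(REAL_MEAN, REAL_STD, batch)
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    # gradients of -log D(real) - log(1 - D(fake))
    gw = np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    gc = np.mean(-(1 - d_real) + d_fake)
    w -= lr * gw
    c -= lr * gc

    # --- generator step: use the discriminator's verdict as feedback ---
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    # gradients of -log D(fake) (the "non-saturating" generator loss)
    ga = np.mean(-(1 - d_fake) * w * z)
    gb = np.mean(-(1 - d_fake) * w)
    a -= lr * ga
    b -= lr * gb

fake_mean = float(np.mean(a * rng.normal(0.0, 1.0, 10_000) + b))
print(f"generator output mean after training: {fake_mean:.2f} (target {REAL_MEAN})")
```

Each round, the discriminator’s verdicts push the generator’s output closer to the real distribution, which is exactly why each successive fake in the process described above looks more convincing than the last.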
On the internet, you can actually watch an example of this process as the creation evolves into a better and better deepfake in real time. Deepfakes obviously raise concerns in the political realm, as a candidate’s opponents could create a video or audio clip of the “candidate” making statements he or she never made, placing that candidate in a bad light in the eyes of supporters.
For instance, as law enforcement officials in New Hampshire noted in connection with this year’s presidential primary, numerous New Hampshire residents received a robocall message that appeared to be a recording (it was not) of President Joe Biden urging them not to vote in the January 23, 2024, New Hampshire Presidential Primary Election. Shortly after that, the US Federal Communications Commission issued a declaratory ruling making the use of AI-generated voices in robocalls illegal under the Telephone Consumer Protection Act. As the FCC noted in its press release at the time, “Bad actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, imitate celebrities, and misinform voters. We’re putting the fraudsters behind these robocalls on notice….State Attorneys General will now have new tools to crack down on these scams and ensure the public is protected from fraud and misinformation.”
As noted elsewhere (with footnotes), this is but one recent legal response among many to deepfakes in the US and around the world:
In the United States, there have been some responses to the problems posed by deepfakes. In 2018, the Malicious Deep Fake Prohibition Act was introduced to the US Senate,[215] and in 2019 the DEEPFAKES Accountability Act was introduced in the House of Representatives.[16] Several states have also introduced legislation regarding deepfakes, including Virginia,[216] Texas, California, and New York.[217] On 3 October 2019, California governor Gavin Newsom signed into law Assembly Bills No. 602 and No. 730.[218][219] Assembly Bill No. 602 provides individuals targeted by sexually explicit deepfake content made without their consent with a cause of action against the content’s creator.[218] Assembly Bill No. 730 prohibits the distribution of malicious deepfake audio or visual media targeting a candidate running for public office within 60 days of their election.[219]
In November 2019 China announced that deepfakes and other synthetically faked footage should bear a clear notice about their fakeness starting in 2020. Failure to comply could be considered a crime the Cyberspace Administration of China stated on its website.[220] The Chinese government seems to be reserving the right to prosecute both users and online video platforms failing to abide by the rules.[221]
In the United Kingdom, producers of deepfake material can be prosecuted for harassment, but there are calls to make deepfake a specific crime;[222] in the United States, where charges as varied as identity theft, cyberstalking, and revenge porn have been pursued, the notion of a more comprehensive statute has also been discussed.[208]
In Canada, the Communications Security Establishment released a report which said that deepfakes could be used to interfere in Canadian politics, particularly to discredit politicians and influence voters.[223][224] As a result, there are multiple ways for citizens in Canada to deal with deepfakes if they are targeted by them.[225]
In India, there are no direct laws or regulation on AI or deepfakes, but there are provisions under the Indian Penal Code and Information Technology Act 2000/2008, which can be looked at for legal remedies, and the new proposed Digital India Act will have a chapter on AI and deepfakes in particular, as per the MoS Rajeev Chandrasekhar.[226]
In Europe, the European Union’s Artificial Intelligence Act (AI Act) takes a risk-based approach to regulating AI systems, including deepfakes. It establishes categories of “unacceptable risk,” “high risk,” “specific/limited or transparency risk”, and “minimal risk” to determine the level of regulatory obligations for AI providers and users. However, the lack of clear definitions for these risk categories in the context of deepfakes creates potential challenges for effective implementation. Legal scholars have raised concerns about the classification of deepfakes intended for political misinformation or the creation of non-consensual intimate imagery. Debate exists over whether such uses should always be considered “high-risk” AI systems, which would lead to stricter regulatory requirements. [227]
Additionally, “[a] few states have also passed legislation addressing deepfakes in political ads, with Minnesota and Texas criminalizing the use of deepfakes to influence elections. Washington State passed a law last year addressing ‘synthetic media,’ referring to audio or video recordings used in political ads.” Ivan Moreno, 3 Takeaways On How AI Is Forcing Publicity Rights To Evolve, Law360 (April 24, 2024, 8:35 PM EDT).
One might read this section up to this point and wonder what the IP (i.e., intellectual property) angle is, as the discussion seems more about new political norms than traditional intellectual property rights. But that is not the case: inherent in the blocking of deepfakes of politicians (and of others) is the notion that the candidate (or the media star) has an inherent right to control his or her own name, image, voice, and likeness. So there is an IP basis for some of these emerging policies. Caution, however, demands that we also consider the use of this technology by those who actually do enjoy the right to control the name, image, voice, and likeness being manipulated by AI.
Whatever do I mean by that? To answer that question, we turn to Indian politics. As the New York Times recently reported, Indian politicians are now using the same AI tools that deepfakers might use against them to create, on their own behalf, audio-visual presentations of things they never actually said, in languages they have never actually spoken. The hope is that such candidates can use such communications to endear themselves to the particular constituency receiving the messages, even though the candidate may never have said such things in any language, much less in the primary language of the recipient.
In other words, the person who controls his or her own name, image, voice, and likeness through the right of publicity can use AI to project to the public positive images and statements that are just as synthetic and unreal as a damaging image created by opponents. Of course, from an IP perspective, one can fictionalize one’s own life to a certain extent as part of controlling one’s own name, image, voice, and likeness. But if the person never said those words or spoke that language, the image of them doing so remains as fake and co-opting as J. Peterman’s biography based on Kramer’s exploits. That leads us to view deepfakes in a new light: they can “misinform voters” when created by the candidate just as easily as when created by the candidate’s opponents, and when created by the holder of the right of publicity just as much as when created by a usurper of those rights. This becomes another level of falsity that can perhaps multiply the so-called Liar’s Dividend, a whole other aspect of AI-related ethics, because one can create evidence of positive statements never uttered while at the same time disputing actual evidence of what was said. That sounds like a politician’s dream. (As noted by Cat Casey, “Liar’s Dividend” or “Deepfake Defense” is a term coined by law professors Bobby Chesney and Danielle Citron that “refers to the idea that bad actors making false claims about deepfake evidence are increasingly believable as the public learns more about deepfake threats. Simply put, the more the public learns about the believability of deepfakes, the more credible false claims become, even in the face of real evidence. As people’s trust in the veracity of visual content is further eroded, the challenge of evidence authentication increases. False claims of AI-generated content and deepfake detours have already graced the courtroom…. ‘The most insidious impact of #deepfakes may not be the fake content itself, but the ability to claim that real content is fake.’”)
Hence, the title of this subsection deploys the “negated double turnstile” logic symbol ⊭, which means “does not semantically entail”: “A ⊭ B” says “A does not guarantee the truth of B. In other words, A does not make B true.” Thus, the section heading “Deepfakes or Deep Fakes ⊭ Always Something Produced By Adversaries To Do Damage” reads “Deepfakes or Deep Fakes Are Not Always Something Produced By Adversaries To Do Damage.” Deepfakes can just as easily be created by proponents to falsely endear a candidate to a particular constituency by making it seem that the candidate actually said something that constituency wanted to hear. The notion of finding liability for the misuse of one’s own right of publicity is an interesting intellectual property concept upon first stating it. But it is close enough to traditional notions of false advertising and unfair competition to seem like a prohibition with sufficient intellectual property roots to take hold.
Playing Song ⇒ Having The Endorsement, Or Does It?
Campaigning politicians love to enter or exit venues walking through floating balloons and falling confetti as some popular song enhances the moment and vibe. Often selected as part of conveying a message about, or an image of, the candidate, campaign songs stretch way back across US history, as just one list shows. Whether it was Walter Mondale looking for the underdog mojo needed to score a knockout by turning to the Rocky theme “Gonna Fly Now” in 1984 against an incumbent Ronald Reagan, or Nikki Haley looking for the same thing by using “Eye of the Tiger” in the 2024 Republican primaries against Donald Trump, politicians have often set their messages to music.
But musicians often object to such use. For instance, Kim Davis, the embattled county clerk jailed for refusing to issue marriage licenses to same-sex couples, had many rally around her, and she emerged during at least one of those rallies to Survivor’s “Eye of the Tiger” playing in the background. That drew a rebuke from the song’s co-author, as noted in Variety. Similarly, as also noted by Variety, “Neil Young objected to Donald Trump’s use of ‘Rockin’ in the Free World’ at his presidential campaign announcement in June [2015].” As the Associated Press has noted, Bruce Springsteen, Phil Collins, and John Fogerty have also objected to Donald Trump’s use of their songs at campaign events, just as Springsteen had objected to Ronald Reagan’s use of “Born in the U.S.A.” years ago; other artists have likewise expressed discontent with their songs being played at Trump events. Some artists and songwriters have gone beyond simply voicing displeasure. For instance, as Entertainment Weekly noted, ABBA sent John McCain a cease and desist letter during the 2008 presidential campaign. Neil Young actually sued Donald Trump in an effort to stop Trump’s use of “Keep on Rockin’ in the Free World,” as one commentator noted.
Commentator Jay Gabler summed up the question and answer quite nicely before the last presidential election:
So why is this still a thing? Why can’t musicians get political candidates to stop using their music at campaign events? Well, it’s complicated, but essentially, the reality is that in almost all cases, artists actually don’t have the legal right to tell a campaign to stop playing their songs at events. The reason for that is that most major artists sign licensing deals with performance rights organizations like ASCAP and BMI.
Any public event, if they want to play music, needs to pay for a license, but once organizers have paid for that license, they can pretty much play any songs they want. If that kind of agreement weren’t in place, Neil Young and ABBA would have to strike a deal with every restaurant, bowling alley, and wedding event that wants to use their music. Obviously, that’s way too complicated, so that’s why artists sign rights agreements that cover a wide range of venues and events — including rallies, whether we’re talking politicians or monster trucks.
[Gabler, “Music News: Why can’t musicians get politicians to stop playing their songs?,” The Current, October 1, 2020]
Of course, Gabler noted that his answer still left issues for discussion:
So just telling a candidate to cut it out basically isn’t good enough. The next thing some artists have tried is revising their licensing agreements to make certain songs unavailable specifically for political events. That’s what Neil Young has done with his song “Rockin’ in the Free World,” which has been a favorite of the President’s. But there’s some legal ambiguity there, too, because music rights licensing is regulated under federal law due to antitrust concerns, so even if Neil Young wants to make exemptions to his license, it may not really be up to him.
[Gabler, “Music News: Why can’t musicians get politicians to stop playing their songs?,” The Current, October 1, 2020]
The Artist Rights Alliance in fact asked the major political parties to require their candidates to “seek consent from featured recording artists and songwriters before using their music in campaign and political settings. This is the only way to effectively protect your candidates from legal risk, unnecessary public controversy, and the moral quagmire that comes from falsely claiming or implying an artist’s support or distorting an artist’s expression in such a high stakes public way.” As the Copyright Alliance has noted, questions of fair use are also implicated.
But developments since 2020 have added some clarity. For instance, in 2020, Guyanese-British singer Eddy Grant brought suit against former President Donald Trump over the unauthorized use of Grant’s iconic song “Electric Avenue” in a video endorsing Trump’s reelection campaign posted to Trump’s personal Twitter page, as Diane Nelson reported. The court refused to dismiss the matter on fair use or any other basis. Similarly, Carolyn Wimbly Martin and Ethan Barr have noted that copyright preemption may complicate resort to the right of publicity for artists:
The use poses this hot-button campaign question: Can politicians freely use popular music at live rallies and live events without regard to the interests of the artists that created and performed the works?… Some experts have opined that copyright law is not the only arena in which to recover for improper political use of music. They have focused on a panoply of legal principles designed to protect celebrity image and reputation. In their view, trademark dilution, right of publicity, and false endorsement claims may provide redress. While all three legal theories are particularly viable (along with copyright law) when it comes to recorded content like TV commercials, there is a strong argument that for live events, copyright law may be a sole avenue for relief. This notion requires a closer look.
***
Similarly, approximately half of U.S. states have enacted laws to protect the reputation of prominent people, granting them a right of publicity, which permits them to sue for misappropriation of their name, image, or likeness. Accordingly, in those states, the question is likely whether an artist may claim that a live political event improperly used recordings of his or her voice (a distinctive characteristic) at a rally, or whether the use of the music may be deemed a “false endorsement” of that campaign event. See Browne v. McCain, No. CV 08-05334-RGK (Ex), 2009 U.S. Dist. LEXIS 141097, at *20 (C.D. Cal. Feb. 20, 2009). For example, California Civil Code § 3344 permits a cause of action for “knowingly us[ing] another’s name, voice, signature, photograph, or likeness, in any manner” without consent for advertising purposes. While the law is still being written here, it is possible that the publicity claims, which merge the content of the music with the performers’ persona, may be viewed as copyright claims by another name. In that case, courts may reasonably conclude that the publicity and false endorsement claims are preempted by federal copyright law. Hence, musicians are best served by starting with copyright law as the most suitable framework for their case.
[Carolyn Wimbly Martin and Ethan Barr, Notes and Votes: Use of Copyrighted Music at Live Political Events, Copyright Law, October 22, 2020]
So this leaves us in a bit of a quandary. Artists have certainly embraced the logic of “Playing Song ⇒ Having The Endorsement,” which reads as “playing the song implies the performer’s or songwriter’s endorsement,” and have used that as a basis to challenge such use, formally and informally. But the real question is whether it does.
That very notion may go too far. It does not seem that one would suggest that the clothing designer has endorsed a candidate who prefers that designer’s suits or ties and wears them when attending rallies. Similarly, the poet or the author quoted during a candidate’s speech would have a hard time claiming that the candidate’s quoting the poet/author’s work is readily understood to be an indicium of the writer’s support for the candidate. In fact, it is the opposite—it is the candidate endorsing the author or performer. A candidate can be a fan of an artist without creating an impression that feeling or status is mutual—think Chris Christie and Bruce Springsteen, at least in the time before the more recent friendliness some see. But, this would not be the first time that I concluded that musicians’ claims seemed to stretch the boundaries of IP law a little further than necessary, as I did here (in discussing claims against Ed Sheeran) and here (discussing claims against Led Zeppelin). Of course, that conclusion is based on a US law upbringing and perspective, and I might feel very differently if raised in a full-fledged moral rights jurisdiction where a creator’s connection to a work is more likely recognized as personal and undetachable. It will be interesting to see how this plays out, pun intended.
Fair Use ⟥ A Fact-Sensitive Determination That ⟣ Easy
George Santos and Jimmy Kimmel are adversaries in an IP+ lawsuit in the United States District Court for the Southern District of New York. Santos is a former United States Congressman from Long Island. First elected in 2022, Santos became famous, or infamous, for the false and inflated resume on which he had been elected, and for the criminal and ethical cloud that led to his expulsion from Congress on December 1, 2023. After being “expelled from the House of Representatives for a number of alleged crimes and falsehoods,” he remained a celebrity of sorts. Cashing in on that celebrity, Santos began providing, for a fee, personalized messages to “fans” and others through Cameo.com. Late-night talk show host Jimmy Kimmel (likely through a staff member) posed as fans of Santos and sought a series of video messages celebrating various fictitious life events, such as asking Santos to “congratulate a friend for coming out as a ‘furry’ and adopting the persona of a platypus mixed with a beaver.” See Kimmel Atty Defends Airing ‘Patently Ridiculous’ Santos Clips, Law360, April 18, 2024. The Kimmel-requested, Santos-created videos were then edited by Jimmy Kimmel Live! staff, aired on that show, and posted to social media. On February 17, 2024, Santos filed a four-count complaint, which some have described as silly, alleging copyright infringement, fraud in the inducement, breach of contract, and unjust enrichment. Kimmel moved to dismiss on April 29, 2024 (and Santos must respond to the motion or amend the complaint by May 24, 2024). While we cannot know yet exactly what Santos will do in response to the motion, we can focus on the copyright claim pled and the motion made against that claim (and perhaps even blend in the fraud and contract arguments that may ultimately prove decisive).
Santos claims willful infringement by Kimmel, who does not deny rebroadcasting all or parts of the videos he solicited. The question is whether Kimmel’s broadcasting and posting constitute fair use. And as readers of this blogger know, we have repeatedly struggled here with fair use questions because that doctrine demands such fact-sensitive application and serves somewhat divergent interests. Campbell v. Acuff-Rose Music, Inc., 510 U.S. 569, 577, 581 (1994) (“[t]he fair use doctrine thus ‘permits [and requires] courts to avoid rigid application of the copyright statute when, on occasion, it would stifle the very creativity which that law is designed to foster’” and the fair use inquiry requires that any particular use of copyrighted material “be judged, case by case, in light of the ends of the copyright law”). As Lloyd Weinreb noted, “fair use depends on a calculus of incommensurables.” Fair Use, 67 Fordham L. Rev. 1291, 1306 (1999). As he went on to say, “[t]he great objection to…fair use is that it affords no predictability” and “unpredictability is costly, if for no other reason than that it engenders litigation.” Id. at 1309.
Kimmel raises a solid fair use defense, since commentary, criticism, and parody lie at the heart of what he did and at the heart of fair use, though other uses are recognized as typical fair ones by the Copyright Office, the statute, and judicial opinions (like those collected in the US Copyright Office Fair Use Index). Indeed, Kimmel asserts that his conduct represented a “paradigmatic example of protected fair use” and “classic fair use” because his use involved “political commentary on, and criticism of, video clips created and disseminated by a major public figure.” Kimmel Memorandum of Law (“KMOL”) at 1-2. He argued that his works were transformative, and the use fair, even though he had used unaltered Santos Cameo videos. Kimmel’s brief noted that “‘faithfully reproduc[ing] an original work without alteration’ can be transformative in light of the ‘altered purpose or context of the work, as evidenced by surrounding commentary or criticism.’ Swatch Grp. Mgmt. Servs. Ltd. v. Bloomberg L.P., 756 F.3d 73, 84 (2d Cir. 2014) (holding that publication of full audio recording of plaintiff company’s investor phone call was transformative fair use).” KMOL at 11. As Kimmel notes at KMOL at 13-14:
the Videos are the “target[]” of the criticism and commentary conveyed by the “Will Santos Say It?” segments. The “purpose” of using the Videos – to mock the shameless willingness of a former Congressman to make internet videos saying the absurd things requested of him while under indictment for theft and fraud – is “completely different” from Santos’ ostensible purpose in creating the Videos to congratulate or otherwise communicate with the requester’s friends or family. Indeed, the Supreme Court has recognized that biting – even offensive – mockery of public officials through “exploitation of … politically embarrassing events” has long “played a prominent role in public and political debate.” Hustler Mag., Inc. v. Falwell, 485 U.S. 46, 54 (1988). In short, the use of the Videos was for a specifically enumerated statutory purpose of criticism and comment and was highly transformative.
These arguments, and the case law cited concerning similar late-night and comedy situations, leave the fair use assertion well supported. KMOL at 9-21. As one commentator notes, quoting from pages 1 and 9 of the motion papers, “‘In this context,’ the motion continues, Kimmel’s decision ‘to test whether, even after being expelled and indicted, there was anything Santos would decline to publicly say in exchange for a few hundred dollars’ served as ‘a quintessential example of a fair use,’ since he was using the videos ‘to comment on and mock a controversial political figure to a broad audience.’”
But Kimmel will face some challenges here, as “fair” is an element of “fair use,” and he inspired the creation of these works under false pretenses. As Lloyd Weinreb also presciently observed (at 1308) in 1999, “[a]nother incommensurable element of fair use is what is typically referred to as ‘unclean hands’ but is, I think, more accurately described simply as fairness. That, I think, was the élément gris[e] that drove the factor analysis in Harper & Row [v. Nation Enters., 471 U.S. 539, 542-43, 562-63 (1985)]. As the Court perceived the facts, the editor of The Nation was a chiseler, a category only a little removed from crook or, in the nineteenth-century idiom, pirate, not so much because of the use itself but because of the manner in which he obtained the manuscript. If that perception is unchallenged, the case is over; theft is not a fair use. In Hustler [Magazine, Inc. v. Moral Majority, Inc., 796 F.2d 1148 (9th Cir. 1986)], the centrality of plain fairness is even more apparent.” Additionally, Kimmel lured Santos into creating these videos and exposing himself to ridicule. While that in and of itself seems part of the comment and criticism on which Kimmel’s argument relies, it does not appear that Kimmel has some of the contractual protections that insulated Sacha Baron Cohen from liability after he induced Judge Roy Moore into an interview, another item that we have discussed here before.
As the section head notes, “Fair Use ⟥ A Fact-Sensitive Determination That ⟣ Easy,” as in “fair use will always be a fact-sensitive determination that will never be easy.”
∴ Stay Tuned As This Election Year Unfolds ∵ More IP ⊢ IP Issues Are Likely To Follow
The “therefore” ∴ and related “because” ∵ symbols illustrate the conclusion to be drawn from the foregoing: “therefore stay tuned as this election year unfolds because more interesting politics are likely to yield more intellectual property issues.”