‘Dangerous, Anticompetitive Behavior’: Big Brands Colluding to Control Online Speech
A new U.S. House Judiciary Committee report claims the Global Alliance for Responsible Media has been coordinating efforts to demonetize disfavored content related to vaccines, political views and more — activity that may have far-reaching implications for online discourse and consumer choice in media.
A congressional investigation uncovered allegations that some of the world’s largest brands and advertising agencies are colluding to control online speech through coordinated boycotts and content demonetization schemes.
A 39-page interim staff report, released Wednesday by the U.S. House of Representatives Committee on the Judiciary, claims that the Global Alliance for Responsible Media (GARM) — an initiative of the World Federation of Advertisers (WFA) — is using its market power to silence disfavored voices in possible violation of antitrust laws.
“Through GARM, large corporations, advertising agencies, and industry associations participated in boycotts and other coordinated action to demonetize platforms, podcasts, news outlets, and other content deemed disfavored by GARM and its members,” the report states.
The committee’s investigation, which focused on GARM’s activities since its creation in 2019, examined its influence over major social media platforms, news outlets and content creators.
The report suggests that GARM’s actions may have far-reaching implications for online discourse and consumer choice in media.
‘Sounds a lot like a cartel to me’
GARM was established in 2019 by the WFA, which represents over 150 of the world’s biggest brands and more than 60 national advertiser associations globally.
According to this week’s congressional report, GARM’s influence stems from the collective power of its members. “WFA members represent roughly 90% of global advertising spend, or almost one trillion dollars annually,” the document states.
The alliance includes major players in the advertising industry:
- Every major advertising agency holding company.
- GroupM, the world’s largest media buying agency, on its Steer Team.
- Four large corporations — Unilever, Mars, Diageo, and Procter & Gamble — that together spend billions annually on advertising.
GARM’s Steer Team, which acts as a board of directors, is closely involved in day-to-day operations. The initiative reports to the WFA Executive Committee, which includes representatives from major corporations such as AB InBev, L’Oréal, Nestlé and IBM.
Robert Rakowitz, GARM’s initiative lead and co-founder, plays a central role in the organization’s activities. The report cites internal emails in which Rakowitz expressed views on free speech, describing an “extreme global interpretation of the US Constitution” as problematic.
GARM claims to focus on “content monetization” rather than “content moderation.” However, the report argues that these areas are “inextricably linked,” suggesting that GARM’s work effectively influences what content appears online.
In Wednesday’s congressional hearing on GARM, Rep. Jim Jordan (R-Ohio), while questioning the CEO of GroupM and GARM board member Christian Juhl, said that GARM “sounds a lot like a cartel to me.”
Alleged antitrust violations and examples
The congressional report alleges that GARM’s activities may violate Section 1 of the Sherman Anti-Trust Act, which prohibits unreasonable restraints of trade. The committee report cites several examples of GARM’s alleged coordinated actions:
1. Twitter boycott after Elon Musk acquisition. Following Musk’s acquisition of Twitter (now known as X) in October 2022, GARM allegedly orchestrated a boycott of the platform. According to the report, GARM recommended its members “stop … all paid advertisement” on Twitter in response to the takeover.
Internal documents show that GARM held “extensive debriefing and discussion around [Musk’s] takeover of Twitter,” providing opportunities for the boycott to be organized. The report claims that GARM later boasted about “taking on Elon Musk” and noted that Twitter was “80% below revenue forecasts” as a result.
2. Pressure on Spotify over Joe Rogan podcast. In early 2022, GARM and its Steer Team allegedly pressured Spotify over content on Rogan’s podcast, “The Joe Rogan Experience.” The report states that GARM members urged action against Spotify due to alleged misinformation on Rogan’s show, particularly regarding COVID-19 vaccines after Rogan said that young, healthy people didn’t need them.
Rogan later featured Dr. Robert Malone on his podcast, which prompted GroupM to reach out to Spotify after musician Neil Young removed his content from the platform in protest over vaccine-skeptical material.
Internal emails cited in the report show Rakowitz coordinating with member companies to formulate responses to Spotify. In one instance, he wrote that he “can’t publicly advise all clients to do X — that gets us into hot water by way of anticompetitive and collusive behaviors.”
3. Efforts to demonetize certain news outlets. The report alleges that GARM and its members discussed strategies to block certain news outlets, including Fox News, The Daily Wire and Breitbart News.
An internal email from a GARM Steer Team member describes monitoring these outlets closely. The email states that as much as he “hated their ideology and bulls**t,” his company “couldn’t really justify blocking them for misguided opinion[s]” but that it “watched them very carefully and it didn’t take long for them to cross the line.”
The congressional committee argued that these coordinated actions, if proven, could constitute illegal restraints of trade that harm consumers by limiting their choices and access to diverse viewpoints online.
GARM’s influence on political content and elections
Through their content moderation efforts, GARM and its members attempted to influence political discourse and election outcomes — including pushing for coordinated action around the 2020 U.S. presidential election, according to the report.
In an October 2020 email, Rakowitz suggested telling Facebook it was “at a crossroads for the platform and fence sitting on content curation and moderation” and that it should apply its COVID-19 content moderation policies to election-related content.
The report cites an instance of GARM members pressuring Facebook to label a campaign advertisement by then-President Donald Trump as misinformation. When Facebook refused, citing its policy of not fact-checking political candidates’ ads, Rakowitz allegedly described the decision as “honestly reprehensible” in an internal email.
The report also claims that GARM members expressed concerns about Musk’s handling of the Hunter Biden laptop story on Twitter. After Musk released internal Twitter documents about the platform’s suppression of the story, a GARM member reportedly described Musk’s actions as an “overtly partisan take.”
Misinformation definition and application. In 2022, GARM added a definition of misinformation to its framework, describing it as “verifiably false or willfully misleading content that is directly connected to user or societal harm.”
The report suggests this broad definition could be weaponized against disfavored political views.
Committee members said Wednesday that these actions demonstrate GARM’s potential to influence political discourse and election outcomes by controlling which content receives advertising revenue and visibility on major platforms.
GARM’s partnerships with ad-tech companies and AI integration
The congressional report delves into GARM’s relationships with advertising technology companies and plans to integrate its framework into artificial intelligence (AI) and machine learning tools.
According to the report, GARM partnered with several “ad-tech partners” that offer solutions to help brands understand where their advertisements appear and what content surrounds them.
The report alleges that membership in GARM was conditioned on these partners agreeing “to make commensurate changes to business operations in pursuit of GARM’s goals.”
According to the congressional committee, this arrangement allowed GARM’s biases to be “baked directly into the solutions, allowing brands to seamlessly integrate GARM’s censorship.”
AI and machine learning integration
GARM’s plans for the future involve pushing its framework into AI solutions, according to the report. The committee said it was concerned that GARM’s partners are developing AI tools that will integrate GARM’s standards seamlessly across social media platforms.
“Such an automated censorship effort could result in the demonetization of any views or voices that GARM’s advertising cartel dislikes, potentially without any human involvement at all,” the report states.
Specific examples cited in the report include:
1. Zefr, a GARM ad-tech partner, which claims its “proprietary discriminative AI is powered by years of training data on platforms, and goes beyond keyword and text-based analyses, combining AI and ground truth data from global fact checking organizations that is mapped to the industry standards” set by GARM.
2. YouTube’s incorporation of Zefr-powered solutions to prevent advertisements from appearing next to content that violates GARM’s standards.
The combination of GARM’s framework with AI-powered content moderation tools could lead to opaque and potentially biased decisions about which content receives advertising revenue, ultimately limiting consumer choice and diverse viewpoints online, according to the report.
Connections to government agencies and censorship efforts
The congressional report alleges connections between GARM’s partners and government agencies involved in content moderation efforts. Specifically, it points to collaboration between GARM ad-tech partner Channel Factory and the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA).
According to the report, Channel Factory worked with CISA to develop a “common lexicon” for discussing misinformation.
An email cited in the report shows Channel Factory’s global chief strategy officer sharing this lexicon with GARM’s initiative lead, stating, “The industry will need a common lexicon and detailed definitions in order to make progress … attached is the lexicon we developed with CISA/DHS … which may provide” a useful starting point.
This type of collaboration could lead to government influence over private-sector content moderation practices, the committee report stated.
The report noted that Channel Factory is also a member of YouTube’s Measurement Program, suggesting that these connections could have far-reaching implications for online content moderation.
Former U.S. Department of State official Mike Benz, in a video posted on X Wednesday, alleged that U.S. government-linked efforts to control online content with groups like GARM go back at least to 2017.
GARM engages in ‘dangerous, anticompetitive behavior’
The House Judiciary Committee concluded that GARM’s actions may violate antitrust laws and threaten free speech and consumer choice online.
According to the report, GARM’s members’ collective power allows them to achieve through coordination what they could not accomplish individually.
The report states:
“If collusion among powerful corporations capable of collectively demonetizing, and in effect eliminating, certain views and voices is allowed to continue, the ability of countless American consumers to choose what to read and listen to, or even have their speech or writing reach other Americans, will be destroyed.”
The committee emphasized that antitrust laws apply regardless of stated motives, with the report noting that federal antitrust laws “do not diminish because GARM or its members claim to have good intentions.”
The committee said it will continue its oversight of GARM and evaluate the adequacy of existing antitrust laws. It suggested that legislative reforms may be necessary to address what it describes as “dangerous, anticompetitive behavior.”
Watch the House Judiciary Committee’s July 10 hearing: