FTC Report on Social Media Bots and Deceptive Advertising | Dorsey & Whitney LLP


We have previously blogged about the rise of computer-generated imagery (CGI) influencers, one form of social media bot currently invading the online world. Bots are automated software programs that perform tasks according to a set of algorithms. Social media bots run on social media platforms and are generally online accounts that automatically generate posts and otherwise simulate human behavior. Legitimate uses include chatbots that provide automated customer support. But harmful or fraudulent uses of social media bots are widespread and can involve the creation of fake accounts that amplify false or deceptive product reviews, or that artificially inflate a company’s online following, among other misuses. Over 37% of all Internet traffic is reported to be the work of bots.

Against this backdrop, the Federal Trade Commission (FTC) announced in July 2020 that it had sent a report to Congress on the topic of social media bots and deceptive marketing. The report responded to direction the FTC had received from Congress in late 2019, asking the FTC to describe for the United States Senate Committee on Appropriations “the growing social media bot market as well as the use of social media bots in online advertising” and “how their use might constitute a deceptive practice.” S. Rept. 116-111, 116th Congress, 1st Sess. at 70-71 (Sept. 19, 2019); see 165 Cong. Rec. S7206 (Dec. 19, 2019).

Here are some highlights of the FTC’s report to the Committee:

  • The malicious use of social media bots is “cheap and easy” and “hard for platforms to detect,” and so remains a “serious issue.”
  • 90% of social media bots are used for commercial purposes. Improper commercial use occurs when influencers use them to boost popularity, or online publishers use them to increase the number of clicks an ad receives (thus increasing revenues), among other misuses.
  • Examples of past FTC enforcement action against social media bots include a 2019 complaint against Devumi, a company that sold fake followers, subscribers, and views to people trying to artificially inflate their social media presence. The FTC also took action in 2018 against three different online dating services alleged to have hosted fake profiles or used bots on their sites.
  • The FTC’s enforcement actions demonstrate “the ability of the FTC Act to adapt to changing business and consumer behavior as well as to new forms of advertising.” But the FTC pointed out that its authority to stop the spread of social media bots is limited by the powers granted under that Act: in any given case, the FTC must show that the use of social media bots constitutes a deceptive or unfair practice in or affecting commerce before it can take action.

It is unclear from the report how much the FTC will prioritize enforcement action against social media bots. However, a follow-up statement on the report from FTC Commissioner Rohit Chopra made clear that the FTC views the social media platforms as doing too little to police bots, and that “a comprehensive solution may require the imposition of specific requirements to increase accountability and transparency,” possibly with the intervention of Congress. Commissioner Chopra also made clear that the FTC could explore “writ[ing] rules to ensure there is accountability for undisclosed influencer connections and deceptively formatted ads” and must “also fundamentally reform its approach to fake reviews.”

Given the increased reliance by companies on social media marketing and online sales in the pandemic era, we expect that deceptive and unfair online marketing issues will continue to be a significant problem requiring further regulatory action.
