Netflix’s The Social Dilemma, released last fall, provides a nuanced behind-the-scenes glimpse of how social media platforms curate personal data. Eye-opening interviews from former Google, Apple, Facebook, Twitter, and YouTube executives and engineers leave viewers with an understanding that if social media companies continue with their current operating paradigm, embarrassing status updates and accidental “Likes” will be the least of our worries. As the recent attacks on the U.S. Capitol have shown, social media platforms can facilitate the spread of online disinformation and even fuel real-world violence.
Computer scientist Jaron Lanier states in the film, “If we go down the status quo for, let’s say, another 20 years, we probably destroy our civilization through willful ignorance.” While this statement is arguably hyperbolic, the film details specific ways that people’s behaviors are shaped using powerful feedback loops of content and engagement, where each click, each hour of scrolling, each feeling of “missing out,” and every confirmation of strongly held beliefs pulls users further into their online worlds. His concern, it seems, is that people are unknowingly losing their free will and critical thinking to autonomous algorithms that control their online environments. In a sense, people are increasingly beholden to their online worlds and are rapidly losing the ability to curate their own realities and experiences.
Developing technology to turn a profit is not a foreign concept in a capitalistic society, but the predictive technology employed by these companies does little to protect the well-being of consumers, and instead focuses almost exclusively on optimizing monetization. Facebook dismissed the film’s criticisms, with spokespeople stating that the movie intentionally buried “the substance in sensationalism.” To a degree, this is true, as The Social Dilemma quite clearly used emotionally heightened language to generate distrust in social media platforms and entertain viewers. However, the critical facts it presents about data collection, analytic strategies, and the pervasive use of algorithm-driven ads are accurate. As Tristan Harris, Center for Humane Technology co-founder, states, “If you’re not paying for the product, then you’re the product.”
Social media companies harvest an excessive amount of user-specific data that, while perhaps not personally identifiable, is still sensitive in nature and, in tandem with their proprietary algorithms, keeps users engaged. Such data are gathered under the guise of “sharing with friends”; showing “interest” in certain causes, groups, or events; or even just hovering over a post while scrolling. The longer a user remains engaged with content, the more exposure they have to targeted ads. As Facebook admits, “selling ads allows us to offer everyone the ability to connect for free,” but the reality is that this “freemium” model manipulates both emotion and behavior without explicit user consent.
Recent research has revealed the psychological and behavioral influence that social media platforms have over individuals. In essence, social media companies strategically manipulate psychological feedback and reward systems, with design and content teams purposely and carefully crafting platforms to keep users scrolling and clicking for longer periods of time. These platforms intentionally employ algorithms that are biased toward generating monetary success rather than serving the consumer objectively. Furthermore, we need to acknowledge that predictive technology is not magic, even though the film at times portrays it as the equivalent of waving a wand. If algorithms are so complex that few understand how they work, often termed “black box” algorithms, then deploying such predictive technology fundamentally undermines any claim that it can be trusted to cause no harm.
The ways that platforms engage in algorithm design and implementation, data collection, and data monetization require an overhaul, with improved privacy standards and required impact assessments of predictive technology prior to launch. Though some headway has been made at the state level, such as the recently passed amendments to the California Consumer Privacy Act and hearings on the third iteration of the Washington Privacy Act, there is a need for a comprehensive federal privacy law that protects consumers’ personal and sensitive data and dictates how companies may use those data. Third-party oversight is necessary to address the dangers of a business model that maximizes engagement without accounting for the psychological and privacy ramifications for consumers.
A potential reform measure for the industry would be the creation of third-party review boards to ensure a better balance between generating profits, as is expected of any business, and protecting consumers’ psychological well-being from potentially harmful data harvesting practices and algorithms. In research settings, third-party institutional review boards (IRBs) require that activities involving people follow strict protocols and restrictions. This is possible for Big Tech as well.
Google previously established a panel of experts to help with issues related to consumer privacy rights, including an Oxford philosopher, a civil-rights activist, and a United Nations representative. However, this was still an attempt to self-regulate rather than to have an objective mediator set the regulations. Facebook works with the Oversight Board, which is composed of 40 members from diverse disciplines and from across the world. The Oversight Board is not designed as an extension of Facebook’s existing content review process. Rather, it reviews select, “highly emblematic cases” to determine whether Facebook is following its own stated policies. It may be difficult to establish a type of IRB system for all Big Tech companies, considering issues of confidentiality and proprietary predictive technology, but it is not impossible.
Additionally, to broaden accountability, social media companies should face penalties for violating civil liberties and privacy. In the United States, Congress needs to expand the enforcement powers of the Federal Trade Commission (FTC) and provide additional resources for investigations. In July 2019, the FTC announced an unprecedented settlement that required Facebook to pay $5 billion in civil penalties for violations of an existing privacy order in relation to the sharing of user data with Cambridge Analytica.
Normally, the FTC cannot conduct these types of investigations unless they relate to violations of an existing FTC order. The 2019 settlement stemmed from an investigation of violations of the FTC’s 2012 order with Facebook, which focused specifically on misrepresentations of consumers’ privacy and sharing of user data with third parties. The FTC should be granted broader civil penalty authority so that it can investigate more instances of civil liberties and privacy violations. Additionally, the Securities and Exchange Commission should consider imposing additional requirements for greater transparency regarding advertisement counts and views, and how those data are used with the platforms’ algorithms to generate revenue. Violations of these types of securities laws could potentially instigate criminal investigations, which would add another layer of accountability.
Scrolling through our timelines is often more enticing than dealing with the reality of busy and challenging day-to-day lives, especially when we are in the midst of an unrelenting pandemic and a contentious start to the new Biden-Harris administration. When we are happy, we can share our joy publicly and win the attention of friends old and new. When we are angry, we can find fellow angry people to commiserate with. When we are anxious and want reassurance, we can follow those who at least claim to have solutions. What we need to realize is that the content we view and generate is also manipulated to keep us where we are, connected and enmeshed in an echo chamber of like-minded, or even similarly prejudiced, groups. Some argue that ideology, rather than online echo chambers, is to blame for the current divisiveness in our country. However, social media platforms undeniably act as willing facilitators that perpetuate the divide.
About the Author:
Divya Ramjee is a PhD candidate and adjunct professor in American University’s Department of Justice, Law & Criminology. Her research focuses on the intersection of crime and technology, including the applications of artificial intelligence in the fields of criminology and criminal justice, law, and security. Her views are her own.
Dr. Margaret Cunningham is an experimental psychologist and the Principal Research Scientist for Human Behavior at Forcepoint’s X-Lab. In this role, she serves as the behavioral science subject matter expert in an interdisciplinary security team driving the development of human-centric security solutions. Her views are her own.