The (Inevitable) Dark Side of ChatGPT

Workers in Kenya were paid less than $2 per hour to weed out hateful content to make the system ‘safe’ for Western users.

Photo: A ChatGPT prompt is shown on a device in New York on January 5, 2023 [File: AP/Peter Morgan]

25 January 2023 | James Porteous | Clipper Media News

ChatGPT is a chatbot launched by OpenAI in November 2022. It is built on top of OpenAI’s GPT-3 family of large language models and is fine-tuned with both supervised and reinforcement learning techniques. (Source: Wikipedia)

23 January 2023 | Nanjala Nyabola | Al Jazeera English

On January 18, Time magazine published revelations that alarmed, if not necessarily surprised, many who work in artificial intelligence.

The news concerned ChatGPT, an advanced AI chatbot that is both hailed as one of the most intelligent AI systems built to date and feared as a new frontier in potential plagiarism and the erosion of craft in writing.

Many had wondered how ChatGPT, which stands for Chat Generative Pre-trained Transformer, had improved upon earlier versions of this technology that would quickly descend into hate speech.

The answer came in the Time magazine piece: dozens of Kenyan workers were paid less than $2 per hour to process an endless amount of violent and hateful content in order to make a system primarily marketed to Western users safer.

It should be clear to anyone paying attention that our current paradigm of digitalisation has a labour problem. We have pivoted, and are still pivoting, away from the ideal of an open internet built around communities of shared interest towards one dominated by the commercial prerogatives of a handful of companies located in specific geographies.

In this model, large companies maximise extraction and accumulation for their owners at the expense not just of their workers but also of their users. Users are sold the lie that they are participating in a community, but the more dominant these corporations become, the more egregious the power imbalance between owners and users becomes.

“Community” increasingly means that ordinary people absorb the moral and social costs of the unchecked growth of these companies, while their owners absorb the profit and the acclaim. And a critical mass of underpaid labour is contracted under the most tenuous conditions legally possible in order to sustain the illusion of a better internet.

ChatGPT is only the latest innovation to embody this.

Much has been written about Facebook, YouTube and the model of content moderation that actually provided the blueprint for the ChatGPT outsourcing.

Content moderators are tasked with consuming a constant stream of the worst things that people put on these platforms and flagging them for takedown or further action. Very often, these are posts about sexual and other kinds of violence.

Nationals of the countries where the companies are located have sued over the psychological toll that the work has taken on them. In 2020, for example, Facebook was forced to pay $52m to US content moderators for the post-traumatic stress disorder (PTSD) they experienced on the job.

While there is growing general awareness of secondary trauma and the toll that witnessing violence takes on people, we still do not fully understand what exposure to this kind of content for a full workweek does to the human body.

We know that journalists and aid workers, for example, often return from conflict zones with serious symptoms of PTSD, and that even reading reports emerging from these conflict zones can have a psychological effect.

Similar studies on the impact of content moderation work on people are harder to complete because of the non-disclosure agreements that these moderators are often asked to sign before they take the job.


We also know, through the testimony provided by Facebook whistle-blower Frances Haugen, that the company’s decision to underinvest in proper content moderation was an economic one. Twitter, under Elon Musk, has likewise moved to slash costs by firing a large number of content moderators.

The failure to provide proper content moderation has resulted in social networking platforms carrying a growing amount of toxicity. The harms that arise from that have had major implications in the analogue world.

In Myanmar, Facebook has been accused of enabling genocide; in Ethiopia and the United States, of allowing incitement to violence.

Indeed, the field of content moderation and the problems that plague it are a good illustration of what is wrong with the current digitalisation model.

The decision to use a Kenyan company to teach a US chatbot not to be hateful must be understood in the context of a deliberate decision to accelerate the accumulation of profit at the expense of meaningful guardrails for users.

These companies promise that the human element is only a stopgap response before the AI system is advanced enough to do the work alone. But this claim does nothing for the employees who are being exploited today.

Nor does it address the fact that people – the languages they speak and the meanings they ascribe to contexts or situations – are highly malleable and dynamic, which means the need for content moderation will never disappear.

So what will be done for the moderators who are being harmed today, and how will the business practice change fundamentally to protect the moderators who will definitely be needed tomorrow?

If this is all starting to sound like sweatshops are making the digital age work, it should – because they are. A model of digitalisation led by an instinct to protect the interests of those who profit the most from the system instead of those who actually make it work leaves billions of people vulnerable to myriad forms of social and economic exploitation, the impact of which we still do not fully understand.

It’s time to lay to rest the myth that digitalisation led by corporate interests is somehow going to eschew all the past excesses of mercantilism and greed simply because the people who own these companies wear T-shirts and promise to do no evil.

History is replete with examples of how, left to their own devices, those with the interest and the opportunity to accumulate will do so, laying waste to the rights that we need to protect the most vulnerable among us.

We have to return to the basics of why we needed to fight for and articulate labour rights in the last century. Labour rights are human rights, and this latest scandal is a timely reminder that we stand to lose a great deal when we stop paying attention to them because we are distracted by the latest shiny new thing.

Nanjala Nyabola is a political analyst and the author of “Digital Democracy, Analogue Politics”.
