ChatGPT is here to stay, and we're only now beginning to understand how it's changing the digital landscape.
As ChatGPT is used more and more, we need to think about how it affects the distribution of burdens and benefits across society. This is the question of distributive justice.
It isn't just a future-looking question.
Case: Sama & Outsourced Labor
When OpenAI was developing ChatGPT, it needed workers to train the model not to spit back the worst parts of its data set, which was drawn from across the internet. In its early stages, ChatGPT would reproduce profanity, racist and sexist remarks, violence, and sexual abuse. To make ChatGPT safe for users, OpenAI had to train a detector to filter out toxic content before it reached the user. This required human labor.
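To make the mechanism concrete, here's a minimal sketch of the kind of detector that human-labeled data feeds, assuming a toy bag-of-words classifier built with scikit-learn. The examples, labels, and threshold are all hypothetical, and OpenAI's actual safety systems are far more sophisticated; the point is only that every labeled example in such a pipeline was read and categorized by a person.

```python
# A toy content filter trained on human-labeled examples.
# This is NOT OpenAI's system; it only illustrates why human
# labels are the raw material for this kind of safety layer.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled data: 1 = unsafe, 0 = safe. Real data
# sets contain many thousands of examples, each one read and
# categorized by a human worker.
texts = [
    "have a great day",                  # safe
    "here is the weather report",        # safe
    "graphic description of violence",   # unsafe
    "explicit abusive content",          # unsafe
]
labels = [0, 0, 1, 1]

# TF-IDF features plus logistic regression: the simplest
# plausible text classifier.
detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
detector.fit(texts, labels)

def filter_output(candidate: str, threshold: float = 0.5) -> str:
    """Screen model output before it reaches the user."""
    p_unsafe = detector.predict_proba([candidate])[0][1]
    return "[content withheld]" if p_unsafe >= threshold else candidate

print(filter_output("here is the weather report"))
```

The labeling work behind even a toy like this is the part that doesn't show up in the product.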
OpenAI reached an agreement with the Kenyan branch of Sama, a San Francisco-based firm that employs workers to label data for notable Silicon Valley clients. The data labelers in Kenya were tasked with classifying and filtering text about topics from child sexual abuse to bestiality to self-harm, and they were paid a take-home wage somewhere between $1.32 and $2 per hour.
In TIME's investigation of the deal, one worker told reporters that "he suffered from recurring visions after reading a graphic description of a man having sex with a dog in the presence of a young child." "This was torture," he said. Though workers were entitled to attend sessions with wellness counselors, they reported those sessions were unhelpful and difficult to access.
Things got so bad that Sama canceled its work for OpenAI eight months earlier than planned. This was in part due to OpenAI's new request that Sama collect sexual and violent images (some illegal under U.S. law) to make its AI systems that produce images safer. OpenAI responded to this incident by stating that they never asked Sama to collect illegal images and that the issue was a result of miscommunication.
The cancellation of the deal left Kenyan workers unexpectedly without income. Because the contracts were ended early, Sama received only $150,000 of the original $200,000 deal.
The Backward-Looking Question of Distributive Justice
In the Sama case, Kenyan workers bore the brunt of the worst parts of the human labor required to make ChatGPT work. The work itself was, in one worker's words, torture. They were underpaid for their labor. They did not have access to adequate workplace support. And when the contract was terminated, they were left without income they had expected to receive.
ChatGPT, on the other hand, is doing quite well. Microsoft reportedly plans to invest $10 billion in OpenAI, and I see more and more people in my LinkedIn feed and in my academic friend group starting to use ChatGPT in their day-to-day work.
Combine this with a history of colonialism and ongoing issues concerning the exploitation of formerly colonized, less wealthy nations, and we have a clear case of distributive injustice. The benefits are accruing to those who are already at an advantage, and the burdens are accruing to those who are already at a disadvantage.
I think this is a clear case in which OpenAI should have, at a bare minimum, paid the Sama workers the full amount laid out in the contract. The contract's total value was barely more than some of the annual salaries I've seen for mid-level UX writers and researchers.
In my view, the call for reparations should be much more stringent and require much more of OpenAI. Importantly, the Kenyan workers themselves should be the ones to make their demands and specify the terms of OpenAI's response.
The Forward-Looking Question of Distributive Justice
ChatGPT can provide a number of benefits:
generating email templates to be customized for those who feel intimidated by emailing a superior
replacing lorem ipsum in designs
providing inexpensive website copy for small businesses
offering summaries and idea generation for busy writers
ChatGPT also has a number of potential burdens it can contribute to:
allowing misinformation to be produced and disseminated more easily
making it easy to cheat or plagiarize in academic and professional contexts
replacing human writers with free software that may be attractive to managers for its low cost
appearing to be a better form of Google without being held to any external standard of truth
Let's think about how these benefits and burdens might accrue to those who may be disadvantaged in society. Small businesses will have the benefit of affordable website copy. People with less English proficiency will benefit from a writing generator that produces accurate grammar and readable prose.
At the same time, those who aren't highly educated will be more susceptible to the misinformation that ChatGPT puts out, especially if it appears legitimate. And those who use ChatGPT like Google may not be able to spot when its outputs are wrong.
How will ChatGPT affect those who are relatively advantaged? I've read posts by several writers who have already found generative uses for ChatGPT that have sped up their process and content output. Designers are also pleased that lorem ipsum can finally die.
But even those who are highly educated and financially secure are now facing difficulties from ChatGPT. I received a form letter for a class action lawsuit the other day, and I couldn't tell whether it was ChatGPT-written or not. It took a decent bit of internet research to figure out whether it was a scam. My instructor friends are also having to revamp their classroom plans and run student papers through AI detectors.
I certainly haven't caught all the potential burdens and benefits here, but we need to collectively keep an eye on how ChatGPT is distributing new problems and resources to its users as well as to the broader information and content ecosystem. OpenAI should be doing the bulk of this ethical reflection.
In its absence, it's up to us to hold ChatGPT accountable, navigate these changes, and come up with solutions to the new problems this technology presents.
Where Do We Go From Here?
I initially thought that the threat of ChatGPT-enabled misinformation might be dealt with through a feature that would give each website an AI-writing likelihood score or percentage, much like the plagiarism screening tools I've used as an instructor. The score wouldn't be decisive on its own, but it could help determine what's AI-written and what's not.
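To make the idea concrete, here's a toy sketch of how such a score might be aggregated across a page, assuming a deliberately crude stand-in heuristic (uniform sentence lengths read as more machine-like). Real detectors are trained classifiers, not this; the function names, heuristic, and output format are my own inventions for illustration.

```python
# A toy sketch of the "AI-likelihood score" idea. The heuristic
# below is a placeholder invented for illustration; actual tools
# use trained classifiers. Either way, the score is advisory.
import re
from statistics import mean, pstdev

def ai_likelihood(paragraph: str) -> float:
    """Return a rough score in [0, 1]. Toy heuristic: very uniform
    sentence lengths score higher. A real detector would replace this."""
    lengths = [len(s.split())
               for s in re.split(r"[.!?]+", paragraph) if s.strip()]
    if len(lengths) < 2:
        return 0.5  # not enough signal to say anything
    variation = pstdev(lengths) / mean(lengths)  # coefficient of variation
    return max(0.0, 1.0 - variation)

def page_score(paragraphs: list[str]) -> float:
    """Average per-paragraph estimates into one advisory percentage."""
    return 100 * mean(ai_likelihood(p) for p in paragraphs)

sample = ["One sentence here. Another sentence here."]
print(f"{page_score(sample):.0f}% likely AI-written (advisory only)")
```

Even with a real detector plugged in, the output would only ever be an advisory signal, and that's where the trouble starts.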
The main problem with this solution is that there are people (small businesses, designers, writers) using ChatGPT for legitimate purposes on reputable sites, so an AI-writing filter can't by itself distinguish the legitimate from the illegitimate. Worse, because AI-generated content reads fluently, we also lose key misinformation indicators such as misspellings, grammatical errors, and poor writing form.
This is only one of the problems I discussed above, but you can see how it's not easy to solve. If you try to eliminate the burden of ChatGPT-created misinformation through an AI-likelihood score, you wind up removing several of the benefits ChatGPT provides. Add to that the more complex calculus of determining how ChatGPT affects traditionally oppressed and marginalized groups versus how it affects the privileged, and the problem becomes even more difficult.
I think there are three main things we will have to contend with as more people use ChatGPT and as it becomes more sophisticated:
ChatGPT doesn't know what is true. It generates text from statistical patterns in its training data, and it can't critically assess a given discourse.
ChatGPT is easy and free to use. People will be tempted to use it even if that use is ethically dubious.
ChatGPT has hidden costs: in how it came to be, in privacy concerns, and in other yet-to-be-determined ways.
We should start thinking now about how we should adjust our online and in-person spaces to adapt to this new technology and address its effects on the distribution of benefits and burdens, both locally and globally.
How would you deal with the problems of distributive justice that ChatGPT poses?