How Kenyan Workers Were Exploited for $2/h to Help Build ChatGPT
ChatGPT is a large language model chatbot developed by OpenAI. It is one of the most advanced chatbots in the world, capable of carrying on remarkably human-like conversations. That capability came at a cost, however: Kenyan workers were paid as little as $2/hour to label and filter the data used to train ChatGPT. The work was outsourced to Sama, a data labeling company based in San Francisco. This arrangement raises important questions about the ethics of outsourcing AI development to low-wage countries.
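For context on what this kind of work involves, here is a minimal, hypothetical sketch in Python of the sort of record a content-labeling workflow might produce. The category names, fields, and class are illustrative assumptions, not the actual schema used by OpenAI or Sama.

```python
# Hypothetical sketch of a single content-labeling record; the taxonomy
# and field names below are illustrative assumptions, not a real schema.
from dataclasses import dataclass, field
from typing import List

# Illustrative harm categories an annotator might choose from.
HARM_CATEGORIES = ["violence", "sexual_content", "hate_speech", "self_harm", "none"]

@dataclass
class LabeledExample:
    text: str                                         # the passage the annotator reads
    labels: List[str] = field(default_factory=list)   # categories the annotator assigns
    annotator_id: str = ""                            # which worker produced the label

    def __post_init__(self):
        # Guard against labels outside the illustrative taxonomy.
        for label in self.labels:
            if label not in HARM_CATEGORIES:
                raise ValueError(f"Unknown label: {label}")

    def is_safe(self) -> bool:
        """A passage is 'safe' only if no harm category was assigned."""
        return self.labels == ["none"] or not self.labels

# Example: one labeled record, as it might be stored for training a safety filter.
example = LabeledExample(
    text="An excerpt flagged for review ...",
    labels=["violence"],
    annotator_id="worker-001",
)
print(example.is_safe())  # False: the excerpt carries a harm label
```

Records like this, produced by human annotators one passage at a time, are what safety filters are eventually trained on.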
Everything We Know About the Sama-OpenAI Partnership
In 2021, OpenAI partnered with Sama to recruit workers in Kenya to label and filter the data used to train ChatGPT. According to a report by the International Labour Organization (ILO), the workers were promised $12.50/hour but were actually paid as little as $2/hour. The report also shed light on the long hours and poor working conditions these workers endured.
These findings, together with complaints from workers about low pay and poor working conditions, led Sama to terminate its contract with OpenAI in February 2022.
A Closer Look at the Kenyan Workers’ Experience
Most of the Kenyan workers involved in the project were young women seeking employment opportunities in Nairobi. According to the ILO report, these workers were subjected to exploitative and unacceptable conditions:
- Low pay: Workers were promised $12.50/hour but were paid as little as $2/hour.
- Long hours: Workers were expected to work 12-hour days, 6 days a week (see the rough arithmetic after this list).
- Poor working conditions: Workers were crammed into small, poorly ventilated rooms and exposed to noise and dust.
- Psychological stress: Workers were exposed to disturbing content, such as images of violence and pornography, which took a psychological toll.
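To put the reported figures in perspective, here is a back-of-the-envelope calculation using only the numbers cited above. It assumes a full 12-hour, 6-day week paid at the quoted hourly rates; it is a rough sketch, not an official payroll figure.

```python
# Back-of-the-envelope arithmetic based on the figures reported above.
hours_per_day = 12
days_per_week = 6
paid_rate = 2.00       # USD/hour reportedly paid
promised_rate = 12.50  # USD/hour reportedly promised

weekly_hours = hours_per_day * days_per_week        # 72 hours
actual_weekly_pay = weekly_hours * paid_rate        # $144.00
promised_weekly_pay = weekly_hours * promised_rate  # $900.00

print(f"Weekly hours:        {weekly_hours}")
print(f"Actual weekly pay:   ${actual_weekly_pay:,.2f}")
print(f"Promised weekly pay: ${promised_weekly_pay:,.2f}")
print(f"Shortfall:           ${promised_weekly_pay - actual_weekly_pay:,.2f}")
```

Under those assumptions, a worker would take home roughly $144 for a 72-hour week, against about $900 at the promised rate.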
The Aftermath and OpenAI’s Response
In response to the ILO report, OpenAI expressed deep concern about the working conditions of the Kenyan workers involved in the development of ChatGPT and pledged to ensure fair treatment and fair wages for all of its workers.
OpenAI took several steps:
- Ended its partnership with Sama, the data labeling company found to be underpaying workers.
- Hired a third-party auditor to scrutinize its data labeling practices.
- Developed a code of conduct for its data labeling partners.
- Invested in training and development programs for its data labeling workers.
Furthermore, OpenAI has committed to working with the ILO and other organizations to improve the working conditions of data labeling workers worldwide.
While it’s too early to measure the success of OpenAI’s efforts, the company’s proactive steps in response to the ILO report indicate a promising start.
Conclusion: The Tech Industry’s Responsibility
The development of AI technology must be pursued responsibly, with due consideration to the rights and welfare of all workers involved. This incident serves as a stark reminder of the need for stringent ethical guidelines in AI technology development, calling for industry-wide introspection and reform.
Outsourcing technology development to lower-wage countries comes with a responsibility to ensure fair treatment and equitable wages for all workers. As the tech industry continues to grow and evolve, it is essential for companies to prioritize ethics and human rights, making sure that the benefits of AI and other technological advancements are shared equitably across the globe.