OpenAI Underpaid Kenyan Workers And Then Fired Them
1-23-2023
📰 Today's Top Stories
📰 News: OpenAI's outsourcing deal draws worker-exploitation criticism and countries struggle to deal with deepfakes
🦾 Content: AI artwork was predicted 100 years ago and Paul Graham picks the AI sector over crypto
(4 min read) (Source: Quartz)
OpenAI, the San Francisco-based AI development firm, has been criticized for using underpaid Kenyan workers to moderate the content of its language model, ChatGPT. The company outsourced this work to Sama, a firm that has been accused of union busting and inhumane working conditions. After the GPT-3 project was completed, Sama laid off 200 employees at its Nairobi office, who earned as little as $1.32 per hour, a fraction of the minimum wage in California.
Sama claims to pay almost double what other content moderation firms in East Africa pay and offers a "full benefits and pension package" which they claim is uncommon. However, the work of removing harmful content from the AI was reportedly traumatic for employees. As a result, Sama cancelled its contracts with OpenAI in February 2022. With projections of $1 billion in revenue by 2024, OpenAI will likely need to reevaluate its outsourcing practices. Check out how Sama responded to all of this here.
(10 min read) (Source: New York Times)
Deepfake technology is all the rage and it's not just for cooking up memes or making your ex look dumb in videos anymore. Unfortunately, it's also been used for malicious purposes like scamming people out of money and undressing women on Telegram. The good news is that governments around the world are starting to put rules and consequences in place for the misuse of these powerful new tools. For example, China has recently adopted regulations requiring consent and digital signatures on manipulated material. But, these regulations may also be used to curtail free speech and it's uncertain if they'll be effective since the worst offenders tend to be anonymous.
Despite the potential for abuse and the threat it represents to law enforcement, deepfakes also have promising applications in industries like entertainment and virtual reality. As with any new technology, it's important to strike a balance between protecting people and protecting our rights. Some experts predict that 90% of online content could be generated by deepfakes in the near future, so it's important to take action now before it's too late.
(12 min read) (Source: NBC News)
OpenAI's ChatGPT is a versatile AI tool that's been put to use by programmers, professors, and even news outlets. Microsoft, which has a $1 billion stake in OpenAI, plans to integrate ChatGPT into its search engine Bing. However, there are concerns that ChatGPT and similar AI technology could replace therapists. A recent experiment by the co-founder of Koko, an online mental health services not-for-profit, integrating GPT-3 AI with a Discord bot called Koko bot, raised concerns about trust in tech companies developing these services and potential privacy issues.
The experiment matched users who wanted to give anonymous advice to others and integrated ChatGPT into the process. Tech commentator Michael Kevin Spencer warns that ChatGPT and its progeny may be used to harness private mental health data for more personalized ads and consumer profiling, and further warns about the implications of language models being used in intimate settings like therapy.
If the movie Her taught us anything, it's that building a relationship with an AI can get sad and a little spooky pretty quickly. If you'd like to continue exploring this topic, a Reddit user convinced ChatGPT to act more like a human therapist.
(45 min read) (Source: Forbes)
Strap in folks, this one's a doozy. In this incredibly thorough article written for Forbes, artificial intelligence and machine learning expert Dr. Lance Eliot discusses the implications (ethical, legal, technological, and so on) presented when ChatGPT's API becomes widely available. ChatGPT, as one of the most powerful technological innovations in recent memory, has been a topic of debate across many different industries.
Its use is already heavily contested in academia and scholarly research, and many people are worried about how it might change the work done by copywriters and journalists. Dr. Eliot goes over the potential "genuine" and "fakery" pairings for ChatGPT's API. Genuine pairings might offer fully integrative immersion or add-ons that bolster other apps with increased efficiency and the potential for innovation. The fakery pairings would include the gimmickry and other deceitful schemes that could come from misuse or inaccurate/misleading outputs.
Everyone knew that this tech would come with huge benefits and huge risks, but the rate at which it's evolving could spell trouble for the ethics and law that will end up governing its use. All we can do at this point is stay informed, use this tech with caution, and encourage others to do the same.
If I create a company that makes millions from code I learned from books in the library, do I have to give the authors money for helping me learn?
(11 min read) (Source: Salon)
GPT-3: the magic text generator that has college students, internet marketers, and everyone in between convinced it's a genius. It's an incredibly powerful tool capable of writing almost anything, from children's books and bad songs to complicated Python code and legal documents. But don't be fooled by its human-like responses: it's still just a tool. It remains devoid of common sense and logical reasoning (for now…).
Despite claims by its defenders that it has developed common sense, critics argue that this is just our human tendency to anthropomorphize things. GPT-3's ability to fact-check is a mirage, and it's important to remember that LLMs are text generators and nothing more. So let's not give GPT-3 more credit than it deserves and avoid the mistakes made by tech news site CNET, which learned very quickly that large language models have learned to write before learning to think.
🦾 Today's Top Content:
1. H.T. Webster made a cartoon in 1923 predicting that we'd have automated artwork 100 years later. (Reddit)
2. It seems like almost every huge tech company is being hit with the recent wave of layoffs but Microsoft found a way to take it to a new level. They hired Sting for a private concert for execs the day before they announced 10,000 layoffs. (Reddit)
3. While most of the world is trying to figure out how to keep AI from picking up racist and bigoted biases, some folks are worried about it becoming too "woke". (Reddit)
4. A Tesla Model S with self-driving fully engaged caused an 8-car pileup in San Francisco. (Reddit)
5. Amazing? Horrifying? You decide! AI was used to generate the cast of Family Guy as real-life humans. We think it confused Chris Griffin with Chris Farley. (Twitter)
Please stop what you're doing and check out "Family Guy" characters generated with AI — as humans:
— Rex Chapman (@RexChapman)
7:06 PM • Jan 22, 2023
6. Investing legend Paul Graham thinks that while crypto might be the more exciting trend right now, AI will be the one that historians will be talking about. (Twitter)
The excitement about AI is deeper and more disinterested than the excitement about crypto. There's something real to crypto, for sure, but AI seems more likely to be the thing historians consider important about this decade.
— Paul Graham (@paulg)
3:01 PM • Jan 22, 2023
7. It turns out that Snake's cardboard box trick doesn't just work on enemies in Metal Gear Solid, it works on DARPA robots too. Marines training the AI used in defense robots were able to sneak up on them undetected by hiding under cardboard boxes, giggling the whole time like the little rascals that they are. (Twitter)
These aren't the marines you're looking for.
— Shashank Joshi (@shashj)
2:21 PM • Jan 18, 2023
Powered by AI. Edited and curated by Humans.
We'd love to hear from you! Leave us comments or feedback by replying to this email!