The Prompt is a weekly rundown of AI’s buzziest startups, biggest breakthroughs, and business deals. To get it in your inbox, subscribe here.

Welcome back to The Prompt.

Popular large language models continue to produce racist stereotypes, specifically against speakers of the African American English (AAE) dialect, according to a new study by Stanford University’s Institute for Human-Centered Artificial Intelligence and the Allen Institute for AI. Despite efforts to add guardrails and prevent the models from creating harmful content, systems like OpenAI’s GPT-3.5 and GPT-4 and Google’s T5 model demonstrated “covert racism” when making decisions related to employment, legal or academic matters, the study found.

In different experimental situations, the LLMs were more likely to assign speakers of African American English lower-prestige jobs, describe them as “lazy,” “stupid” and “dirty,” or even determine they should be convicted of a crime, compared to speakers of Standard American English.

“This study suggests that instead of steady improvement, the corporations are playing whack-a-mole – they’ve just gotten better at the things that they’ve been critiqued for,” Stanford researcher Pratyusha Ria Kalluri said.

Now let’s get into the headlines.

BIG PLAYS

On Monday, Apple unveiled the iPhone 16, the first line of iPhones that are purpose-built for AI. The new phones will be embedded with “Apple Intelligence,” the tech giant’s label for an array of generative AI features like summarizing audio notes or writing texts and emails. The phone will also have an advanced version of its voice assistant Siri and include AI features for searching photos within an album with text-based prompts, generating and editing images and identifying real-world objects.

LAW + ETHICS

Michael Smith, a North Carolina-based musician, was charged with using artificial intelligence to create hundreds of thousands of songs and then using bot accounts to stream the AI-generated music billions of times on platforms like Spotify and Apple Music, according to the United States Attorney’s Office. Since 2018, he has allegedly collected more than $10 million in royalty payments through the fraudulent scheme.

AI DEAL OF THE WEEK

Safe Superintelligence, an AI startup cofounded by OpenAI cofounder and former Chief Scientist Ilya Sutskever, has raised $1 billion from marquee investors like Andreessen Horowitz, Sequoia Capital, DST Global and SV Angel, Reuters reported. The company, which aims to build safe AI systems that can outperform humans on certain cognitive tasks, such as executing complex, multistep work, is now valued at $5 billion.

Also notable: enterprise AI search platform Glean has raised $260 million at a $4.6 billion valuation.

DEEP DIVE

“AI girlfriend” applications are becoming increasingly popular worldwide, with more than 144,000 monthly searches for the term, according to Tools for Humans, a company that tracks the popularity of AI tools and use cases. CEO Alec Chambers told Forbes that people, from teenagers to older men, are chatting with these AI girlfriends, with some using such tools as a replacement for relationships with human beings or to deal with loneliness. “[The apps] are an infinite generator. People can spend endless amounts of time on it,” Chambers said.

The internet is being flooded with hundreds of “AI girlfriend” chatbots and apps, including those from larger startups like Character AI. Many of these apps allow people to select and customize aspects like the body type, ethnicity and personality traits of the character. Social media platforms like Facebook and Instagram are glutted with advertisements for explicit “AI girlfriend” apps, Wired reported in April. Sexual roleplay is also one of the most popular themes of conversations between bots and humans, according to an analysis by The Washington Post. More than 7% of the 200,000 conversations analyzed were about sex.

These chatbots could also pose a serious danger of alienating individuals and potentially making them addicted to the technology, Chambers said. While some AI girlfriend apps are free, others require people to pay to continue chatting with the AI or to access more features. “I think that’s where it starts going into that really exploitative addictive area where users can lose track of time and also the expense really,” he said, drawing parallels to porn and gambling addiction. The risks of such technologies are further exacerbated by a burgeoning male mental health and loneliness crisis, he said. Sixty-five percent of men between the ages of 18 and 23 agreed with the statement “no one knows me well,” according to a 2023 study by research and social justice organization Equimundo.

AI INDEX

Counterfeits are a growing concern for the global luxury goods market amid the spread of fake designer watches and knockoff sneakers. Some companies are turning to AI and computer vision to analyze minute details of a product and tell the difference between counterfeits and genuine goods, Forbes contributor Sol Rashidi reported.

$1.79 trillion

The projected value of global trade in counterfeit goods by 2030, per a report by brand data analytics company Corsearch.

$600 billion

The value of the fake shoe market in 2023, up from $46.1 billion in 2013, according to shoe testing company Run Repeat.

QUIZ

This startup is developing technology for automated cell growth to treat diseases like diabetes and Parkinson’s.

Insitro

Cellino

Benchling

Atomwise

Check if you got it right here.

MODEL BEHAVIOR