An important thing to realize about the grandest conversations surrounding AI is that, most of the time, everyone is making things up. This isn’t to say that people have no idea what they’re talking about or that leaders are lying. But the bulk of the conversation about AI’s greatest capabilities is premised on a vision of a theoretical future. It is a sales pitch, one in which the problems of today are brushed aside or softened as problems of the moment, which, leaders in the field insist, will surely be solved as the technology gets better. What we see today is merely a shadow of what is coming. We just have to trust them.
I had this in mind when I spoke with Sam Altman and Arianna Huffington recently. Through an op-ed in Time, Altman and Huffington had just announced the launch of a new company called Thrive AI Health. The organization promises to bring OpenAI’s technology into the most intimate part of our lives, assessing our health data and making relevant recommendations. Thrive AI Health will join an existing field of medical and therapy chatbots, but its ambitions are immense: to improve people’s health outcomes, lower health-care costs, and significantly reduce the effects of chronic disease worldwide. In their op-ed, Altman and Huffington explicitly (and grandiosely) compare their efforts to the New Deal, describing their company as “critical infrastructure” in a remade health-care system.
They also say that some future chatbot offered by the company may encourage you to “swap your third afternoon soda with water and lemon.” That chatbot, referred to in the article as “a hyper-personalized AI health coach,” is the centerpiece of Thrive AI Health’s pitch. What form it will take, or whether it will be built at all, is unclear, but here’s the idea: The bot will generate “personalized AI-driven insights” based on a user’s biometric and health data, doling out information and reminders to help them improve their behavior. Altman and Huffington give the example of a busy diabetic who might use an AI coach for medication reminders and healthy recipes. You can’t actually download the app yet; Altman and Huffington did not provide a launch date.
Normally, I don’t write about vaporware—a term for products that are merely conceptual—but I was curious about how Altman and Huffington would explain these grand ambitions. Their very proposition struck me as the most difficult of sells: two rich, well-known entrepreneurs asking regular human beings, who may be skeptical of or unfamiliar with generative AI, to hand over their most personal and consequential health data to a nagging robot. Health apps are popular, and people (myself included) allow tech tools to collect all kinds of intensely personal data, such as sleep, heart-rate, and sexual-health information, every day. If Thrive succeeds, the market for a truly intelligent health coach could be massive. But AI adds another complication to this privacy equation, opening the door for companies to train their models on hyper-personal, confidential information. Altman and Huffington are asking the world to believe that generative AI—a technology that cannot yet reliably cite its own sources—will one day be able to transform our relationships with our own bodies. I wanted to hear their pitch for myself.
Altman told me that his decision to join Huffington stemmed partly from hearing from people who use ChatGPT to self-diagnose medical problems—a notion I found potentially alarming, given the technology’s propensity to return hallucinated information. (If physicians are frustrated by patients who rely on Google or Reddit, consider how they might feel about patients showing up in their offices fixated on made-up advice from a language model.) “We would hear these stories where people say … ‘I used it to figure out a diagnosis for this condition I had that I just couldn’t figure out, and I typed in my symptoms, and it suggested this, and I got a test, and then I got a treatment.’”
I noted that it seemed unlikely to me that anyone besides ChatGPT power users would trust a chatbot in this way; it was hard to imagine people sharing all their most intimate information with a computer program, potentially to be stored in perpetuity.
“I and many others in the field have been positively surprised about how willing people are to share very personal details with an LLM,” Altman told me. He said he’d recently been on Reddit reading testimonies of people who’d found success by confessing uncomfortable things to LLMs. “They knew it wasn’t a real person,” he said, “and they were willing to have this hard conversation that they couldn’t even talk to a friend about.” Huffington echoed these points, noting that Google fields billions of health searches every day.
That willingness is not reassuring. For example, it is not far-fetched to imagine insurers wanting to get their hands on this type of medical information in order to hike premiums. Data brokers of all kinds will be similarly keen to obtain people’s real-time health-chat records. Altman made a point of saying that this theoretical product would not trick people into sharing information. “It’ll be super important to make it clear to people how data privacy works; that you know what we train on, what we don’t, like when something is ever-stored versus just exists in one session,” he said. “But in our experience, people understand this pretty well.”
Although savvy users might understand the risks and how chatbots work, I argued that many of the privacy concerns would likely be unforeseen, perhaps even beyond Thrive AI Health’s control. Neither Altman nor Huffington had an answer to my most basic question—What would the product actually look like? Would it be a smartwatch app? A chatbot? A Siri-like audio assistant?—but Huffington suggested that Thrive’s AI platform would be “available through every possible mode,” and that “it could be through your workplace, like Microsoft Teams or Slack.” This led me to propose a hypothetical scenario in which a company collects this information and stores it inappropriately or uses it against employees. What safeguards might the company apply then? Altman’s rebuttal was philosophical. “Maybe society will decide there’s some version of AI privilege,” he said. “When you talk to a doctor or a lawyer, there’s medical privileges, legal privileges. There’s no current concept of that when you talk to an AI, but maybe there should be.”
Here I was struck by an idea that has occurred to me over and over again since the beginning of the generative-AI wave. A fundamental question has loomed over the world of AI since the concept cohered in the 1950s: How do you talk about a technology whose most consequential effects are always just on the horizon, never in the present? Whatever is built today is judged partially on its own merits, but also—perhaps even more importantly—on what it might presage about what is coming next.
AI is always measured against the end goal: the creation of a synthetic, reasoning intelligence that is greater than or equal to that of a human being. That moment is often positioned, reductively, as either a gift to the human race or an existential reckoning. But you don’t have to get apocalyptic to see the way that AI’s potential is always muddying people’s ability to evaluate its present. For the past two years, shortcomings in generative-AI products—hallucinations; slow, wonky interfaces; stilted prose; images that showed too many teeth or couldn’t render fingers; chatbots going rogue—have been dismissed by AI companies as kinks that will eventually be worked out. The models will simply get better, they say. (It is true that many of them have, though these problems—and new ones—continue to pop up.) Still, AI researchers maintain their rallying cry that the models “just want to learn”—a quote attributed to the OpenAI co-founder Ilya Sutskever that means, essentially, that if you throw enough money, computing power, and raw data into these networks, the models will become capable of making ever more impressive inferences. True believers argue that this is a path toward creating actual intelligence (many others strongly disagree). In this framework, the AI people become something like evangelists for a technology rooted in faith: Judge us not by what you see, but by what we imagine.
When I asked about hallucinations, Altman and Huffington suggested that the models have gotten much better and that if Thrive’s AI health coaches are focused enough on a narrow body of information (habits, not diagnoses) and trained on the latest peer-reviewed science, then they will be able to make good recommendations. (There is every reason to believe, though, that hallucinations would still be possible.) When I asked about their choice to compare their company to a massive government program like the New Deal, Huffington argued that “our health-care system is broken” and that “millions of people are suffering as a result.” AI health coaches, she said, are “not about replacing anything. It’s about offering behavioral solutions that would not have been successfully possible before AI made this hyper-personalization.”
I found it outlandish to invoke America’s expensive, inequitable, and inarguably broken health-care infrastructure when hyping a for-profit product that is so nonexistent that its founders could not tell me whether it would even be an app. That very nonexistence also makes it difficult to criticize with specificity. Thrive AI Health coaches might be the Juicero of the generative-AI age—a shell of a product with a splashy board of directors that is hardly more than a logo. Perhaps it is a catastrophic data breach waiting to happen. Or maybe it ends up being real—not a revolutionary product, but a widget that integrates into your iPhone or calendar and toots out a little push alert with a gluten-free recipe from Ina Garten. Or perhaps this someday becomes AI’s truly great app—a product that makes it ever easier to keep up with healthy habits. I have my suspicions. (My gut reaction to the press release was that it reminded me of blockchain-style hype: a list of buzzwords and big names.)
Thrive AI Health is profoundly emblematic of this AI moment precisely because it is nothing, yet it demands that we entertain it as something profound. My immediate frustration with the vaporware quality of this announcement turns to trepidation once I consider what happens if Altman and Huffington actually build what they’ve proposed. Is OpenAI—a company that’s had a slew of governance problems, leaks, and concerns about whether its leader is forthright—one we want as part of our health-care infrastructure? If it succeeds, would Thrive AI Health deepen the inequities it aims to address by giving AI health coaches to the less fortunate, while the richest among us get actual help and medical care from real, attentive professionals? Am I reflexively dismissing an earnest attempt to use a fraught technology for good? Or am I rightly criticizing the kind of press-release hype-fest you see near the end of a tech bubble?
Your answer to any of these questions probably depends on what you want to believe about this technological moment. AI has doomsday cultists, atheists, agnostics, and skeptics. Knowing what AI is capable of, sussing out what is opportunistic snake oil and what is genuine, can be difficult. If you want to believe that the models just want to learn, it will be hard to convince you otherwise. So much seems to come down to: How much do you want to believe in a future mediated by intelligent machines that act like humans? And: Do you trust these people?
I put that question—why should people trust you?—to the pair at the end of my interview. Huffington said that the difference with this AI health coach is that the technology will be personalized enough to meet the individual behavioral-change needs that our current health system doesn’t address. Altman said he believes that people genuinely want technology to make them healthier: “I think there are only a handful of use cases where AI can really transform the world. Making people healthier is certainly one of them,” he said. Both answers sounded earnest enough to my ear, but each requires certain beliefs.
Faith is not a bad thing. We need faith as a powerful motivating force for progress and as a way to expand our vision of what is possible. But faith in the wrong context is dangerous, especially when it is blind. An industry powered by blind faith seems particularly troubling: it gives those who stand to profit an enormous amount of leverage, and it opens up space for delusion and for grifters looking to make a quick buck.
The greatest trick of a faith-based industry is that it effortlessly and constantly moves the goalposts, resisting evaluation and sidestepping criticism. The promise of something glorious, just out of reach, strings unwitting people along, all while half-baked visions promise salvation that may never come.