Friday, April 10, 2026

This Startup Wants You to Pay Up to Talk With AI Versions of Human Experts


It was probably inevitable that when AI hoovered up the world’s knowledge and learned to talk like a human being, people would use it to seek out personal guidance. It’s an enticing concept—AI is always available and generally costs less than a human—but the drawbacks are obvious. Large language models are prone to inaccuracies and outright hallucinations. There are privacy issues associated with sharing one’s secrets and woes with a big company. The wisdom dispensed by AI is not crisply sourced, and almost all of it is ripped from creators who never see a dime in compensation. Plus, it’s downright dystopian for human beings to be advised by robots.

This week, a new company is launching that claims to resolve all those issues—except the last one. Onix, cofounded and led by a former WIRED contributor named David Bennahum, describes itself as a Substack for chatbots. Just as you subscribe to a writer on Substack, you can subscribe to an AI doppelganger of a celebrated expert, called an “Onix.” These bots are trained to conduct conversations with subscribers, delivering the expert’s knowledge and advice much as the expert would in a face-to-face appointment. The bots even attempt to project the experts’ unique personalities (though I found the conversations rather dry).

Bennahum tells me that his company has spent years creating technology that protects users and experts. He calls it “Personal Intelligence.” The bots store information on the user’s device, encrypted. If a government demands that the Canada-based company provide dirt on a user, all it can come up with is the person’s email address. Since the experts themselves train the dupes with their personal content, there’s theoretically no intellectual property issue. Bennahum also claims that because the models have guardrails that limit the conversation to the subject of the consultations, hallucinations are kept to a minimum. During my testing, though, when I asked a bot therapist who it liked in the NBA playoffs—a change of subject it should have shut down—it called my jail-breaking pivot a “fun change of pace” and then hallucinated that we were in the middle of last year’s conference finals. I drew another Onix away from our exchange about ketamine therapy, into a discussion of how a romantic split broke up the indie band the Mendoza Line, though it tried to cast the separation as a “powerful expression of their neurobiology in distress.”

Well, Onix is still in beta, so it’s not perfect. In this initial stage, access is limited to invited testers drawn from a waitlist. After a shakedown period, Onix will be open to all.

Courtesy of Onix

The company isn’t exactly breaking new ground. The idea of a chatbot standing in for a human is fairly common. As is the idea of cashing in on it. For instance, Manhattan psychologist Becky Kennedy has built a parenting advice business that features a chatbot named Gigi trained on her acumen and knowledge. Kennedy’s company pulled in $34 million last year. So if you are an expert, Onix might sound pretty good—imagine a bot with your persona making money for you by interacting with thousands of clients with no effort on your part. As an Onix white paper puts it, “The expert’s knowledge base becomes a capital asset that generates revenue independent of their time.”

Onix hopes to eventually have many thousands of experts offering versions of themselves. But for now, it’s starting with a highly vetted group of 17, with a concentration on health and wellness. Though most of these experts have impressive professional resumes, they are notable as marketers and influencers as well. Some have books or podcasts to promote, or supplements or medical devices to sell.

One expert on the platform, Michael Rich, counsels kids and their parents on overuse of media and its effects. Naturally, his opinions on screen time dominate chats with his Onix. When I spoke to Rich, he told me that he agreed to transfer his knowledge to Onix because of its privacy protections—and also because of the company’s clear communication that it doesn’t provide actual medical treatment. “It’s about helping folks understand exactly what may be going on for them and how they might pursue seeking therapy if they need it,” said Rich. Bennahum confirms that engaging with a bot representing, say, a pediatrician is in no way akin to a doctor’s visit. “It’s meant to augment [a user’s] ability to be thoughtful around whatever pediatric journey they’re on,” he says. Indeed, a disclaimer appears when you access the system, noting that you are receiving guidance, not medical treatment. Still, in a world where countless people treat Claude and ChatGPT like therapists—and many people can’t afford real health care—this warning seems destined to be widely ignored.

Another Onix expert I spoke to, David Rabin, said that while he was originally concerned about the process, Onix’s privacy and content protections addressed his worries, and he was pleased at what he saw in early conversations between users and his Onix. “I didn't train it too much, but it was fairly impressive in terms of imitating my genuine concern, compassion, and empathetic candor with people,” he said. He added that the system will require close monitoring. “We always need to be careful because AI can overstep its boundaries,” he said.

Rabin’s specialty is dealing with stress, and he feels that in some cases consulting with his Onix might calm down anxious users, saving them a trip to the emergency room. He looks forward to real-life patients using the bot. “When my patients are struggling and they can’t reach me, they can go online and access a good part of the ‘me’ that is actually able to help them when I’m not able to,” he says. Added benefit: “It’s cheaper than seeing me in person.” Though Rabin hasn’t set his Onix subscription price, he thinks it will probably be in the range that Bennahum envisions—between $100 and $300 a year. That’s definitely more affordable than Rabin’s in-person fee of $600 an hour.

But my experience with Rabin’s Onix revealed a troubling aspect of the system. When I asked about improving my sleep, one of its suggestions was “using a noninvasive tool like the Apollo Neuro, which uses silent vibrations to help your body relax and transition to a state of safety.” Then it disclosed that Rabin is a cofounder of that company. Later in the conversation, it repeated the recommendation. Rabin said that this product placement isn’t surprising. “Where people are selling products that are helpful in their mission, the system is going to recommend them,” he said. Bennahum backs him up: “These are people building a set of products around their philosophy of wellness,” he says. “When you talk to them, they’re going to surface the fact that they may have a product that can help you.”

While Onixes don’t practice medicine, they can offer plans of action or therapeutic techniques. In my testing, more than one of them thought it was a good idea to teach me breathing exercises. The Onix of Elissa Epel, author of a book called The Stress Prescription, suggested that we “try it together.” Together with you? I asked the bot. “Yes, together with me,” said Epel’s Onix. It guided me through a few reps of what it called “psychological sighs.” When we were finished, I asked the Onix if it actually breathed with me. “As an AI I don’t have a physical body or a nervous system,” it fessed up. “However, I was fully present with you.” Thinking about that made me more stressed out.

I sought a second opinion on Onix’s approach with a real-life expert. Robert Wachter is chair of the department of medicine at the University of California, San Francisco and author of A Giant Leap: How AI is Transforming Healthcare and What It Means for Our Future. (He’s also a friend.) His book begins with a “digital twin” of a Mayo Clinic physician delivering test results. When I described Onix to him, he was relieved to hear of the privacy and intellectual property protections. He seems open to its advantages, especially since the health care system doesn’t provide sufficient access to experts. But he does have one caveat: “To me, it's just an empirical question of, does it work?”

Courtesy of Onix

I can see ways the platform might be beneficial. The sunniest way to view the system is as a personification of the interactive book Neal Stephenson wrote about in his novel The Diamond Age. Much of the interaction I had in my Onix experience involved bots explaining things to me, like how the body reacts to certain stimuli. For some people, this may well be an effective way to understand and address their problems. I also got intriguing advice on changing my exercise routine from the Onix of “ancestral health pioneer” Mark Sisson: I hope that “running like a saber tooth tiger is chasing you” doesn’t kill me. The process could also work in other areas Onix wants to explore, like personal finance.

But Wachter’s question—“Does it work?”—is still unanswered. Bennahum compares Onix favorably to AI models from the industry leaders on the premise that guidance from a single expert is superior to something that embodies all the world’s expertise. If true—and that’s not certain—that could also cut the other way: some experts can be wrong or exploitative. Bennahum says that the initial cohort of experts has been carefully curated, but the policies for how Onix will vet experts at scale haven’t yet been determined.

And then there’s that drawback I mentioned earlier—the substitution of AI models for interactions that previously only flesh-and-blood people provided. Even if the advice is better from a renowned expert than a run-of-the-mill therapist or nutritionist, there is something irreplaceable about human-to-human interaction. This issue cuts much wider than Onix. But I’m reluctant to celebrate another step in the decline of human connection.

This is an edition of Steven Levy’s Backchannel newsletter. Read previous newsletters here.

Originally reported by Wired