iPod creator, Nest Labs founder and investor Tony Fadell took a shot at OpenAI CEO Sam Altman on Tuesday during a lively interview at TechCrunch Disrupt 2024 in San Francisco. Speaking about his own long history with AI development, which predates the LLM craze, and the serious problems with LLM hallucinations, he said: “I’ve been doing AI for 15 years, people, I’m not just talking shit – I’m not Sam Altman, okay?”
The comment elicited a murmur of surprised “oohs” from the shocked crowd, amid a small smattering of applause.
Fadell was on a roll during his interview, covering a number of topics ranging from what kind of “assholes” can produce great products to what’s wrong with today’s LLMs.
While admitting that LLMs are “perfect for certain things,” he explained that serious concerns remain to be addressed.
“LLMs try to be something ‘general’ because we’re trying to make science fiction reality,” he said. “[LLMs are] know-it-alls… I hate know-it-alls.”
Instead, Fadell said he would prefer AI agents that are trained for specific tasks and that are more transparent about their mistakes and hallucinations. That way, people could learn everything about an agent before “hiring” it for the specific job at hand.
“I hire them to… educate me, I hire them to be a co-pilot with me, or I hire them to replace me,” he explained. “I want to know what this is about.” He added that governments should get involved to enforce such transparency.
Otherwise, he pointed out, companies using AI would be putting their reputations on the line over “some technological bullshit.”
“Right now we are all adopting this solution and we don’t know what problems it poses,” Fadell stressed. He also cited a recent report indicating that when doctors used ChatGPT to create patient reports, 90% of those reports contained hallucinations. “These could kill people,” he continued. “We use this thing and we don’t even know how it works.”
(Fadell appeared to be referring to the recent report in which University of Michigan researchers studying AI transcripts discovered an excessive number of hallucinations, which could be dangerous in medical settings.)
The comment about Altman came as Fadell shared with the crowd that he has been working with AI technologies for years; Nest, for example, used AI in its thermostat in 2011.
“We couldn’t talk about AI, we couldn’t talk about machine learning,” Fadell noted, “because people would be afraid — ‘I don’t want AI in my house’ — now everyone wants AI everywhere.”