Elon Musk’s foray into the world of online encyclopedias with Grokipedia, positioned as a rival to Wikipedia, has already drawn sharp criticism, even from Wikipedia’s founder, Jimmy Wales. Following Grokipedia’s less-than-smooth initial launch, Wales, speaking at the CNBC Technology Executive Council Summit this Tuesday, expressed skepticism about its potential.
“I’m not optimistic he will create anything very useful right now,” Wales stated flatly, signaling a deep-seated concern about the underlying technology driving the venture. His remarks weren’t limited to Grokipedia’s perceived shortcomings. Wales also directly addressed Musk’s claims of “woke bias” within Wikipedia itself. “He is mistaken about that,” Wales countered, emphasizing the platform’s reliance on mainstream sources. “His complaints about Wiki are that we focus on mainstream sources and I am completely unapologetic about that. We don’t treat random crackpots the same as The New England Journal of Medicine and that doesn’t make us woke,” he asserted, adding, “It’s a paradox. We are so radical we quote The New York Times.”
The critique extended to Grokipedia’s alleged promotional slant. “I haven’t had the time to really look at Grokipedia, and it will be interesting to see, but apparently it has a lot of praise about the genius of Elon Musk in it. So I’m sure that’s completely neutral,” Wales quipped, suggesting a potential bias within the AI-generated content.
However, Wales made clear the real issue wasn’t merely his rocky relationship with Elon Musk. His deepest concern lies in the foundational reliance on large language models (LLMs) to construct reliable online resources.
“The LLMs he is using to write it are going to make massive errors,” Wales predicted. “We know ChatGPT and all the other LLMs are not good enough to write wiki entries.”
Musk countered this assertion with his trademark confidence, stating, “Grokipedia will exceed Wikipedia by several orders of magnitude in breadth, depth and accuracy,” in a recent online post.
Wales cited concrete cases to illustrate his doubts about relying on LLMs as sources of truth. He pointed out that Wikipedia, operating on relatively lean technology costs – estimated at $175 million annually – achieves a level of accuracy and reliability through community contributions that dwarfs what large tech companies have managed despite investing tens of billions of dollars in AI development. One Wall Street estimate puts total AI spending by hyperscalers at $550 billion next year.
He described instances where even seemingly simple inquiries posed to chatbots produce inaccurate results. As an example, his wife, someone he describes as “not famous but known” within British politics, is consistently misidentified by current chatbot models, returning “plausible but wrong” answers. In Wales’ view, the more nuanced or in-depth the query, the greater the likelihood of AI-generated inaccuracies.
Wales also relayed a story from a member of the German Wikipedia community who deployed a verification program on the ISBNs of cited sources. The program uncovered notable mistakes tracing back to one person who ultimately admitted to using ChatGPT to find citations for text references. The model essentially “just very happily makes up books for you.”
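The verification program Wales described is not public, so the details of how it flagged fabricated citations are unknown. As an illustrative sketch, one first-pass filter such a tool could apply is the standard ISBN-13 checksum, which catches malformed identifiers (though a valid checksum alone does not prove the book exists; a real verifier would also need to look the number up in a library catalog):

```python
def isbn13_is_valid(isbn: str) -> bool:
    """Validate an ISBN-13 checksum.

    Digits are weighted alternately 1, 3, 1, 3, ...; the weighted
    sum of all 13 digits must be a multiple of 10.
    """
    digits = isbn.replace("-", "").replace(" ", "")
    if len(digits) != 13 or not digits.isdigit():
        return False
    total = sum(int(d) * (1 if i % 2 == 0 else 3) for i, d in enumerate(digits))
    return total % 10 == 0


# A real ISBN passes; a malformed one fails the checksum.
print(isbn13_is_valid("978-0-306-40615-7"))  # True
print(isbn13_is_valid("978-1-234-56789-0"))  # False
```

Note the limitation this sketch shares with the anecdote: an LLM can fabricate a book whose invented ISBN happens to pass the checksum, which is why the German editor’s program presumably also had to check the citations against actual bibliographic records.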
Despite the AI advances, Wales says these examples underscore a critical flaw: the propensity of LLMs to hallucinate information and fabricate sources, undermining their suitability for building trustworthy knowledge repositories.
Wales, who recently authored “The Seven Rules of Trust,” a book detailing the history of Wikipedia and its role in navigating the current societal tensions regarding online information accuracy, emphasizes that such clashes serve as important reminders for the Wikipedia community. “It’s really important for us and the Wiki community to respond to criticism like that by doubling down on being neutral and being really careful about sources,” he said. “We shouldn’t be ‘wokepedia.’ That’s not who we should be or what people want from us. It would undermine trust.”
He further stressed that, while public and media perceptions often overstate Wikipedia’s current level of perfection, the organization is intensely focused on achieving continuous improvement. “We are not as good as they think we are. Of course, we are a lot better than we used to be, but there is still so much work to do.”
Looking ahead, Wales anticipates that the challenges posed by both technology and misinformation will only intensify, particularly as LLMs become more sophisticated at generating fake websites with believable content. Though he believes the collective scrutiny of the Wikipedia community would likely expose such fabrications, he concedes that they could successfully deceive a significant portion of the public.
He doesn’t entirely dismiss AI’s potential to help Wikipedia. Wales has been exploring use cases where AI could complement human editors, such as uncovering additional information within existing sources that might warrant inclusion in the wiki, a use of generative AI he describes as “kind of okay.”
This feedback loop could be useful for the site if it could develop its own LLM customized for the task, but the expense of such a project has led the organization to suspend any formal work while it continues to test the technology, Wales explained.
“We are really happy Wiki is now part of the infrastructure of the world, which is a pretty heavy burden on us. So when people say we’ve gotten biased, we need to take that seriously and work on anything related to it,” Wales concluded.
He playfully added, “We talk about errors that ChatGPT makes. Just imagine an AI solely trained on Twitter. That would be a mad, angry AI trained on nonsense.”
Original article, Author: Tobias. If you wish to reprint this article, please indicate the source: https://aicnbc.com/11839.html