SingularityNET is teaming up with Fetch.ai to leverage decentralized technology to make artificial intelligence more reliable and intelligent while curbing inaccurate and irrelevant outputs. The newly established partnership prioritizes tackling AI hallucinations.
The alliance aims to resolve the tendency of large language models (LLMs) to produce inaccurate and irrelevant outputs, challenges the two partners describe as significant barriers to AI reliability and widespread adoption.
SingularityNET's platform pairs with Fetch.ai's decentralized network, allowing the assembly of multi-component AI systems in which each component runs on its own machine without input from a central coordinator.
SingularityNET and Fetch.ai Bank on Decentralized Platforms to Avert Hallucinations
SingularityNET chief executive Ben Goertzel indicated that decentralized platforms offer a unique capability to experiment with multiple configurations of composite AI systems, including neural-symbolic systems, yielding flexibility that typical centralized AI infrastructures cannot match.
The agreement features an extended roadmap to unveil a series of as-yet-unnamed products in the coming year. Goertzel said the upcoming products would tap decentralized technology to develop more capable and accurate AI models.
Developers from the two partners will access and integrate various tools, including Fetch.ai's DeltaV and SingularityNET's AI APIs. The integration enables AI developers to create models that are more decentralized yet intelligent and reliable.
SingularityNET chief AGI officer Alexey Potapov disclosed that the team has experimented with various methods to resolve hallucinations emerging in LLMs. He indicated that the primary theme of these approaches since SingularityNET's establishment in 2017 has been neural-symbolic integration.
Potapov decried the inadequacy of LLMs, saying that in their current form they lack the robustness necessary to deliver artificial general intelligence, and that their present state risks distracting from that end goal.
Pursuit of Neural-Symbolic Integration
Fetch.ai chief executive Humayun Sheikh described neural-symbolic integration as blending neural networks, which learn from input data, with symbolic AI, which reasons using explicit rules.
Neural-symbolic integration enhances learning and decision-making, making AI more adaptable and logically sound, as the sketch below illustrates. The Fetch.ai founder considers its achievement key to improving accuracy and reliability in AI applications.
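As a rough illustration of the concept (a minimal sketch, not drawn from either company's codebase; every name below is hypothetical), the following Python example pairs a stand-in neural model that proposes answers with a symbolic rule layer that vets each proposal against explicit facts before release, one way such hybrids can curb hallucinations:

```python
# Minimal sketch of neural-symbolic integration (illustrative only; all
# names are hypothetical and not part of SingularityNET's or Fetch.ai's APIs).
# A "neural" component proposes candidate answers with confidence scores,
# and a symbolic rule layer vets each proposal before it is released,
# rejecting outputs that contradict known facts.

from dataclasses import dataclass

@dataclass
class Proposal:
    claim: str          # candidate statement produced by the neural model
    confidence: float   # model-reported confidence in [0, 1]

# Stand-in for a neural model: returns candidate claims with confidences.
def neural_propose(question: str) -> list[Proposal]:
    return [
        Proposal("Paris is the capital of France", 0.97),
        Proposal("Lyon is the capital of France", 0.41),
    ]

# Symbolic knowledge base: explicit facts and known falsehoods the
# rule layer can check proposals against.
KNOWN_FACTS = {"Paris is the capital of France"}
CONTRADICTED = {"Lyon is the capital of France"}

def symbolic_check(p: Proposal) -> bool:
    """Accept a proposal only if it is consistent with the rule base."""
    if p.claim in CONTRADICTED:
        return False                # hard rule: reject known falsehoods
    if p.claim in KNOWN_FACTS:
        return True                 # verified against the knowledge base
    return p.confidence >= 0.9      # unverified claims need high confidence

def answer(question: str) -> str:
    accepted = [p for p in neural_propose(question) if symbolic_check(p)]
    return accepted[0].claim if accepted else "No verified answer available."

if __name__ == "__main__":
    print(answer("What is the capital of France?"))
```

In this toy setup, the symbolic layer acts as a gatekeeper: the neural component supplies fluency and coverage, while the rule base contributes the hard constraints that keep outputs logically consistent.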
Sheikh admitted the difficulty of fixing hallucinations entirely. He noted that the challenge is part of what makes LLMs engaging, since hallucinations are bound up with the creativity needed to overcome existing limitations.
Sheikh acknowledged that while AI hallucinations remain an unresolved problem, the emergence of AI-generated deepfakes poses a significant threat, one that worsens as AI grows more sophisticated and hallucinations become more convincing.
Sheikh lamented that AI poses an unresolved danger: it can extrapolate erroneous ideas and then reinforce them into a self-fulfilling challenge.
The Fetch.ai chief ranks this self-reinforcing shortcoming, rather than the prospect of AI dethroning humanity, as the greatest threat likely to unfold in the near term.
Industry Leaders' Response to AI Hallucinations
AI hallucinations stand as the primary barrier to the accelerated uptake of AI across mainstream sectors. This was evident in an April incident in which OpenAI's ChatGPT falsely alleged that professor Jonathan Turley had sexually harassed a student during a trip he never took.
A more recent incident involves the legal team of ex-Fugees member Pras Michel. In October, his attorneys filed a motion seeking a retrial, alleging that his former counsel relied on an AI model that produced hallucinated content, contributing to his conviction on 10 counts, including conspiracy, witness tampering, falsifying documents, and acting as an unregistered foreign agent.
Fetch.ai's Kamal Ved termed hallucination a double-edged sword in content creation: some parties delight in its existence while others lament the risk, hence the drive to address the problem and achieve more deterministic outputs.
Stanford University researchers attributed hallucinations partly to a lack of transparency among the teams developing AI models, a failure that is nearly inevitable as each company seeks to outdo rivals and dominate the market.
An October publication by Stanford University's Center for Research on Foundation Models (CRFM) revealed that few generative AI developers prioritize transparency.
CRFM society lead Rishi Bommasani decried the lack of transparency in the foundation model segment, saying it ultimately undermines regulators' grounds to question these models and initiate action where needed.
Collaborative Initiatives Prioritizing Transparency in AI Models
Part of the solution lies in collaborating toward greater transparency in AI development. Recently, Meta and IBM, together with over 50 other entities, unveiled the AI Alliance, which aims to collaborate on the research and development of responsible and transparent models.
A similar initiative saw OpenAI team up with Microsoft, Google, and Anthropic to form the Frontier Model Forum in July, a group dedicated to reliable and transparent models founded on open innovation and collaboration.