Ethereum creator Vitalik Buterin on Monday, November 27, urged the industry to prioritize human intention in artificial intelligence (AI) development rather than profit maximization.
The Ethereum creator delved into AI development in his reflection on the techno-optimism fronted by Marc Andreessen, who laid out his views on artificial intelligence in the Techno-Optimist Manifesto published last month.
Although Buterin concurred with Andreessen’s outlook, he expanded the scope of the discussion to how AI is developed and where it is headed. The Ethereum creator noted the existential risk of unchecked AI development, which he warned could lead to the extinction of the human race.
Humans Face Extinction From Unchecked AI Development
Buterin indicated that extreme climate change, nuclear war, or an artificial pandemic would still leave multiple pockets of civilization intact from which recovery could begin. In contrast, a superintelligent AI turned against humans would leave no survivors; it would destroy humanity on Earth and would not spare Mars.
Buterin cited a 2022 survey assessing the impact of AI, in which respondents put the chance of AI driving humans to extinction at 5-10%. He urged a security-oriented, open-source approach to formulating guardrails for AI development, as opposed to leaving it to proprietary corporations and venture capital-funded initiatives.
Buterin reiterated his support for coexistence between humans and superintelligence, but said achieving it demands active human intention to steer development toward the desired outcome. A profit-maximizing formula, he warned, is never an automatic guarantee.
Buterin expressed support for technology-driven advancements since they expand human potential, as illustrated by the shift from hand tools to smartphones.
Buterin considered AI deployable for good if it expands humanity’s reach to other planets and stars. While such transformative technology could translate to a brighter future for humanity, he dismissed the notion that the world can simply retain the status quo, warning that greed and disregard for public well-being could make AI catastrophic.
Buterin Cautions Against a Cabal of Technocrats Monopolizing AI Development
Buterin acknowledged that certain types of technology can mitigate the adverse impact of others. However, he cautioned against growing digital authoritarianism and surveillance technology, which can be turned on anyone who defies the government while remaining in the hands of a cabal of technocrats.
Buterin backed delaying highly advanced AI development to avoid the technology being monopolized by a single group. He noted that modern managerial technology already allows OpenAI to serve a customer base of over a hundred million with only around 500 employees, and feared this could translate into a 500-person elite, answering to a five-person board, ruling humanity with an iron fist.
Buterin expressed some sympathy for effective accelerationism, known as the e/acc movement, though he harbors mixed feelings about the integration of AI into military technology.
Buterin observed that enthusiasm for contemporary military technology rests on the assumption that the dominant technological power will be the good guys in most conflicts, for example that the US should be the one to build and control AI-powered military technology.
Erect Guardrails to Save Humanity From AI Threat
Buterin resisted ceding extreme and opaque power to a small contingent of individuals in the hope that they will use it wisely. Instead, he called for nurturing a philosophy of defense, decentralization, differential development, and democracy, which he profiled as “d/acc.”
The d/acc mindset could ultimately appeal to libertarians, pluralists, and altruists while accommodating the perspectives of blockchain advocates as well as solar and lunar punks.
Realizing a defense-favoring world would be a critical milestone, promising greater safety, fewer tragedies, and less economic destruction, and sparing the world the time otherwise wasted on resolving conflict. A defense-favoring world also yields healthier, more open, and more freedom-respecting forms of governance.
Buterin reiterated the need to build and accelerate in a manner that protects humanity. He considers the 21st century pivotal in deciding its fate.
Safeguarding humanity from extinction is a milestone developers should prioritize. Buterin lamented that it could take a secondary spot if developers fail to mount a grand collective effort to establish answers.