
Metaverse: A World in a Word.


Photo credit: Cottonbro Studio


Motto: “Without proper moral and intellectual underpinnings, machines could control rather than amplify our humanity and trap us forever.” – Henry Kissinger


The term “metaverse” was coined in “Snow Crash”, a 1992 science-fiction novel by the American writer Neal Stephenson. It describes a virtual world that exists only on the internet, parallel to the physical one – an extension of human behavior where people meet and interact via avatars. “Avatar” comes from the Sanskrit word “avatara”, meaning “an incarnation, embodiment, or manifestation of a person or idea” – as in the movie “Avatar” by the Canadian filmmaker James Cameron. The metaverse is an immersive, persistent, 3D virtual space where humans can experience life in ways they could not in the real world – for instance, by virtually witnessing a historical event instead of reading about it. Mark Zuckerberg, co-founder of Facebook, considers that “Metaverse is not a thing a company builds. It’s the next chapter of the internet overall”.


Beyond movies, my first direct contact with the metaverse was in 2019, at the Peres Center for Peace and Innovation in Tel Aviv, where a headset immersed me in the virtual reality of a cockpit to perform an aircraft landing. Today, the metaverse has over 400 million monthly active users, its global market value is 82 billion USD, and Goldman Sachs has forecast a potential of 8 trillion USD. The concept has expanded to include technologies such as Web 3.0 (semantic web content that can be understood by machines), blockchains (digital, non-modifiable ledgers for storing data), Augmented Reality/Virtual Reality (real environments enhanced with virtual elements), and Artificial Intelligence and Machine Learning (machines that learn directly from data, without human intervention). Integrated together, these technologies can make the metaverse an extremely powerful virtual environment, with almost limitless opportunities and risks – from stimulating business, education and healthcare, to manipulating electoral processes, enabling discrimination and facilitating cybercrime. The metaverse mirrors real life: in December 2021, Barbados announced that it would open a virtual embassy in Decentraland, a metaverse world, while Tuvalu, a small island state in the South Pacific, is turning to the metaverse to become the world’s first digital country.
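
To make the “non-modifiable ledger” idea concrete, here is a minimal, illustrative Python sketch of a hash-chained ledger. It is a conceptual example only, not the code of any real blockchain or metaverse platform, and the function names (add_block, is_valid) are hypothetical: each block commits to the hash of the previous one, so any retroactive edit is detectable.

import hashlib
import json
import time

def block_hash(block: dict) -> str:
    # Hash a block's contents deterministically with SHA-256.
    payload = json.dumps(block, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def add_block(chain: list, data: str) -> None:
    # Append a new block that commits to the hash of the previous block.
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "timestamp": time.time(),
             "data": data, "prev_hash": prev}
    block["hash"] = block_hash(block)
    chain.append(block)

def is_valid(chain: list) -> bool:
    # Re-check every link; any edit to past data breaks the chain.
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != block_hash(body):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

ledger: list = []
add_block(ledger, "Barbados opens a virtual embassy")   # hypothetical entries
add_block(ledger, "Land parcel sold in a virtual world")
print(is_valid(ledger))          # True
ledger[0]["data"] = "tampered"   # a retroactive edit...
print(is_valid(ledger))          # ...is immediately detected: False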


A report on “Risks and Opportunities of the Metaverse” is under preparation at the Council of Europe (CoE). In a declassified draft, the rapporteur, Andi Cristea, a member of the Romanian Parliament, highlights that: “The Metaverse has the potential to expand civil and social rights around the world and can have massive implications for the future of democracy and governance. The risks associated with this technology include privacy concerns, cybersecurity, addiction, loss of freedom, spread of misinformation, decline of public trust in traditional institutions in sectors like education, governance and media. In the case of the metaverse, policymakers need to strike a delicate balance between fostering innovation and ensuring the protection of users and society at large”. The future of the metaverse is intertwined with the development of AI. Since OpenAI released ChatGPT in November 2022 (built on Generative Pre-trained Transformers, deep learning models that create new content based on existing data), generative AI has been on the rise, and the “Holy Grail of AI” is considered to be Artificial General Intelligence (AGI – machines that possess human-like intelligence). Twelve years ago, DeepMind co-founder Shane Legg predicted that AGI would have a 50% chance of becoming reality by 2028. In March 2023, Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, and historian Yuval Noah Harari (author of “Sapiens”) were among the 1,100 personalities who signed an open letter calling for a six-month moratorium on the development of advanced AI systems. The letter compared the risks posed by AI with those of nuclear war and pandemics, and urged technology companies to immediately cease training any AI systems “more powerful than GPT-4”.


In February 2023, former US Secretary of State Henry Kissinger warned: “To the extent that we use our brains less and our machines more, humans may lose some abilities. Our own critical thinking, writing and design abilities may atrophy.” (“ChatGPT Heralds an Intellectual Revolution”, Wall Street Journal). In October 2023, Mr. Kissinger again expressed concerns about generative AI: “Once these machines can communicate with each other, which will certainly happen within five years, then it becomes almost a species problem of whether the human species can retain its individuality in the face of this competition.” He believes that the science-fiction outcome of humans serving machines “can be avoided, but only by understanding the essence of this intelligence, which will also be able to generate its own point of view. AI is not understood yet. We must be thoughtful in what we ask it.” (interview with Welt TV). In November 2023, Yuval Noah Harari told The Guardian: “AI has the potential to create financial devices that only AI can understand. And just imagine the situation where we have a financial system that no human being is able to understand and therefore not able to regulate. And then there is a financial crisis and nobody understands what is happening.”


American political scientist Ian Bremmer and British AI researcher Mustafa Suleyman believe that: “Like past technological waves, AI will pair extraordinary growth and opportunity with immense disruption and risks. But unlike previous waves, it will also initiate a seismic shift in the structure and balance of global power, as it threatens the status of nation-states as the world’s primary geopolitical actors. Like the internet and smartphones, AI will proliferate without respect for borders.” (“The AI Power Paradox”, Foreign Affairs, 16.08.2023). They propose “techno-prudentialism” as a solution to mitigate risks without stifling AI innovation, inspired by the macro-prudential role played by global financial institutions, and suggest that AI governance should rest on five principles: precautionary – do no harm; agile – AI governance should be flexible; inclusive – the institutions that govern AI should bring both technology companies and governments to the table; impermeable – all AI companies must take part; and targeted, rather than one-size-fits-all – because AI affects every sector of the global economy differently.


The unprecedented degree of power and influence of Big Tech has generated growing calls for policymakers around the world to regulate AI. The EU is in the process of creating the AI Act – the world’s first comprehensive set of rules to manage AI risks while enabling innovation. The CoE is drafting the Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law. The G-7 launched the “Hiroshima AI Process”, a forum devoted to harmonizing AI governance. The UN Secretary General announced a Multistakeholder Advisory Body on Artificial Intelligence to evaluate the risks and opportunities of AI. President Joe Biden issued an Executive Order that establishes new standards for AI safety and security and requires AI developers to share the results of their safety tests with the US government. And, on 1-2 November 2023, at Bletchley Park in Buckinghamshire (the secret headquarters of the codebreakers who cracked the Nazi Enigma machine in WW2), the UK convened a historic Artificial Intelligence Safety Summit that brought together around 100 politicians from 28 countries (including the USA, the UK, France, Germany, China, Japan, Kenya, Nigeria, Brazil, India and Saudi Arabia), academics, scientists and tech executives. Among them were US Vice-President Kamala Harris, British Prime Minister Rishi Sunak, UN Secretary General Antonio Guterres, EU Commission President Ursula von der Leyen, CoE Secretary General Marija Pejcinovic Buric, and Elon Musk – who told participants: “For the first time, we have a situation where there’s something that is going to be far smarter than the smartest human. We are seeing the most disruptive force in history here.” At the summit, AI developers agreed to work with governments to test new frontier AI models before they are released, and the Bletchley Declaration recognizes that there is “potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models”.


The poem “Works and Days” by the ancient Greek author Hesiod tells how, when Prometheus stole fire from heaven, Zeus took vengeance by giving Pandora a beautiful box as a gift and asking her never to open it. But Pandora opened the box, and diseases, violence, greed, madness, death and other evils were released forever into the world. The only thing left inside was hope. Pandora’s Box of Artificial Intelligence is now wide open, but hope is still inside. Quoting Henry Kissinger again: “As we become Homo Technicus, we hold an imperative to define the purpose of our species. It is up to us to provide the real answers.”


Dr. Ion I. Jinga

Note: The opinions expressed in this article are the author’s own and do not represent an official position.



 
