Elon Musk and other tech leaders call for pause on 'dangerous' A.I. race


  • Elon Musk and other tech industry figures have called for a halt to the development of artificial intelligence (AI) systems that surpass the capabilities of OpenAI's latest large language model, GPT-4.
  • In an open letter co-signed by Apple co-founder Steve Wozniak, they urged a six-month pause in training such advanced AI, citing potential risks to society.
  • Despite being one of OpenAI's co-founders, Musk has expressed concerns about the organization's direction and believes it has strayed from its original mission.

Elon Musk and numerous other technology leaders have urged AI laboratories to pause the development of systems that can compete with human-level intelligence.

The Future of Life Institute has issued an open letter, co-signed by Elon Musk, Steve Wozniak of Apple, and former presidential candidate Andrew Yang, urging artificial intelligence (AI) labs to stop training models that surpass the capabilities of OpenAI's GPT-4, the latest version of the large language model software.

The letter stated that "Current AI systems are increasingly able to compete with humans in various tasks, and we must contemplate whether we are willing to allow machines to flood our information channels with disinformation and propaganda."

The letter asks: "Is it appropriate to automate all jobs, even the fulfilling ones? Should we create nonhuman minds that could potentially outnumber, outsmart, and replace us? Are we willing to risk losing control of our civilization?"

The letter made it clear that "It is imperative that unelected tech leaders not be given the responsibility of making such choices."

The Future of Life Institute, based in Cambridge, Massachusetts, is a non-profit organization that advocates for the responsible and ethical development of artificial intelligence. It was founded by Max Tegmark, a cosmologist at MIT, and Jaan Tallinn, a co-founder of Skype.

The organization has previously persuaded prominent figures, including Elon Musk and Google's AI laboratory DeepMind, to pledge never to create lethal autonomous weapons systems.

In the letter, the institute appealed to all AI labs to "temporarily halt the training of AI systems that are more advanced than GPT-4 for at least six months."

GPT-4, which was released this month, is considered considerably more sophisticated than its predecessor, GPT-3.

The statement further emphasized that "If an immediate pause is not feasible, governments must intervene and impose a moratorium."

ChatGPT, the AI chatbot, stunned researchers with its capacity to generate human-like responses to user prompts. As of January, it had reached 100 million monthly active users within two months of its launch, making it the fastest-growing consumer application in history.

Trained on enormous amounts of internet data, the technology has been used to produce a wide range of outputs, from legal opinions on court cases to poetry in the style of William Shakespeare.

However, AI ethicists have expressed apprehension about potential misuse of the technology, such as plagiarism and the spread of misinformation.

In the open letter, the technology leaders and academics stated that AI systems with human-competitive intelligence pose "serious risks to society and humanity."

According to the letter, AI research and development should instead be refocused on making existing advanced systems more accurate, interpretable, transparent, safe, aligned, loyal, robust, and trustworthy.

Microsoft has reportedly invested $10 billion in OpenAI and has integrated the startup's GPT natural language processing technology into its Bing search engine to give it conversational capabilities.

Google followed by announcing its own consumer-facing conversational AI product, Bard, to compete with OpenAI's technology.

Musk has previously said he believes AI represents one of the "biggest risks" to humanity.

Musk co-founded OpenAI in 2015, but he resigned from its board in 2018 and no longer holds a financial stake in the organization.

He has since criticized the organization multiple times, saying he is concerned it has strayed from its original mission.

The rapid advancement of AI tools has left regulators scrambling to develop rules governing their use. In a white paper published on Wednesday, the U.K. government assigned responsibility for overseeing the use of AI tools in each sector to that sector's existing regulators, applying existing laws.
