Tech executives, including Elon Musk, have signed an open letter calling for a temporary “pause” on AI development


Citing “deep hazards to society and mankind,” more than 2,600 IT industry leaders and researchers have signed an open letter calling for a temporary “pause” on further artificial intelligence (AI) development.

Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, and a number of CEOs, CTOs, and researchers in the field of artificial intelligence were among the signatories of the letter, which was published by the US think tank Future of Life Institute (FOLI) on March 22.

The institute expressed fears that “human-competitive intelligence can pose serious hazards to society and mankind” and urged all AI firms to “immediately cease” developing AI systems more powerful than GPT-4 for at least six months. GPT-4, the most recent version of OpenAI’s AI-driven chatbot, was released on March 14. It has so far scored in the 90th percentile on some of the most difficult high school and law exams in the United States, and it is believed to be ten times more sophisticated than the initial release of ChatGPT.

According to FOLI, AI companies are engaged in an “out-of-control race” to create ever more powerful AI, which “no one – not even their inventors – can understand, anticipate, or consistently control.” Among the main worries were whether AI will “automate away” employment prospects and perhaps flood media channels with “propaganda and falsity.” The institute also echoed a recent statement by OpenAI founder Sam Altman suggesting that an independent assessment could be necessary before new AI systems are developed.

In a blog post on February 24, Altman emphasized the importance of preparing for artificial general intelligence (AGI) and artificial superintelligence (ASI). Yet not every AI expert has rushed to sign the petition. In a tweet exchange with Rebooting author Gary Marcus on March 29, SingularityNET CEO Ben Goertzel argued that large language models (LLMs) are a type of AI that will not progress into AGIs, which remain at an early stage of development.

Instead, he advocated slowing down research and development in areas like bioweapons and nuclear weapons. Separately, Galaxy Digital CEO Mike Novogratz recently told investors he was surprised by the amount of regulatory attention given to cryptocurrencies compared with artificial intelligence. FOLI added that if a moratorium on AI development is not implemented quickly, governments should step in and impose one.

Stay with us for more daily crypto news, and share your thoughts on this story on our social channels below.

BarmySpace | Twitter | Telegram Channel | Group Chat | Youtube | Tiktok

Barmy VN Channel | Barmy VN Community