Elon Musk and others call for a freeze on AI because of "risks to civilization."
Introduction
In an open letter highlighting possible hazards to society and mankind, Elon Musk and a group of artificial intelligence specialists and business executives are urging a six-month halt to the development of systems more formidable than OpenAI's recently released GPT-4.
This month, Microsoft-backed OpenAI released the fourth iteration of its GPT (Generative Pre-trained Transformer) AI program, which can engage users in human-like conversation, compose songs, and summarize documents.
The nonprofit Future of Life Institute's letter, signed by more than 1,000 people, called for a freeze on advanced AI until safety criteria were developed, implemented, and independently audited.
- Musk, whose carmaker Tesla uses AI in its Autopilot system, has been vocal about his concerns over AI.
- The letter asked: "Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?" Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable, it said.
- OpenAI did not immediately respond to a request for comment. A Future of Life spokesperson said OpenAI CEO Sam Altman had not signed the letter.
- The letter warned that human-competitive AI systems could cause economic and political disruption, and called on developers to work with policymakers and regulators on governance and regulatory frameworks.
- Co-signatories included Stability AI CEO Emad Mostaque, DeepMind researchers, "godfather of AI" Yoshua Bengio, and AI pioneer Stuart Russell.
According to the EU's transparency register, the Musk Foundation, Silicon Valley Community Foundation, and London-based effective altruism organisation Founders Pledge fund the Future of Life Institute.
- On Monday, Europol joined the chorus of moral and legal concerns over cutting-edge AI like ChatGPT, warning that it might be used in phishing schemes, disinformation campaigns, and crimes.
- The U.K. government concurrently proposed an "adaptable" regulatory framework for AI.
- In a policy paper published Wednesday, the government proposed splitting responsibility for regulating AI among its existing regulators for human rights, health and safety, and competition.
Companies seeking "AI whisperers"
Since its release in late 2022, OpenAI's ChatGPT has prompted rivals to accelerate the development of comparable large language models and spurred companies to incorporate generative AI models into their products.
Last week, OpenAI said that it has teamed up with over a dozen businesses to integrate their services into its chatbot, enabling ChatGPT customers to place grocery orders through Instacart or make travel arrangements through Expedia.
Companies that employ ChatGPT and other AI tools are beginning to post job listings for "prompt engineers," who spend their days persuading the AI to generate better outcomes and assisting businesses in equipping their personnel with the necessary skills.
Albert Phelps, a prompt engineer at Mudano, part of Accenture, based in Leytonstone, UK, likens the role to being an "AI whisperer." Because the job is essentially wordplay, he said, prompt engineers often come from backgrounds in history, philosophy, or English: "You're attempting to condense things into a few words."
Phelps, 29, studied history at the University of Warwick before becoming a risk and regulation consultant for banks. He moved into AI work at Accenture after attending a seminar hosted by the U.K.-funded Alan Turing Institute.
He and his colleagues spend most of the day writing prompts for tools like ChatGPT, which can be saved as presets for users of OpenAI's Playground. A typical day involves writing about five prompts and having around 50 ChatGPT conversations, Phelps said.
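The workflow described above, writing a prompt once and saving it as a reusable preset that end users fill in with their own text, can be sketched in a few lines of Python. The preset names and wording below are hypothetical illustrations, not Mudano's or OpenAI's actual prompts:

```python
# A minimal sketch of prompt "presets": reusable templates a prompt
# engineer writes once, which end users later fill with their own input.
# Preset names and wording here are hypothetical examples.

PRESETS = {
    "summarize": (
        "Summarize the following document in three bullet points, "
        "using plain language:\n\n{user_input}"
    ),
    "risk-review": (
        "You are a banking risk and regulation consultant. List the main "
        "compliance risks in the following text:\n\n{user_input}"
    ),
}

def build_prompt(preset_name: str, user_input: str) -> str:
    """Fill a saved preset with the user's text, ready to send to a model."""
    template = PRESETS[preset_name]
    return template.format(user_input=user_input)

if __name__ == "__main__":
    # The filled-in prompt would then be sent to a chat model's API.
    print(build_prompt("summarize", "GPT-4 was released this month."))
```

The point of the preset is that the carefully worded instruction is written once; users only supply the document or question, and the template does the "whispering."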
The letter proposes "AI summer"
The open letter noted that society has hit pause on other technologies with potentially catastrophic effects, including human cloning, human germline modification, and eugenics.
"Having succeeded in creating powerful AI systems, we can now enjoy an 'AI summer' in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt," the letter said.
"Let's enjoy a long AI summer, not rush unprepared into a fall," it concluded.
Gary Marcus, a professor at New York University who signed the letter, said: "The letter isn't perfect, but the spirit is right: we need to slow down until we better understand the ramifications. The big players are becoming increasingly secretive, which makes it harder for society to defend itself."
Critics accused the letter's signatories of promoting "AI hype," arguing that claims about the technology's current potential are greatly exaggerated.
"These assertions are exaggerated; they're meant to scare people," said Johanna Björklund, an AI researcher and associate professor at Umeå University. "There's no need to pull the handbrake."
Rather than pausing development, she suggested greater transparency requirements for AI researchers: "Be upfront about AI research."