Pause Giant AI Experiments: An Open Letter is the title of a letter published by the Future of Life Institute in March 2023. The letter calls on "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4", citing risks such as AI-generated propaganda, extreme automation of jobs, human obsolescence, and a society-wide loss of control. It received more than 30,000 signatures, including academic AI researchers and prominent industry figures such as Yoshua Bengio, Stuart Russell, Elon Musk, Steve Wozniak and Yuval Noah Harari.
Motivations
The letter was published a week after the release of OpenAI's large language model GPT-4. It asserts that current large language models are "becoming human-competitive at general tasks", referencing a paper on early experiments with GPT-4 that described the model as showing "sparks of AGI". The letter describes advanced AI as posing numerous serious risks, especially amid race-to-the-bottom dynamics in which some AI labs may be incentivized to overlook safety in order to deploy products more quickly.
It calls for AI research to be refocused on making powerful AI systems "more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal". The letter also recommends more governmental regulation and independent audits before training AI systems, as well as "tracking highly capable AI systems and large pools of computational capability" and "robust public funding for technical AI safety research". FLI suggests using the "amount of computation that goes into a training run" as a proxy for how powerful an AI system is, and thus as a possible threshold.
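To illustrate how training compute could serve as such a threshold, the sketch below (in Python) estimates a run's total compute using the common rule of thumb of roughly 6 floating-point operations per parameter per training token and compares it against a cutoff. The threshold value and the model figures are illustrative assumptions only; they do not come from the letter or from FLI.

HYPOTHETICAL_FLOP_THRESHOLD = 1e25  # assumed cutoff for illustration; not a value from the letter

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    # Rule-of-thumb estimate for dense transformer training:
    # roughly 6 floating-point operations per parameter per token.
    return 6 * n_parameters * n_training_tokens

def exceeds_threshold(n_parameters: float, n_training_tokens: float) -> bool:
    # True if the estimated training run crosses the assumed compute cutoff.
    return estimated_training_flops(n_parameters, n_training_tokens) >= HYPOTHETICAL_FLOP_THRESHOLD

if __name__ == "__main__":
    # Hypothetical example: a 70-billion-parameter model trained on 2 trillion tokens.
    params, tokens = 70e9, 2e12
    print(f"Estimated training compute: {estimated_training_flops(params, tokens):.2e} FLOPs")
    print("Exceeds assumed threshold:", exceeds_threshold(params, tokens))

Under this approach, a regulator would not need to evaluate a model's capabilities directly; the size of the training run itself triggers oversight, which is what makes compute attractive as a measurable proxy.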
Reception
The letter received widespread coverage, with support from a range of high-profile figures. As of July 2024, no pause has taken place; instead, as FLI noted on the letter's one-year anniversary, AI companies have directed "vast investments in infrastructure to train ever-more giant AI systems". However, the letter was credited with generating a "renewed urgency within governments to work out what to do about the rapid progress of AI" and with reflecting the public's increasing concern about the risks posed by AI.
Eliezer Yudkowsky wrote that the letter "doesn't go far enough" and argued that it should call for an indefinite pause. He fears that solving the alignment problem might take several decades and that a sufficiently intelligent misaligned AI could cause human extinction.
Some IEEE members have given various reasons for signing the letter, such as that "There are too many ways these systems could be abused. They are being freely distributed, and there is no review or regulation in place to prevent harm." One AI ethicist argued that the letter raises awareness of issues such as voice cloning, but contended that it was unactionable and unenforceable.
The letter has been criticized for diverting attention from more immediate societal risks such as algorithmic bias. Timnit Gebru and others argued that the letter was sensationalist and amplified "some futuristic, dystopian sci-fi scenario" instead of the problems AI causes today.
Microsoft co-founder Bill Gates chose not to sign the letter, stating that he does not think "asking one particular group to pause solves the challenges". Sam Altman, CEO of OpenAI, commented that the letter was "missing most technical nuance about where we need the pause" and stated, "An earlier version of the letter claimed OpenAI is training GPT-5 right now. We are not and won't for some time." Reid Hoffman argued the letter was "virtue signalling" with no real impact.
List of notable signatories
Listed below are some notable signatories of the letter.
Yoshua Bengio (Canadian AI researcher, scientific director of the Montreal Institute for Learning Algorithms, and Turing Award recipient)
Stuart Russell (British computer scientist, author of Artificial Intelligence: A Modern Approach)
Elon Musk (businessman and investor, CEO of SpaceX and Tesla, owner of X Corp)
Steve Wozniak (American technology entrepreneur, co-founder of Apple)
Yuval Noah Harari (Israeli historian and philosopher, author of popular science bestseller Sapiens: A Brief History of Humankind)
Emad Mostaque (CEO of Stability AI)
Andrew Yang (American businessman and politician)
Lawrence Krauss (Canadian-American theoretical physicist and author)
John Hopfield (American scientist known for inventing associative neural networks)
Jaan Tallinn (Estonian billionaire and computer programmer, co-creator of Skype and co-founder of the Future of Life Institute)
Ian Hogarth (British investor and entrepreneur, Chair of the UK Government's AI Foundation Model Taskforce)
Evan Sharp (American internet entrepreneur and co-founder of Pinterest)
Gary Marcus (professor emeritus of psychology and neural science at New York University)
Chris Larsen (American entrepreneur and investor)
Grady Booch (American software engineer)
Max Tegmark (Swedish-American cosmologist, founder of the Future of Life Institute and author of Life 3.0)
Anthony Aguirre (American cosmologist, co-founder of the Future of Life Institute and prediction platform Metaculus)
Tristan Harris (American technology ethicist and co-founder of the Center for Humane Technology)
Danielle Allen (American political scientist)
Marc Rotenberg (president and founder of the Center for AI and Digital Policy)
Steve Omohundro (American computer scientist, CEO of Beneficial AI Research)
Aza Raskin (co-founder of the Center for Humane Technology)
Huw Price (Australian philosopher, co-founder of the Centre for the Study of Existential Risk)
Jeff Orlowski (American filmmaker, director of Chasing Ice and The Social Dilemma)
Olle Häggström (Swedish mathematician and author of Here Be Dragons, a book discussing the potential dangers of emerging technologies)
Raja Chatila (professor of Robotics, AI and Ethics and former Director of Research at the French National Centre for Scientific Research)
Moshe Vardi (Israeli mathematician and computer scientist)
Adam D. Smith (computer scientist at Boston University)
Daron Acemoglu (Turkish economist and professor at MIT)
Christof Koch (German neurophysiologist and computational neuroscientist)
George Dyson (American author and historian of technology)
Gillian Hadfield (legal scholar and former Senior Policy Adviser to OpenAI)
Erik Hoel (American neuroscientist, neurophilosopher, and writer)
Bart Selman (Dutch professor of computer science, co-founder of the Center for Human-Compatible Artificial Intelligence)
Tom Gruber (computer scientist and co-founder of Siri Inc.)
Robert Brandenberger (Swiss-Canadian theoretical cosmologist)
Michael Wellman (American computer scientist and fellow of the Association for the Advancement of Artificial Intelligence)
Berndt Müller (German nuclear physicist)
Alan Mackworth (Canadian AI researcher and former president of the Association for the Advancement of Artificial Intelligence)
Connor Leahy (AI safety researcher and CEO of Conjecture)
See also
Open letter on artificial intelligence (2015)
Statement on AI risk of extinction
AI takeover
Existential risk from artificial general intelligence
Regulation of artificial intelligence
PauseAI
References
External links
Official website
FAQ
Policymaking in the Pause