What's the risk of AI causing human extinction?

Marc Andreessen

General Partner, Andreessen Horowitz

Estimate: Very unlikely
Editor's Estimate

AI doesn’t want, it doesn’t have goals, it doesn’t want to kill you, because it’s not alive

Substack – Jun 6, 2023

Mustafa Suleyman

Cofounder, DeepMind

Estimate: Very unlikely
Editor's Estimate

I just think that the existential-risk stuff has been a completely bonkers distraction. There’s like 101 more practical issues that we should all be talking about, from privacy to bias to facial recognition to online moderation.

Technology Review – Jul 13, 2023

Yann LeCun

Chief AI Scientist, Meta

Estimate: Very unlikely
Related Statement

My estimate [of AI risk without strong regulation] is: 'considerably less than most other potential causes of human extinction' Because we have agency in this.

X – Oct 31, 2023

Margrethe Vestager

Competition Commissioner, EU

Estimate: Very unlikely
Editor's Estimate

Probably [the risk of extinction] may exist, but I think the likelihood is quite small. I think the AI risks are more that people will be discriminated [against], they will not be seen as who they are

The Guardian – Jun 13, 2023

Sam Altman

CEO, OpenAI

Estimate: Very unlikely
Editor's Estimate

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

Center for AI Safety – May 30, 2023

Ursula von der Leyen

President, European Commission

Estimate: Very unlikely
Editor's Estimate

[AI] will improve healthcare, boost productivity, address climate change. But we also should not underestimate the very real threats. Hundreds of leading AI developers, academics and experts warned recently in the following words [reads CAIS AI Safety statement]

European Parliament – Sep 12, 2023

Expert Survey on Progress in AI

Survey, AI Impacts

Estimate: Very unlikely
Clearly Stated

[Median respondent of 1,321 AI researchers gave a 5% chance of] 'Extremely bad (e.g. human extinction)' [in regard to AI]

AI Impacts – Jan 4, 2024

Mark Zuckerberg

CEO, Meta

Estimate: Very unlikely
Editor's Estimate

In terms of all of the concerns around the more existential risks, I don't think that anything at the level of what we or others in the field are working on in the next year is really in the ballpark of those types of risks

Business Insider – Apr 18, 2024

Eric Schmidt

Former CEO, Google

Estimate: Little chance
Editor's Estimate

My concern with AI is actually existential, and existential risk is defined as many, many, many, many people harmed or killed

Business Insider – Mar 25, 2023

Elon Musk

CEO, Tesla

Estimate: Little chance
Editor's Estimate

There is some chance, above zero, that AI will kill us all.

Washington Post – Nov 1, 2023

Sundar Pichai

CEO, Google

Estimate: Little chance
Editor's Estimate

It can be very harmful if deployed wrongly and we don't have all the answers there yet and the technology is moving fast. So does that keep me up at night? Absolutely

60 Minutes – May 17, 2024

Geoffrey Hinton

Laureate, Turing Award

Estimate: Little chance
Editor's Estimate

So there's what I call the existential threat which is about whether [AI] will wipe out humanity. That's definitely a threat to humanity's existence.

CNN – May 29, 2023

Paul Christiano

Head of AI Safety, US AI Safety Institute

Estimate: Little chance
Clearly Stated

I think maybe there's something like a 10-20% chance of AI takeover, [with] many [or] most humans dead

Bankless Podcast – Apr 22, 2023

Yoshua Bengio

Laureate, Turing Award

Estimate: Unlikely
Editor's Estimate

Even if we manage to significantly reduce the probability of a rogue AI emerging, the tiniest probability of a major catastrophe—such as a nuclear war, the launch of highly potent bioweapons, or human extinction—is still unacceptable.

Journal of Democracy – Sep 14, 2023

Anthropic

Company, AI

Estimate: Unlikely
Editor's Estimate

As AI models become more capable, we believe that they will create major economic and social value, but will also present increasingly severe risks. ... [This document] focuses on catastrophic risks – those where an AI model directly causes large scale devastation.

Responsible Scaling Policy – Sep 18, 2023

Lina Khan

Chair, Federal Trade Commission

Estimate: Unlikely
Clearly Stated

Ah, I have to stay an optimist on this one. So I’m going to hedge on the side of lower risk there ... Maybe, like, 15%.

Hard Fork – Nov 10, 2023

Dario Amodei

CEO, Anthropic

Estimate: Unlikely
Clearly Stated

My chance that something goes really quite catastrophically wrong on the scale of human civilisation might be somewhere between 10 per cent and 25 per cent.

Logan Bartlett Show – Oct 5, 2023

Demis Hassabis

CEO, Google DeepMind

Estimate: Unlikely
Editor's Estimate

When it comes to very powerful technologies—and obviously AI is going to be one of the most powerful ever—we need to be careful

Time – Jan 12, 2023

Metaculus

Company, Metaculus

Estimate: Probably not
Editor's Estimate

Will there be a positive transition to a world with radically smarter-than-human artificial intelligence? [Median forecaster response 47%]

Metaculus – May 14, 2024

Jan Leike

Former Superalignment Lead, OpenAI

Estimate: About even
Related Statement

[In response to a range of 10%-90%] That’s probably the range I would give too.

80,000 Hours – Aug 6, 2023

Nick Bostrom

Philosopher, University of Oxford

Estimate: About even
Related Statement

Bigger than 5% and lower than 95%

Closer to Truth – Apr 3, 2024

Eliezer Yudkowsky

Decision Theorist, MIRI

Estimate: Almost certain
Related Statement

If we actually try to [build Artificial General Intelligence] in real life, we are all going to die

TED – Jul 10, 2023

Will AI automate most human labour?

Yann LeCun

Chief AI Scientist, Meta

Estimate: Very unlikely
Related Statement

This is not going to put a lot of people out of work permanently

BBC – Jun 14, 2023

Marc Andreessen

General Partner, Andreessen Horowitz

Estimate: Little chance
Editor's Estimate

AI, if allowed to develop and proliferate throughout the economy, may cause the most dramatic and sustained economic boom of all time, with correspondingly record job and wage growth

a16z – Jun 5, 2023

Rishi Sunak

Prime Minister, UK

Estimate: Unlikely
Editor's Estimate

We should look at AI much more as a co-pilot than something which is necessarily going to replace someone's job. AI is a tool that can help almost anybody do their jobs better, faster, quicker.

BBC News – Nov 2, 2023

Satya Nadella

CEO, Microsoft

Estimate: Unlikely
Editor's Estimate

I think there will be new job creation, new skills picked up, and yes, there will be overall displacement in the labour market, which I think will be much more dynamic than we give labour markets credit for

CNBC – Jan 24, 2024

Sundar Pichai

CEO, Google

Estimate: Unlikely
Editor's Estimate

worker displacement from automation has historically been offset by creation of new jobs, and the emergence of new occupations following technological innovations accounts for the vast majority of long-run employment growth

Dice – May 17, 2023

EU

Government, EU

Estimate: Unlikely
Editor's Estimate

A first analysis on Automation risk in the EU labour market, which uses data on skill needs and tasks of jobs from Cedefop’s 1st European skills and jobs survey, shows that about 14% of EU jobs face a risk of displacement by computer algorithms.

Cedefop – May 18, 2024

Geoffrey Hinton

Laureate, Turing Award

Estimate: Probably not
Editor's Estimate

The jobs that are going to survive AI for a long time are jobs where you have to be very adaptable and physically skilled, and plumbing’s that kind of a job because manual dexterity is hard for a machine to replicate

Collision 2023 – Jun 29, 2023

Sam Altman

CEO, OpenAI

Estimate: Probably not
Editor's Estimate

we expect these systems will be able to do all of some of today's jobs and aren't trying to hide the ball on that. confident [sic] we will find new and much better jobs when that happens

X – Sep 29, 2023

Bill Gates

Cofounder, Microsoft

Estimate: Probably not
Editor's Estimate

If you eventually get a society where you only have to work three days a week, that's probably OK

Business Insider – Nov 22, 2023

Mark Zuckerberg

CEO, Meta

Estimate: Probably not
Editor's Estimate

Over the long term, I'm actually quite bullish that all these tools will give more people the potential to kind of do what they care about

Morning Brew Daily – Feb 16, 2024

Metaculus

Company, Metaculus

Estimate: Probably not
Editor's Estimate

If human-level artificial intelligence is developed, will World GDP grow by at least 30.0% in any of the subsequent 15 years? [Median forecaster response 56%]

Metaculus – May 14, 2024

US

Government, US

Estimate: Maybe
Editor's Estimate

AI is changing America’s jobs and workplaces, offering both the promise of improved productivity but also the dangers of increased workplace surveillance, bias, and job displacement.

Executive Order – Oct 30, 2023

Mustafa Suleyman

Cofounder, DeepMind

Estimate: Maybe
Editor's Estimate

In the long term...we have to think very hard about how we integrate these tools, because left completely to the market and to their own devices, these are fundamentally labor replacing tools

Fortune – Jan 17, 2024

Ilya Sutskever

Cofounder, OpenAI

Estimate: About even
Editor's Estimate

Given the picture as we see it now, it’s conceivable that within the next ten years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations

OpenAI – May 21, 2023

Demis Hassabis

CEO, Google DeepMind

Estimate: About even
Editor's Estimate

And then I think a lot of jobs that are maybe somewhat undervalued today — manual jobs, manual labor jobs ... or caring jobs ... I think, are going to be much more valued in future.

NYT – Feb 23, 2024

Eric Schmidt

Former CEO, Google

Estimate: About even
Editor's Estimate

it'll be basically text to action. You'll have an idea, and you'll say I want a 'this' and the system will show you the recipe or organize the events ... The systems will be smart enough to be able to communicate, send emails, make phone calls and so forth.

Morningstar – Feb 23, 2024

Dario Amodei

CEO, Anthropic

Estimate: Better than even
Editor's Estimate

first [AI systems] speed up the productivity of humans, then they equal the productivity of humans, and then in some meaningful sense are the main contributor to scientific progress that happens at some point.

Dwarkesh Podcast – Aug 7, 2023

Paul Christiano

Head of AI Safety, US AI Safety Institute

Estimate: Probably
Related Statement

I think if you have a thing which works that’s incredibly labor intensive, I’m fairly optimistic about our ability to automate it.

Dwarkesh Podcast – Oct 31, 2023

Nick Bostrom

Philosopher, University of Oxford

Estimate: Likely
Editor's Estimate

conditional on [AGI] ... all work [will be automated] with the exception of work where there is a specific demand that it be performed by human [sic] or where the consumer cares about the process... [though there are] an avalanche of considerations ... for each of these ... questions.

Closer To Truth Chats – Apr 3, 2024

Elon Musk

CEO, Tesla

Estimate: Highly likely
Related Statement

Well I think we are seeing the most disruptive force in history here. ... there will come a point where no job is needed. You can have a job for personal satisfaction, but the AI will be able to do everything.

Rishi Sunak and Elon Musk interview – Nov 3, 2023

Expert Survey on Progress in AI

Survey, AI Impacts

Estimate: Highly likely
Clearly Stated

[2,488 AI journal authors were split into 2 groups and asked about AI capabilities] The aggregate 2023 forecast predicted a 50% chance of [Human Level Machine Intelligence] by 2047 ... The aggregate 2023 forecast predicted a 50% chance of [Full Automation Of Labour] by 2116

AI Impacts – Jan 4, 2024

Eliezer Yudkowsky

Decision Theorist, MIRI

Estimate: Almost certain
Related Statement

AGI will not be upper-bounded by human ability or human learning speed.

LessWrong – Jun 4, 2022

Will open sourcing advanced AI models be good for humanity?

Eliezer Yudkowsky

Decision Theorist, MIRI

Estimate: Very unlikely
Related Statement

the problem is that demon summoning is easy and angel summoning is much harder. Open sourcing all the demon summoning circles is not the correct solution.

Bankless Podcast – Feb 20, 2023

Holden Karnofsky

Visiting scholar, Carnegie Endowment for International Peace

Estimate: Little chance
Editor's Estimate

I think the things we’re building could be very dangerous at some point, and I think that point can come a lot more quickly than anyone is expecting. I think when that point comes, some of the open source stuff we have could be used by bad actors in conjunction with later insights to create very powerful AI systems in ways we aren’t thinking of right now, but we won’t be able to take back later.

80,000 Hours – Jul 30, 2023

Yoshua Bengio

Laureate, Turing Award

Estimate: Little chance
Related Statement

Open source is great for scientific progress. But if nuclear bombs were software, would you allow open-source nuclear bombs?

BankInfoSecurity – Jul 25, 2023

Ilya Sutskever

Cofounder, OpenAI

Estimate: Little chance
Related Statement

it’s going to be completely obvious to everyone that open-sourcing AI is just not wise

The Verge – Mar 15, 2023

Geoffrey Hinton

Laureate, Turing Award

Estimate: Unlikely
Editor's Estimate

As soon as you open source everything people will start doing all sorts of crazy things with it. It would be a very quick way to discover how [AI] can go wrong

Techmonitor – May 27, 2023

Dario Amodei

CEO, Anthropic

Estimate: Unlikely
Related Statement

When the weights for a dual-use foundation model are widely available — such as when they are publicly posted on the Internet — there can be substantial benefits to innovation, but also substantial security risks, such as the removal of safeguards within the model

US Senate Testimony – Jul 24, 2023

US

Government, US

Estimate: About even
Editor's Estimate

When the weights for a dual-use foundation model are widely available ... there can be substantial benefits to innovation, but also substantial security risks, such as the removal of safeguards within the model

Executive Order – Oct 30, 2023

Sam Altman

CEO, OpenAI

Estimate: About even
Editor's Estimate

[Altman] said OpenAI also plans to open-source some additional large-language models developed by his company, though it has yet to decide which ones

Fortune – Feb 13, 2024

Demis Hassabis

CEO, Google DeepMind

Estimate: Better than even
Related Statement

We have a long history of supporting responsible open source & science, which can drive rapid research progress, so we’re proud to release Gemma: a set of lightweight open models

X – Feb 21, 2024

EU

Government, EU

Estimate: Better than even
Editor's Estimate

The providers of general purpose AI models that are released under a free and open source license ... should be subject to exceptions as regards the transparency-related requirements ... unless they can be considered to present a systemic risk

Data Innovation – Mar 4, 2024

Elon Musk

CEO, Tesla

Estimate: Probably
Editor's Estimate

OpenAI was created as an open source (which is why I named it “Open” AI), non-profit company to serve as a counterweight to Google

Twitter – Feb 17, 2023

Mark Zuckerberg

CEO, Meta

Estimate: Highly likely
Related Statement

Our long-term vision is to build general intelligence, open source it responsibly, and make it widely available so everyone can benefit.

Indian Express – Jan 21, 2024

Meta AI

Subdivision, Meta AI

Estimate: Almost certain
Related Statement

We believe an open approach is the right one for the development of today’s AI models, especially those in the generative space where the technology is rapidly advancing.

Facebook – Jul 17, 2023

Yann LeCun

Chief AI Scientist, Meta

Estimate: Almost certain
Related Statement

the future has to be open source, if nothing else, for reasons of cultural diversity, democracy, diversity

Time – Feb 13, 2024

Marc Andreessen

General Partner, Andreessen Horowitz

Estimate: Almost certain
Clearly Stated

We @a16z are 100% pro open source AI

Twitter – Oct 2, 2023

Do humans have a moral duty to build artificial superintelligence?

Eliezer Yudkowsky

Decision Theorist, MIRI

Estimate: Little chance
Related Statement

The moratorium on new large training runs needs to be indefinite and worldwide.

Time – Mar 28, 2023

Paul Christiano

Head of AI Safety, US AI Safety Institute

Estimate: Maybe
Editor's Estimate

My basic view is there's a really plausible world where it's sort of problematic to try and build a bunch of AI systems and use them as tools. And the thing I really want to do in that world is just not try and build a ton of AI systems to make money from them.

Dwarkesh Podcast – Oct 31, 2023

Anthropic

Company, AI

Estimate: About even
Editor's Estimate

If we’re in a “near-pessimistic” scenario, this could instead involve channeling our collective efforts towards AI safety research and halting AI progress in the meantime.

Anthropic – Mar 8, 2023

Holden Karnofsky

Visiting scholar, Carnegie Endowment for International Peace

Estimate: About even
Editor's Estimate

But I think there’s a lot of uncertainty about what superintelligence means and where it could go. And I think you can raise a lot of these concerns without needing to have a settled view there.

80,000 Hours – Jul 30, 2023

David Sacks

Former COO, PayPal

Estimate: About even
Editor's Estimate

I’m all in favor of accelerating technological progress, but there is something unsettling about the way OpenAI explicitly declares its mission to be the creation of AGI.

Twitter – Nov 22, 2023

Mark Zuckerberg

CEO, Meta

Estimate: Better than even
Editor's Estimate

We’ve come to this view that, in order to build the products that we want to build, we need to build for general intelligence

The Verge – Jan 18, 2024

Antony Blinken

Secretary of State, US

Estimate: Better than even
Editor's Estimate

We want America to maintain our scientific and technological edge, because it’s critical to us thriving in the 21st century economy

US State Department – Jul 12, 2021

Sam Altman

CEO, OpenAI

Estimate: Better than even
Clearly Stated

The vision is to make AGI, figure out how to make it safe ... and figure out the benefits.

The Financial Times – Feb 13, 2023

Bill Gates

Cofounder, Microsoft

Estimate: Better than even
Editor's Estimate

Philanthropy is my full-time job these days, and I’ve been thinking a lot about how—in addition to helping people be more productive—AI can reduce some of the world’s worst inequities.

Gates Notes – Mar 21, 2023

Demis Hassabis

CEO, Google DeepMind

Estimate: Probably
Editor's Estimate

We believe the right way to respond to this moment in AI is with cautious optimism—with a firm grasp of the incredible benefits that AI could create, but also a sober understanding of the near and long-term challenges that we need to prepare for.

The Atlantic – May 18, 2024

Yann LeCun

Chief AI Scientist, Meta

Estimate: Probably
Editor's Estimate

(0) there will be superhuman AI in the future (1) they will be under our control (2) they will not dominate us nor kill us (3) they will mediate all of our interactions with the digital world (4) hence, they will need to be open platforms so that everyone can contribute to training and tuning them.

X – Nov 25, 2024

Nick Bostrom

Philosopher, University of Oxford

Estimate: Likely
Editor's Estimate

it would be tragic if we never developed advanced artificial intelligence. I think all the paths to really great futures ultimately lead through the development of machine super-intelligence

UnHerd – Nov 12, 2023

Marc Andreessen

General Partner, Andreessen Horowitz

Estimate: Highly likely
Related Statement

We believe any deceleration of AI will cost lives. Deaths that were preventable by the AI that was prevented from existing is a form of murder

A16Z – Oct 16, 2023

Should there be international regulation of advanced AI?

Marc Andreessen

General Partner, Andreessen Horowitz

Estimate: Very unlikely
Clearly Stated

There should be no regulatory barriers to open source [AI] whatsoever.

a16z – Jun 5, 2023

Yann LeCun

Chief AI Scientist, Meta

Estimate: Very unlikely
Related Statement

asking for regulations because of fear of superhuman intelligence is like asking for regulation of transatlantic flights at near the speed of sound in 1925.

El Pais – Jan 19, 2024

Eric Schmidt

Former CEO, Google

Estimate: Probably
Related Statement

We believe the right approach here is to take inspiration from the Intergovernmental Panel on Climate Change (IPCC)

Financial Times – Oct 18, 2023

Gary Marcus

Professor Emeritus, New York University

Estimate: Probably
Related Statement

The world needs an international agency for artificial intelligence

The Economist – Apr 17, 2023

Sam Altman

CEO, OpenAI

Estimate: Probably
Related Statement

We talk about the IAEA as a model where the world has said 'OK, very dangerous technology, let's all put some guard rails.' And I think we can do both

euronews – Jun 7, 2023

Mustafa Suleyman

Cofounder, DeepMind

Estimate: Probably
Related Statement

We believe the right approach here is to take inspiration from the Intergovernmental Panel on Climate Change

The Financial Times – Oct 18, 2023

Demis Hassabis

CEO, Google DeepMind

Estimate: Probably
Editor's Estimate

I think we have to start with something like the IPCC, where it’s a scientific and research agreement with reports, and then build up from there.

Guardian – Oct 24, 2023

Anthropic

Company, AI

Estimate: Very good chance
Clearly Stated

Governments should fund and participate in the development of rigorous capability and safety evaluations targeted at critical risks from advanced AI, such as deception and autonomy.

Anthropic – Jun 5, 2023

Elon Musk

CEO, Tesla

Estimate: Very good chance
Related Statement

There is a real danger for digital superintelligence having negative consequences ... I am in favour of AI regulation

Holden Karnofsky

Visiting scholar, Carnegie Endowment for International Peace

Estimate: Very good chance
Editor's Estimate

I think even in a world where you have very powerful safe AI systems, you probably still need some kind of regulatory framework for how to use those to use force to stop other systems.

80,000 Hours – Jul 22, 2023

Paul Christiano

Head of AI Safety, US AI Safety Institute

Estimate: Highly likely
Related Statement

Regardless of whether risk mitigation takes the form of responsible scaling policies or something else, I think voluntary action by companies isn’t enough. If the risk is large then the most realistic approach is regulation and eventually international coordination.

Alignment Forum – Oct 23, 2023

Satya Nadella

CEO, Microsoft

Estimate: Highly likely
Clearly Stated

I think [a global regulatory approach to AI is] very desirable, because I think we’re now at this point where these are global challenges that require global norms and global standards

CNBC – Jan 16, 2024

US

Government, US

Estimate: Almost certain
Clearly Stated

The State Department, in collaboration with the Commerce Department, will lead an effort to establish robust international frameworks for harnessing AI’s benefits and managing its risks and ensuring safety.

Executive Order – Oct 30, 2023

Eliezer Yudkowsky

Decision Theorist, MIRI

Estimate: Almost certain
Related Statement

It's international action or bust; you have to stop ASI everywhere.

Twitter – Dec 20, 2023

EU

Government, EU

Estimate: Almost certain
Clearly Stated

In April 2021, the Commission proposed the EU AI Act and a new Coordinated Plan with Member States, to guarantee the safety and fundamental rights of people and businesses, while strengthening investment and innovation across EU countries.

European Commission – Jan 24, 2024

Will AI be used to make Weapons of Mass Destruction (WMDs)?

Yann LeCun

Chief AI Scientist, Meta

Estimate: Very unlikely
Clearly Stated

When you hear one-liners like "what if AI released a virus" your bullshit meter should be screaming.

X – Feb 25, 2024

Marc Andreessen

General Partner, Andreessen Horowitz

Estimate: Probably not
Editor's Estimate

Create a bioweapon? That’s a crime. Commit a terrorist act? That’s a crime. We can simply focus on preventing those crimes when we can, and prosecuting them when we cannot. We don’t even need new laws

a16z – Jun 5, 2023

RAND Corporation

Think Tank, US

Estimate: Probably not
Editor's Estimate

The current generation of large language models (LLMs) ... do not increase the risk of a biological weapons attack by a non-state actor

RAND Corporation – Oct 24, 2024

Mustafa Suleyman

Cofounder, DeepMind

Estimate: Maybe
Editor's Estimate

The darkest scenario is that people will experiment with pathogens, engineered synthetic pathogens that might end up accidentally or intentionally being more transmissible or more lethal

Diary Of A CEO Podcast – Sep 3, 2023

Eric Schmidt

Former CEO, Google

Estimate: About even
Editor's Estimate

It’s going to be possible for bad actors to take the large databases of how biology works and use it to generate things which hurt human beings

Air and Space Forces Magazine – Sep 11, 2022

Dario Amodei

CEO, Anthropic

Estimate: About even
Related Statement

AI systems may become much better at science and engineering, to the point where they could be misused to cause large-scale destruction, particularly in the domain of biology

US Senate Hearing – Jul 24, 2023

EU

Government, EU

Estimate: Better than even
Related Statement

international approaches have so far identified the need to pay attention to risks from potential intentional misuse or unintended issues of control relating to alignment with human intent; chemical, biological, radiological, and nuclear risks

European Parliament – Apr 18, 2024

Demis Hassabis

CEO, Google DeepMind

Estimate: Better than even
Related Statement

Hassabis, the British chief executive of Google’s AI unit, said the world must act immediately in tackling the technology’s dangers, which included aiding the creation of bioweapons

Guardian – Oct 23, 2023

Rishi Sunak

Prime Minister, UK

Estimate: Better than even
Related Statement

Get this wrong and it could make it easier to build chemical or biological weapons.

Rishi Sunak warns AI could be used by terrorists to build chemical and biological weapons – Oct 25, 2023

OpenAI

Company, OpenAI

Estimate: Better than even
Related Statement

One potentially harmful use, highlighted by researchers and policymakers, is the ability for AI systems to assist malicious actors in creating biological threats

OpenAI – Jan 31, 2024

Expert Survey on Progress in AI

Survey, AI Impacts

Estimate: Likely
Related Statement

[1,345 AI researchers were asked about the following concern:] AI lets dangerous groups make powerful tools (e.g. engineered viruses). [About 75% reported "substantial concern" or "extreme concern", with the other 25% reporting "a little concern" or "no concern"]

AI Impacts – Jan 4, 2024

US

Government, US

Estimate: Very good chance
Related Statement

The term “dual-use foundation model” means an AI model ... that exhibits ... high levels of performance at tasks that pose a serious risk to security ... such as by ... substantially lowering the barrier of entry for non-experts to design, synthesize, acquire, or use chemical, biological, radiological, or nuclear (CBRN) weapons

Executive Order – Oct 30, 2023

Eliezer Yudkowsky

Decision Theorist, MIRI

Estimate: Almost certain
Related Statement

Losing a conflict with a high-powered cognitive system looks at least as deadly as 'everybody on the face of the Earth suddenly falls over dead within the same second'

LessWrong – Jun 4, 2022

Should Governments regulate deepfakes?

Yann LeCun

Chief AI Scientist, Meta

Estimate: Little chance
Editor's Estimate

Remember 4 years ago how LLMs & Deep Fakes were going to destroy society? They are now widely available, both as a service and as open source software. The doomsday scenarios have not happened.

Twitter – Nov 19, 2022

Marc Andreessen

General Partner, Andreessen Horowitz

Estimate: Maybe
Editor's Estimate

What you have to do instead is basically have a system in which real people can certify that content about them is real

YouTube – Apr 1, 2024

Bill Gates

Cofounder, Microsoft

Estimate: About even
Editor's Estimate

The law needs to be clear about which uses of deepfakes are legal and about how deepfakes should be labeled so everyone understands when something they’re seeing or hearing is not genuine

Gates Notes – Jul 10, 2023

Mustafa Suleyman

Cofounder, DeepMind

Estimate: Better than even
Editor's Estimate

Legislating AI-driven electioneering would be one concrete step towards ameliorating the spiraling political consequences of the coming wave. And it shouldn’t be the last.

Fortune – Sep 4, 2023

Geoffrey Hinton

Laureate, Turing Award

Estimate: Probably
Editor's Estimate

I think all governments should insist that all fake images be flagged.

El Pais – May 12, 2023

Stuart Russell

Professor, University of California, Berkeley

Estimate: Probably
Editor's Estimate

I think if [deepfakes are] not regulated, we are in for a huge amount of pain.

Vox – Sep 20, 2023

Joe Biden

President, US

Estimate: Very good chance
Editor's Estimate

Ban AI voice impersonations and more

US State of the Union – Mar 8, 2024

Will scaling up contemporary AI models lead to superintelligence?

Sam Altman

CEO, OpenAI

Estimate: Unlikely
Related Statement

I think we need another breakthrough [...] I don't think [scaling] will do something that I view as critical to a general intelligence

the decoder – Nov 17, 2023

Gary Marcus

Professor Emeritus, New York University

Estimate: Probably not
Editor's Estimate

I am still a skeptic who thinks that large language models are shallow, and not close to AGI. But they can still do real damage

Twitter – Mar 29, 2023

Demis Hassabis

CEO, Google DeepMind

Estimate: Maybe
Editor's Estimate

We’ve got to carry on improving the large models ... That’s clearly a necessary, but probably insufficient component of an AGI system.

Dwarkesh Podcast – Feb 28, 2024

Stuart Russell

Professor, University of California, Berkeley
Estimate: Maybe
Related Statement

I'm in agreement with Demis Hassabis, who recently gave a talk where he said he thinks we still need one or two major breakthroughs before we have the kinds of capabilities that would be a big flashing red light for the human race.

Berkeley News – Apr 8, 2024

Dario Amodei

CEO, Anthropic

Estimate: About even
Editors Estimate

I think those long years of scaling experience have taught me to be very skeptical, but also skeptical of the claim that an LLM can’t do anything

TechCrunch – Sep 21, 2023

Ilya Sutskever

Cofounder, OpenAI

Estimate: Highly likely
Related Statement

I don't think that there is only one path to AGI. The LLM path might be not the most efficient one, but it will get the job done.

Dwarkesh Podcast – Mar 27, 2023

How much should governments regulate advanced AI models?

Yann LeCun

Chief AI Scientist, Facebook

Estimate: Very unlikely
Clearly Stated

Regulating research and development in AI is incredibly counterproductive

Financial Times – Oct 18, 2023

Marc Andreessen

General Partner, Andreessen Horowitz

Estimate: Very unlikely
Related Statement

'Regulation' of AI (math) is the foundation of a new totalitarianism.

Twitter – Dec 16, 2023

Demis Hassabis

CEO, Google DeepMind

Estimate: Probably not
Editors Estimate

Getting this right will take a collective effort from governments, industry and civil society to inform and develop robust safety tests and evaluations

UK Government – Nov 2, 2023

Dario Amodei

CEO, Anthropic

Estimate: About even
Editors Estimate

While AI promises significant societal benefits, it also poses a range of potential harms. Critical to managing these risks is government capacity to measure and monitor the capability and safety characteristics of AI models

UK Government – Nov 2, 2023

Helen Toner

Former Board Member, OpenAI

Estimate: Likely
Related Statement

Compute usage is relatively straightforward to define, measure, and verify, making it an attractive way to target regulation

CSET – Oct 6, 2023

Sam Altman

CEO, OpenAI

Estimate: Very good chance
Clearly Stated

The U.S. government should consider a combination of licensing or registration requirements for development and release of AI models above a crucial threshold of capabilities

Testimony before the US Senate Committee – Apr 1, 2023

Joe Biden

President, US

Estimate: Almost certain
Clearly Stated

A model shall be considered to have potential capabilities that could be used in malicious cyber-enabled activity if it requires a quantity of computing power greater than 10^26 integer or floating-point operations

The White House – Oct 30, 2023

EU

Government, EU

Estimate: Almost certain
Related Statement

The initial FLOPs threshold for this has been set as 10^25

EU – Jan 26, 2024
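To make the two regulatory thresholds above concrete, here is a quick back-of-the-envelope sketch. It assumes the widely used "6 × parameters × tokens" approximation for transformer training compute from the scaling-laws literature; the exact compute accounting a regulator would apply may differ.

```python
# Regulatory compute thresholds quoted above:
# 10^26 FLOPs in the US executive order, 10^25 FLOPs in the EU AI Act.
US_THRESHOLD_FLOPS = 1e26
EU_THRESHOLD_FLOPS = 1e25


def training_flops(parameters: float, tokens: float) -> float:
    """Approximate total training compute as 6 * N * D FLOPs."""
    return 6 * parameters * tokens


def thresholds_crossed(parameters: float, tokens: float) -> dict:
    """Report whether an estimated training run exceeds each threshold."""
    flops = training_flops(parameters, tokens)
    return {
        "flops": flops,
        "eu_covered": flops > EU_THRESHOLD_FLOPS,
        "us_covered": flops > US_THRESHOLD_FLOPS,
    }


# Illustrative example: a 70-billion-parameter model trained on 15e12 tokens
# comes out to 6.3e24 FLOPs, under both thresholds.
print(thresholds_crossed(70e9, 15e12))
```

Under this approximation, the EU's 10^25 threshold sits roughly an order of magnitude below the US's 10^26, so it would cover correspondingly smaller training runs.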

Will humanity lose control of AI?

Yann LeCun

Chief AI Scientist, Facebook

Estimate: Very unlikely
Related Statement

These objective-driven architectures will be safe and will remain under our control because *we* set their objectives and guardrails and they can't deviate from them

Twitter – Oct 28, 2023

Marc Andreessen

General Partner, Andreessen Horowitz

Estimate: Little chance
Editors Estimate

The idea that it will at some point develop a mind of its own and decide that it has motivations that lead it to try to kill us is a superstitious handwave

A16Z – Jun 6, 2023

Pedro Domingos

Professor Emeritus, University of Washington

Estimate: Unlikely
Related Statement

Solving AI problems is exponentially hard, but checking the solutions is easy. Therefore powerful AI does not imply loss of control by us humans.

Forbes – Apr 12, 2023

Anthropic

Company, AI

Estimate: Probably not
Clearly Stated

We found that, despite our best efforts at alignment training, deception still slipped through.

Twitter – Jan 12, 2024

Stuart Russell

Professor, University of California, Berkeley
Estimate: Maybe
Clearly Stated

The idea is that, in order to show that the systems will not cross those red lines, the companies will have to be able to understand, predict and control the AI systems that they build, and at the moment they are not close to being able to do that.

Berkeley News – Apr 8, 2024

Nick Bostrom

Philosopher, University of Oxford

Estimate: About even
Clearly Stated

I argued in the book that this is a non-trivial problem, and in fact there are proposals on how to solve this control problem that look plausible at first sight but upon closer examination, turn out not to work

Forbes – Jun 26, 2016

Eliezer Yudkowsky

Decision Theorist, MIRI

Estimate: Highly likely
Related Statement

When I say that alignment is difficult, I mean that in practice, using the techniques we actually have, "please don't disassemble literally everyone with probability roughly 1" is an overly large ask that we are not on course to get.

LessWrong – Mar 5, 2024