It is worth acting to avert AI risk now
This claim is important because it bears directly on what you should do.
People who believe the Urgent AI Risk Thesis tend to do so because they expect artificial intelligence to pose some kind of substantial risk to humanity at some point in the future (not necessarily soon). They also generally think there are ways to make non-negligible progress in avoiding this risk. Note that this doesn't mean having a good idea of what to do concretely; progress might come from thinking more about what to do. Lastly, the thesis depends on some of these actions being better done now than later.
The goal of this tree is to help you to clarify your opinions on AI risk, and compare them to others’.
Start at the thesis to the immediate right of this box. It is perhaps the most important high-level opinion about AI risk on which people disagree. Ask yourself if you agree with the statement.
Then look at the boxes stemming from the right of the first box (its ‘children’). Throughout the tree, the children are a set of claims that usually support the parent. If you agree with the children, you probably agree with the parent. That means if you disagree with a statement, or are unsure what you think, you can look through the children for any you disagree with. You can then go even deeper, to see the arguments that people usually use to support those claims, and whether you disagree with those too.
Along the way, you can see how public figures feel about the various positions, and find links to learn more about them.
AI will surpass humans, and by default this will destroy humanity or cause comparably large losses, with non-negligible probability
This claim is the focus of much debate about AI safety. It assumes the Strong AI Thesis and then usually depends on another argument for AI producing risk, such as the Values Threat Thesis or the Disruption Risk Thesis.
The envisaged destruction is often on the scale of the destruction of humanity, or the loss of most potential value that the future may hold.
If you agree with this thesis, you think that the future is pretty foreboding, but you don’t necessarily think it is important to do anything. For that, you will need the Urgent AI Risk Thesis.
Eliezer Yudkowsky argues that smarter-than-human AI would, by default, pose an existential threat to humanity.
Stuart Russell says ‘A highly capable decision maker – especially one connected through the Internet to all the world’s information and billions of screens and most of our infrastructure – can have an irreversible impact on humanity.’
Steven Pinker argues that it is too easy to succumb to doomsaying, presumably implying that when we find ourselves expecting doom, we should distrust that expectation accordingly.
There are things we can do to avert risks from AI
Organizations such as MIRI, FHI, CSER, FLI, and GCRI are doing work which they claim helps avert risks from AI.
GiveWell, which investigates the actual cost-effectiveness of interventions, has said something about this?
It is worth investing a lot in any actions that might avert AI risk now
Opponents
This article (http://motherboard.vice.com/en_uk/read/we-need-to-talk-about-how-we-talk-about-artificial-intelligence) argues that there are costs to Musk’s fearmongering, e.g. in terms of lost funding.
At some point, there will be artificial intelligence which is more capable than a human at virtually any task.
This is not a very controversial view, especially among reductionists. A common reason to suppose AI will exist at some point is that if the human brain is a physical machine (as many people believe), there is little reason to suppose that it is optimally designed. Thus it appears to be possible to build a more capable machine along the same lines. Consequently, at least if technological civilization lasts a long time, eventually we should expect to build AI more capable than any human.
Nick Jankel believes creativity is magic that only humans are capable of.
Others believe humans can do hypercomputation and AIs cannot.
David Deutsch says that AGI must be possible.
Strong AI will bring huge losses by default, via having non-human values
The basic idea here is that super-human AI is unlikely to have identical goals to humans, and that highly capable agents with different goals to humans are extremely dangerous.
One assumption is that AI will not tend to have the same goals as humans, even if we would want it to. This is because, for instance, it is hard to make an AI have arbitrarily complex goals, and there are incentives to make AI even without achieving this. Also, we don’t have a good specification of what we care about.
Strong AI will be powerful, and so will get what it wants. It won’t naturally want what we want, and so will care about things other than human values, and this will naturally lead to grief.
Luke Muehlhauser
http://lesswrong.com/lw/aro/what_is_the_best_compact_formalization_of_the/609c
Superintelligence
Strong AI may cause huge social disruption, which might indirectly undermine humanity’s thriving, and ultimately its survival
AI strengthens small groups’ ability to dominate others, leading to authoritarian rule and a failure of humanity to thrive.
If AI technology magnifies the resources of some group, this could lead to inequality, selective perpetuation of undesirable values, or accidental destruction of humanity.
Oren Etzioni says “People who are alarmed are thinking way ahead”
John Bresina says “We’re in control of what we program…I’m not worried about the danger of AI… I don’t think we’re that close at all. We can’t program something that learns like a child learns even – yet. The advances we have are more engineering things. Engineering tools aren’t dangerous. We’re solving engineering problems.”
Sonia Chernova says “We are taking this seriously but, at the same time, we don’t feel there’s any kind of imminent concern right now.”
There exists X such that:
Strong AI is likely, by default, to be created with values unlike those of any human (differing by more than X)
Technology is characteristically designed with human utility in mind, and we should expect this trend to continue. E.g. Oren Etzioni says “The thing I would say is AI will empower us not exterminate us…”
There will be a long period in which things are fixed and worked out, during which we can modify AIs and they don’t yet have control. E.g. possibly Kevin Kelly, in an Edge conversation.
If strong AI has values that differ (by at least X), then huge losses follow by default
Any error in creating values for the AI will lead to it diverging from human values, if it is intended to have human values. Larger errors might lead to a powerful AI with entirely different values.
Steven Pinker believes that one needs a parochial alpha male psychology to accrue resources and cause destruction.
This means that many innocuous or neutral-looking values lead to us being dead, which is very bad for us.
So a small chance of global destruction does not necessarily affect the benefits to AI creators.
In many situations, behavior determines the resources an agent is allocated, and intelligence is better decision making, so a more intelligent agent will tend to acquire more resources.
Intelligence does not imply any particular values
That is, a strong AI could have almost any values. An AI is not, for instance, constrained to have ‘good’ values merely by being sufficiently smart.
Nick Bostrom argues for the thesis. John Danaher critiques it twice. The LessWrong Wiki summarizes it.
Stuart Armstrong? (not famous)
http://lesswrong.com/lw/cej/general_purpose_intelligence_arguing_the/
If some strong AI is created, it will likely persist
For instance, you won’t be able to notice that it is bad and then turn it off or change it.