• How to use this tree

    The goal of this tree is to help you to clarify your opinions on AI risk, and compare them to others’.

    Start at the thesis to the immediate right of this box. It is perhaps the most important, high-level opinion about AI risk on which people disagree. Ask yourself whether you agree with the statement.

    Then look at the boxes stemming from the right of the first box (its ‘children’). Throughout the tree, the children are a set of claims that usually support the parent. If you agree with the children, you probably agree with the parent. That means if you disagree with a statement, or are unsure what you think, you can look through the children for any you disagree with. You can then go even deeper, to see the arguments that people usually use to support those claims, and whether you disagree with those too.

    Along the way, you can see how public figures feel about the various positions, and find links to learn more about them.

  • Urgent AI risk thesis

    It is worth acting to avert AI risk now

    This claim is important, because it bears directly on what you should do.

    People who believe the Urgent AI Risk Thesis tend to do so because they expect artificial intelligence to pose some kind of substantial risk to humanity at some point in the future (not necessarily soon). They also generally think there are ways to make non-negligible progress in avoiding this risk. Note that this doesn’t mean having a good idea of what to do concretely; progress might come from thinking more about what to do. Lastly, the thesis depends on some of these actions being better done now than later.

    Notable proponents


    Someone thinks something something.

  • AI Risk thesis

    AI will surpass humans, and this will by default destroy humanity or the like, with non-negligible probability

    This claim is the focus of much debate about AI safety. It assumes the Strong AI Thesis and then usually depends on a further argument for AI producing risk, such as the Value Divergence Doom Hypothesis or the Disruption Risk Hypothesis.

    The envisaged harm is often on the scale of the destruction of humanity, or the loss of most of the potential value that the future may hold.

    If you agree with this thesis, you think the outlook for the future is pretty grim, but you don’t necessarily think it is important to do anything about it. For that, you will also need the Urgent AI Risk Thesis.

    Notable proponents


    Eliezer Yudkowsky thinks something something.


    Stuart Russell says ‘A highly capable decision maker – especially one connected through the Internet to all the world’s information and billions of screens and most of our infrastructure – can have an irreversible impact on humanity.’

    Notable dissenters

    Steven Pinker argues that it is too easy to succumb to doomsaying; the implication is that when we find ourselves expecting doom, we should distrust that expectation accordingly.

  • Fruitful Action Thesis

    There are things we can do to avert risks from AI

    Notable proponents


    Organizations such as MIRI, FHI, CSER, FLI, and GCRI are doing work that they claim averts risks from AI.

    Givewell has said something about this?

  • Urgency Thesis

    It is worth investing a lot in any actions that might avert AI risk now

    Opponents

    An article at Motherboard (http://motherboard.vice.com/en_uk/read/we-need-to-talk-about-how-we-talk-about-artificial-intelligence) argues that there are costs to Musk’s fearmongering, e.g. in terms of lost funding.

  • Strong AI thesis

    At some point, there will be artificial intelligence which is more capable than a human at virtually any task.

    This is not a very controversial view, especially among reductionists. A common reason to suppose AI will exist at some point is that if the human brain is a physical machine (as many people believe), there is little reason to suppose that it is optimally designed. Thus it appears to be possible to build a more capable machine along the same lines. Consequently, at least if technological civilization lasts a long time, eventually we should expect to build AI more capable than any human.

    People who get off here


    Nick Jankel believes creativity is magic that only humans are capable of.

    Someone else believes humans can do hypercomputation and AIs cannot.

    Notable proponents


    David Deutsch says that AGI must be possible.

  • Value Divergence Doom Hypothesis

    Strong AI will bring huge losses by default, via having non-human values

    The basic idea here is that super-human AI is unlikely to have identical goals to humans, and that highly capable agents with different goals to humans are extremely dangerous.

    One assumption is that AI will not tend to have the same goals as humans, even if we would want it to. This is because, for instance, it is hard to make an AI have arbitrarily complex goals, and there are incentives to build AI even without achieving this. Also, we don’t have a good specification of what we care about.

    Strong AI will be powerful, and so will get what it wants. It won’t naturally want what we want, so it will care about things other than human values, and this will naturally lead to grief.

    Notable proponents

    Luke Muehlhauser: http://lesswrong.com/lw/aro/what_is_the_best_compact_formalization_of_the/609c

    Nick Bostrom, Superintelligence

  • OR

  • Disruption Risk Hypothesis

    Strong AI may cause huge social disruption, which might indirectly undermine humanity’s thriving, and ultimately its survival

  • OR

  • AI-Enabled Conflict Hypothesis

    AI strengthens small groups’ ability to dominate others, leading to authoritarian rule, and failure of humanity to thrive.

    If AI technology magnifies the resources of some group, this could lead to inequality, selective perpetuation of undesirable values, or accidental destruction of humanity.

  • OR

  • Losing jobs etc

  • OR

  • AI dominates

  • [strong AI soon] if strong AI, strong AI soon

    People who disagree

    Oren Etzioni says “People who are alarmed are thinking way ahead”

    John Bresina says “We’re in control of what we program…I’m not worried about the danger of AI… I don’t think we’re that close at all. We can’t program something that learns like a child learns even – yet. The advances we have are more engineering things. Engineering tools aren’t dangerous. We’re solving engineering problems.”

    Sonia Chernova says “We are taking this seriously but, at the same time, we don’t feel there’s any kind of imminent concern right now.”

  • OR

  • [safe AI hard] if strong AI, strong safe AI extremely hard, so requires much effort to beat strong unsafe AI

  • OR

  • [strong AI pinhole] if strong AI, nothing else matters

  • OR

  • [no responding] if strong AI, little opportunity to act after it appears

  • OR

  • [no cramming] if strong AI, little opportunity to act just before it appears

  • There exists X such that:

  • Value Divergence Default

    Strong AI is likely to be created with values unlike those of any human, by default (by more than X)

    Alternative views

    Technology is characteristically designed with human utility in mind. We should expect this trend to continue; e.g. Oren Etzioni says “The thing I would say is AI will empower us not exterminate us…”

    There will be a long period in which things are fixed and worked out, during which we can modify AIs and they do not yet have control. E.g. perhaps Kevin Kelly in the Edge conversation.

  • Values Alignment Criticality

    If strong AI with different values (by at least X), then huge losses by default
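
    Read together, the two boxes above give the parent hypothesis its existential structure. The following is a minimal sketch in logical notation, not part of the original tree: here d stands for how far the created AI’s values diverge from human values, and X is the threshold being quantified over; both symbols are introduced only for illustration.

    $$\exists X :\; \big[\,\text{by default, strong AI is likely to be created with divergence } d > X\,\big] \;\wedge\; \big[\,\text{any strong AI with } d \ge X \text{ brings huge losses by default}\,\big]$$

    If both conjuncts hold for some common threshold X, the Value Divergence Doom Hypothesis follows.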

  • [AI disruption] strong AI will change society a lot

  • [fast disruption] if strong AI changes society, this will be very fast

  • [upheaval destruction] huge social disruption will tend to be costly by default

  • Little opportunity for feedback and correction

  • High speed

  • Maybe little warning

  • It will be in the interests of many people to intentionally create AI with at least some probability of non-human values

  • OR

  • AI with non-human values is very likely to be created by mistake

    Any error in creating the AI’s values will lead to them diverging from human values, if it is intended to have human values. Larger errors might lead to a powerful AI with entirely different values.

  • [the smart inherit the world] If AI is more intelligent than us, our fate depends on what AI wants

  • [fragility of value] If future resembles what AI with different values wants, huge losses

    Steven Pinker believes that one needs a parochial alpha male psychology to accrue resources and cause destruction.

  • [AI unemployment] strong AI will displace humans in all jobs

  • [AI replacement] strong AI will displace humans as dominant species on Earth

  • [AI economic transition] strong AI will lead to massive economic growth

  • [AI revolution] strong AI will lead to an unrecognizable world

  • Treacherous turn

  • Hard to modify or switch off intelligent adversary

  • Hard to contain sufficiently intelligent adversary

  • Intelligence explosion

  • Intelligence explosion

  • Not clear what warning signs would be

  • Machines which are not guaranteed to have human values will often provide net value to their creators, compared to having no AI at all

  • Machines with human values will not be strictly better to make than those without, across a wide variety of circumstances

  • [Intelligence accrues resources] More intelligent agents have a substantial edge in acquiring resources

  • Control of resources is a reasonable proxy for influence over the future state of the world

  • Human values are very specific, relative to space of possible futures

  • OR

  • [optimization is extreme] powerful optimization for a goal that is only slightly off can be very bad

  • OR

  • [Convergent instrumental resource acquisition] Most values will lead to acquisition of resources, so all kinds of AI would indirectly ‘want us dead’.

    This means that many innocuous or neutral-looking values lead to outcomes in which we are dead, which is very bad for us.

  • Machines with different values to humans are frequently valuable to interact with

  • Beneficial to make AI at all, as part of AI research game

  • It will probably be costly to make a machine with high certainty of exactly human values, relative to one without such a guarantee

  • Global catastrophic risk is an externality, so the value of the guarantee does not accrue to whoever pays for it, to offset the higher cost

    So a small chance of global destruction does not necessarily affect the benefits to AI creators.

  • [Intelligent decisionmaker thesis] Intelligence is handed decisions and decisions are resources

  • Intelligence improves decisions, and decisions affect resource acquisition

    In many situations, behavior determines the resources allocated, and intelligence amounts to better decision-making.

  • Intelligence compounds (Intelligence Explosion)

    If an AI is more intelligent than us, there will soon be an AI that is much more intelligent than all of us added together
  • Hard to contain sufficiently intelligent adversary (i.e., it will cheat against us)

  • Machines with values are often more useful than those without

  • We don’t know human values

  • Human values are complicated

  • Superior adversaries can be hard to deal with thesis

    If some strong AI is created, it will likely persist

    For instance, you won’t be able to notice it is bad and turn it off or change it.

  • Intelligence explosion makes feedback hard

{"cards":[{"_id":"558df6a1ec8506e115d9ab76","treeId":"578ba24f24345e44b9000096","seq":2990484,"position":1,"parentId":null,"content":"### How to use this tree\n\nThe goal of this tree is to help you to clarify your opinions on AI risk, and compare them to others'.\n\nStart at the thesis to the immediate right of this box. It is perhaps the most important, high level opinion about AI risk on which people disagree. Ask yourself if you agree with the statement. \n\nThen look at the boxes stemming from the right of the first box (its 'children'). Throughout the tree, the children are a set of claims that usually support the parent. If you agree with the children, you probably agree with the parent. That means if you disagree with a statement, or are unsure what you think, you can look through the children for any you disagree with. You can then go even deeper, to see the arguments that people usually use to support those claims, and whether you disagree with those too.\n\nAlong the way, you can see how public figures feel about the various positions, and find links to learn more about them."},{"_id":"558df6a1ec8506e115d9ab91","treeId":"578ba24f24345e44b9000096","seq":2942266,"position":3,"parentId":"558df6a1ec8506e115d9ab76","content":"# <b>Urgent AI risk thesis</b>\n*<b>It is worth acting to avert AI risk now</b>*\n\nThis claim is important, because it bears directly on what you should do.\n\nPeople who believe the *Urgent AI risk Thesis* tend to do so because they expect artificial intelligence to pose some kind of substantial risk to humanity, at some point in the future (not necessarily soon). They also generally think there are ways to make non-negligible progress in avoiding this risk. Note that this doesn't mean having a good idea of what to do concretely--progress might come from thinking more about what to do. Lastly, the thesis depends on some of these actions being better done now than later.\n\n## Notable proponents\n\n<img src=\"http://www.cato-unbound.org/sites/cato-unbound.org/files/images/authors/pictures/EYudkowsky.jpg\" height=\"50\" /> \n**Someone** thinks [something](https://intelligence.org/files/AIPosNegFactor.pdf) something.\n\n"},{"_id":"558df6a1ec8506e115d9ab78","treeId":"578ba24f24345e44b9000096","seq":3514240,"position":0.5,"parentId":"558df6a1ec8506e115d9ab91","content":"# <b>AI Risk thesis </b>\n<b>*AI will surpass humans, and this will by default destroy humanity or the like, with non-negligible probability*</b>\n\nThis claim is the focus of much debate about AI safety. It assumes the *Strong AI Thesis* and then usually depends on another argument for AI producing risk, such as the *Values Threat Thesis* or the *Disruption Risk Thesis*. \n\nThe envisaged destruction is often on the scale of the destruction of humanity, or the loss of most potential value that the future may hold.\n\nIf you agree with this thesis, you think that the future is pretty foreboding, but you don't necessarily think it is important to do anything. For that, you will need *Urgent AI Risk Thesis*. 
\n\n## Notable proponents\n\n\n\n<img src=\"http://www.cato-unbound.org/sites/cato-unbound.org/files/images/authors/pictures/EYudkowsky.jpg\" height=\"50\" /> \n**Eliezer Yudkowsky** thinks [something](https://intelligence.org/files/AIPosNegFactor.pdf) something.\n\n<img src=\"http://www.eecs.berkeley.edu/Faculty/Photos/Homepages/russell.jpg\" height=\"50\" /> \n**Stuart Russell** [says](http://www.fhi.ox.ac.uk/edge-article/) 'A highly capable decision maker – especially one connected through the Internet to all the world’s information and billions of screens and most of our infrastructure – can have an irreversible impact on humanity.'\n\n## Notable dissenters\n\n[Steven Pinker](http://edge.org/conversation/the-myth-of-ai) argues that it is too easy to succumb to doomsaying, presumably arguing that when we find ourselves expecting doom, we should distrust ourselves accordingly. "},{"_id":"558df6a1ec8506e115d9ab77","treeId":"578ba24f24345e44b9000096","seq":3514849,"position":0.5,"parentId":"558df6a1ec8506e115d9ab78","content":"# <b>Strong AI thesis</b>\n<b>*At some point, there will be artificial intelligence which is more capable than a human at virtually any task.*</b>\n\nThis is not a very controversial view, especially among [reductionists](https://en.wikipedia.org/wiki/Reductionism). A common reason to suppose AI will exist at some point is that if the human brain is a physical machine (as many people believe), there is little reason to suppose that it is optimally designed. Thus it appears to be possible to build a more capable machine along the same lines. Consequently, at least if technological civilization lasts a long time, eventually we should expect to build AI more capable than any human. \n\n## People who get off here\n\n<img src=\"http://www.huffingtonpost.com/contributors/nick-seneca-jankel/headshot.jpg\" width=\"50\" height=\"50\" /> \n**Nick Jankel** believes [creativity is magic](http://www.huffingtonpost.com/nick-seneca-jankel/) that only humans are capable of.\n<img src=\"http://www.hdwallwide.com/wp-content/uploads/2014/06/hd-wallpaper-matt-damon-person-brunette-cardigan-1080p.jpg\" height=\"50\" /> \n**Someone else** believes [humans can do hypercomputation](http://www.huffingtonpost.com/nick-seneca-jankel/) and AIs cannot.\n\n## Notable proponents\n<img src=\"https://speakerdata.s3.amazonaws.com/photo/image/808137/David_20Deutsch01.JPG\" height=\"50\" /> \nDavid Deutch [says](http://aeon.co/magazine/technology/david-deutsch-artificial-intelligence/) that AGI must be possible.\n"},{"_id":"558df6a1ec8506e115d9ab7a","treeId":"578ba24f24345e44b9000096","seq":3585805,"position":2,"parentId":"558df6a1ec8506e115d9ab78","content":"# <b>Value Divergence Doom Hypothesis </b>\n*<b>Strong AI will bring huge losses by default, via having non-human values</b>*\n\nThe basic idea here is that super-human AI is unlikely to have identical goals to humans, and that highly capable agents with different goals to humans are extremely dangerous.\n\nOne assumption is that AI will not tend to have the same goals as humans, even if we would want AI to. This is because for instance, it is hard to make an AI have arbitrarily complex goals, and there are incentives to make AI even without achieving this. Also, we don't have a good specification of what we care about.\n\nstrong AI will be powerful, and so will get what it wants. It won't naturally want what we want, and so will care about things other than human values, and that this will naturally lead to grief. 
\n\nLuke Muehlhauser\n\nhttp://lesswrong.com/lw/aro/what_is_the_best_compact_formalization_of_the/609c\n\nSuperintelligence"},{"_id":"5afe9a542edcd7124c0000bc","treeId":"578ba24f24345e44b9000096","seq":3534073,"position":0.5,"parentId":"558df6a1ec8506e115d9ab7a","content":"There exists X such that:"},{"_id":"558df6a1ec8506e115d9ab7b","treeId":"578ba24f24345e44b9000096","seq":3534116,"position":1,"parentId":"558df6a1ec8506e115d9ab7a","content":"# <b>Value Divergence Default</b> \n*Strong AI is likely to be created with values unlike those of any human, by default (by more than X)*\n\n## Alternative views\n\nTechnology is characteristically designed with human utility in mind. We should expect this trend to continue e.g. [Oren Etzioni](http://www.computerworld.com/article/2877617/scientists-say-ai-fears-unfounded-could-hinder-tech-advances.html) says \"The thing I would say is AI will empower us not exterminate us…\"\n\nThere will be a long period in which things are fixed and worked out, and we can modify AIs and they don't have control yet. e.g. maybe kevin kelly in edge conversation.\n"},{"_id":"558df6a1ec8506e115d9ab7e","treeId":"578ba24f24345e44b9000096","seq":3584681,"position":3,"parentId":"558df6a1ec8506e115d9ab7b","content":"# It will be in the interests of many people to intentionally create AI with at least some probability of non-human values"},{"_id":"5afee0192edcd7124c0000c6","treeId":"578ba24f24345e44b9000096","seq":3584757,"position":1.625,"parentId":"558df6a1ec8506e115d9ab7e","content":"# Machines which are not guaranteed to have human values will often provide net value to their creators, over no AI"},{"_id":"5afeb5dd2edcd7124c0000c1","treeId":"578ba24f24345e44b9000096","seq":3584918,"position":1,"parentId":"5afee0192edcd7124c0000c6","content":"# Machines with different values to humans are frequently valuable to interact with\n"},{"_id":"558df6a1ec8506e115d9ab7f","treeId":"578ba24f24345e44b9000096","seq":3536772,"position":1,"parentId":"5afeb5dd2edcd7124c0000c1","content":"# Machines with values are often more useful than those without"},{"_id":"5afeb9852edcd7124c0000c2","treeId":"578ba24f24345e44b9000096","seq":3584904,"position":2,"parentId":"5afee0192edcd7124c0000c6","content":"# Beneficial to make AI at all, as part of AI research game"},{"_id":"5afeda112edcd7124c0000c5","treeId":"578ba24f24345e44b9000096","seq":3584786,"position":3,"parentId":"558df6a1ec8506e115d9ab7e","content":"# Machines with human values will not be strictly better to make than those without, across a wide variety of circumstances"},{"_id":"558df6a1ec8506e115d9ab7d","treeId":"578ba24f24345e44b9000096","seq":3536890,"position":1,"parentId":"5afeda112edcd7124c0000c5","content":"# It will probably be costly to make a machine with high certainty of exactly human values, relative to one without such a guarantee"},{"_id":"558df6a1ec8506e115d9ab7c","treeId":"578ba24f24345e44b9000096","seq":3534950,"position":1,"parentId":"558df6a1ec8506e115d9ab7d","content":"# Orthogonality Thesis\n*Intelligence does not imply any particular values*\n\nThat is, a strong AI could have almost any values. AI is not for instance constrained to have 'good' values if it is sufficiently smart. \n\nNick Bostrom [argues](http://www.nickbostrom.com/superintelligentwill.pdf) for the thesis. John Danaher [critiques](http://philosophicaldisquisitions.blogspot.com/2012/04/bostrom-on-superintelligence-and.html) [twice](http://philosophicaldisquisitions.blogspot.com/2014/07/bostrom-on-superintelligence-1.html). 
LessWrong Wiki [summarizes](http://wiki.lesswrong.com/wiki/Orthogonality_thesis).\n\n## People who get off here\n\n\n\n## Notable proponents\n\nStuart Armstrong? (not famous)\nhttp://lesswrong.com/lw/cej/general_purpose_intelligence_arguing_the/\n\nBostrom http://www.nickbostrom.com/superintelligentwill.pdf"},{"_id":"5afeb3132edcd7124c0000c0","treeId":"578ba24f24345e44b9000096","seq":3536359,"position":5,"parentId":"558df6a1ec8506e115d9ab7d","content":"# It's not clear how to get the human values into the AI"},{"_id":"5afeb2912edcd7124c0000be","treeId":"578ba24f24345e44b9000096","seq":3536460,"position":0.5,"parentId":"5afeb3132edcd7124c0000c0","content":"# We don't know human values\n\n"},{"_id":"5afeb2e92edcd7124c0000bf","treeId":"578ba24f24345e44b9000096","seq":3536451,"position":1,"parentId":"5afeb3132edcd7124c0000c0","content":"# Human values are complicated"},{"_id":"589fe675fe35efe5cf000044","treeId":"578ba24f24345e44b9000096","seq":3535849,"position":6,"parentId":"558df6a1ec8506e115d9ab7d","content":"# Hard to correct values with feedback"},{"_id":"57e98c76a9503a335d00003a","treeId":"578ba24f24345e44b9000096","seq":2957753,"position":1,"parentId":"589fe675fe35efe5cf000044","content":"# <b>Superior adversaries can be hard to deal with thesis</b>\n*If some strong AI is created it will likely persist*\n\nFor instance, you won't be able to notice it is bad and turn it off or change it."},{"_id":"589ff2bbfe35efe5cf000046","treeId":"578ba24f24345e44b9000096","seq":3526061,"position":2,"parentId":"589fe675fe35efe5cf000044","content":"# Intelligence explosion makes feedback hard"},{"_id":"5afec58e2edcd7124c0000c3","treeId":"578ba24f24345e44b9000096","seq":3536994,"position":2,"parentId":"5afeda112edcd7124c0000c5","content":"# Global catastrophic risk is an externality, so value of guarantee doesn't accrue to he who pays for it, to offset higher cost\n\nSo small chance of global destruction does not necessarily affect benefits to AI creators."},{"_id":"5b06545e2edcd7124c0000c8","treeId":"578ba24f24345e44b9000096","seq":3584364,"position":3.5,"parentId":"558df6a1ec8506e115d9ab7b","content":"# OR"},{"_id":"5afee3472edcd7124c0000c7","treeId":"578ba24f24345e44b9000096","seq":3537157,"position":4,"parentId":"558df6a1ec8506e115d9ab7b","content":"# AI with non-human values is very likely to be created by mistake\n\nAny error in creating values for the AI will lead to it diverging from human-values, if it is intended to have human-values. 
Larger errors might lead to a powerful AI with entirely different values."},{"_id":"558df6a1ec8506e115d9ab80","treeId":"578ba24f24345e44b9000096","seq":3534141,"position":1.25,"parentId":"558df6a1ec8506e115d9ab7a","content":"# <b>Values Alignment Criticality</b> \n*If strong AI with different values (by at least X), then huge losses by default*"},{"_id":"558df6a1ec8506e115d9ab81","treeId":"578ba24f24345e44b9000096","seq":3526053,"position":1,"parentId":"558df6a1ec8506e115d9ab80","content":"# <b>[the smart inherit the world]</b> If AI is more intelligent than us, our fate depends on what AI wants"},{"_id":"5afdeaff2edcd7124c0000b8","treeId":"578ba24f24345e44b9000096","seq":3532199,"position":0.5,"parentId":"558df6a1ec8506e115d9ab81","content":"# <b>[Intelligence accrues resources]</b> More intelligent agents have a substantial edge in acquiring resources\n"},{"_id":"5afe067d2edcd7124c0000b9","treeId":"578ba24f24345e44b9000096","seq":3526150,"position":3,"parentId":"5afdeaff2edcd7124c0000b8","content":"# <b>[Intelligent decisionmaker thesis] Intelligence is handed decisions and decisions are resources</b>"},{"_id":"5afe3cb32edcd7124c0000bb","treeId":"578ba24f24345e44b9000096","seq":3532128,"position":4,"parentId":"5afdeaff2edcd7124c0000b8","content":"# Intelligence improves decisions, and decisions affect resource acquisition\n\nIn many situations, behavior determines resources allocated. Intelligence is better decision making."},{"_id":"589fef09fe35efe5cf000045","treeId":"578ba24f24345e44b9000096","seq":3532138,"position":5,"parentId":"5afdeaff2edcd7124c0000b8","content":"# Intelligence compounds (Intelligence Explosion)\n* If an AI is more intelligent than us, there will soon be an AI that is much more intelligent than all of us added together *"},{"_id":"558df6a1ec8506e115d9ab83","treeId":"578ba24f24345e44b9000096","seq":3534199,"position":6,"parentId":"5afdeaff2edcd7124c0000b8","content":"# Hard to contain sufficiently intelligent adversary (i.e. 
they will cheat against us)"},{"_id":"558df6a1ec8506e115d9ab82","treeId":"578ba24f24345e44b9000096","seq":3526058,"position":1,"parentId":"558df6a1ec8506e115d9ab81","content":"# Control of resources is a reasonable proxy for influence over the future state of the world\n"},{"_id":"558df6a1ec8506e115d9ab84","treeId":"578ba24f24345e44b9000096","seq":3534688,"position":2,"parentId":"558df6a1ec8506e115d9ab80","content":"# <b>[fragility of value]</b> If future resembles what AI with different values wants, huge losses\n\n[Steven Pinker](http://edge.org/conversation/the-myth-of-ai) believes that one needs a parochial alpha male psychology to accrue resources and cause destruction."},{"_id":"5afe11332edcd7124c0000ba","treeId":"578ba24f24345e44b9000096","seq":3534229,"position":0.5,"parentId":"558df6a1ec8506e115d9ab84","content":"# Human values are very specific, relative to space of possible futures"},{"_id":"5afea5e42edcd7124c0000bd","treeId":"578ba24f24345e44b9000096","seq":3534575,"position":0.75,"parentId":"558df6a1ec8506e115d9ab84","content":"# OR"},{"_id":"558df6a1ec8506e115d9ab85","treeId":"578ba24f24345e44b9000096","seq":3534508,"position":1,"parentId":"558df6a1ec8506e115d9ab84","content":"# <b>[optimization is extreme]</b> powerful optimization for something that only appears slightly off can be very bad"},{"_id":"558df6a1ec8506e115d9ab86","treeId":"578ba24f24345e44b9000096","seq":2990453,"position":2,"parentId":"558df6a1ec8506e115d9ab84","content":"# OR"},{"_id":"558df6a1ec8506e115d9ab87","treeId":"578ba24f24345e44b9000096","seq":3534555,"position":3,"parentId":"558df6a1ec8506e115d9ab84","content":"# <b>[Convergent instrumental resource acquisition]</b> Most values will lead to acquisition of resources, so all kinds of AI would indirectly 'want us dead'.\n\nThis means that many innocuous or neutral looking values lead to us being dead, which is very bad for us"},{"_id":"558df6a1ec8506e115d9ab88","treeId":"578ba24f24345e44b9000096","seq":2964721,"position":3,"parentId":"558df6a1ec8506e115d9ab78","content":"# OR"},{"_id":"558df6a1ec8506e115d9ab89","treeId":"578ba24f24345e44b9000096","seq":2957755,"position":4,"parentId":"558df6a1ec8506e115d9ab78","content":"# <b>Disruption Risk Hypothesis </b>\n*<b>Strong AI may cause huge social disruption, which might indirectly undermine humanity's thriving, and ultimately its survival</b>*"},{"_id":"558df6a1ec8506e115d9ab8a","treeId":"578ba24f24345e44b9000096","seq":3534128,"position":1,"parentId":"558df6a1ec8506e115d9ab89","content":"# <b>[AI disruption]</b> strong AI will change society a lot"},{"_id":"558df6a1ec8506e115d9ab8b","treeId":"578ba24f24345e44b9000096","seq":2964749,"position":1,"parentId":"558df6a1ec8506e115d9ab8a","content":"# <b>[AI unemployment] </b>strong AI will displace humans in all jobs"},{"_id":"558df6a1ec8506e115d9ab8c","treeId":"578ba24f24345e44b9000096","seq":2990439,"position":2,"parentId":"558df6a1ec8506e115d9ab8a","content":"# <b>[AI replacement] </b>strong AI will displace humans as dominant species on Earth"},{"_id":"558df6a1ec8506e115d9ab8d","treeId":"578ba24f24345e44b9000096","seq":2990440,"position":3,"parentId":"558df6a1ec8506e115d9ab8a","content":"# <b>[AI economic transition] </b>strong AI will lead to massive economic growth"},{"_id":"558df6a1ec8506e115d9ab8e","treeId":"578ba24f24345e44b9000096","seq":2990441,"position":4,"parentId":"558df6a1ec8506e115d9ab8a","content":"# <b>[AI revolution] </b>strong AI will lead to an unrecognizable 
world"},{"_id":"558df6a1ec8506e115d9ab8f","treeId":"578ba24f24345e44b9000096","seq":2964744,"position":2,"parentId":"558df6a1ec8506e115d9ab89","content":"# <b>[fast disruption]</b> if strong AI changes society, this will be very fast"},{"_id":"558df6a1ec8506e115d9ab90","treeId":"578ba24f24345e44b9000096","seq":2964745,"position":3,"parentId":"558df6a1ec8506e115d9ab89","content":"# <b>[upheaval destruction]</b> huge social disruption will tend to be costly by default"},{"_id":"5802a0a17dbaced8fa00003c","treeId":"578ba24f24345e44b9000096","seq":2964722,"position":5,"parentId":"558df6a1ec8506e115d9ab78","content":"# OR"},{"_id":"5802a1037dbaced8fa00003d","treeId":"578ba24f24345e44b9000096","seq":2964724,"position":6,"parentId":"558df6a1ec8506e115d9ab78","content":"# <b>AI-Enabled Conflict Hypothesis</b>\n\n<b>*AI strengthens small groups' ability to dominate others, leading to authoritarian rule, and failure of humanity to thrive.</b>*\n\n[+If AI technology magnifies the resources of some group, this could lead to inequality, selective perpetuation of undesirable values, or accidental destruction of humanity.]"},{"_id":"5819143f46e63775ad00003f","treeId":"578ba24f24345e44b9000096","seq":2964726,"position":6.5,"parentId":"558df6a1ec8506e115d9ab78","content":"# OR\n"},{"_id":"5819151a46e63775ad000040","treeId":"578ba24f24345e44b9000096","seq":2964725,"position":6.75,"parentId":"558df6a1ec8506e115d9ab78","content":"# Losing jobs etc"},{"_id":"5819180546e63775ad000041","treeId":"578ba24f24345e44b9000096","seq":2964727,"position":6.875,"parentId":"558df6a1ec8506e115d9ab78","content":"# OR"},{"_id":"581918ca46e63775ad000042","treeId":"578ba24f24345e44b9000096","seq":2957759,"position":6.9375,"parentId":"558df6a1ec8506e115d9ab78","content":"# AI dominates"},{"_id":"558df6a1ec8506e115d9ab93","treeId":"578ba24f24345e44b9000096","seq":2942267,"position":2,"parentId":"558df6a1ec8506e115d9ab91","content":"# <b>Fruitful Action Thesis</b> \n*There are things we can do to avert risks from AI*\n\n## Notable proponents\n\n<img src=\"http://aiimpacts.org/wp-content/uploads/2015/07/FLI-FHI1-290x198.jpg\" height=\"50\" /> \nOrganizations such as MIRI, FHI, CSER, FLI and GCRI are doing things which they claim avert risks from AI.\n\nGivewell has said something about this?"},{"_id":"558df6a1ec8506e115d9ab94","treeId":"578ba24f24345e44b9000096","seq":2942268,"position":3,"parentId":"558df6a1ec8506e115d9ab91","content":"# <b>Urgency Thesis</b> \n*<b>It is worth investing a lot in any actions that might avert AI risk now</b>*\n\nOpponents\n\nhttp://motherboard.vice.com/en_uk/read/we-need-to-talk-about-how-we-talk-about-artificial-intelligence argues that there are costs to Musk's fearmongering, e.g. in terms of lost funding."},{"_id":"558df6a1ec8506e115d9ab95","treeId":"578ba24f24345e44b9000096","seq":2964728,"position":1,"parentId":"558df6a1ec8506e115d9ab94","content":"# b>[strong AI soon]</b> if strong AI, strong AI soon\n\n## People who disagree\n\n[Oren Etzioni](http://www.computerworld.com/article/2877617/scientists-say-ai-fears-unfounded-could-hinder-tech-advances.html) says \"People who are alarmed are thinking way ahead\"\n\n[John Bresina](http://www.computerworld.com/article/2877617/scientists-say-ai-fears-unfounded-could-hinder-tech-advances.html) says \"We're in control of what we program...I'm not worried about the danger of AI… I don't think we're that close at all. We can't program something that learns like a child learns even – yet. The advances we have are more engineering things. 
Engineering tools aren't dangerous. We're solving engineering problems.\"\n\n[Sonia Chernova](http://www.computerworld.com/article/2877617/scientists-say-ai-fears-unfounded-could-hinder-tech-advances.html) says \"We are taking this seriously but, at the same time, we don't feel there's any kind of imminent concern right now.\" "},{"_id":"558df6a1ec8506e115d9ab96","treeId":"578ba24f24345e44b9000096","seq":2964729,"position":2,"parentId":"558df6a1ec8506e115d9ab94","content":"# OR "},{"_id":"558df6a1ec8506e115d9ab97","treeId":"578ba24f24345e44b9000096","seq":2964730,"position":3,"parentId":"558df6a1ec8506e115d9ab94","content":"# <b>[safe AI hard]</b> if strong AI, strong safe AI extremely hard, so requires much effort to beat strong unsafe AI"},{"_id":"558df6a1ec8506e115d9ab98","treeId":"578ba24f24345e44b9000096","seq":2964731,"position":4,"parentId":"558df6a1ec8506e115d9ab94","content":"# OR "},{"_id":"558df6a1ec8506e115d9ab99","treeId":"578ba24f24345e44b9000096","seq":2964732,"position":5,"parentId":"558df6a1ec8506e115d9ab94","content":"# <b>[strong AI pinhole]</b> if strong AI, nothing else matters"},{"_id":"558df6a1ec8506e115d9ab9a","treeId":"578ba24f24345e44b9000096","seq":2964733,"position":6,"parentId":"558df6a1ec8506e115d9ab94","content":"# OR"},{"_id":"558df6a1ec8506e115d9ab9b","treeId":"578ba24f24345e44b9000096","seq":2964734,"position":7,"parentId":"558df6a1ec8506e115d9ab94","content":"# <b>[no responding]</b> if strong AI, little opportunity to act after it appears"},{"_id":"558df6a1ec8506e115d9ab9c","treeId":"578ba24f24345e44b9000096","seq":2964746,"position":1,"parentId":"558df6a1ec8506e115d9ab9b","content":"# Little opportunity for feedback and correction"},{"_id":"558df6a1ec8506e115d9ab9d","treeId":"578ba24f24345e44b9000096","seq":2990442,"position":1,"parentId":"558df6a1ec8506e115d9ab9c","content":"# Treacherous turn"},{"_id":"558df6a1ec8506e115d9ab9e","treeId":"578ba24f24345e44b9000096","seq":2990443,"position":2,"parentId":"558df6a1ec8506e115d9ab9c","content":"# Hard to modify or switch off intelligent adversary"},{"_id":"558df6a1ec8506e115d9ab9f","treeId":"578ba24f24345e44b9000096","seq":2990444,"position":3,"parentId":"558df6a1ec8506e115d9ab9c","content":"# Hard to contain sufficiently intelligent adversary"},{"_id":"558df6a1ec8506e115d9aba0","treeId":"578ba24f24345e44b9000096","seq":2964747,"position":2,"parentId":"558df6a1ec8506e115d9ab9b","content":"# High speed"},{"_id":"558df6a1ec8506e115d9aba1","treeId":"578ba24f24345e44b9000096","seq":2990445,"position":1,"parentId":"558df6a1ec8506e115d9aba0","content":"# Intelligence explosion"},{"_id":"558df6a1ec8506e115d9aba2","treeId":"578ba24f24345e44b9000096","seq":2964736,"position":8,"parentId":"558df6a1ec8506e115d9ab94","content":"# OR"},{"_id":"558df6a1ec8506e115d9aba3","treeId":"578ba24f24345e44b9000096","seq":2964738,"position":9,"parentId":"558df6a1ec8506e115d9ab94","content":"# <b>[no cramming]</b> if strong AI, little opportunity to act just before it appears"},{"_id":"558df6a1ec8506e115d9aba4","treeId":"578ba24f24345e44b9000096","seq":2964748,"position":1,"parentId":"558df6a1ec8506e115d9aba3","content":"# maybe little warning"},{"_id":"558df6a1ec8506e115d9aba5","treeId":"578ba24f24345e44b9000096","seq":2990446,"position":1,"parentId":"558df6a1ec8506e115d9aba4","content":"# Intelligence explosion"},{"_id":"558df6a1ec8506e115d9aba6","treeId":"578ba24f24345e44b9000096","seq":2990448,"position":2,"parentId":"558df6a1ec8506e115d9aba4","content":"# Not clear what warning signs would 
be"}],"tree":{"_id":"578ba24f24345e44b9000096","name":"AI Risk Case","publicUrl":"ai-risk-case"}}