Crisis of rationality - Problem
Wrap up
Ways forward out of the crisis - Climax
Reason 1 - Humans
Reason 2 - LW
Wrap up of why rationalists aren't winning even though they should. Bring in Kruel's suggestion of ending it with rationality. At the last moment, say that there exists another way
Expand on what the other way might be
Cognitive Agency
Self-Deception and Reasons
Reasoning in general
Name game
Prob. estimates
Bayesian updating
Bayesianism
Ignoring Philosophy
Formal methods
Rationalists should win
According to Eliezer, rationality is winning.
Yet, LW rationalists don’t EVIDENCE
Yet, LW rationalists aren’t winning.
Out of the nine posts in Main as of the 20th of April, 2014, one is a recurring post about meetups, one is a recurring post about rationality quotes, three are about AI-related problems, two are about EA, one is about the study hall, and one is about strategic choice of identity. The study hall and the EA movement are (arguably) cases of rationalists going down the route that leads to winning - with results yet to be assessed.
fake new way
http://kruel.co/2012/11/02/rationality-come-on-this-is-serious/#sthash.iq2qV8Y7.dpbs
Let go of rationality
Where do we have blind spots?
http://www.meltingasphalt.com/technical-debt-of-the-west/
We need to see how hard this will be. We are machines built for "self-deception (along with internal conflict and fragmentation)", possibly because those serve to improve "deception of others, internal representations of voices of significant others, internal genetic conflict, orienting group".
This goes as deep as our brain anatomy. Dr. Ramachandran's solution to the odd behavior of split-brain patients and of anosognosia patients? "He posits two different reasoning modules located in the two different hemispheres. The left brain tries to fit the data to the theory to preserve a coherent internal narrative and prevent a person from jumping back and forth between conclusions upon each new data point. It is primarily an apologist, there to explain why any experience is exactly what its own theory would have predicted. The right brain is the seat of the second virtue. When it's had enough of the left-brain's confabulating, it initiates a Kuhnian paradigm shift to a completely new narrative. Ramachandran describes it as 'a left-wing revolutionary'."
Argumentative theory of reasoning
Couldn’t find any directly related research. BUT there is some research pointing in this direction - http://hal.archives-ouvertes.fr/docs/00/90/40/97/PDF/MercierSperberWhydohumansreason.pdf
Leads to improper decisions on some problems.
Not only does reasoning not exist for getting at truth and proper decisions - it is sometimes in direct opposition to them:
http://www.sciencedirect.com/science/article/pii/S0306987709005556
Computationally expensive
John Baez on how using reason all the time doesn’t work - too computationally expensive.
“What I can say is that I am becoming increasingly confused about how to decide anything and increasingly tend to assign more weight to intuition to decide what to do and naive introspection to figure out what I want.
John Baez replied,
Well, you actually just described what I consider the correct solution to your problem! Rational decision processes take a long time and a lot of work. So, you can only use them to make a tiny fraction of the decisions that you need to make. If you try to use them to make more than that tiny fraction, you get stuck in the dilemma you so clearly describe: an infinite sequence of difficult tasks, each of which can only be done after another difficult task has been done!
This is why I think some ‘rationalists’ are just deluding themselves when it comes to how important rationality is. Yes, it’s very important. But it’s also very important not to try to use it too much! If someone claims to make most of their decisions using rationality, they’re just wrong: their ability to introspect is worse than they believe.
So: learning to have good intuition is also very important – because more than 95% of the decisions we make are based on intuition. Anyone who tries to improve their rationality without also improving their intuition will become unbalanced. Some even become crazy."
http://kruel.co/?s=rationality#sthash.S31GkRqu.dpuf
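To make the "computationally expensive" point concrete, here is a toy sketch (mine, not Baez's; the utility function and numbers are made up) of what fully explicit deliberation costs: with n interacting binary choices there are 2^n candidate plans to enumerate and score, which is why explicit methods can only cover a tiny fraction of the decisions we make.

# Toy illustration (not from Baez): why fully explicit deliberation does not scale.
# With n interacting binary choices there are 2**n candidate plans to evaluate
# before you can pick the best one.

from itertools import product
import random

def best_plan_exhaustive(n_choices, utility):
    """Enumerate every combination of n binary choices and keep the best one."""
    best_plan, best_u = None, float("-inf")
    for plan in product([0, 1], repeat=n_choices):  # 2**n candidate plans
        u = utility(plan)
        if u > best_u:
            best_plan, best_u = plan, u
    return best_plan, best_u

# A made-up utility function standing in for "how well the day goes".
random.seed(0)
weights = [random.uniform(-1, 1) for _ in range(20)]
utility = lambda plan: sum(w * p for w, p in zip(weights, plan))

for n in (10, 20):
    _, u = best_plan_exhaustive(n, utility)
    print(f"{n} choices: {2 ** n} plans evaluated, best utility {u:.2f}")
# 10 choices already mean 1024 plans, 20 mean 1048576, 30 would be around 10^9.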
Hypothesis: high-IQ people are better at rationalization.
Upon reading this I had the wild guess that everyone goes around rationalizing instead of reasoning. Self-deception seems to be pretty common; good reasoning is rare and computationally expensive. I predicted then that high-IQ individuals would be "better" at rationalizing than lower-IQ individuals.
Akrasia (make a story out of this)
The nominal fallacy is the error of believing that the label carries explanatory information. In medicine as well, physicians often find technical terms that lead patients to believe that more is known about pathology than may actually be the case. In Parkinson's patients we notice that they have an altered gait and, in general, that their movements are slower. Physicians call this "bradykinesia", but it doesn't really tell you any more than simply saying "they move slower".
Why do they move slower, what is the pathology, and what is the mechanism for this slowed movement? These are the deeper questions hidden by the simple statement that "a cardinal symptom of Parkinson's is bradykinesia", satisfying though it might be to say the word to a patient's family.
http://edge.org/response-detail/11730
We have substituted "akrasia" for "lack of willpower" and now seem to have an understanding of the phenomenon - except we don't.
Uncertainty as probability estimates
No one actually thinks in probabilities about their beliefs - http://edge.org/response-detail/11339
On another note, the concern that it is problematic to boil all uncertainty down to a single number is also absent from Less Wrong.
Nassim Taleb writes a lot about how people become irrational by trying to treat all uncertainty as a matter of probability.
Scott writes in his latest post: "An Xist [Bayesian] says things like 'Given my current level of knowledge, I think it's 60% likely that God doesn't exist.' If they encounter evidence for or against the existence of God, they might change that number to 50% or 70%" - instead of "You can never really be an atheist, because you can't prove there's no God. If you were really honest you'd call yourself an agnostic."
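As an aside, here is what that kind of update amounts to mechanically - a minimal sketch of mine with made-up numbers (the 1.6 and 0.67 likelihood ratios are arbitrary), using Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.

# Illustration with made-up numbers: how a "60% likely" credence moves
# when one piece of evidence arrives, via Bayes' rule in odds form.

def update(prior, likelihood_ratio):
    """Return the posterior probability after one piece of evidence."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

prior = 0.60                # "60% likely that God doesn't exist"
print(update(prior, 1.6))   # evidence 1.6x likelier under the belief -> ~0.71
print(update(prior, 0.67))  # evidence pointing the other way -> ~0.50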
In a recent Facebook post Taleb wrote: "Atheists are just modern versions of religious fundamentalists: they both take religion too literally." By which he means that atheists make an error in thinking that religion is really about God.
If you start by thinking about the probability that God exists, you still think the issue is about God, and you block yourself from framing it differently and perhaps getting a different understanding of it.
This makes them unfalsifiable and shielded from any criticism
Predictive power should come before consistency
Also keep in mind that it's more important to make your beliefs as correct as possible than to make them as consistent as possible. Of course the ultimate truth is both correct and consistent; however, it's perfectly possible to make your beliefs less correct by trying to make them more consistent. If you have two beliefs that do a decent job of modeling separate aspects of reality, it's probably a good idea to keep both around, even if they seem to contradict each other. For example, both General Relativity and Quantum Mechanics do a good job modeling (parts of) reality despite being inconsistent, and we want to keep both of them. Now think about what happens when a similar situation arises in a field where evidence is messier than it is in physics, e.g., biology, psychology, or your personal life.
http://lesswrong.com/r/discussion/lw/8ib/connecting_your_beliefs_a_call_for_help/5aik
Compartmentalization is a really good idea; Bayesian updating through all beliefs is terrible.
-> How does conceptual knowledge affect mental maps? People tend to distort mental maps, regularizing them to support their propositional knowledge. -> essay on consistency
Bayesianism and formal methods come only once a problem has become a task
For Bayesian methods to even apply, you have to have already defined the space of possible evidence-events, possible hypotheses, and (in a decision-theoretic framework) possible actions. The universe doesn't come pre-parsed with those. Choosing the vocabulary in which to formulate evidence, hypotheses, and actions is most of the work of understanding something. Bayesianism gives you no help with that. Thus, I expect it predisposes you to take someone else's wrong vocabulary as given.
http://meaningness.com/comment/370#comment-370
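A small sketch of the point (my example, not Chapman's; the coin hypotheses are made up): any Bayesian calculation you can actually run starts from a hypothesis list and an evidence vocabulary that someone already chose, and the code below only works because those are handed to it up front.

# Toy example (not Chapman's): Bayes' rule only operates over a hypothesis
# space that was parsed out of the world *before* the math starts. Deciding
# that these three hypotheses and this notion of "evidence" are the right
# vocabulary is the part Bayesianism gives you no help with.

hypotheses = {            # prior probabilities over a pre-chosen vocabulary
    "fair coin": 0.90,
    "two-headed coin": 0.05,
    "two-tailed coin": 0.05,
}
likelihood_of_heads = {   # P(evidence | hypothesis), also pre-chosen
    "fair coin": 0.5,
    "two-headed coin": 1.0,
    "two-tailed coin": 0.0,
}

def posterior(priors, likelihoods):
    """Posterior over the fixed hypothesis list after observing one 'heads'."""
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

print(posterior(hypotheses, likelihood_of_heads))
# If the true explanation isn't on the list (a bent coin, a magician, a dream),
# no amount of updating will ever find it.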
Link to sub-essay on task vs problem.
The first essay I wrote on cognitive agency is entirely about philosophy. We have been doing philosophy all along.
Philosophy of mind is not about mental states. It investigates the concepts we use to refer to mental states. The philosopher’s job is mainly to clarify, differentiate and enrich existing concepts, and sometimes even to develop new conceptual tools to support ongoing research programs. Relatedly, the philosophy of psychology and the philosophy of cognitive science are not about psychological states or cognitive processing per se, but about the theories we construct about such states and processes, about what counts as an explanation, about what the explananda really are—and about how to integrate different data-sets into a more general theoretical framework.
http://journal.frontiersin.org/Journal/10.3389/fpsyg.2013.00931/full
Useless
I’d say that there are parts of rationality that we do understand very well in principle. Bayes’ Theorem, the expected utility formula, and Solomonoff induction between them will get you quite a long way.
— Eliezer Yudkowsky
Here is what bothers me:
All those methods are uncomputable.
The long term detriments of our actions are uncomputable.
A useful definition of utility seems impossible.
Our values are not static.
We cannot assign value in a time consistent way.
There exists no agreeable definition of “self”.
There are various examples of how those methods lead to absurd consequences.
Given those caveats, what is it that makes the kind of rationality advocated at lesswrong.com superior to the traditional rationality of Richard Dawkins and Richard Feynman et al.?
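For reference, these are the standard textbook forms of the three methods the Yudkowsky quote invokes - nothing LW-specific, included only to make the uncomputability complaint concrete:

Bayes' theorem: P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}
Expected utility of an action a over outcomes o: EU(a) = \sum_{o} P(o \mid a)\, U(o)
Solomonoff prior of a string x: M(x) = \sum_{p \,:\, U(p) = x} 2^{-|p|}, summing over all programs p that output x on a universal machine U

The Solomonoff sum ranges over infinitely many programs and cannot be computed in general, and the expected utility formula is only defined once outcomes and a utility function U have been fixed - which is exactly what the caveats above say we cannot do.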
In the same way that virtue ethics is what humans should do, formal methods are what humans shouldn't do, because (think of Marr's three levels) we can't implement them.
“What’s attractive about deontology and utilitarianism, and unattractive about virtue ethics, is that the first two appear to be complete systems, so that if only one could get the details right, you wouldn’t need to think about ethics. You could just consult the system and be done. And if one could be made to work, that would be great! Thinking is mostly an unpleasant time waster. But, my view is that they are unfixable, and the possibility of a systematic ethics is a chimerical fantasy. It’s actively harmful because it points away from thinking about ethics into thinking about technical, meta-ethical problems instead.
So, if what I wrote in "how to think" looked like virtue ethics, it's probably only because it's non-systematic. It doesn't hold out the possibility of any tidy answer." - Chapman
Here I present some possible future avenues to be explored.
Virtue epistemology?
Train intuition
Hurlburt on unsymbolized thinking
Gendlin on the felt sense
IFS on akrasia
Gigerenzer and intuition.
Words as symbols - signifiers, not signifieds or referents.
Formal methods as heuristic producing
Conclusion - Resolution
In accordance with the old way: the goal is to win, and we follow the evidence wherever it takes us.
I have asked you to let go of what you are used to. This is in the spirit of making you light, after years of trying this in the real world. We have just started and should not have expected to get it right on the first try. Let's plunge into the void.
KW “green” stuff
meditationstuff
existing research in philosophy