We spent the hour trying to answer a question posed to Eneasz regarding his work with the Harry Potter and the Methods of Rationality Podcast. Here is the query (with pleasantries removed):
Is there evidence that a rational approach to decision making, either on the personal or institutional level, will be more likely to achieve desired outcomes? For example, HPMOR implies that a perfectly rational decision maker will do a better job than a very smart and informed ad hoc decision maker, but I don’t understand why this should be the case. After all, the Bayesian priors for any real-life problem aren’t available, and if you’re estimating, how are you doing better than someone using their knowledge and intuition? I don’t include empirical decision making as inherently rational here; so, for example, if GiveDirectly were the best charity, I see that as more of a data-driven outcome than a rational one. Obviously, the two aren’t mutually exclusive, so I could be missing something.
We thought that maybe this was a case of the question writer using a different definition of “rational” than we do, but we dove in, trying to pick the question apart and introduce some rationality concepts at the same time.
Concepts and Linkz:
- Difference between epistemic and instrumental rationality
- Instrumental rationality is concerned with achieving goals. More specifically, instrumental rationality is the art of choosing and implementing actions that steer the future toward outcomes ranked higher in one’s preferences. Said preferences are not limited to ‘selfish’ preferences or non-shared values; they include anything one cares about.
- Epistemic rationality is that part of rationality which involves achieving accurate beliefs about the world. It involves updating on receiving new evidence, mitigating cognitive biases, and examining why you believe what you believe. It can be seen as a form of instrumental rationality in which knowledge and truth are goals in themselves, whereas in other forms of instrumental rationality, knowledge and truth are only potential aids to achieving goals. Someone practicing instrumental rationality might even find falsehood useful.
- Affect heuristic: When subjective emotions about something act as a mental shortcut.
- Halo/horns effect (an example of the affect heuristic)
- Principle of charity: A technique in which you evaluate your opponent’s position as if it made as much sense as possible given the wording of the argument.
- Steelmanning (rationality tool)
  - A straw man is a misrepresentation of someone’s position or argument that is easy to defeat; a “steel man” is an improved version of someone’s position or argument that is harder to defeat than what they originally stated.
  - The strongest form of an argument that you disagree with.
This time, Katrina copied, or at least drew, language from Less Wrong and its associated wiki.
Mentioned in this episode: Women experience heart attacks differently from men.
Seen on SSC literally one day after we recorded, a case for rational decision making. Then linked from SSC recently, a case for irrationality.
Intro and outro music: “Thrashing Around” by Chris Martyn/Geoff Harvey – Purple Planet Royalty Free Music.
Good first episode and some great reading material in the links! Thanks guys, looking forward to the next one!
Thank you for commenting! We’ll keep the links rolling.
I just can’t stop imagining a little boy talking to a centaur. I don’t recognize Katrina’s voice; did she also voice a part in the HPMOR podcast?
Maybe it was a little obsessive to re-listen to all episodes for the fifth time…
but thanks to this podcast I have something new to listen to now 🙂
Haha! At first I had no idea what you were talking about and thought this was the most obscure spam comment I’d ever seen.
Katrina did Tonks when Tonks was actually in her body for that one scene.
I’m a bit late to the party, but just listened to the first episode and wanted to offer up an alternate take on what the question may have been getting at. The question asked about a “perfectly rational decision maker,” and most of the podcast focused on that. But my take was that the question used some unfortunate wording, and actually *meant* to ask about a Bayesian decision maker who made decisions by consciously assigning probabilities in all kinds of non-empirical situations. For an example of what I mean, see chapter 86 of HPMOR, where Harry is trying to decide whether Voldemort is a competent foe. This would be in contrast to a smart, possibly quite rational decision maker who intuits the final solution to some degree, rather than explicitly assigning probabilities to sub-problems. You did begin to touch on this a bit toward the end, but, if this *is* what the question meant, I’m not sure you fully addressed it.
I think some parts of the question make more sense if you mentally replace the word “rational” with the concept described above. Let’s say “bayesional”. Then you have, “I don’t include empirical decision making as inherently bayesional here…”, which I would take to mean the questioner does not wish to label decision making as “bayesional” when actual probabilities are available. In other words, the question is specifically questioning the practice of Bayesian decision making with made-up (or guessed) probabilities.
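To make that concrete, here is a rough sketch in Python of the kind of explicit probability assignment I have in mind (all of the numbers are made up, which is exactly the point):

```python
# A toy "bayesional" estimate with guessed, non-empirical probabilities.
# Question (in the spirit of HPMOR ch. 86): how likely is hypothesis H
# ("my opponent is a competent foe") given one observed piece of evidence E?

prior_h = 0.3          # guessed prior P(H), pulled from intuition, not data
p_e_given_h = 0.8      # guessed likelihood P(E | H)
p_e_given_not_h = 0.2  # guessed likelihood P(E | not H)

# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)
p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
posterior_h = p_e_given_h * prior_h / p_e

print(f"Posterior P(H | E) = {posterior_h:.2f}")  # ~0.63 with these guesses
```

The contrast is with the smart, intuitive decision maker, who might land on a similar level of confidence without ever writing down the 0.3 or the 0.8.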
Thanks for putting it out there; I really enjoyed it!
That’s a really good interpretation! I think we’ll revisit the idea in the future, if not for quite as long.
Great first episode! Look forward to more.
In response to the question that you discuss in this episode: according to Eliezer’s definition of rationality, the very first sentence of the question is a non-starter, since WHICHEVER method of decision making produces the best results is the rational approach. (See: Rationality is Systematized Winning.) So if some method of decision making consistently produces better results than the ones you are getting with your “rational” approach, you are no longer being rational by sticking with your approach.
As for the bit about data-driven empirical decision making, I believe that caveat was meant to rule out decision categories for which data is readily available. For instance, in the charity example given, once you have defined what you mean by “best charity”, sorting the available data to bring the one that best fits your criteria to the top is a simple query. But what about the question of whom to pursue as a mate, or which career to choose if long-term happiness is your criterion? For those questions there is no database to query, and much more subjective (and therefore fallible) decision making is involved.
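Just to illustrate what I mean by a simple query (with entirely invented names and numbers): once you have pinned down the criterion, picking the “best charity” is nothing more than a sort.

```python
# Hypothetical data: if "best charity" means lowest cost per life saved,
# the decision reduces to one pass over whatever figures you have.
charities = [
    {"name": "Charity A", "cost_per_life_saved": 4500},
    {"name": "Charity B", "cost_per_life_saved": 3200},
    {"name": "Charity C", "cost_per_life_saved": 8900},
]

best = min(charities, key=lambda c: c["cost_per_life_saved"])
print(best["name"])  # "best" only relative to this single criterion
```

No comparable table exists for the mate or career questions, which is where the subjective estimating comes in.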
First, I’m so glad this podcast exists and has the form it has. So thanks for doing it.
Second, I think the question you tackle has been partially answered scientifically by the Good Judgment Project. Basically, a good Bayesian with access to publicly available data is 30% more accurate in predicting geopolitical events than a CIA-trained analyst with access to classified data.
http://slatestarcodex.com/2016/02/04/book-review-superforecasting/
Last but not least, I was very surprised by your discussion about not being rational.
I think there was some confusion there between being rational and displaying your rational beliefs. I don’t think your scenario about being friends with a proponent of pseudoscience was especially far-fetched, because I regularly have relationships with very irrational people whom I cannot easily afford to confront with my Bayesian rationality. The most recent example was when I directed a scout camp and visited a town to see parents; I was hosted by the parents of a scout chieftain, and the father produces and sells phytotherapy and homeopathy products. And he was providing us with all of the basic first aid material for free.
Confronting him with my beliefs about his livelihood might have had serious social and financial repercussions, so I chose the polite road and didn’t broach the subject. I think that was highly rational behaviour: my priority goals were to maintain a good relationship with the parents who are involved in scouting activities, and with the chieftain, as well as to minimize the costs of my scout camp. Educating as many people as possible, especially proponents of pseudoscience, is one of my life goals, but it was a lower priority in the short term here, and the risk/cost analysis wasn’t in favour of trying to debate pseudoscience.
File not found. Is there any other way or place, besides this site, to download or listen to the episodes? I wanted to start from the beginning, and I suspect many more are missing.