We discuss You Are In Newcomb’s Box and The IQ Shredder
also touched on – Why Does Singapore Have Such A Low Birthrate?
Hey look, we have a discord! What could possibly go wrong?
Rationality: From AI to Zombies, The Podcast, and the other podcast
LessWrong posts Discussed in this Episode:
The Tragedy of Group Selectionism
Next Episode’s Sequence Posts:
There might be a better way of rephrasing Newcomb’s problem. Imagine a scenario where the Newcomb box test is performed twice:
The first time you take the test, the opaque box is empty and your decision is recorded as the prediction. Your memories of taking this test, and of your final decision, are then completely erased. Next, box B is either filled with cash or left empty based on that recorded decision, and you take the test a second time, keeping any winnings from that run.
Without your short-term memory, you don't know which run you are in. You still have to choose while weighing the consequences of that choice BOTH as a prediction (which really determines the contents of the boxes) AND as a decision.
This also gives a grounded, real-world explanation for what makes the predictor so amazingly accurate: it's reasonable to expect that anyone whose brain state is reset and who faces the identical situation again will make the exact same choice. And yet deviation still seems possible. It would be ill-advised, but if someone made their decision by flipping a coin, I wouldn't be surprised if they managed to take both boxes and win the maximum reward.
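The two-pass setup can be sketched as a toy simulation (the names and dollar amounts below are my illustrative assumptions, not from the episode). Because both passes start from an identical brain state, any deterministic decision procedure makes the "predictor" perfectly accurate by construction, while an agent consulting a coin outside its own head can occasionally deviate:

```python
import random

def run_two_pass_test(decide, brain_state):
    # First pass: the choice is recorded as the "prediction",
    # then the subject's memory of this pass is wiped.
    prediction = decide(brain_state)
    # Box B is filled (or not) based on the recorded prediction.
    box_b = 1_000_000 if prediction == "one-box" else 0
    # Second pass: the subject starts from the identical brain state,
    # so a deterministic agent necessarily repeats its first choice.
    decision = decide(brain_state)
    payout = box_b if decision == "one-box" else box_b + 1_000
    return prediction, decision, payout

# Deterministic agents ignore everything but their (reset) brain state,
# so the prediction always matches the decision.
one_boxer = lambda state: "one-box"   # wins $1,000,000 every time
two_boxer = lambda state: "two-box"   # wins $1,000 every time

# A coin-flipper consults randomness *outside* the reset brain state,
# so its two passes can disagree -- rarely netting the full $1,001,000.
coin_flipper = lambda state: random.choice(["one-box", "two-box"])
```

The only way to "beat" the predictor here is to make the decision depend on something that isn't restored by the memory wipe, which is exactly the coin-flip loophole described above.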
It also preserves the present-tense opportunity to make a choice "in the moment": you see boxes in front of you that you can simply pick up and take, their contents already fixed and immutable.
Hopefully this lets people dodge problematic ideas like "free will" and "perfect predictors / backwards causality" WITHOUT changing the crux of the problem.
PS: I'd also compare this to a two-player prisoner's dilemma, where a diagram of the outcomes shows that a single player always gets a better personal result by being greedy or selfish, whatever the other player does, and yet the best overall outcome requires cooperation.
Playing the prisoner's dilemma against an identical copy of yourself is pretty much the ideal setup for forcing cooperation, despite being blind to the copy's decision. Essentially the same thing could be said to happen in Newcomb's problem.
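That dominance argument can be checked with a quick payoff table (the numbers below are the standard illustrative ones; only their ordering matters):

```python
# Prisoner's dilemma payoffs as (my payoff, their payoff).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

# Whatever the other player does, defecting is personally better...
for theirs in ("cooperate", "defect"):
    assert PAYOFFS[("defect", theirs)][0] > PAYOFFS[("cooperate", theirs)][0]

# ...yet against an identical copy, both players necessarily make the
# same move, so only the diagonal outcomes are reachable -- and there,
# mutual cooperation beats mutual defection.
assert PAYOFFS[("cooperate", "cooperate")][0] > PAYOFFS[("defect", "defect")][0]
```

Restricting the game to the diagonal is what playing against a copy does, and it's the same move the memory-wipe version of Newcomb's problem makes: your "opponent" (the prediction) is guaranteed to match you.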