It’s about modeling agents making decisions, not game design. 🙂 – EB
…but you can also apply game theory to game design. – KS
Yvain’s sequence of posts introducing Game Theory
Defecting is MADE OF FIRE
Wikipedia on Dr. Axelrod’s Iterated Prisoner’s Dilemma tournament
Over the years the Less Wrong community has run a few PD tournaments of their own, with some variation. (A toy version of how such a tournament works is sketched in code after these links.)
2011 Standard evolutionary tournament
2013 tournament with visible source code (set up, and results)
2014 tournament with opponent-simulation
Punishing the cooperators! Who would do that??
(Eneasz: my explanation for why this is done may have been off – I was mixing it up with the “Insulting The Meat” practice I’d recently read about)
Learn about Evolutionary Game Theory from chapter 7 of this online book!
Patent Trolls. Eneasz hates them. The Patent Troll Episode of This American Life may make you hate them too, if you don’t already.
The Planet Money episode on the body-builder supplement patent troll. Or read a summary at NPR.
The story of the Phoebus cartel, a group of light bulb manufacturers who conspired to reduce light bulb life.
The best Golden Balls Split or Steal
“The Button,” a parody of The Box with Cameron Diaz (not Jennifer Lopez), which is based on the short story “Button, Button.”
Eliezer Yudkowsky on Newcomb’s Problem, and on The True Prisoner’s Dilemma
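For anyone curious about the mechanics, here is a rough sketch of an Axelrod-style round-robin tournament in the spirit of the ones linked above. The strategies, payoff values, and round count are illustrative assumptions, not the actual setup of any of those tournaments.

```python
# A minimal Axelrod-style iterated Prisoner's Dilemma tournament.
# Payoff values, strategies, and round count are illustrative assumptions.
from itertools import combinations_with_replacement

PAYOFFS = {  # (my move, their move) -> my points; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(my_history, their_history):
    # Cooperate first, then copy the opponent's previous move.
    return their_history[-1] if their_history else "C"

def always_defect(my_history, their_history):
    return "D"

def always_cooperate(my_history, their_history):
    return "C"

def play_match(strat_a, strat_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strat_a(hist_a, hist_b)
        move_b = strat_b(hist_b, hist_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

strategies = {
    "TitForTat": tit_for_tat,
    "AlwaysDefect": always_defect,
    "AlwaysCooperate": always_cooperate,
}

# Round-robin (including self-play), summing each strategy's total score.
totals = {name: 0 for name in strategies}
for name_a, name_b in combinations_with_replacement(strategies, 2):
    a, b = play_match(strategies[name_a], strategies[name_b])
    totals[name_a] += a
    totals[name_b] += b

for name, score in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(name, score)
```

Tit-for-tat's strong showing in round-robins like this is the result that Axelrod's original tournaments made famous.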
Hello,
I was listening to this episode while driving and didn’t get the part about the transparent box with $10k and the opaque box that probably has $1M in it. I even tried to rewind and re-listen, and still didn’t get it. Like, one needs to choose whether to open one box (which may be empty) or two boxes. Aaand… what? What’s the deal after one opens the box or boxes?
Thanks!
By the way, if you’ll address me in the listeners feedback and it’s hard for you to pronounce my name (I know it is), you can call me Sebastian, which is my “backup” name for English-speaking people 🙂
That’s pretty much it, actually. You get what’s in the box or boxes you opened.
So if you open both, you get either $10K + $0 or $10K + $1M.
If you open only the opaque box, then either $0 or $1M.
There’s a good write-up linked above. And also right here:
http://lesswrong.com/lw/nc/newcombs_problem_and_regret_of_rationality/
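If it helps to see the arithmetic, here is a rough expected-value sketch. The 90% predictor accuracy below is just an illustrative assumption, not a figure from the episode.

```python
# Rough expected-value check for the Newcomb setup discussed above.
# The 90% predictor accuracy is an illustrative assumption.
TRANSPARENT = 10_000    # visible box (the episode's $10k)
OPAQUE = 1_000_000      # opaque box, filled only if one-boxing was predicted
p = 0.9                 # probability the predictor guessed your choice correctly

# If you one-box: the opaque box is full whenever the predictor saw that coming.
ev_one_box = p * OPAQUE + (1 - p) * 0

# If you two-box: the opaque box is empty whenever the predictor saw that coming.
ev_two_box = p * TRANSPARENT + (1 - p) * (TRANSPARENT + OPAQUE)

print(f"one-box: ${ev_one_box:,.0f}")   # $900,000
print(f"two-box: ${ev_two_box:,.0f}")   # $110,000
```

With numbers like these the one-boxer walks away with far more in expectation, which is the intuition the write-up above unpacks.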
Thanks, I’ve read it and now it’s much clearer. Turns out I hadn’t caught the part about the predictor filling the boxes based on which box(es) it predicts you will choose to open.
I vote we call the porn-buying companies Katrina mentioned “Copyright Trolls” as they’re abusing a different part of the intellectual property system than Patent Trolls do.
I generally really like the podcast, but for this episode you really needed either an expert guest or a bit more research beforehand. As someone who has studied game theory before, I noticed multiple mistakes. I can’t remember most of them as they were minor, but one of them is that at some point you mentioned that game theory is also applicable to games you play alone, which is wrong. By definition, game theory studies situations where your payoff depends on the actions of other agents. For solo-player situations, none of the concepts of game theory (such as the Nash equilibrium, or any other equilibrium) makes sense. The study of situations without other agents is decision theory.
Also, I think that while you mentioned many of the standard reasons for cooperation, it sounded to me like you all believed that Eliezer’s “timeless decision theory” solution to the Nash equilibrium is the true reason why you cooperate in this game. I think it is worth mentioning that this is not standard game theory, and most people who research game theory would disagree with it (if they had heard of it).
The theory might make sense in the AI context, but to apply it in real life you need to make the incredibly unrealistic assumption that another human being thinks exactly like you and hence will truly always choose what you do. Given the complexity of human utility functions, and the fact that random things like mood affect choices, this will never be the case.
I think you might want to reexamine whether you actually believe in this theory because it makes sense, or because you like its prediction, namely that a society of rationalists would cooperate in the prisoner’s dilemma. Especially Steven said things along the lines of “I don’t want it to be rational for an agent in the prisoner’s dilemma to defect, as this would lead to a bad world, hence it can’t be rational,” which sounds a bit like belief in belief to me.
Also, I would like to mention that it is by no means necessary to assume that timeless decision theory is right in order to predict that a society of rationalists would cooperate in the prisoner’s dilemma. This is because it is hard to imagine a rationalist who is a fully selfish agent, as I don’t know how someone who is a rationalist could conclude that the wellbeing of others does not matter. And as soon as you incorporate altruism, the payoffs change and standard game theory predicts cooperation. I think that assuming altruism is a much more realistic way to explain why a non-AI rationalist actually cooperates in the prisoner’s dilemma than the assumptions that timeless decision theory makes.
Thank you for your comment! Decision theory and game theory are big ol’ topics, and it can be difficult to be adequately critical of sources as non-specialists with limited time to read up. Funnily enough, we did have an expert lined up for this topic who had to cancel!
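As an aside, the commenter’s point about altruism changing the payoffs is easy to make concrete. Below is a rough sketch that checks which outcomes are Nash equilibria when each player’s utility includes a weighted share of the other player’s payoff; the payoff numbers and the altruism weight are illustrative assumptions.

```python
# In the standard Prisoner's Dilemma, mutual defection is the only Nash
# equilibrium. If each player also values the other's payoff with weight w,
# mutual cooperation can become an equilibrium instead.
# Payoff numbers and the altruism weight are illustrative assumptions.

RAW = {  # (row move, column move) -> (row payoff, column payoff)
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}

def effective(payoffs, w):
    """Each player's utility = own payoff + w * other's payoff."""
    return {moves: (own + w * other, other + w * own)
            for moves, (own, other) in payoffs.items()}

def is_nash(payoffs, row, col):
    """Neither player can gain by unilaterally switching their move."""
    row_ok = all(payoffs[(row, col)][0] >= payoffs[(alt, col)][0] for alt in "CD")
    col_ok = all(payoffs[(row, col)][1] >= payoffs[(row, alt)][1] for alt in "CD")
    return row_ok and col_ok

for w in (0.0, 0.8):  # 0.0 = purely selfish, 0.8 = strongly altruistic
    game = effective(RAW, w)
    print(f"w={w}: (D,D) Nash? {is_nash(game, 'D', 'D')}, "
          f"(C,C) Nash? {is_nash(game, 'C', 'C')}")
```

With w = 0 only mutual defection is an equilibrium; with a large enough altruism weight, mutual cooperation is the equilibrium and mutual defection no longer is.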
Steven: “I want to hear an example of a mixed strategy game that people actually do.”
All sports.
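Penalty kicks are the textbook case: the kicker picks a side, the keeper picks a side to dive, and any predictable pattern gets exploited, so both randomize. Here is a rough sketch that solves a simple 2×2 version for the equilibrium mixtures; the scoring probabilities are illustrative assumptions, not real penalty statistics.

```python
# A stylized penalty-kick game: the kicker aims Left or Right, the keeper dives
# Left or Right, and the entries are the kicker's probability of scoring.
# The numbers below are illustrative assumptions.
SCORE_PROB = {
    ("L", "L"): 0.50, ("L", "R"): 0.90,   # keeper guesses correctly -> harder to score
    ("R", "L"): 0.95, ("R", "R"): 0.70,
}

def kicker_mix(m):
    """P(kicker aims Left) that leaves the keeper indifferent between dives."""
    # Solve: q*m[L,L] + (1-q)*m[R,L] == q*m[L,R] + (1-q)*m[R,R]
    num = m[("R", "L")] - m[("R", "R")]
    den = (m[("R", "L")] - m[("R", "R")]) + (m[("L", "R")] - m[("L", "L")])
    return num / den

def keeper_mix(m):
    """P(keeper dives Left) that leaves the kicker indifferent between sides."""
    # Solve: p*m[L,L] + (1-p)*m[L,R] == p*m[R,L] + (1-p)*m[R,R]
    num = m[("L", "R")] - m[("R", "R")]
    den = (m[("L", "R")] - m[("R", "R")]) + (m[("R", "L")] - m[("L", "L")])
    return num / den

q = kicker_mix(SCORE_PROB)
p = keeper_mix(SCORE_PROB)
value = p * SCORE_PROB[("L", "L")] + (1 - p) * SCORE_PROB[("L", "R")]
print(f"Kicker aims Left {q:.0%} of the time")             # ~38%
print(f"Keeper dives Left {p:.0%} of the time")            # ~31%
print(f"Scoring probability at equilibrium: {value:.0%}")  # ~78%
```

The point of the mixture is exactly the one made on the show: if either player leaned predictably toward one side, the other could exploit it, so the equilibrium has both randomizing at rates that make the opponent indifferent.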