These are my super rough notes for Algorithms to Live By by Brian Christian and Tom Griffiths. I wrote these notes for myself and cleaned them up a bit. So hopefully they make a little bit of sense!
I thought the intro was slow and that it was only going to cover really basic stuff. It took a long time to cover the secretary problem and seemed to dumb it down. But after a while they did a good job exploring the general optimal stopping problem and adding real-world constraints.
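A quick simulation I sketched for myself (not from the book) of the classic 37% rule for the secretary problem: look at the first n/e candidates without committing, then take the first one better than everything seen so far.

```python
import random

def secretary_trial(n, rng):
    """One run of the 37% rule on a random ordering of n candidate ranks."""
    ranks = list(range(n))  # rank 0 is the single best candidate
    rng.shuffle(ranks)
    cutoff = int(n / 2.718281828)  # look at ~n/e candidates without committing
    best_seen = min(ranks[:cutoff], default=n)
    for r in ranks[cutoff:]:
        if r < best_seen:
            return r == 0  # take the first candidate better than the lookahead best
    return ranks[-1] == 0  # nobody beat the lookahead best; forced to take the last

rng = random.Random(0)
wins = sum(secretary_trial(100, rng) for _ in range(10_000))
print(wins / 10_000)  # hovers around 0.37
```

The punchline: no matter how large n gets, this rule lands the single best candidate about 37% of the time.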
Clinical trials — we can do better than a 50 / 50 placebo randomization. We need to structure adaptive clinical trials so that we are balancing the following tradeoffs:
Save the greatest number of people
Learn about the drug we are studying, and be confident in our learnings
Fairness among the trial population
Easy to understand / standardized so that the results are trusted and patients are OK with the format
Clinical trials are still a young science, and between Tuskegee and the Holocaust, a lot of crimes have been committed in the name of medical research. The Tuskegee syphilis experiment ran from 1932 to 1972, which is super recent. Hell, even the smallpox vaccine was first tested on a boy without telling him about the consequences.
Belmont Report for clinical trials — written in response to the Tuskegee syphilis “experiment”, 1979. Three core components:
Respect for persons
Beneficence — do no harm, minimize risks
Justice — fair distribution of resources
Marvin Zelen proposed adaptive ("play the winner") clinical trials in 1969. Robert Bartlett used Zelen's design to test the new ECMO treatment. Adaptive clinical trials make the apparently successful treatment more likely to be assigned to future patients in the trial.
Jim Ware didn’t think that the adaptive trials provided clear evidence of efficacy so he repeated the study with randomized testing and more infants died due to not receiving ECMO.
Then again someone in the UK didn’t think the results were set in stone so they ran another randomized trial.
Why is it so hard for people to accept the results of adaptive clinical trials? It’s a new type of statistics that has not been accepted in medicine. We need a standard for adaptive clinical trials.
Adaptive trials do present new problems. How do you do a double blind experiment where you are randomizing pill bottles for new patients?
Memory gets worse because we have more in our head. So looking up a memory involves sifting through more and more information. This idea makes me want to experience frivolous things.
Filing is a waste of time because it is work ahead of time. It is usually not that expensive to search. Another example of where lazy evaluation / procrastination is the best solution.
Interesting application of game theory and algorithms — parking. When do you stop looking for parking and backtrack to the best spot you’ve seen? What about this game makes people so angry? How can you have a good parking strategy that minimizes stress and maximizes other quantities? My guess is that people are over-optimizing on a close spot. If you are ok with walking 2x as far on average, I’m guessing you can often find a spot with a lot less searching.
Sorting in sports. With noisy games, a bracket picks the true best team only a low percentage of the time, and second place means even less: the runner-up is just whoever lost the final, while the true second-best team may have been knocked out by the eventual champion in an early round.
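A sketch of my own to check the runner-up point: even with zero upsets (the stronger team always wins), a randomly drawn single-elimination bracket makes the true second-best team the runner-up only when it lands in the opposite half from the best team.

```python
import random

def single_elim(teams, p_upset, rng):
    """Single-elimination bracket; lower index = stronger team.
    The stronger team wins each game unless an upset (prob p_upset) occurs.
    Returns (champion, runner_up)."""
    teams = teams[:]
    rng.shuffle(teams)  # random draw, like a bracket lottery
    while len(teams) > 2:
        teams = [min(a, b) if rng.random() > p_upset else max(a, b)
                 for a, b in zip(teams[::2], teams[1::2])]
    a, b = teams
    return (min(a, b), max(a, b)) if rng.random() > p_upset else (max(a, b), min(a, b))

rng = random.Random(1)
runner_up_is_second_best = sum(
    single_elim(list(range(16)), 0.0, rng)[1] == 1 for _ in range(10_000)
)
print(runner_up_is_second_best / 10_000)  # ~8/15 ≈ 0.53 for 16 teams
```

So even in a perfectly deterministic world, the silver medal goes to the right team barely half the time.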
Carnegie Mellon’s Michael Trick is responsible for scheduling for the NCAA
Relaxing hard constraints can allow better exploration of the solution space, and the constraints can be added back later on.
Donald Knuth reviews email just a few times a year.
It is important to train your priors. All of the information we read goes into shaping our priors. We are constantly updating our priors about the world. (Does the media fuck up our priors by showing us a lot of crazy events? Yes.)
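The "media skews our priors" point is just Bayes' rule run repeatedly on biased evidence. A toy sketch (hypothesis names and numbers are made up for illustration): every sensational headline is more likely in a world where crazy events are common, so a diet of headlines drags the posterior that way.

```python
# Two hypotheses about the world, each with a prior probability.
priors = {"crazy events common": 0.5, "crazy events rare": 0.5}
# How likely a sensational headline is under each hypothesis (assumed numbers).
likelihood = {"crazy events common": 0.9, "crazy events rare": 0.1}

def update(beliefs, likelihood):
    """One step of Bayes' rule: posterior is proportional to prior times likelihood."""
    unnorm = {h: beliefs[h] * likelihood[h] for h in beliefs}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

posterior = priors
for _ in range(3):  # three days of sensational headlines
    posterior = update(posterior, likelihood)
print(posterior)  # "crazy events common" is now near certainty
```

Three biased observations are enough to push a 50/50 prior past 99% — which is exactly what a news feed of rare events does to our sense of base rates.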
There are probably certain priors that are good to have skewed. For example, if you read a lot of prominent biographies, you may overestimate the odds of success, and that can inspire you to be more ambitious and attempt bigger things.
How do we think about public policy in terms of shaping society’s priors?
Randomized algorithms — often you don’t need to be 100% certain, so it’s easier to use sampling to get to /nearly/ certain.
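The textbook illustration of trading certainty for speed via sampling (my sketch, not from the book) is the Monte Carlo estimate of pi: throw random points at a square and count how many land inside the quarter circle.

```python
import random

def estimate_pi(samples, rng):
    """Monte Carlo: the fraction of random points in the unit square
    that land inside the quarter circle approximates pi/4."""
    inside = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
                 for _ in range(samples))
    return 4 * inside / samples

rng = random.Random(42)
print(estimate_pi(100_000, rng))  # close to 3.14159
```

You never get certainty, but the error shrinks predictably as samples grow, and "nearly certain" is usually cheap.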
Randomization can help us get out of local maxima while hill climbing. Simulated annealing helps us use randomness in an explore / exploit method so that we explore at first using randomness then slowly exploit the most lucrative paths.
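A minimal simulated annealing sketch of my own (the bumpy function and cooling schedule are arbitrary choices): start hot and accept downhill moves freely to explore, then cool and mostly exploit.

```python
import math
import random

def simulated_annealing(f, x0, steps, rng):
    """Maximize f by hill climbing with a cooling dose of randomness:
    early on, accept downhill moves often (explore); later, rarely (exploit)."""
    x, best = x0, x0
    for step in range(steps):
        temp = 1.0 * (1 - step / steps) + 1e-9  # temperature cools toward zero
        candidate = x + rng.gauss(0, 0.5)
        delta = f(candidate) - f(x)
        if delta > 0 or rng.random() < math.exp(delta / temp):
            x = candidate  # uphill always accepted, downhill with fading probability
        if f(x) > f(best):
            best = x
    return best

# A bumpy function: local maxima everywhere, global maximum at x = 0.
bumpy = lambda x: math.cos(3 * x) - 0.1 * x * x
rng = random.Random(0)
print(simulated_annealing(bumpy, x0=5.0, steps=5_000, rng=rng))
```

Plain hill climbing from x = 5 would park on the nearest local bump; the early random downhill moves are what let the search escape it.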
Randomness can also be used to stimulate creativity. Present someone with a random word / concept while they are trying to create and it will break them out of a block.
Scott Aaronson and John Rawls — bridging the gap between computer science and philosophy through computational complexity. Aaronson noted that there are degrees of computational hardness that are qualitatively different from the binary can / can't compute.
GiveDirectly — interviews one cash transfer recipient each month and publishes the notes verbatim whether or not the recipient is doing well. Shows the power of sampling.
I am happy I give money to GiveDirectly.
The best scientists are prepared to take advantage of randomness. We often stumble into weird things or coincidences, and if we are prepared, we can take advantage. Luria's insight into bacterial mutation came from watching a slot machine at a country club dance. Penicillium mold was found on a neglected Petri dish.
Albert Hofmann’s discovery of LSD. Faraday’s discovery of current induced by a changing magnetic flux.
Data networks use packet switching (blips of data rather than a continuous connection), while phone lines use circuit switching.
How do you know your messages are getting through? The Two Generals Problem shows that it’s impossible to perfectly agree over a lossy line, because you never know whether your partner received your last message.
TCP opens a connection with a three-way handshake: SYN, SYN-ACK, ACK — a message, an ack, then an ack of the ack.
Exponential backoff — the only working way to deal with network congestion without coordination.
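The mechanics are simple enough to sketch (my own toy version; real implementations like Ethernet or TCP differ in details): after each failed attempt the waiting window doubles up to a cap, and you wait a random amount inside the window so colliding senders don't retry in lockstep.

```python
import random

def backoff_delays(attempts, rng, base=1.0, cap=64.0):
    """Exponential backoff with jitter: the waiting window doubles after
    each failed attempt (capped), and the actual wait is drawn uniformly
    from the window to desynchronize competing senders."""
    return [rng.uniform(0, min(cap, base * 2 ** i)) for i in range(attempts)]

print(backoff_delays(6, random.Random(0)))  # windows: 1, 2, 4, 8, 16, 32
```

The doubling is what makes it work without coordination: persistent congestion pushes everyone's retry rate down geometrically until the channel clears.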
The medium is the backoff. The time that you worry about not receiving an ACK depends on the medium. If you are on the phone, you can wait a second for a response. Writing letters, a week. An email, a few days. A ping, 500 ms. Et cetera, et cetera.
In real life, there is a real cost for recursion. In poker, you play the hand you think your opponent has who plays the hand they think you have who plays the hand that you think they have… This kind of computational complexity is expensive and can lead to poor performance.
Nash proved that every finite game has at least one equilibrium, as long as players can use mixed strategies. (This is surprising!)
Proof of Nash equilibrium. Those slides are pretty heavy on set theory notation and they remind me of why I dropped that class… I feel like when you get out of the class, you understand the nuanced differences between finite and infinite strategies yet you won’t be able to escape a prisoner’s dilemma or solve a coordination game.
Game theory tells us that selfish routing of traffic is at most 33% worse than the optimal centrally controlled solution (a “price of anarchy” of 4/3).
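The 4/3 bound can be checked by hand with Pigou's classic two-road example: one road always takes 1 hour, the other takes p hours where p is the fraction of drivers on it.

```python
def avg_latency(p):
    """Pigou's example: fraction p of drivers take the variable road
    (latency = p); the rest take the constant road (latency = 1)."""
    return p * p + (1 - p) * 1.0

selfish = avg_latency(1.0)  # selfishly, everyone takes the variable road: cost 1.0
optimal = min(avg_latency(p / 100) for p in range(101))  # planner splits traffic
print(selfish / optimal)  # 1.333...: selfish routing is exactly 33% worse here
```

Selfish drivers all pile onto the variable road (it never looks worse than the 1-hour road), giving average cost 1.0; a planner splits traffic 50/50 for cost 0.75. The ratio 4/3 is also the worst case over all networks with linear latencies.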
Algorithms can get trapped in irrational cycles — 2010 high frequency trading flash crash, books selling on Amazon for more than a million dollars. But so can humans: 2008 mortgage bubble and other information cascades.
Computational kindness is a design principle that takes into account the cost of computation you impose on others. That’s why it’s kinder to suggest a specific time to meet rather than “any time in the next two weeks”.
Economics traditionally treats humans as perfectly rational actors. Behavioral economics looks at where rationality breaks down; often the breakdown is due to the cost of computation.