Three Stages of Life

At the start of Life 3.0, Tegmark outlines the three different stages of life, in terms of physics:

1.0 — Biological — Hardware (physical bodies / phenotypes) and software (DNA / nervous systems) improve over generations through evolution.

Example: Over thousands or millions of years, the eye develops from light sensors to a complex structure that senses depth and color.

2.0 — Cultural — Software (cranial neural networks) improves within a generation through the spread of memes. This is where we are right now.

Examples: Humans can upgrade their “software” — their brains — by learning new skills and abilities. New ideas spread quickly and culture develops within a single lifespan.

3.0 — Technological — Recursive hardware and software improvement. Life improves by redesigning itself.

Example: Humans design a drug that gives their brains improved pattern recognition and creativity. These new humans use their superpowered brains to design bionic legs. And so it goes.

Tegmark thoroughly covers these distinctions, then moves on to the consequences of artificial intelligence — self-driving cars, terrifying autonomous weapons, etc. etc. — yawn… He is a good writer, but all of the immediate consequences of artificial intelligence felt like territory Black Mirror had already explored.

But I’m happy I stuck with the book as it shifts to tackling questions of what it means to be human and what makes life valuable. 

Intelligence and Consciousness

I care a lot about intelligence. Growing up, I made sure to learn as much as possible and get good grades. I read classic literature, not because I enjoyed it, but because it was supposed to make me smarter. When choosing colleges, I chose the one that felt like it had the smartest, most studious, most ambitious population.

Intelligence also serves as an unfair justification. I have to work hard not to be condescending in some situations. I justify eating animals because they are much less intelligent than we are and don’t experience thoughts or emotions as complex as ours.

But what happens when AIs become smarter than we can fathom?

We are more than an order of magnitude smarter than ants, so we don’t even register an anthill being moved for the construction of a skyscraper. What happens when we make an AI that sees our civilization as that anthill? 

Tegmark argues that we should value consciousness, not intelligence. The immense beauty of the universe is only beautiful because we see it. It’s misguided to look for meaning in life; instead, life gives meaning to the universe. Subjective experience is what adds that meaning.

So we should value consciousness and make sure that the AIs we create value it too.

Becoming Vegetarian

I was driving when I finished the audiobook. My plan was to pick up lamb shawarma from Palmyra on the way home.

But finishing this book made me reevaluate my eating habits. Animals don’t experience memory, hope, and regret the way we do. But they are conscious. So if I want a future AI to value consciousness, I should probably start valuing it myself.

Hmm, maybe I’ll switch that to a vegetarian wrap. 

Being the safe driver I am, I asked Siri to call Palmyra for me.

“Siri, call Palmyra.”

I don’t see Paul Myra in your contacts.

“Siri, call PAL-myra.”

I don’t see Paul Myra in your contacts.

 “SIRI, CALL PALMYRA.”

I don’t see Paul Myra in your contacts.

“God damnit, Siri call Palmyra Mediterranean Restaurant on Haight Street.” 

Would you like me to call Paul C—? I don’t see a number for Paul C—. (I have two numbers for Paul C—.)

After a few more minutes, Siri ended up calling Palmyra.

By that point my fear of AI had subsided and I ordered a lamb shawarma. Consciousness be damned.