Here is the text (AI transcribed, so probably some mistakes) of Tim Urban's talk. In the fireside chat, Tim is interviewed by Adam Gries, one of the organizers of the Vitalist Bay event.
Big factories, but they're not going to predict the light bulb. Maybe people just before the light bulb could predict, you know, electricity and stuff like that, but they're not going to predict AI, right? It's just too many paradigms ahead; your imagination can't get there. I think we could have multiple paradigm shifts in these other areas in just a few years, a few decades. So, whatever. If you're feeling depressed and scared about the future, remember, we have literally no idea what's gonna happen. And if you feel excited, there's no guarantee of anything; we have to focus on the road. So I'll leave it at that.
We're gonna move on and talk about more of the topics of this conference now. But I hope I freaked you out a little bit.
All right, let’s get a couple of chairs on the stage, and we’ll do a far side. Now, I’m sure, uh, they’re gonna be also a lot of questions, so I’ll try to open it up for the, uh, audience, as well. Weather is so schizophrenic in the Bay Area. I’m like, constantly putting on layers taking off layers.
Yeah, can’t we fix that with some technology. Yeah, that’s right. Is anyone working on that this issue, the weather stabilized weather? I think that for sure, yeah, yeah, get on that. When is that they’re actually? I talked to some. I talked to someone who’s I forget his name is he’s doing the whole layout, Marjorie green says.
We already do it. Yeah, move the weather around the rain clouds around very Mr Burns. You know, keep putting on, you know? Put on, put off Daniel’s son kind of thing, all right. Cool. So, um. So many questions. Let’s get started with, um. A reference to your recent book.
What’s our problem? In that book, you mentioned the human Golem and the human genome, which is kind of like how we aggregate and behave as groups. How do you see that corresponding to this new age, and what are your top worries and hopes? Yeah, it’s very related because. Um.
Collect with humans are weird, like? We start when we had language. We started connecting our brains like neurons into a super brain. Um, which creates this organism. This, this real like emergent organism? Um and. What’s weird about us is that that organism has the capacity to be way dumber than the average person or way smarter than the average person.
I mean, look at this incredible civilization around us, the ridiculous quantum mechanics. No one person in the forest in 20,000 BC is going to think up quantum mechanics, but the group brain, with enough time, gets there. The group brain also creates insane, murderous mobs, where all reason and truth go out the window. Just completely, like an angry beehive, you know. Or just look at social media; we can be so dumb collectively. So this, to me, is the game here. We're going up this mountain. I think there are incredible rewards at the top and really scary perils on the way.
And I think that what I would call the Genie, the good version of us, happens when what I think of as our higher minds kind of take over the culture. The higher minds don't identify with their ideas. They're not tribal; they have a humility about them. They have a general empathy for everyone, not just their own tribe. It's just basic grown-up behavior. When those get together, I think the resulting Genie can get us to the top of the mountain.
I think our biggest danger is that we slip into what I would call low-rung mode: when the really primal parts of our brain, the parts that don't realize it's not 50,000 BC and we're not out on the savannah somewhere, when those parts of our brain start to, you know, the pheromones go, and they start to get together and build this big, dumb Giant. That Giant will drive us right off a cliff. So when you look at stuff like the culture war, on one hand it's like, why focus on that? That's not what's important. On the other hand, it's like, this is literally the thing that needs to get fixed.
Awesome. So, thinking about AI, which I'd say a lot of us recognize could be some crazy exponential and an event horizon.
Let’s put it aside, let’s imagine there were no AI. What would be the three Technologies or the few Technologies that come to your mind that could create crazy exponential outcomes. So, in a world without AI, based on all your research. For me, it’s so much around the biological stuff, but that’s a huge umbrella.
So, like BCIs. That is just a total world-shifting technology. And again, even when the iPhone came out, who was predicting Uber, that it was going to screw over the taxi industry? Even that little jump is hard, so trying to predict the implications of BCIs? Who knows. I don't know if it'll be good or bad. I imagine, like other technologies, it'll be net positive. And then, for me, the curing of diseases. And by far the biggest of all, which is why I like being here, why I love being around people like the people here, is longevity, lifespan. And, you know, there are a lot of people who are like, "well, the species..."
I'm like, fuck the species, I want to stay here and see the end of this movie. The movie's so good, yeah?
Know, and it’s a dark, dark, dark, and then there’s a spotlight on one little part of the conveyor belt. And it’s like, you emerge into the spotlight. You’re like, holy shit, what is going on here, and he’s like infinite Darkness, infinite Darkness in both directions, and you’re like moving towards this thing.
You know to me like if if we can somehow get beyond that to some kind of longevity, Escape velocity people of the future will look at, like, pre that moment and post that moment. That’s like the real BCAD line. That’s like the big moment when you graduate in my opinion into, like, out of your like, animal kind of past so?
Yeah, come on. I think there's a lot of agreement there. Which, by the way, almost sounds like you're saying that the best thing we could do for future people is just unlock longevity, because then everybody benefits. Because there's all this question about, like, oh, what about the trillions of imaginary future people? Shouldn't we do something about the climate to help them instead of, for example, longevity? Whenever we say someone saved someone's life, it's like, well, no, you prolonged their life. No one has yet saved any life. True. The true longevity stuff is one way we could actually save lives. And so, yeah, those future people:
You’re you’re, birthing them all with this catastrophic disease. That everyone has, uh, aging, and it’s like, what if we could, you know, of course? It’s the biggest thing we could do for anyone. Amazing! All right so, uh? Quick ones. Do you think biologically humans will be recognizable to us in?
Yes, no. 100 years. Yeah, recognizable 200 years. I mean, we’re talking. This is all bullshit because we’re talking way past Event Horizon. Here, I’m trying to. I’m like a homo erect, just trying to predict, you know, the industrial, you know, the stock market, but? Uh, I, I kind of feel like.
if we're really around that long, we may have figured out how to get away from these dying mammal bodies altogether, and just be in, you know, a virtual world, sliding down rainbows, hanging out. It just feels like, right now, I think most people are like, "oh no, this is better." But I'm like, once we're not attached to the dying mammal, will we really want to go back to that? So I kind of feel like we'll have gotten away from that by then. Oh, wow, all right. And honestly, maybe that's my answer for 100 years, too.
With AI, we're going pretty quickly here. All right, people, take care of yourselves, because it seems like it may be coming fast. All right. Question: you've spoken to a lot of people at the very center of AI. In some ways you have a privileged position because of your audience, and I'm sure you can get to some people that the average person can't talk to directly. Is what they say in private the same as what they say in public? And if not, in what ways, without disclosing any confidential stuff? Honestly, it's not that far off, because most of them, I think, deep down, just don't really know. I think the more you know about it,
the more you realize this is not a very predictable thing right now. But it's just whiplash talking to these people, because I'll talk to one person, a really smart person, and they'll make me feel like we've just got no shot to get this right, and we're screwed.
And then I’ll talk to another person. And I’ll, you know, and they’ll disagree. And I’ll ask, what are that person? What is that? What do the doomers get wrong and they’ll have, you know? Great answer! Is that you know about about, uh, that they’re just overest, generous many of the walls that will come, that this thing will hit and?
And the certainty that it won’t, you know, that so far a lot of actual llm things, that’s the solutions have kind of just happened because it seems to get what we want. Yeah, so again, we can go into alignments the massive topic, but? And then I, I will like, literally.
Hang out with my two-year-old one day, and I’ll be like, you’re gonna have this amazing life. I can’t wait to like be at your wedding one day, and then the next day. I’m like, you better enjoy these, like last couple years, it’s all about who the last person I talked to is because these people all know more than I am like, shit.
Okay, so I, I don’t know. It’s a roller coaster if you just for those of us who are not as deep. If there were one Doomer and one so-called Bloomer who you could mention as being an eloquent representative of the perspectives are there? Are there two people you would point to or two sources of content??
Eliezer, everybody knows. He's good at talking about it; he has good reasoning. Again, a lot of the other people say he's too deep in his own narrative, that he's too attached to it, but I think he's going to present that case quite well.
If you listen to Jaan Tallinn in an interview, he's another person who's pretty much there, and he's also very eloquent on the issue. Peter Diamandis always makes me feel fantastic about everything. I talked to him the other day, and I was like, life is good. I was like, we're basically there; the top of the mountain is right over there. And, you know, he's a very smart guy. You ask him about any technology and he can go deep. And he feels optimistic, and there are a lot of people like that.
You can reference history either way. The positive people will reference history and say, look, with every single one of these giant tech revolutions, we've had the doomers, and they've never been right yet. It's never clear how we're going to figure it out, but we always have. Or you can look at history, back up a little more, and say: all these awesome empires and civilizations fell, every single one so far. And you could say AI is just different from those. So, yeah, start with those two people. But honestly, most people are neither of those.
Those are kind of the p(doom)-above-90 or below-10 people; most people are somewhere between 10 and 90. Most of the people I talk to think of it as if there's a fork in the road: one path is doom, and one is utopia. The extreme people think we're already past the fork, that this is going to happen or that is going to happen. Most people think we're still before the fork, that their p(doom) is somewhere in between, and we actually can affect this. This is something we can still do something about. That's the much larger group.
Awesome! All right, now a kind of two-part question. A lot has been said about how some of the heads of these AI labs, and other leaders in the field, feel stuck in a prisoner's dilemma. In other words, they all want to be more responsible; they all wish they could work together and maybe pause, or work conservatively, etc. But they feel that, since they don't trust one another, if one of them holds off, the rest won't, so they all have to do this mad rush. So first, I want to ask: did you get that sentiment from the folks you've spoken to? That, in fact, they wish they could collaborate more, but they feel they can't?
Yeah, yeah, there’s definitely a collective Collective problem here, which is, um. You know, the head of Google to say says, we’re gonna, we’re gonna stop this. He gets fired, right? Like, it’s just there’s bigger forces here, and it’s not, you know, every single person. It’s one of those classes every single person individually can feel like we need to stop, and they won’t.
And then, of course, that’s the that. That’s this smaller Arena, which is these companies against each other. And of course, you back up, and you get to the larger unit, which is geopolitical International. And it’s the same thing. It’s this, you know, reading about this. It’s just like, there’s this giant neon sign.
That’s, like, slow down everyone slow down, and then you have the another neon sign above it. It’s like, butt, China, right, it’s like, and, and this is the problem is if we slow down like, this is immense power to whoever gets there first. So, it’s this awful situation, and it’s like.
You know, I bet if you talked to the people in China, they would also say, yeah, we should all slow down. Every person can see this, and we still have a problem. My friend Liv Boeree, or Scott Alexander, they talk about Moloch, right? This kind of god of perverse incentives that drives us off a cliff even if everyone's being a good person. We have an incentives problem, big time. A lot of people think we just have to somehow thread the needle: we have to go faster than China, but not too fast. And it's like, maybe we can do it. We could get lucky; it could work out. But I would hope that we could somehow, as a species... I think it's the same thing I was talking about:
if danger is not right in front of our face, we're really not good at believing, deep down, that it could really happen. Even I... as I said, if I truly believed what the doomers say, I would be freaking out all the time right now. But I'm not, because deep down I'm like, nah. Because I'm a human, and if something seems so crazy, we can't believe it. And I don't think people believe the utopia either; I don't think they deeply believe either of those is possible. So, same thing I was saying: if we can understand the stakes, maybe we can get geopolitical agreement. This is what happened with nukes. There was literally so much fear that grown-ups actually came into the room and prevailed. Maybe we can do it again.
All right, so, next question, about bio and AI, two topics you've had a lot of interest in. When you speak to people on the AI side:
I’m sure many here have heard, uh, Demes, and Dario and Sam Altman, saying things like we may cure all diseases in five years or 10 years, or um, we could double the human lifespan and so forth. Bioscience Community often respond. I would say tepidly to those statements because they say, oh, biology, is so complicated.
Have these people really thought this through. Do they concretely have ideas for how AI will solve this, or is it just like hand wave hand wave? It’ll be fine ASI. Any interesting feedback or Knowledge from talking to the AI people that would give you confidence that they understand the complexity of biology when they say such things may happen.
This is another double-neon-sign thing. Digging in with the biotech people, you just realize: we could get to Pluto and build a civilization there before we can understand what's going on in the body. It's so ridiculous. With aging, I think there are something like 13 different causes of aging in our body, and they go from the molecular to the cellular to the tissue level. It's just so complicated. Okay, I don't know how we're gonna do it.
And then the second, the outside’s, like. But AI, literally, like. When AI gets good at something, it just becomes crazy. Superhuman, you know, chess, go right. We’ve seen these things protein folding. That’s a pretty like, I think that’s a point in the AI people’s favor here, because it’s like.
That is a total Game Changer, and no humans could begin to try to do that that quickly. Um, there’s something like 10 to the 60th possible drugs. I think there’s some crazy number of possible drugs that could be like, you know, made out of, you know, atoms? Ai is the thing that can help us, you know?
Right now, you’re getting lucky. You’re experimenting with tiny little, this giant Beach here. A few little grains of sand, hoping you find something good. Ai is something that could actually do something with this vast amount of data, you know. I, I visited the people at New Limit. I’m sure a lot of people here know about what they’re doing.
You know, with the epigenetic reprogramming stuff, and uh, you know, Rejuvenation, and of course, they are both AI people and. Because they’re using AI all the time. Uh, to try to pour through this giant. Uh, you know, well of transcription factors and find combinations. So, I think in certain areas where it’s like processing data, huge amounts of data.
I actually think AI can do it. I would never bet against AI. It's kind of like: on one side, "wow, the human body is so complicated," and on the other, "don't bet against AI." To me, "don't bet against AI" ultimately wins out. The AI people might not know enough about biology, but I think the biotech people are sometimes underestimating AI. All right, could we get a mic to somebody, just so we can have some audience questions? And while we get that, I'll ask another quick question, which is: what are you personally doing to prepare, if anything?
Nothing. Just procrastinating away my days and being sad or happy depending on who I talked to last. I'm not building a bomb shelter; if anyone is doing that, tell me if I should do that, and how I should do that. But otherwise... You can join mine.
Yeah, no, I’m I’m just kind of. I’m just. It’s part of me is, I’m scared and excited, but I’m also just interested. As I said, this is a great movie. Wow, like what an interesting movie we’re watching here, that unfortunately we’re also the characters. Writers, so let’s like, you know, like, okay, hopefully.
We’re also the writers. But yeah, I’m just kind of interested. I guess where I am fascinated. Awesome. All right, let’s have our first question from the audience. Quick intro on question? John cumbers found a sin by Aveda. I’m curious what you think about the role of the media in this Utopia that we’re building right now.
We’ve got a crisis of trust in the media, and we’ve seemed to be setting up all the wrong incentives for things to take off, uh, the misinformation, or that aren’t true, and you’re very influential person. Now, in in the media, so I’m curious how you see the role of the media in us.
Being able to ascertain what’s correct and what’s truthful going forward in this utopian Vision that you have. Yeah, the media can help or hurt. I mean, it’s hurting right now, because when the media is being done. I mean, what it does, is it just not just like reflects the status quo, but it kind of makes taboo things that are not in conventional wisdom.
It’s going to reflect, you know, the conventional wisdom more than anything, because that’s what most of the audience wants to hear. It’s going to almost always be wrong. Um, and I read an article the other day about how, like, you know, don’t believe the hype about AI. It’s a bunch of, like tech Bros.
I’m just like that’s just that’s just just not helpful or true, and it’s just like a popular thing to say right now, so I think that’s bad house. I think the media, can you know the worst thing the media can do, is and is, is? You know, suck these critically important debates we need to have about genetic engineering and about, you know, AI alignment?
and so many other things, further into the culture war. Look at Covid: the discussion about wet market versus lab leak should have just been a debate, and I think we would have gotten to at least "maybe lab leak" pretty soon, a couple of weeks in, a month in. Instead a year passed, right? It got delayed by a year because it got sucked into the culture war, and suddenly it became taboo just to talk about this, to say the wrong thing about this. Intelligence goes out the window. We get the dumb emergent Giants in charge. No progress happens.
No thinking happens. So what you really don't want is the media pulling these things in and saying, "this is what right-wingers say about it, this is what left-wingers say." That's a disaster for the topics, because then our wisdom goes away, right? It all becomes little private conversations, and the big conversation that we need goes away. So that's what the media can do badly. On the other hand, the media can do good, you know, by talking about how high the stakes are.
So, I guess I don’t really. I just, I think those are the possibilities and I, I would. I will do my part if I can’t. To help help, you know, be a media person who’s trying to do things the other way. Awesome! All right, one more question. Yeah, I think the major problem with AI is that when you, when you pair it with authoritarianism, it becomes just unbelievably bad.
I mean, look what happens in China now. You're in a certain city, and you're getting rated. If you jump over, you know, get into the metro for free or something like that, you can get arrested later. Look what they did in Hong Kong with the facial recognition. Look at the targeting that militaries are using AI for, where it's very effective at killing certain people. So I think if you have authoritarianism, AI is a disaster. And even for drug development, I mean, a lot of drugs are serendipitous; AI is never going to get serendipity. You know, we just got a lucky accident.
And most things are like that. Most real advances in biology or science are serendipity, and this optimism that AI is going to somehow figure that out, I just don't think it's realistic. It's everything you just said, I mean, we've never had an iron-fist dictator that has, like, ASI working for it, right? I mean, it's a million-year empire that is inescapable, with perfect surveillance. And yeah, of course, terrifying. That's why it's even more important right now, what I would call the liberal house we live in, the lowercase-l classical liberal house, with the support beams being core constitutional things.
That’s why it’s even more important to right now to teach kids about it and like, have to try to have a real strong, you know, liberal values? And of course, I feel like we’re going in the wrong direction there for, you know, on all sides of the political aisle.
It’s just it’s just a bad spiral we’re in. We need to get out of that because. Yeah, it’s again. It’s a time when we need to have our wits about us and be most grown up and most, you know, kind of high integrity, uh, as this crazy technology comes in.
It does feel like it’s going in the wrong direction. We’re kind of, uh, at the worst time, so I, that’s why I always come back. That’s why I wrote this first book about I wrote my last book about culture, War stuff, and politics because it felt like this is actually, we’re gonna, you know, we’re not gonna get to any of this crazy Utopia.
If we don’t figure this out right now, like, why are we devolving? Why are we going down in maturity as a society every year right now? Why aren’t we one of these slides?