My guest today is Kevin Kelly, who co-founded Wired Magazine in 1993 and served as its Executive Editor for the first seven years. As one of the most important futurists of our generation, he’s published a number of books including The Inevitable, What Technology Wants, and New Rules for the New Economy which is my favorite one. Coolest of all, he’s also a founding member of the board of the Long Now Foundation, a non-profit devoted to encouraging long-term thinking.
We discussed the Long Now Foundation at the end of this episode in a conversation about what it means to be a good ancestor for future generations. A couple of things stood out from this conversation. First, I like how Kevin focuses on clarity above all else whenever he writes. He sees himself as a great editor, and writing is the process by which he discovers what he's thinking. Second, we built off the ideas of Marshall McLuhan, who was the patron saint of Wired Magazine. Through McLuhan, we explored Kevin's Christianity, how screens are shaping consciousness, and how our technologies have a gravitational life of their own. Please enjoy my conversation with Kevin Kelly.
2:22 – Exotropic energy and how Kevin uses it to explain the negative entropy we see throughout the universe.
6:24 – Why California has become the world hub of exotropy.
10:33 – The transition from the written word to screens and how it affects our psyche.
15:27 – What made Marshall McLuhan’s writing so paradoxical and engaging.
18:34 – How science fiction has usurped religious teachings as the modern leader of theological thought.
24:06 – Why our limitation of seeing the future only "through the rearview mirror" is driven by a disease Kevin calls "thinkism".
31:25 – How the Amish have utilized an evidence-based method in their adoption of new technologies.
44:46 – Why technology that we create will always be weaponized in the end.
49:01 – Why Kevin believes the evidence shows that the increasing accessibility and power of technology has not correlated with an increased capacity for harm.
53:15 – How moral progress is a natural byproduct of technological progress.
57:26 – Why Kevin sees a fundamental transformation in how Maslow’s Hierarchy of Needs is thought about and utilized in people’s lives.
1:05:25 – Why Kevin's futurology is much closer to simply noticing the present than it is to divination.
1:15:20 – How pushing to improve everything's efficiency works against the very things we desire as humans.
1:23:35 – Why writing, for Kevin, is nothing but a means of discovering his thoughts.
1:29:43 – How thinking with ‘the long now’ can help us become better ancestors and leave a better world for the future.
David: The first one I want to talk about is this idea of exotropic energy. I'd never come across that term before. It's in contrast to entropy. Talk about that idea and why it's centered in California.
Kevin: So extropy was a term that was invented by a guy named Max More. He started a little kind of a movement and it was kind of a transhumanist idea. He was of the transhumanist group and the idea was that it was supposed to be this kind of positive transhumanist force of evolution in the right direction. I hijacked and appropriated the term and changed the spelling a little bit to mean in a technical sense the opposite of entropy.
So everybody knows what entropy is. It's kind of the running down of the universe. Everything is kind of running down until it's all flat and everything's the same. That's the basic physical idea. So entropy is this running down, this inevitable decay. Decay of the whole system. And entropy as a universal principle is one of the few things we've seen no exceptions to anywhere, except that there are these local pockets where order actually increases over time.
And interestingly, it does that only by increasing the entropy around it.
Kevin: So the cost of making that little extra order is actually increasing the entropy around it. That increasing order we call life. Okay? So life is a system that kind of increases entropy in one area in order to build up order somewhere else. And that's technically called negative entropy. But because entropy is already a negative, I didn't like double negatives. So I said, okay, it needs a positive term, because that building up to me is a positive, so I called it exotropy. Exotropy is the positive force in the universe that tends to want to increase a special kind of order in a local area. And in everything we've seen of our own experience, that is an unbroken chain of self-organizing, increasing exotropy. And I have a theory that that increasing self-organization over time is actually what technology is, which is an extension of what life is, which is an extension of the same kind of physical systems that have been working to make, first of all, galaxies, which are stable and ordered, out of random particles in the universe.
And then, first actually the elements, and then stars, and then planets. And those same forces of self-organization are actually exotropic. Okay? So there are these little corners that don't run down. They actually build up over time. And that self-organizing aspect is very evident in the way life self-organizes itself and becomes more complicated and so forth. And in my view technology, which is an extension of the evolutionary process in life, is very exotropic. So that's a long-winded way to say that exotropy is the sort of opposite of entropy.
David: And why has it been centered in California? California, going back to the Summer of Love, even going back to the Gold Rush, has always been a place of radical imagination, of conquering the frontier. And where California used to be this geographic frontier, it has now become an intellectual one. Why has that been the geography where this has taken place?
Kevin: Because there's been no adult supervision. California was settled by young men, restless young men who came out for adventure. And interestingly, of all the places in the world, the US was the only place where the government did not claim the gold. They actually left it up to individual, brazen young men to claim it. And so you had this whole city and area being settled by young men without supervision, no one to tell them they can't do something. And that spirit kind of shaped the culture of the city for a long time. Very Bohemian, very kind of do-what-you-want, kind of libertarian. And that went through the beatniks and through the hippies. And so it's always been this little place that was very far from the center. The capital is 3,000 miles away. The funding is 3,000 miles away. The elites and the adults, the authorities, the authors are all 3,000 miles away.
And so you had this kind of opportunity where people started things in garages. They couldn't get funding from the government or the rich people, so they got funding from each other, and they started this kind of VC startup idea, which was a new gold rush. And so the California ideology has always been a kind of do-it-yourself, because there's no one to tell you no. Ask forgiveness, not permission. And that's been kind of fundamental to the settling of this place. Even down in LA, Walt Disney and all, they were just sort of not asking for permission, far enough away from the centers of authority to be forced to ask. So this is not a choice people made. It's kind of like startups. The reason why startups succeed is because they don't have money, and that not having money forces them to invent things rather than purchase them.
So a big company, if they have a problem, their solution is: we're going to apply some money to this. We're going to buy the thing. We're going to buy our solution. That doesn't work. The startup doesn't have that. They would like to buy the solution, but they don't have the resources to, so they have to invent the solution. And that's sort of what happened in the West. It was far enough from the funding centers that they would like to have some money for grants and other stuff, but no one was giving it to them, so they were forced to invent it themselves. And that kind of culture has persisted all the way through. And now we have California, and we used to be, as a society, people of the book. Now we've become people of the screen: big screens, small screens, Hollywood screens, phone screens. We're now people of the screen, and going from there.
David: Talk about that. I'm a huge fan of Marshall McLuhan. He was the first author for me that was almost like a nuclear bomb of the mind. People have this idea of a quake book, and I read McLuhan and I said, oh my goodness, the way that I've been seeing the world until today has been wrong. It's been misguided. I'd missed something in front of me that now I see. And one of the things that I think he really understood was how society was going from a culture of the printed word to a culture much more of the image, similar to what you're saying with screens. What is your take on what the screens are doing to consciousness? And I mean, we can just focus on screens now, and then I think we can move into networks, which you've written so much about.
Kevin: Sure, sure. Yeah, I'm a fan of McLuhan. We made him the patron saint of WIRED from the very first issue. It was my idea to declare him the patron saint. And if you know anything about McLuhan, the joke is, of course, he was very Catholic. But I've taken McLuhan to heart because I've never read McLuhan. The whole point of McLuhan is you don't read him. You only hear about him. I found him impossible to read. And if you ever saw how he made his books you would understand, because he would recline on his couch and look up and dictate. He would just recite, and he had scribes who were writing down what he was spouting. And that was how he wrote. It was a very oral thing. He was a professor of rhetoric, so it was a very oral thing, and it just didn't... you have to hear about McLuhan rather than read him.
So the movement to screens is fundamental. It's very foundational, because the thing about books is that they're made up of text in black and white ink that doesn't change. That's very fixed. There's a permanence, a monumental aspect, to the scriptures and the constitution and laws, and from those authors we get authority. Okay? And so we've had this authority-based culture based around texts and books. Well, the screen has none of that. It's ephemeral. Things flow across it. It's liquid. It's messy. It's open-ended. It's never finished. And so you can't get truth from the authority. You have to assemble your own truth. Okay? This is the problem right now. We're having to assemble our truths, because we can't rely on the authority of authors. And so everything is fluid and dynamic and changing and never finished. We have constant, endless updates to things. Wikipedia is never done.
It's always ongoing. It's not really a book; it's a screen. And so the failing, or I would say the bane, of the printed world was propaganda. The bane of the screen world is conspiracy. Propaganda doesn't work. Conspiracy is what we have to work against. And because we're all assembling our own truth, it's contextual, it's up to you. It's a big weight. And so we have a different way in which we know things. And I'm really interested in how we change how we know something. A lot of what the evolution of science has been about is how we know something, how we decide that we know something. And so, what's interesting is, I wrote the first piece on the history of the scientific method. I mean, I was shocked that there wasn't a book or even an article about the history of the scientific method, because the scientific method is like the foundation of our entire prosperity.
And no one had ever written a history of it, because the scientific method itself has evolved, and most of what we associate with the scientific method is very, very recent. Like, the double-blind experiment wasn't invented until the '50s. Random sampling. All these components that we think of as science are very recent, which means that the scientific method is likely to change more in the next 50 years than it has in the past 50 years. It will continue to evolve. So I am really interested in how we understand and how we come to what we know, and moving to the screen is part of that migration, that shift in how we know things and what we think we know. And part of that is that we are relying less on authorities and texts, and more on this fluid, liquid, flowing, streaming world.
David: Mm-hmm (affirmative). Yeah it’s interesting because in reading McLuhan he had one quote that was something like, “If you disagree with what I’m saying so do I”. And I think that what he was trying to get to was that there were multiple interpretations of what he was saying. And often he would throw ideas out there that actually made no sense but were just thought provoking. And it’s interesting at such a meta level to see how the character of Marshall McLuhan anticipated the very things that he was talking about and trying to write about in his books.
Kevin: Yeah, absolutely. He was a trickster and a jokester. I mean, he was a very complicated person. Not at all the kind of stuffy professor. He was a prankster, a trickster, a punster; he liked jokes and playing with words and puns. And so I think of him as one of those coyote characters, the coyote trickster. And he's like a court jester, where he's making a joke about things to make a very profound point. "If you disagree with me, so do I," or something like this, was one of his quotes. And again, you're right, that's very reflective of this process of assembling truth. So, getting deep into philosophy, I would say I believe that at the very foundation of the universe, at the origin of the universe, it rests on paradox. That there is an essential paradox, because whatever path you want to take to the creation of the universe, whether you think it self-created or was created by God, you still have this self-creation. However it began, it had to self-create. Self-creation is paradoxical, right?
So the origin rests on some paradox somewhere. So I think paradox is at the foundation. And so paradox is sort of the glue. I think we're heading into a more paradoxical place, where we can embrace paradox. Once you have recursive self-creation, self-organization is paradoxical. It's making itself. It's like the ouroboros, except it's not the snake swallowing its tail; it's the snake releasing its tail as it gets bigger and bigger. And so that kind of recursive self-organization is paradoxical. And so I think whenever we get down to basic things, the cosmological things, there are going to be paradoxes. And maybe what we're going into with the screen is a little bit more room to embrace paradox.
David: Mm-hmm (affirmative). Why are there so few religious futurists? It seems like you have futurism over here, and then you have religion over here. And yet at the end of the book of Revelation it talks about a new heaven and a new earth, and about people building this city, going from Eden to the city. And one of the things that I had come across in reading about the history of the idea of progress, but really came across as I was reading your prediction on the next thousand years of Christianity, is the idea that linear time really is born out of Christian theology. And it's interesting, because hearing that, you would say, well, there should be a lot of Christian futurists. But there aren't. What is the mechanism behind that?
Kevin: I think there is a whole faction of Christianity that is very much wedded to the originalist position of the Bible, and to Revelation as being the only, I mean, basically the only scenario that's permitted. And I think that's unfortunate, because we should have alternative scenarios in a kind of fuzzy sense. But I think science fiction is now doing much more of that. I always say this: most of the theology being done today is being done by science fiction, where science fiction is grappling with the same kinds of issues. What does it mean to be human? Why are we here? Where are we going? What are we here for? What should we be? Those used to be the catechism. That's what used to be studied as catechism, and that's no longer relevant for most people. I think the general culture has moved away from the established religions as having answers, because they're very parochial.
Like, I just shared the other day a photograph of the 10 million stars at the center of our Milky Way. A photograph where you could see 10 million stars of the 400 billion that we have in just our galaxy. And it's sort of, okay, so is there only one Jesus and we all share Him, or does every planet have its own Jesus? What we have right now is not even attempting to answer that kind of question, and that's kind of the background of science fiction, saying, well, this is a really big world; we need to think about these issues. What's the role of humans in a world with 400 billion other planets with civilizations on them? Let alone the fact that we're going to make artificial aliens; I call them AIs. We're not going to wait to meet ET.
We're just going to make various ETs on this planet, of various kinds of mind. We're going to make our own ETs, our own aliens. And they're going to be called robots and AIs. And they're going to have the same issues that we would have had if we met creatures from another galaxy. It's like, what do you believe? Why are you here? So I've been working on this project called a catechism for robots. It's like, okay, if we have these things, and they do have some kind of sentience or even maybe consciousness, what do we tell them? What's our message to them? What's their role? What's their relationship to God? Where do they fit in the cosmos? Should they believe? Should they become believers? And so the religions are a little bit bound, I think, by earth, and we have realized that we're just a little tiny rock on the edge of one galaxy.
And so we have a much broader canvas now to ask these questions, and I think science fiction authors intuitively know this, and they're asking the questions that theologians used to ask, but in a broader context. And they're really wrestling with these issues: if there are other beings, who are we? What are we good for? Why are we here? And I think the question that's most pertinent to us right now is, what do we want to become? We are remaking and remaking ourselves as humans. So what do we want humans to be? I mean, one question is, what are humans good for now? But the real question is, what do we want to become? And that's a much, much harder question. Most of us, particularly when we are young, struggle with our own identity. Who do we want to be? But now we're applying that at the level of our entire species. It's a double whammy. So, David, not only do you have to decide who you want to be, you have to decide who we all should be. That's a heavy load.
David: Yeah. To return to McLuhan here, I think that one of the big projects of your work is sort of shown in two McLuhan quotes. The first is that "we drive into the future using our rear view mirror." And I think that McLuhan really didn't, like, he tried to sort of anticipate the future and show us the broad details, but there was actually something-
Kevin: He wasn’t interested in it. He hated the future. He didn’t really want to know.
Kevin: He constantly talked about the fact that he wasn't interested in it. I mean, he abhorred it in some ways. He knew that it would be something he didn't really want.
David: So it's something like that. But I think that you're kind of trying to illuminate the path. And this is to me what is quite concerning: to the extent that we drive into the future using only our rear view mirror, that we can only see the future from the past or the present, do we actually have the ability to shape the world that we're going into? Or are we going to end up like a Mark Zuckerberg, who had no idea what he was creating in the early days of building Facebook? He had no idea that he was building this new global village, and I think that he sort of stumbled into becoming one of the most powerful people who's ever walked the planet Earth.
Kevin: Right, right. So I have this thing. It took me a long time to kind of realize this, but basically the future is unpredictable. The only thing we know about it for sure is that it has to be possible. So the impossible futures won't happen. So we can use scenario planning and science fiction and all kinds of stuff to kind of outline the general sphere or circle of what's possible, knowing that whatever happens has to happen within that field of the possible. And so by going to the outer edges, we can kind of stake out some idea of what the boundaries of the possible are. So when we think about the future, we may not be right, but we can be ready for it. We can imagine the outer limits and sort of prepare ourselves for the wild edges of it. We may not be right, but we can be ready. We can be not surprised. That's the whole idea of futurism: that we're not surprised when it comes.
That being said, the thing about new technologies, and the reason why I give Mark some slack, is that it takes several generations for us to decide what they're good for. They're kind of like babies. I think that technologies have to find the right job, and it's our job as their parents to kind of help guide them to find the right role, and it takes some time. It takes some time, and more importantly, it takes use. These things are so complicated these days that we cannot figure out what they're good for by thinking. I call that thinkism. Thinkism is a disease of our time. It's particularly the disease of really smart guys who like to think, and they think that we can solve problems by thinking about them. And I think thinkism is completely wrong. And that's where the singularity stuff comes in.
Well, if I can accelerate the thinking about it, if I can be really smart, we can solve all kinds of problems. No, you don't just solve problems by thinking about them; you actually have to do things. You have to try things. And so I think that it takes using technology daily to figure out how it actually works. What's really good about it. What's bad about it. And then to rectify it, to do this constant iteration where we go back and say, oh, we don't want to do that; there's too much of that; let's do it here. So through use, we figure out what it's good for. It's through use that we steer it. So the problem with a lot of technology critics is they want to ban technologies. They want to turn them off, turn them away. If you do that, you don't get to steer. You steer technology by using it.
It's like riding a horse. You have to kind of mount it and ride it, and use it daily, in order to find out what it's good for. Thinking about these things is too complex. We just cannot anticipate both the benefits and the harms by thinking. We can only do it through use. And the more complicated the technologies get, the more critical it is that we arrive there through use. Okay? And so I preach the embrace of technologies as a way to steer them. Okay, so the problem, not the problem, the thing with social media is that it's less than 5,000 days old or something, right? It's an infant.
It's going to take some time, a generation, for us to kind of figure out what it's good for, how to use it, what the limits are, all these kinds of things. And then we can begin to try and steer it. I don't know, I think it's weird that we expect that we can know how these very complicated things are going to react. It's just like deciding that your baby's going to be a doctor. "I can see it in their eyes, you've got to be an artist. They better be an artist." They better be whatever it is.
David: Better be a podcaster.
Kevin: Yeah, right. So that's where we are. We're just at the first rev of this. We're in the dawn of social media. It's going to take a while. And here's the thing that's really, really weird about what's been happening in Silicon Valley and social media: the whole genius of the startup world and VC funding is that we de-moralized failure. Okay? Until that moment, if you lost a million dollars of someone's investment, that was a moral failure. That was bad. You were irresponsible. You were not to be punished, but you certainly weren't to be given another million dollars. Silicon Valley came up with this idea that failure was crucial, essential to learning. And that was the kind of foundational idea in science: that you learn through failures. You keep trying. If you weren't failing, you weren't learning.
If you were trying something that you knew would work, you weren't learning anything new. So baked into the scientific view was this idea that failure, mistakes, detours, all this kind of stuff, was an essential part of the process of progress. And that kind of went over into entrepreneurial stuff, where you would make prototypes that wouldn't work, and you'd go through iterations, and be agile. But basically, it was assuming that you're going to be making a lot of mistakes. You had pivots, all that kind of stuff. So we de-moralized failure. We made it a requirement, a part of the process. And yet when we come to something like social media and these companies, I think the moralizing language about their failures is not beneficial, because we're reintroducing morality. We're saying: you're bad people, you're evil, this is a moral failure on the part of people. No, we want to de-moralize failures.
These companies are still young. This technology is still young. We should be embracing the failures. Governments are not allowed to fail. That's one of the definitions of government: we don't permit them failure. That's why they have difficulty being innovative, because we don't allow them to fail. We want them to treat everybody the same, so we don't give them enough room to fail, and that's part of what their role is. It's kind of a trade-off: if they're not going to fail, they're not going to be very innovative. But if we're going to go into the entrepreneurial area, into the business world, we want them to have failures. And that has to include failures at the level of what's happening with social media. We want to be able to correct them. We want to have insistent, relentless, vigilant error correction, constantly. That's what we're about. So if there's any criticism here, it's that they weren't correcting fast enough. That's a fair criticism, but it's still not a moral failure.
David: Yeah. We're talking about thinkism, which is a great term. This idea that we can think the future, that we can know exactly what it's going to look like in very narrow strokes, and then we can anticipate what's going to happen. And you give a great tidbit about the Amish. What the Amish do when a new technology is introduced is ask two questions before they adopt it: will this technology strengthen my family, and will this technology strengthen our community? And I love this idea that they have these early adopters, where certain people will go and adopt a technology for the community, and they'll do it for, say, one year.
And then the rest of the community will observe what happens to these people. Do they become more socially isolated? Do they become more anxious? Or do they become kinder? Do they become more generous to the community? And then, rather than grand-architect thinkism, they look at what happens. They run their own kind of scientific experiment for the community. And only then, only once the results are in, do they make the decision to adopt or reject that technology.
Kevin: Yeah, that's a very good summary. And it's sort of evidence-based technological policy, which is what I advocate for. Rather than basing our policy on what we could imagine could happen, let's base it on what actually does happen. Let's take the evidence. So let's take the evidence of social media, the actual scholarly scientific evidence, and use that, rather than all the things that we could imagine could happen. And the Amish do that in a very ad hoc way. They never actually make a formal decision, and most of those decisions are at the parish community level, one by one. But yeah, they will allow an Amish early adopter to try out something, always with the caveat that if they observe negative effects, he has to surrender it immediately. And that's the deal that they make. And bit by bit, basically what the Amish are is very late adopters.
They're in the process right now of accepting cell phones, but not smartphones. Even though they don't have wired telephones, they are accepting cell phones, solar-recharged and all that kind of stuff, as appropriate for their life, for a number of reasons, by people observing the early adopters using them, who often begin by using them at work first. That was often the excuse they used. And then, almost like the Orthodox Jews, they have some really weird workarounds. So they have this distinction between using something and owning it. They will often be driven to places they need to go by people who have vans, who drive the Amish around. That's their job. They're regular people, like Uber drivers: they have vans and they only ferry Amish. And the Amish will ride in the car happily, but they don't own the car.
And there is some distinction between owning and access, which I talk about as well; access is superior to ownership in many ways. But the Amish do use an evidence-based approach: let's look at how it actually does. And the criteria that the Amish use, which you talked about, is true. It may not be different from what individuals use, but the main difference the Amish have is that they have a community criterion. They have a collective criterion. Most people that I've met, you included and me included, will use certain technologies and not others. We don't have a TV in our house. Someone else may not have a dryer, or whatever it is. But the difference is that I make those choices as an individual, or as an individual family. The thing about the Amish, which is unusual, is that they have a collective criterion. And in this individualistic culture we have of typical modern Americans, that's a huge difference. And that gives them some power, because they're much more collectivistic than individualistic modern Americans.
David: Have you spent any time studying cultures that don't use technology, in a very anthropological way? Maybe you're a techno-pologist, where an anthropologist is going to study, similarly, a place where there's not civilization in the way that we think about it. There's not-
David: … Yeah, for you, maybe that would be, what does it look like in a place with no technology? Have you spent any time doing that?
Kevin: Well, I spent a lot of time in my young adult years living with people who had very little technology. I had the privilege of traveling in Asia in the early 1970s, when there was a magic moment where someone like me, who had no money, who could not afford an expedition and could only ride on the back of a Jeep, could arrive at a place that was still medieval, like the 1500s. Like Northern Afghanistan in 1975, say. It was a medieval place in every respect that you would measure, from the night watchman with a lamp who would go down and light the little lanterns and the streetlights, because there was no electricity in the entire city. There were no cars. Everybody rode donkeys. They threw garbage out into the street. There were no toilets. This was a medieval city. It was a medieval culture: child brides, the whole thing.
And I lived with people in the Himalayas and others who basically had almost no metal. I know what that was like. I know viscerally, experientially, what it was like. And there were people who studied those kinds of cultures, primitive cultures, hunter-gatherer cultures, to see what their lives were like. And it was tough: they didn't live long, they were always hungry. There are lots of things that they did because of not having surplus, and not having science too, to be honest. So there are people who have studied that. In modern times, I did a hunt for the kind of cultures today that would be like the Amish. I fully expected that there should be Japanese Amish, because there are Israeli Amish, the Orthodox. And then there are some Jains, the Jain culture, who reject all kinds of things, including shoes.
And I thought that, well, there should be a Japanese Amish, but there weren't any. Maybe there will be. I might make a prediction that we'll see Japanese Amish at some point. But there are very few. There are the Mennonites and the Hutterites in Canada. There are very few who are still maintaining that tradition of decreased technology, and they're all religious as far as I can tell, because it takes that kind of a commitment, I guess I would call it. A lot of the anti-civilizationists, the modern Luddites, they haven't rejected anything. They're using computers. They're talking about it, but they have not given it up. So for me, the only real case, the only real evidence for what a world without technology looks like, is the past and the few hunter-gatherer tribes that we have remaining.
And they are, by every account, stained or influenced by modern times. Even the un-contacted ones are really hard to evaluate today in terms of what their lives are like. But from my reading, this is not a noble state. This is not an elevated, desirable state. I think a fair reading of it shows that theirs are really short lives, very, very tough, always hungry. They make do. They can be satisfied and comforted, but it's not a place where you'd want to go. Really, anybody today, anybody alive, you, me, any of your listeners, could buy a ticket to the Amazon somewhere, and I would say within maybe three days, no more, you could certainly be in the most remote place on the planet and leave everything behind. And there's nobody who's doing that.
Nobody is going that direction. People by the hundreds of millions are buying one-way tickets into cities. Why? Because there are choices. The problem with being Amish, the problem with being a hunter-gatherer, is that you have only one occupation. If you were born with a natural ability for mathematics, the violin, ballet, science, anything, you're going to be thwarted. You're going to have to still do what your dad did, or your mom did. You have no choices. You go into the city because even though you're living in a ghetto, even though you're living in poverty, you still have more choices than you had living in your village. Okay? So cities are possibility factories. People move there because there is some possibility that their unique set of talents can blossom, a possibility they don't have in the village.
They will suffer through those really dank quarters and that grime, where in the village they had a beautiful view, they had organic food, they knew who they were, and they had the support of the family. Why would they leave? They're leaving to find themselves, to become something that they could not become in the village, to blossom with that unique set of characteristics. And that's what technology gives us. It gives us more and more options.
It's like imagining Mozart being born before we had invented the symphony. What a loss. What a crime that would have been if he had been born before symphonies, as a hunter-gatherer. We wouldn't have that beautiful music, or Hitchcock's films. That means that today, somewhere in the world, there is someone born whose technology we have not yet invented, who's going to be thwarted, who's waiting for us to invent that piano, that new thing, the book, whatever it is, so that their genius can come out and be shared. So we have a moral obligation to make these new things, to increase the number of opportunities, and to make sure that everybody has the basic ones, clean water, education, all of those basic ones, so that everybody born in the world has a possibility to uncover, develop, and share their genius.
David: Yeah, that's beautifully said. What you're talking about right now is technology as it manifests in creativity, in art and beauty and genius, in what Arthur C. Clarke said, that any sufficiently advanced technology is indistinguishable from magic. And then on the other side, you have these technologies of destruction, what we saw in the mid-20th century, where a lot of the artistic movements were backlashes to World War I, World War II, and the technologies of destruction. You've been talking a lot about the technologies of creation, but are there ways for us to put the genie back in the bottle with technologies of destruction?
I mean, to go back to what you're saying about this weird paradox: nuclear weapons, since they were first invented, since they were first used, have actually made us safer in some way. We haven't had another world war, and there's something deeply paradoxical about that. But still, every single day, even if we see blue skies, the creation, the genius that you're alluding to, there's this dark, ominous, invisible cloud that hangs over us. And I think that we should be quite concerned about that, of course.
Kevin: We have not yet, and never will, make a technology that we cannot abuse or weaponize. And I've been saying this for a while: oh, and by the way, the most powerful technology we just invented, the internet, we're going to weaponize it and we're going to abuse it. It's going to be abused powerfully. And this is the thing: the more powerful the technology, the more powerfully it will be abused. That's the nature of it. AI, man, it will be really abused. However, and this is the curious thing, even those abuses of technology are increased choices. When the first hominid picked up a rock and turned it into a hammer, either to make a shelter or to kill his brother, he suddenly had a new choice he never had before. That choice is good. You see what I'm saying? Even the choice to do evil is itself a good. Which means that if there's a 50/50 wash between good and evil, the fact that there's another choice gives it a 1% edge.
So it's 51% versus 49%. That's why I say that, given everything, even the choices to do evil are a good, so we have 51% good and 49% bad. That 1 or 2% delta, that tiny, tiny little bit better, the incremental creep of goodness and betterment compounded yearly, is civilization. All we need is to be 1% better. What we need to do is create 1% more than we destroy. Then we have civilization. And by the way, 1% is almost invisible. Look around the world and say, half of it's crap. Yes, it's true. There's only a 1% tiny bit of difference, but that 1%, that little tiny bit, is all that we need. And that little tiny bit comes from the fact that even the evil choices are choices, and those choices are good. It's good to have a choice.
And so I am a protopian. I'm not a utopian, and I'm not a dystopian. I'm a protopian, meaning there is tiny, incremental improvement. I don't want utopia. Protopia is really kind of a crawl of tiny, tiny improvements over time, compounded over decades and centuries. And that is all that we need. So when we look around, we can still see 49% of harm, and it looks terrible and it is terrible, but we have a 1% edge, given the fact that even the choice to do evil is a good.
David: Mm-hmm. Help me understand something. So given that, and I don't mean to come across as a pessimist, I'm just really trying to-
Kevin: No, you’re doing your job.
David: … really understand what you're saying. But take a city like Jerusalem. We've spent years with that incremental progress building the Dome of the Rock, the Western Wall, all of these places of religious significance, of spiritual beauty. And my concern is that even if we have the 51/49, all it takes is one destructive moment to basically destroy something, and what technology does is make it easier and easier for one person to cause destruction. Whereas somebody with the stone that you were talking about earlier, right?
Kevin: Right. Right.
David: Cain can kill Abel. But then we get to guns, where with a semiautomatic you can kill a couple of people, and then you have the Twin Towers and kill thousands of people at once.
Kevin: Okay. So I was concerned about that and I did some research. I said, what does the evidence show? Has the ability of one person to kill a lot of people increased over time? And the answer is it hasn't. First of all, take the Twin Towers: 19 people, and how many people died? It's like 19 to 3,000, one to 100-and-something, whatever it is. On average, an individual person really has not been able to kill more than 100 or so people. Set a fire to a building. Take the Manhattan Project: the ratio of the number of people who worked on the Manhattan Project to the number of people who died is still in that same order of magnitude. So maybe we can imagine it, but if we take the evidence, show me the evidence that over time the ability of one person to kill many, many people has increased.
And we can imagine it. But again, if you go back to the evidence, I just don't see it. I haven't seen it. People will say, well, what about a bioweapon, releasing smallpox? Okay, maybe we can imagine that; of course it hasn't been done yet. Here's what I would say about it: it's probably much harder than you think, it probably requires a lot more people to be involved, and it may not even work very well. The short answer is we have no evidence. Again, we have lots of speculation, we can imagine it. But if you take the evidence, show me anywhere where one person has killed a million people, or one person has killed 100,000 people, or one person has killed 10,000 people. People say all the time that technology is allowing this, but it hasn't happened.
And why is that? I'll tell you why. Because as these technologies become more powerful, they become more social. They actually require more and more people to be involved to make them happen. And that social-ness works as a deterrent. In order to make it happen, you have to have the work of a larger group. It has to be a bigger idea. It has to be more than a rogue individual. You need a team, or maybe a state, because these technologies are that powerful. We imagine them as being in the service of an individual, but even a nuclear bomb is probably not going to be a lone person's work. So I'm only saying: yes, I can imagine it with you. But if we look at the data, the data does not support your statement.
David: Interesting. So there's this really interesting relationship between technological progress and moral progress. I was thinking a lot about this as I was preparing for our conversation today, and I'm going to posit something. This is where I ended up, without having thought about it a lot at all, so forgive me if it's wrong, but I'm going to throw out a thesis: that moral progress doesn't exist nearly as much as we think, except insofar as it follows from technological progress. So let me frame this for you. And it may very well be wrong.
What technological progress does is give us wealth and give us leverage. And because we don't have to deal with the evil-inducing elements of scarcity, we can then have the privilege to act like less of the Hobbesian savages that we maybe are, and much more like these good people with good intentions and good morality. And so moral progress is in some sense overrated, because it is just a result of technological progress. Crazy thesis, it may well be wrong, but I want to hear your response to it.
Kevin: I agree with you. Going back to my limited experience with the early past and its struggles: the whole point of Malthusian theory, which was a spark for both Wallace and Charles Darwin in evolutionary theory, was the fact that in nature, all populations of whatever plant or animal, and certainly hominids, would reproduce to the limits of what they could sustain. Right? So every population would come up against the limits of starvation. They were always on the edge of starvation, all populations. That's just the nature of it: they reproduce until they can't reproduce anymore, because they're at the limit. Science was the way out of that. That was true for humans until fairly recently. And the accounts of surplus-based civilizations that had contact with people who were still in their subsistence mode always talked about how those people were basically always hungry.
And they were always at the limits, and their kids were dying early, because they were basically right at that limit. So it was only through food, science, and technology that we were able to escape that ongoing pressure and begin to invent ourselves. As we have invented our humanity, we invented our sense of fairness. We invented all these things, and that surplus has given us the ability to not just worry about surviving, but to actually thrive and prosper and change. So we've been changing all along, and people don't realize how much we have invented ourselves through technology. Cooking is an external stomach: it allowed us to digest nutrients that we could not digest ourselves, and it changed our teeth and changed our jaw. We evolved lactose tolerance in adults as soon as we domesticated certain animals, allowing us to have the nutritional benefit of milk as adults. We've been changing our genes the whole way along.
And we're still in the process of inventing ourselves. So that has definitely come from the technological surplus, you might want to call it, that has only been present in the last 300 or 400 years, and more so today. So I think that as we go forward, this arena of our meaning, what humans mean and what humans are about, is going to be our main area of focus now that we have surplus. Now that we have too much to eat, now that we have air-conditioned boxes with running water and wifi, we're going to start to think about, well, why am I here? What am I doing? And so we will continue to evolve in a moral dimension, in those concentric circles of expanding who we think we are: including others of different color, maybe including others of different species, maybe including the machines. So I think you're absolutely right.
David: Yeah. Two things came to mind as you said that. The first was Churchill's idea that first we shape our tools, then our tools shape us. And the second was what I'm going to forever call Kevin Kelly's hierarchy of spirituality, versus Maslow's hierarchy of needs. I think that's what you're getting at: that we can ascend to thinking about these higher questions. Which leads me to ask: how much of your work is almost a biography of the future, a history of tomorrow, versus a more science-fiction imagining of the landscape of tomorrow? Certainly there's an interplay between them. It's not one or the other.
Kevin: Sure. Sure. I have spent a lot of time traveling. I think as of next week it will be one year of not being on a plane, whereas I was on a plane all the time before that. And a lot of that time was spent in remote parts of the developing world. One of the questions I was always asking as I traveled around these countries, not just in the big cities, but trying to get out into the boondocks as much as I could, was: is the world converging or is the world diverging?
And I came up with a theory. Going back to Maslow's hierarchy, which, by the way, he never used; if you've read about it, he never talked about a hierarchy, but we've applied it to his work, and it works. What I observed was that around the world there is a convergence on the lower levels of Maslow's hierarchy, the basic essentials of shelter, food, and clothing. An airtight, air-conditioned apartment with wifi: that's what everybody in the entire world wants. If you ask any young person anywhere, that is their dream. An air-conditioned box, running water, plumbing, and wifi, and they're going to be really, really happy. And a t-shirt and sneakers and fried chicken. Okay. So the basic levels of Maslow's hierarchy are going to converge. But I think we're going to see a divergence at the top end, about what it means, what the purpose is, what we're about.
We're going to see continued, increased diversity in the answers to: okay, you've got the basic things covered, so what's this about? Why are you here? What's important? And that is very exciting. If that's really true, then at the base level it doesn't matter where in the world you travel: you go downtown, you can't tell where you are, people wear the same clothes, we've got the same stores. The basic things of Maslow's hierarchy: convergence, total convergence. But what does it mean to us? What are we about? Where are we going with this? What's the purpose? There we might see a huge divergence. And I think that would be a good thing.
David: Yeah. You mentioned this in the way that Christianity is splintering into all different ways of thinking about the faith.
Kevin: Right. Right. Exactly. So let me go back to your question about imagination versus truth. I have been talking about the future long enough to know that whatever I say about the future will be wrong. You really can't predict the future; it's way too complicated. As I was saying, the best you can do is talk about the outer realm of what's possible, and then not be surprised by it, because these complex systems are just inherently stochastic. They're inherently unpredictable. And the further out we go, the harder it is. However, I do believe that there are trends built into the very nature of physics and chemistry that will dictate a certain course of development in technology. Meaning that my bet is that if we talked to all the other civilizations in our galaxy, we would begin to see a developmental pattern that civilizations go through: that they would do sewing and then pottery.
And there would be a progression, like the development of any other organism. As soon as they invented radio waves, they would have satellites; you would go through a certain progression. And that is because there are constraints on what can happen in the physical world. As an aside, which I'll come back to later: I think the most remarkable molecule in the universe is DNA. There aren't very many molecules in the universe that can arrange themselves into a sufficient number of stable variations that build upon each other. There may be two or three. There may be more that we could invent, but there are only one or two that can also self-organize. So this is a molecule that not only has that flexibility and stability at the same time, but can self-organize. We've looked.
And there just aren't that many choices. So that suggests that even other life in the universe is likely to be DNA-ish. That's a controversial statement, but I think it's so. There are constraints, positive constraints, on what is possible. If we had more of a dataset, we could say more about where things are going, because we'd have more than an N-of-one case; with N of one we can't really predict, but we can try to not be surprised. However, I do know that when we want to make something good that's very, very complex, it's easier to do if we imagine it first. It's really hard to get to a very complicated future that we don't imagine first. So I believe that what I'm trying to do is really not to predict the future, but to imagine a positive future that we want to aim for. Because what Hollywood and science fiction give us is dystopia. It's actually very hard to imagine any positive scenario of the future.
At least one that's on planet Earth, because Star Trek, which is positive, doesn't count. And I think that makes it harder to arrive there. I have to say, though, that trying to imagine a future with ubiquitous AI and commonplace genetic engineering and constant tracking of everything, it's hard to imagine a future that we want from that. But I don't think it's impossible. I just think that we have to get better at imagining it. So you're right that I see my role not as trying to predict and be correct, but as trying to help us imagine a positive future that we want.
David: Yeah. I want to ask you about a specific article that you wrote, because for me, it's the best thing you've ever written by far. Not because the other things aren't good, but because this is just a stellar piece of work that I always come back to, and that's New Rules for the New Economy. You wrote about how networks will unfold. And I have read that thing so many times and made specific bets in my career around the truth of that article. And there's a single sentence that I really appreciate: in the network economy, the separation between customers and a firm's employees often vanishes. You see that with Tesla, you see that with Apple, and you see it all over the place. And my question is: how much of an article like that is predicting the future, versus understanding the present in such a lucid way that you're just describing what's happening now, but to everybody else it looks like futurology?
Kevin: Right. You nailed it. That's the best way to put it: I'm not predicting the future. I'm actually trying to predict the present. I'm trying to describe the present clearly enough to say, this is what's happening now in its early forms, and this is the way it's leaning. So I see technologies as having leanings or biases. Another example was that the internet wants to copy things. My early experience on The WELL and other places was all these copies; once something is copied, it's really hard to stop the copying. Everything wants to copy. What's that about? If you watch what people actually do, not what you think they should be doing, but what they actually do, you get this feeling that there's a bias in the technology: the technology wants to make copies.
And so we ask ourselves, as an anthropomorphic thing, what does the technology want? It's a trope, it's a hack. It's not really true, but it's useful to say, well, there's a bias, a way it's leaning. So what I try to do is, like I say, listen to the technology: where is it tending? And watch people's habits, how they actually use things, not what they were invented for. Not what Mark Zuckerberg thought people were going to use Facebook for, but what do they actually use it for? And what is that? Can we generalize it? Is there something general about it that would suggest where it might be headed? We're really trying to describe, or see, or notice what's happening right now. That's the best I think we can do in terms of predicting things: to say, all things considered, it's going to lean in this direction.
David: Yeah. I love that word, biases. And I think a word that might illuminate what you're saying there is gravity. You could have this technological gravity, and it works across two axes. The first is the number of people using that technology, and the second is how long they use that technology for. So, to throw something out: if you were to go far out, you would have billions of people using that technology over a thousand years, and maybe then you have these inevitable results due to the biases of technology, provided that they don't change, all things being equal. But if it's one person over one day, those biases don't show themselves. And one of the biases that I'm very interested in is reading on different pieces of media.
If I read on my phone, there is no way in the world that I'm going to get through a whole chapter of a Kindle book, no way. But somehow something translates when I get to my iPad, where I can do it, maybe, though there's no way I could get through three chapters. Whereas if I leave my phone upstairs and I go downstairs and I open an actual book, I can get through a couple of chapters, no problem. So you see there that there are different biases of focus and distraction in what the phone does versus, say, an iPad. Even though they have the same capabilities, the way the media is actually biased makes you more social on your phone and more consumptive on your iPad.
Kevin: Exactly. And you could imagine a bunch of engineers deciding that they were going to try to make a device that would really focus your reading and make it even easier than a book. There may not be a huge economic incentive for them to do that, but it's possible. You could have something that's even more biased toward reading than ink on pages. Like, how could you make a book even more bookish than it is?
David: Send someone to an Island in the middle of the ocean with no wifi. And that’s how you do it.
Kevin: A code of silence, whatever it is. There could be lots of things. But I think your image of gravity is a really good one, because I have a related image, which is one of the ways we can look at where technology is going with these biases. The visual analogy I use is: imagine a landscape, a valley, and it's raining, and drops are coming down on the hillside. One drop lands on a ridge of the mountain. The course of that drop as it comes down to the river is unpredictable. You can't predict where that drop is going to go, but you can predict one thing: gravity will take it down. The direction is inevitable.
Okay. So that's the level at which we can talk about the future of technology. We can say, here's the direction; it's inevitable. We can't describe the path in specifics, but we can talk about directions. And that's because of the gravity. So I think there are gravities that pull, and we can't talk about specifics, what the particular product is, what company will win. All those specifics are completely, inherently unpredictable, but the direction, down toward gravity, may be very clear.
David: Yeah. I heard a good one one time: that the arrow of directional progress in computing is toward closer and closer to the body. Like my watch.
David: Like my phone, ever and ever smaller. One of the things that I think-
Kevin: Right. Where does that end? It ends inside you.
David: Right, exactly.
Kevin: Right. Right. It starts at a distance in the room, then at the refrigerator level, then next to you on the desk, then on top of your desk, then it sits in your lap, then it hops into your pocket, then it's on your wrist, and next it goes inside your brain, or whatever. Or, here's my version of that: we're going to go inside of it. That's AR and VR. We actually go inside the computer. The computer is around us. It surrounds us. That's what spatial computing in the mirrorworld is: we become so close to it that we can't get any closer, so we actually go inside the computer, and the computer is around us.
David: That's something I'm really looking forward to, because there is something so non-tactile about the screen. It's funny, because we're actually moving into a place where I think the world is ready for this sort of tactility of the screen. If you watch kids go to a museum, what do they want to do? They want to touch everything. I think there's actually this frustration, and I see it with a lot of people who have spent a lot of time working with machinery, working with tools, and then have to spend their whole lives on the computer. The way they can't actually touch forms and stuff becomes really frustrating.
When I was a kid, one of the things I always wanted to do was dive through the television screen and go be there. That is sort of what you’re saying with virtual reality. That is what it becomes. It becomes a place where you can actually pierce through the screen into that environment and begin to touch everything. I think that’s a really important aspect of what VR is. We always think of VR as something that covers the eyes, but in reality, the hands and the smells are really what are going to make that technology transformative.
Kevin: Right. Right. The guys who study this say that the visual is only 50%. The other 50% is the tactile and the audio sensing. And AR versus VR: augmented reality, where you wear the clear smart glasses and see the real world, is part of that. There you're not even blocking anything out; you're seeing and hearing the real world in addition to this overlay of the virtual, one-to-one world. But you're still inside the computer, because the computer in general is watching all your movements. The world is machine-readable, and you are inside of it; you are machine-readable. VR and AR, or what they call XR, either one of those, is where it becomes so close to us that we actually go inside and flip around inside. People ask me what's after smartphones; I think smart glasses are the thing after smartphones.
David: You wrote a post with 68 bits of unsolicited advice. One of them was that you really don't want to be famous: read the biography of any famous person. But you, at some level, have fame. I mean, I'm psyched to be talking to you right now because of your fame. What is the optimal level of fame? Is it a gradient, or what were you referring to there? Because at some level, being famous, founding Wired, you have some level of notoriety, but maybe that's not fame.
Kevin: Yeah. I have a little touch of fame in China, where I’m a rockstar and I have bodyguards.
Kevin: I'm recognized on the street. But the definition, and the problem, of fame is that you are recognized to an extent where you are not allowed to make mistakes. Okay? You can't make mistakes in your social life. You can't make mistakes in your creative life. That's fame. You're constrained by that. It's terrible. This is the thing: it's not the paparazzi that's the problem. It's not having people follow you or stalk you. It's the fact that you are no longer allowed the courtesy, or the liberty, to make mistakes, to try something that doesn't work, to fail in some way, to have a friendship break up, whatever it is.
That's what I call fame, and toxic fame is what you would get if everybody recognized you: you couldn't try something anywhere. A lot of people who are famous lament the early days, when they were trying things and nobody cared, and that's when all the great stuff happened. I am really impressed with people who are famous and continue to make really good stuff, like, say, Bob Dylan, in part because they become recluses in some ways. They hide themselves from that fame in order to be able to do the things that you need to do to be creative, which is to have failures.
David: Yeah. That really goes back to some of the things that we were talking about earlier with government and companies.
Kevin: Right. Right. One of the things I say about government is that the business of government is to be inefficient. We saw that with COVID: no real business run by competent people would allow itself to stockpile 50 million masks just in case. You can't have extra nursing staff that's not being used just in case. That's what you want government to do. You want government to stockpile all these things-
Kevin: In a very inefficient way. You want them to have extra surplus, whatever it is. It's like old Egypt with the Pharaoh and the grain for the lean years. That's inefficient, and no company that is a real company can afford to do that. That's what we want governments to do. They want to be inefficient in that sense of having that kind of capacity, and also investing in things that aren't going to pay out for 10 or 20 years. That's inefficient.
That's the role of government, and that's what makes it conservative: governments aren't allowed to make mistakes, so they can't be efficient. We lament and we cry and we criticize them and we taunt them because they aren't innovative, but that's their purpose in some ways. We want to have institutions that do that. We particularly want to have institutions that take a long-term view. This is sort of the whole business of The Long Now Foundation: we want institutions that are good ancestors. We want to be good ancestors. We want to be able to do things that may take more than our own lifetime to finish. That's not efficient, by the way. Okay? That's another inefficiency. We have to judge things on a different timescale, but that's part of what governments should be doing, and part of what businesses should not be doing.
One of the things that I think a lot about these days is this issue of efficiency, because in Silicon Valley there is a culture of optimization. Okay? Oftentimes efficiency is what we're optimizing for. I think that's good and necessary, but there should be other parts where we're not trying to optimize efficiency. I actually think inefficiency is part of the origin of creativity and human experience. Right now, what we're doing is not very efficient, right? We're chit-chatting, we're digressing, we've got tangents, we've got pauses. We're totally inefficient, right? That's the beauty of it. That's what we love.
Science is terribly inefficient. Think of all those experiments that don't work. If you were optimizing it, you would say, well, don't do the experiments that don't work. Don't have failures. Just try stuff that you know is going to work. Innovation, entrepreneurs, making prototypes, those are all inefficient; hitting dead ends, pivoting, terribly inefficient. Why don't you just aim for what you want and go right there? No, no, no, no. Art is inefficient. Nobody ranked Picasso on how many paintings per hour he was churning out. Right? Discovery, adventure, every single thing that we really value as humans is terribly inefficient.
What I say is efficiency is for robots. Efficiency is for AI. Humans are about inefficiencies, and what we want to value and emphasize and cultivate are all the things that are inefficient. That doesn't mean all inefficient things are desirable, but a lot of the desirable things are inefficient. We have to be careful in our culture of optimization that we're not optimizing efficiency for all things, because that is robotic. Productivity is for robots. All right? We want to be going in another direction, where we optimize something else. We want to optimize opportunities, choices, possibilities, innovation, novelty, all those kinds of things. Love, optimize love. None of those are very efficient.
David: Yeah. This leads me into writing. I want to talk a little bit about writing with you, because you've written so many books that have been hits and that, I think more importantly, have been ways you've taken the hazy ideas in your brain and turned them into concrete words on a page, so much so that you go to China and people recognize you on the streets. I mean, who cares about the external stuff, but the fact that you have documented so many of these ideas on paper so that we can share them and pass them along through generations is beautiful. This is one of the things within writing that I struggle with so much: most of the words I put on the page I end up deleting. I press that backspace button and they leave the page.
How do you think about efficiency in your own writing process? One of the things I've found, since we're talking about conversation, is that the reason we love conversation is, like you said, not because it's efficient. On the inverse side of efficiency is randomness. At some level that randomness leads to serendipity, serendipity leads to generativity, and generativity leads to innovation. I always find that my best ideas, at least the ones that helped me cross the adaptive valley, that helped me see sights anew, come through conversations, but conversations are extremely inefficient. I know you spent, at some point, six months researching virtual reality for a feature piece for Wired. That's inefficient, right?
Kevin: Well, first of all, my own conception of myself is much more as an editor than a writer. I love to edit and I hate to write. I don't like writing; I procrastinate writing. I write out of desperation, and I write primarily to discover what I think. I don't have access to what I think until I try to write it. I think I know what I think, but when I try to write it, I realize I don't know what I think, and I have to go back and do more research and think about it some more. I'm a very slow writer, I'm a slow typist, and I'm not at all efficient in how I do things. It may seem productive, but that's only because I'm old. It's not because I'm really very productive. My process of writing is iterative.
I've come to like writing in public, publishing early drafts, because I get such good feedback about what people understand and what I don't know, or mistakes, or new sources. The one primary thing that I try to optimize when I'm writing is clarity, clarity on the small scale and even clarity at the large scale. That's really the only thing I am asking myself: how can I make this clearer? A good turn of phrase, of course people like that, and I'll save those babies, but in the end I will kill them if they aren't communicating, if they aren't clear.
I don't see myself as a writer who's really good with words. The clarity that I'm trying to get is actually for my own sake. I need to really understand this, to see this, in order to make it clear to someone else. Most of my time writing is pacing, and that pacing is me trying to see it, to be able to convey it. For me, it really is a kind of thinking on paper.
David: Have you tried the McLuhan-Churchill style of just talking it out and seeing what happens?
Kevin: I have a couple times, and my problem is there’s not enough resistance. I tend to say what I know. I find it easier to say something new when I’m writing it. I don’t know why that is. Maybe I could change that. Maybe I could learn to force myself harder when I’m speaking, like dictating. I don’t know. I’ve tried a couple of times and haven’t really been very successful, but it is very appealing because I’m such a slow writer. Have you tried it?
David: Yeah. I do it all the time, man. I do it all the time.
Kevin: You dictate most of your stuff?
David: I don’t dictate most of my stuff, but here’s what I do.
Kevin: You say all the time, what do you mean by all the time?
David: There was a while when, on my way home, whenever I got off the subway when I was living in New York, I would dictate one idea, get an auto-transcription, and keep a fair amount of them. Then the other thing I do all the time: I'm working on a big essay right now about how to save the liberal arts. I just finished the first draft. It's about 10,000 words, but it's super disjointed. What I'll do is read the entire thing and then speak out a summary, and I'll give myself like three minutes to do so. When I have to summarize, something that is amorphous, almost like an amoeba, all of a sudden gets structured into an idea with a logical flow. I find speaking forces me to structure something, whereas my words on a page are much more cubist. They're all over the place, and they slither and slather in ways that don't make sense.
Kevin: That's an interesting idea, trying to speak a summary, because here's what I have observed: the best time to write a book is after you've done the book tour for the book. After you've had to summarize it for a year, you finally realize what the book is about. It's like, I wish I knew that when I was writing the book. The idea of speaking out the summary might actually be something I should try.
David: Go for it. Hey, I've got a question for you about The Long Now. I find that The Long Now stands in contrast to the biases of screens, and the biases of … Neil Postman saw this as the biases of television, which is that it's all about the now. We live in this never-ending now where we're completely aware of the news cycle. We're completely aware of our moment in history right here, right now, but we don't get introduced to that many historical ideas, at least on a percentage basis.
If you go to your college library, the average book you find will have been written 50 years ago or something. Now, the average post on social media is like 24 to 72 hours old. What you're doing, and I don't think people really realize this, is one of the implicit or explicit messages of The Long Now: in our perception of time, we can actually stretch it like an accordion and have a long now clock that operates over, what, 1,000 to 10,000 years, where we're then thinking from generations past to generations future rather than from the beginning of the week to the end of it.
Kevin: Absolutely. You're absolutely right. Another way to convey what The Long Now perspective is about is that we want to learn to become good ancestors. Just as we have benefited from the work of our ancestors, building roads, building buildings that have lasted for generations, we want to be good ancestors ourselves, so that people in the future might look back and say, "Thank you for doing that." How do we become good ancestors when things are moving so fast, and how do we make something where most of the benefits may not come now, but down the road, two generations from now? Right? That's the benefit of science: investing in pure science and research is not going to pay off next year, maybe not even in five years, but it will pay off in 10 or 15 years. Maybe we should be investing in things that don't pay off for a hundred years.
How can we become good ancestors and invest in things that will not pay off during our lifetime, but in the future? Some people have talked about colonizing the future: we're being imperial, robbing future generations of their value by using it up now. Right? This is a form of colonialism and imperialism, and we actually want to decolonize the future and allow more of today's value to carry forward and increase. The other metaphor: a lot of Silicon Valley folks and entrepreneurs are really good at scaling things up, figuring out how to do something and scale it up in volume and in size, but we're much more interested in scaling things up over time, so that they start small now and become bigger and bigger as they go along. Again, most of the benefits happen in the future.
That scaling up in time is a skill we don't really have and want to promote. Then there's maintenance, right? How do you maintain things? Again, this is maintaining, as a good ancestor, into the future. The Long Now, of course, is contrary to the short now, which is the last five minutes and the next five minutes. We want to think about the last 10,000 years and the next 10,000 years. Most of the futurists I know who are any good are very good historians. They want to take the long view. Again, we're not trying to make a plan. We're not trying to predict the future. We're not like Asimov's Foundation, which has some kind of thousand-year plan. No, we're saying we want to liberate the long-term imagination, to go to your point. We want people to imagine over large timescales, to imagine what you could do now that might take a long time to succeed or come to fruition, to imagine what we would be like if we kept future generations in mind when we made things.
David: Yeah. That’s a beautiful way to close. Kevin Kelly, thank you very much.
Kevin: Thank you. It’s a real delight. I loved having our conversation. I really appreciate your taking the time to talk to me.
David: Sure thing.
Cover Photo: Christopher Michel