Ash Fontana: Building Artificial Intelligence

North Star Podcast

LISTEN HERE: ITUNES | POCKET CASTS | OVERCAST | SPOTIFY

Ash Fontana is an entrepreneur, investor, and author. As an entrepreneur, he was one of the early employees at an online investing platform called AngelList. From there, he became the Managing Director at Zetta, the first investment fund focused on artificial intelligence. Now, he’s the author of The AI-First Company.

This conversation is about that book. Ash says that AI-First companies are the only trillion-dollar companies, and soon they will dominate even more industries, more definitively than ever before. But we don’t just talk about the book. We also talk about health, continental philosophy, and Ash’s obsession with bicycling. Please enjoy my conversation with Ash Fontana.


Find Ash Online:

The AI-First Company

Ash’s Twitter

Ash’s LinkedIn

Other Links:

AngelList

Zetta

The Bed of Procrustes

Write of Passage


Show Notes

1:38 – Ash explains why he believes that all you need is a problem to get started in AI.

6:27 – What happens to disruption theory once everybody knows about it; how the dimensions shift.

10:03 – How insightful computers can be today and what the future of insight generation looks like.

13:52 – Designing a world that is easy for computers to process using various sensors.

18:47 – Whether or not people will develop an intuitive understanding of databases in the future.

21:31 – Why Ash says that we are conductors in a symphony of intelligent systems.

23:01 – Ash shares some of the incredible innovation he has seen in his position as an investor and the implications of those technologies.

28:45 – Learn more about the complicated relationship between intelligent data and surveillance.

30:55 – The fragmentation of AI technologies that will have to occur to protect personal privacy.

34:21 – How advancements in AI are potentially hampered by current scales of data.

36:28 – The opportunities for collecting data to solve the problems that Google or Amazon can’t.

38:15 – Ash’s interest in continental philosophy and how it informs his thinking about AI.

42:28 – The utilitarian rational articulation of AI systems versus the exquisite unpredictability of life.

50:54 – The explore-exploit trade-off that many of us make in our daily lives.

52:53 – How Ash’s love of cycling is related to the Venetian work ethic of doing a good job.

55:31 – Why he finds the angle of the seat tube and the head tube on a bicycle so fascinating.

57:24 – Ash’s intuitive approach to cycling and why he believes most people do too much high-intensity training.

1:04:02 – What it was like for Ash to be at a company as influential as AngelList very early in its history.


Transcript

[00:01:34] DP: How do you build the systems that learn? That’s just a beautiful question.

[00:01:38] AF: This goes back, actually, to the first question, which is, do I need to know anything about AI to get this book or to be interested in AI? Or I would say, to add to your question, to get started with AI? I think the answer is no. Because what you start with is a problem. The problem is framed in a temporal sense.

I have the problem of trying to understand this thing that may or may not happen in future, or this thing that I know I need to think about in the future, like a decision I might need to make in future. I know I have this problem, because I face this problem 100 times in my current job. I know I have the problem of trying to figure out what ads to target to what audience, or trying to figure out when there’s going to be a defect on my manufacturing line, or whatever. Because I’ve had that problem 100 times.

I want to build a system that can help me solve this problem, make this decision, make a prediction around this decision, so that I make it better next time. That’s the starting point. From there, it’s all pretty straightforward these days, depending on the type of problem, because a lot of the infrastructure is out there now to help you build systems around that. How do you get started? You think about what you really want to predict and what you really want to solve. How do you then build the systems that learn? To your second question, it’s really starting with a good definition and it’s starting with a good experiment. It’s like, how do I design an experiment that makes a little part of that prediction, or helps me with a little part of my decision? Let’s see how it goes. Let’s see how accurate my prediction is. Then let’s improve and improve and improve over time.

Then you start, once you’ve really defined it well and run those initial experiments, then we can start talking about building distributed learning systems, which is what people think of as AI.

[00:03:40] DP: Oh, interesting. That’s an interesting definition of artificial intelligence. Within prediction, how do you think about regime changes? One of the fascinating news stories that I got somewhat into at the beginning of the pandemic was that the airplane price predictions no longer held, because we were so far outside of the band of what was statistically significant. I guess, you could call that a regime change. Is that some flaw in what artificial intelligence can do? Or are we beginning to build artificial intelligence systems that can actually respond to that?

[00:04:17] AF: Yeah, it’s very methodology specific. Take a lot of the yield optimization systems that airlines use. By the way, I love that we’re already talking about planes, because it’s a conversation with David, and a couple of minutes in, if we can talk about planes, we’re doing well.

I think a lot of those systems that they use, or the sub-systems of the price prediction and yield optimization systems that airlines use in particular, are based on statistical methods that, I guess, are very limited in their consideration. They consider the data they have and they extrapolate forward, to oversimplify. Obviously, these systems are very advanced these days, but that’s really what they are. They’re linear optimization-based thinkers, pretty simple statistical stuff that extrapolates forward.

They’ll only see within the distribution of what they’ve considered. You think of a Gaussian distribution, you think of these normal curves that we all at least looked at and maybe properly studied in school. That’s all that’s in their realm of consideration, these systems. If you think about different forms of machine learning and artificial intelligence more generally, you can start considering things out of the realm of previous consideration that could have, for example, predicted something like this.

If you think about maybe creating an agent-based simulation, so you think about creating a simulation where you’ve got the number of planes, you’ve got the people that book seats on planes, you’ve got all these other factors, you’ve got some constraints. You set up a little environment, like a little game, and you just let it run. That may have got to the point where, in the game, it had considered a situation where all of a sudden, no one is flying at all. 

That is a very different way to model a problem than using statistics, like building games is very different to using statistics. Building that game may have been able to let us see a situation like the one we’ve faced. That’s a very different form of intelligent system than what a lot of people use in a production sense.
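
To make that concrete, here is a minimal sketch of the kind of agent-based booking “game” Ash describes: some travelers, a capacity constraint, and a small chance each month of an event that stops travel. Every parameter and name is invented purely for illustration; real yield systems are far more elaborate.

```python
import random

# A toy version of the agent-based booking "game" described above.
# Every parameter here is invented purely for illustration.
random.seed(7)

NUM_TRAVELERS = 1_000   # agents who might book a seat each month
SEATS_PER_MONTH = 600   # capacity constraint
BASE_FLY_PROB = 0.5     # baseline chance a traveler wants to fly
SHOCK_PROB = 0.01       # small chance per month of a travel-stopping event

def simulate(months: int = 120) -> list[int]:
    """Let the game run and return seats booked per month."""
    bookings = []
    shock = False
    for _ in range(months):
        if not shock and random.random() < SHOCK_PROB:
            shock = True  # once the shock hits, demand collapses for good
        fly_prob = 0.02 if shock else BASE_FLY_PROB
        demand = sum(random.random() < fly_prob for _ in range(NUM_TRAVELERS))
        bookings.append(min(demand, SEATS_PER_MONTH))
    return bookings

history = simulate()
print("typical month:", sorted(history)[len(history) // 2], "seats")
print("worst month:  ", min(history), "seats")  # collapses if the shock ever hits
```

A statistical extrapolator trained on the pre-shock months would keep predicting roughly 500 booked seats; the game, by construction, can visit the empty-skies regime.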

[00:06:27] DP: One of the questions that came up to me, and you get into this towards the end of the book, was about disruption theory. One of the questions that I’ve been trying to answer for years was, what happens to disruption theory once everybody knows about it? Talk about that, because it seems to be something that you’ve thought about quite a bit here.

[00:06:46] AF: I love questions like this, because it’s like, well, what’s the end-state of everyone knowing something in a zero-sum game? If we all know the same thing, but someone has to win and someone has to lose, who wins and who loses? The answer is, well, the bar just goes up. I think, to recap for everyone out there, what’s disruption theory? It’s a theory – I’m going to totally oversimplify it. It’s a theory that you come into a market just doing one very specific thing. You’re not providing all the things your customers need, but you’re just providing one element of what they need, but 10 times better, faster, cheaper.

Then, because you get in with that little wedge, then your customers are okay with you providing the next thing and the next thing, and eventually, you take over all of their needs and their vendor that previously served them is irrelevant to them at that point.

What happens to disruption theory over time is that the basis of disruption, the catalyst for the wedge, the initial thing you do, just changes. I think the theory is actually a very good one and will stand the test of time, so to speak, but the dimension of disruption changes. In the industrial era, the dimension of disruption was: if I can run my production line better, if my workers are just a little bit faster, if I train them better, then I’m going to be able to do this 10 times faster. Then, as we moved further on into more resource-based economies, it became more about scale.

If I can pull more metal out of the ground using the same amount of trucks, because I have a richer seam or something like that, then I can provide this thing 10 times cheaper. Now with AI, what it becomes is a question of not scale of data and not speed of calculation, necessarily. It’s accuracy. If I can train a model to be more accurate in the predictions it develops, then I can go in and if it’s much more accurate, and say to customers, “Look, I know you’ve got this system today that’s this accurate, or doesn’t even do any predictions, but mine’s a little bit more predictive, or a little bit better at predictions.” That’s the basis of disruption.

They say, “Okay, we’re still going to keep what we have. We’re going to still keep our CRM, where we keep all our leads for sales. Because we don’t want to throw that out yet. It’s got a big database of all our sales leads that we’ve ever contacted and want to contact. We’re going to keep that, but we’re going to let you run this little bit on the side, where, five times a day, you can send us a suggestion of who to call.” Over time, if those suggestions are good, they’ll completely move. They’ll spend all of their time in your system and they won’t even use their CRM anymore, because it doesn’t matter who they thought they should contact, if you’re telling them who to contact and it’s working.

That’s what happens to disruption theory. I think it’s a really good theory. The question just becomes, what’s the dimension of disruption? Is it a cheaper resource? Is it a faster processing thing? Or is it a better prediction? I think that’s what’s happening now, the basis of disruption shifting to accuracy and better prediction, rather than just calculating stuff a bit faster.

[00:10:03] DP: Yeah, in the book, what you say is “charge customers more for novel AI-based features, such as personalization, insight generation, and automation.” The one that’s most interesting to me there is insight generation. Because an insight is like an emergent property. Basically, you’re taking what you have and then you’re processing that information differently. You’re remixing it in some way. Then from that remix, you end up with something new and novel.

I just have no sense for how good computers are at actually doing that. I have a sense of the sci-fi vision of where that could go. Right now, right here, I have no sense for how good computers actually are at doing that.

[00:10:44] AF: Well, I think you can get a sense in daily life. It’s not a good sense, but it’s a sense: how good are the recommendations I get from Amazon about what to buy? Or from Netflix about what to watch, or Spotify for what to listen to? You can get a sense. They’re pretty advanced systems that generate these recommendations, in terms of, they generate a lot of recommendations for a lot of people very quickly.

Now, they’re not that advanced in that their realm of consideration is pretty local. It just looks at what you’ve listened to, maybe what the people you’re connected to have listened to, what you bought, what you clicked on, whatever. They’re not really going to come up with something that you view as highly insightful. Like, “Oh, wow. I was interested in this theme of documentaries, and you’ve recommended something in that theme.” They’re probably going to recommend a documentary by the same director or something like that, which is not particularly insightful. It’s pretty obvious, I guess.

We can, using slightly more advanced techniques that are beginning to be able to be deployed at the scale that Amazon and Netflix and whatnot need, start considering things outside of that really local realm. For example, they don’t just get information from the things you’ve watched, like who was the director, how long was it, all that stuff. They look at themes. They look at all the text in the movie you’ve watched, or all the text of the podcasts you’ve listened to. They pick things out; they look at it on a bigger scale. They don’t just look at what was said in this first sentence. They look at, well, how many times did this word come up in this one? Then, how many times did it come up in another one?

Or what was the meaning of this one-and-a-half-hour podcast? What was the one-sentence summary, and does it line up in some significant sense with a one-sentence summary of something else? That was really hard to do in the past, basically looking over huge volumes of text and trying to understand the meaning. We could look at what was called an n-gram, something of a certain number of words long, but it wasn’t very long.
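
For readers who haven’t met the term: an n-gram is just a contiguous run of n words. Here is a minimal sketch of counting and comparing them across two documents; the toy sentences are ours, not from any real recommender.

```python
from collections import Counter

def ngrams(text: str, n: int = 2) -> Counter:
    """Count word n-grams: contiguous runs of n words."""
    words = text.lower().split()
    return Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))

# Two toy "transcripts"; a real system would run this over hours of text.
a = "building systems that learn starts with a good problem definition"
b = "a good problem definition is where systems that learn start"

shared = ngrams(a) & ngrams(b)  # n-grams that appear in both documents
print(shared.most_common())     # includes ('a', 'good'), ('systems', 'that'), ...
```

The limitation Ash mentions is exactly this: as n grows, the counts get sparse very quickly, which is why summarizing the meaning of a whole podcast took newer techniques.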

I think you can get a little bit of a sense of how insightful computers can be today, but it’s not a very good sense. I think we’re going to start seeing more and more of that, as some of these newer techniques are able to get to the scale where they’re able to operate in or through the products we use every day.

[00:13:09] DP: It’s funny. I think of this as: if you’re going to design an intelligent AI system, there are two categories. There’s what you’re saying here, the ability to process information. Then there’s this other category, which is the ability to create information in a way that can be processed. If you take a podcast, text is easier to process than audio. One of the things that I think a lot about is: how do you actually design a world that, assuming that computers are good, is easy for computers to process, so that then we can have more of the world being driven by computers and we have, basically, a more intelligent humanity?

[00:13:52] AF: This is such an interesting thought experiment or design challenge because I can take a really contemporary example, which is autonomous vehicles. How do you design a community where autonomous vehicles can operate really effectively? Well, you put sensors absolutely everywhere. They’re essentially running on rails. The rails are very small, and you can’t really see them, but they’re running on rails. That’s a world in which autonomous vehicles can work.

Autonomous vehicles can’t work in our current world, which wasn’t designed for that, because there are potholes and people running around and branches falling off trees and thunder and lightning, or whatever. They can’t operate in that world. You can think about this in lots of different ways. How do you design a world where there’s a lot of good sensors, so that machines can do sense-making from that? This is really exciting.

Look, if I had the ability to go out today and raise a multi-billion-dollar fund, I would, and I’d invest it only in new sensors that we can use to help understand the world better. For example, imagine if you had an eye that could see on a hyperspectral basis, that had much more of an ability to see different spectra of light, or across the light spectrum, and you had those eyes everywhere. Imagine if you had smell sensors everywhere, pressure sensors on everything, but they were all very small, very cheap, had low energy requirements, etcetera, etcetera.

If we had more sense gathering, we could do more sense-making. I really like that design challenge you pose there. You can pose it on a very local basis. How could I make the room where I do all my video conferencing better, so that it senses how I feel and then communicates that to the person on the other end, so they can perceive me properly? Do they know how hot or cold I am? How hot or cold the room is? What happened in that room? What time of day it is outside? What’s the ambient light? Do I have a lot of blue light, am I awake? Do I have a lot of red light, am I falling asleep?

Then, how do you gather that and communicate that to someone? That’s a hyperlocal example. You can think of this on a much less local basis, like with roads, with the environment, understanding the weather, etcetera, etcetera.

[00:16:17] DP: Yeah. I’ll give you a very practical problem in my life. I cook on high heat a decent amount in my apartment. The problem is when you cook on high heat, it really hurts the air quality in your apartment. I could cough up a couple hundred dollars for an air purifier, which is probably what I’ll end up doing, but it’d be really cool if there were sensors on the walls and on the ceilings. Another thing is dust. Some people are really allergic to dust. In the past, I’ve done a spring cleaning in my place, and then all of a sudden been like, “Oh, my goodness. I feel so much better.”

It’s almost like the frog in the boiling water, where the frog is just in there, the water gets hotter and hotter. All of a sudden, it’s bubbling and the frog dies because the changes happen so slowly. Without sensors, you don’t really realize this stuff.

[00:17:06] AF: Oh, absolutely. There’s so much in our own homes and lives that we are just completely unaware of. Dust, mold, etcetera, is one thing. Then there are those volatile fat particles that have come about because you’ve cooked above the smoke point, and then they’ve settled in the curtains, or whatever else, and flown around. That’s another problem as well. We can solve that problem for you by getting you to cook with macadamia oil, by the way, a good Australian product.

[00:17:33] DP: Explain that.

[00:17:33] AF: Well, because macadamia oil has a very high smoke point. As you cook on high heat, it doesn’t get volatile at the temperature at which other things get volatile. Therefore, the fat particles don’t start breaking down, vaporizing. Once they vaporize, they’re then in the air. When they settle and liquefy again, they liquefy in your furniture, on your carpets, whatever. Forget what happens in your home. Think about what’s happening in your body. Once you vaporize something, get it to that volatile point, ingest it, then essentially, when it settles, that then striates in your arteries and hardens them up. That’s why it’s good to cook with – If you’re going to cook at high heat, cook with oils that have a really high smoke point, like macadamia, so they don’t get volatile.

[00:18:19] DP: I had no idea about that.

[00:18:21] AF: This is why fried food is so bad, because you are cooking something at a very high temperature with an oil that has a very low smoke point, like vegetable oil. They get super volatile. Then once they settle in, yeah. I’m oversimplifying. I can go on and on, but I don’t want to bore people to death with the different chain lengths of different triglycerides. Yeah, that’s why fried food is terrible for you.

[00:18:47] DP: Back to AI. I have a dream about databases. One of the things that happened in the past 10 years is that people, just by being people, became really good photographers. If I’m traveling or something and I’m in front of some landmark, I always ask young people to take a photo of me. My question is, will young people have a similar intuitive understanding of how databases work? Or will we continue to go through college, do our Excel trainings, and then get into it? Because it seems like that’s a binding constraint, just how late in life people learn to use databases.

[00:19:32] AF: I think it’s a good question. It would certainly help people understand intelligent systems if they understood how databases work, how data structures are made, and which data structures are conducive to eventually doing analytics and developing some sense of what reality is represented in that data structure, sure. But I think the answer to your question is no.

If I take your comparison to the camera, I don’t think young people today understand the physics of camera lenses better than people did a long time ago. In fact, I think they probably understand it less. Because in the past, with a manual camera, you really had to understand how different lenses interacted with each other as you moved them around, and different light conditions based on the film you had, etcetera, etcetera. We’ve probably both been through that.

I think people understand the physics of camera lenses less. What they understand better is what makes a good photo in terms of composition and effects and things at that layer, which is many layers above the physics of lenses and light. I think what happens, or what will happen in the world of the future, where we’re trying to understand intelligent systems better, is people won’t necessarily understand databases, or won’t need to. Because they’ll be thinking many levels above, which is how to interact with systems to effectively train them so they work better for you. What’s a productive bit of feedback to give an AI system, a yes or a no? Or how do you use AI systems? When do you trust them? When do you not trust them?

I think that’s the level of consideration people will be at and not, how do I reform this data structure, so that I end up with a better system 20 steps down the line? We’re very gracefully moving up the ladder of abstraction in so many different things, because technology is abstracting away all those lower level problems for us.

[00:21:31] DP: Yeah. I hear you saying that we’re conductors in a symphony of intelligent systems, where the software is playing the music, not you.

[00:21:39] AF: Exactly. Just like if you’re filming a video these days. You’re focused on the movement of the people. You don’t have to focus so much on constantly changing the light and having multiple cameras running at once and doing all those things that are really hard to do and that great cinematographers and camera people and whatnot do. Because software does a lot of that now. It’s constantly adjusting levels and whatnot.

Like you and I right now, aren’t having to think about sound levels too much because, to some degree, they’re being adjusted in the background. What we’re actually doing is we’re focused on conducting the conversation. I like that. Conductors. Man, I think that’s how we’ll interact with or work with intelligent systems in future. We’ll get better at developing an intuitive sense of how to work with them. Again, what feedback will make them better, where to trust them, etcetera, rather than constantly tweaking the knobs behind them.

[00:22:36] DP: You, in your position as an investor, what was the last just, “Oh, my goodness. I cannot believe what is in front of me,” technology that you saw? I’m less interested in the actual technology in terms of, this is what the tech did, and more interested in the implications of the technology, which is, I think, what really matters for most people. This is now what you’re going to be able to do.

[00:23:01] AF: It’s funny. Those moments are rare. The moments where, as you said, the implications of the technology in front of you are so great that you think they’ll cause a shift in some significant part of society. I’m trying to think. I mean, look, there’s a lot in the field of quantum computing. For the people in the field of quantum computing, at this point it’s a question of degree; there’s this shared understanding that it’s coming, it’s happening, we’re not quite there yet, but we’re getting there step by step.

For someone coming from outside the field of quantum computing, I think if they came in and saw some of the stuff I’ve recently seen, they’d go, “Wow, this is going to change everything. All the ways in which we encrypt data and think that things are secure, we’ve got to question them again.” All of the ways in which we think, okay, you get a computer to do a thing, you wait a second, and you come back and it releases the output of that thing. That paradigm totally changes. Computers can do so many more things in parallel, quantum computers that is. In the realm of quantum computing, I see so much that will change so many elements of our economy and society, etcetera, etcetera. That’s one thing.

I think another thing is related to what we were talking about before, about sensing. I’ve seen some, I guess in a simple sense you’d call them cameras, really small cameras lately that can pick up so much more. They can pick up so much more of the spectrum and, therefore, understand so much more about what’s in front of them.

I think the third thing, I’ll stop myself after that, because I’m likely to see so many amazing things every day, is AIs that build chips that help AIs run better. I’ll take a step back. If you think about all the different ways in which we build artificial intelligence today, build intelligent systems, all the different models we use, they all run a little bit differently, as in they require different amounts of power, they require different amounts of things to run at the same time, or in sequence. Some of them collapse all at once, some of them have to have certain things happen in a certain order. They’re all a little bit different.

Trying to run them all on the same calculator, on the same chip is pretty hard. That’s why we have some different chips now. We have CPUs, and people would have heard of GPUs. Now, Apple has an integrated thing that’s running the conversation we’re having now on this laptop I’m using. That’s pretty cool. We’re going to have hundreds of different types of intelligent systems, of different AI models that we want to run. If we want to run them in the real world, we’re going to have to do it really cheaply without much power. We’re going to need to develop different chips.

Now, the problem is designing a chip is a very difficult thing to do. You have to map out all the different circuits. It’s very, very difficult work that requires a lot of experience in electrical engineering. Then fabricating that chip, so designing a production line that can build it, is a very expensive thing to do. In the past, it’s cost at least 10 million dollars to get a production line to build one particular chip.

Now, something exciting I’ve seen recently is an AI that can basically take: how much power do I want to use? How quickly do I want this thing to run? What are the parameters of this model? How does the model work? And go, all right, this is the optimal chip to run this on. Then it designs it for you, so that you can then just go, “All right, I’m going to go print that chip.” Now you’ve got a whole new computer chip. We could end up with ZPUs, XPUs, YPUs, whatever. All these different types of processing units, hundreds and hundreds and hundreds of them. That’s pretty cool.

[00:26:56] DP: That is really cool. Is it the extreme cost and economies of scale, and then, I would assume, the benefits of local knowledge, of just having a lot of people in a small area who know how to design chips? That’s why, to the best of my knowledge, there’s big-time chip manufacturing in three countries: the US, South Korea, and Taiwan. I might be wrong about that.

[00:27:20] AF: Yeah. Well, and I’d put China in that bucket now. The short answer to your question is, yeah. It’s really hard to do, and there are economies of scale in doing it. It’s not just about the design. It’s about the process and making it really clean and efficient and all that stuff. Yeah, that’s why there’s so much consolidation in that industry. In the world I’m thinking about, there may still be consolidation on the production side, but I think there’ll be less consolidation, or at least there’ll be a little bit of commoditization. Not really the right word, but a little bit of commoditization on the design and development side. That could be really exciting.

[00:28:00] DP: One of the things with sensors that I find to be pretty interesting is that sensors are, at least on first glance, one side of a two-sided coin, which is sensors sending information out. The problem is that that’s very closely tied to surveillance. As just a normal person, a consumer who knows very little about these ideas, one of the things that I would be a huge fan of is: how do I send more information out, get it anonymized, know that it’s being used in ways that are helpful at the aggregate scale, but then not be trapped at this super creepy level?

[00:28:45] AF: That is a problem that I think we need to solve. That is a problem that a few people are working on. That is a problem that I would like to see more people working on, which is the tail end of this. As you said, it’s one thing to collect all this information. It’s another thing to make it available to the systems that will ultimately get the value from the data, I should say, not the information, by turning that data into information.

It’s one thing for those systems to be able to do that without exposing any underlying understanding of people’s lives. People deserve, and this is my personal view, to live their lives in private, with the level of dignity that they want. Part of that is being able to do things without being surveilled. Yeah, I think it would be good if more people were thinking about this. I think it is a big concern.

[00:29:43] DP: Yeah. I mean, look, there’s certainly some really cool applications, where I just don’t worry about it. In my living room, or in my kitchen, I have a sensor that is a scale. I put my coffee on the scale. Once it gets fairly low, I automatically get a new shipment of coffee. It’s great, because I never have to think about ordering coffee. It always arrives. I don’t really care what people know about the coffee in my life.

[00:30:11] AF: Your coffee consumption.

[00:30:12] DP: Right. Eventually, I would love to have my kitchen tables be sensors, where they can sense and know how much food I have, and all the shelves in my fridge are sensors. Then, you can have really intelligent data on, say, when something goes bad. For example, I came back from a three-day getaway and I was like, “Is this salmon good to eat? Or is it not good to eat?” 50-50 with fish, you always say no. I would like more data on what’s good, what’s bad. That just isn’t really available. I feel like the kitchen is a great place where a lot of these ideas can really come together in a way that is healthy.

[00:30:55] AF: The kitchen is the place where most of my ideas come together, at least my better ideas. Yeah, I think you’re right in saying two things there. The limitation there is sensing. I’ve looked at a couple of really exciting companies that have, for example, developed very small, cheap gas sensors that you can put next to and around food and whatever else, and that tell you things like that. Is it emitting a noxious gas, which means it’s bad for you to eat?

Those companies face challenges in producing these sensors, getting them out, and then, ultimately, showing people the value of having that information. The coffee scale is a good example. If it was just a coffee scale that says “you don’t have much coffee left,” that doesn’t add anything to your life. You can see you don’t have much coffee left. It’s the fact that it then communicates to another server through the internet. Then that thing communicates to another server through the internet that eventually orders you more coffee that arrives at your door, without you thinking about it.

The fact that it’s saving you time and effort is what makes that sensor valuable and what makes the manufacturer of that sensor money. I think you’re right in saying that that’s a limiting reagent in making a lot of these things more effective for us, or really seeing the value in a lot of these relatively simple intelligent systems: getting the data in the first place and having the right sensors.
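
As an aside, the loop they are describing is mechanically tiny: a threshold check plus a network call. A sketch, with a made-up threshold and SKU, and a stubbed-out order function standing in for the vendor’s real API:

```python
REORDER_THRESHOLD_G = 150  # assumed: reorder once fewer than 150 grams remain

def place_order(sku: str, qty: int) -> None:
    """Stand-in for the real call to the vendor's server (hypothetical)."""
    print(f"ordering {qty} x {sku}")

def on_scale_reading(weight_grams: float) -> None:
    """The whole 'smart' coffee scale: a threshold plus a network call."""
    if weight_grams < REORDER_THRESHOLD_G:
        place_order("coffee-1kg", 1)  # arrives at your door, no thought required

on_scale_reading(120.0)  # below the threshold, so this triggers a reorder
```

The value, as Ash says, isn’t in the rule; it’s in wiring the reading to an action so you never have to think about it.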

Then yeah, it’s another interesting point around, well, what am I happy having observed in my home and what am I not? I think this comes up for a lot of people, because they have one bit of AI in their home, and that’s an Amazon Alexa or Google Home. The problem with that is, one, it’s owned by a big company that has a lot of other data on you. Two, it’s got a very general omnibus sensor in there, which is, it just detects all sound, and then maybe temperature and a few other things as well.

There’s a lot that happens sonically. You say a lot. You listen to a lot. Other people say a lot. You can’t control it. That’s not really a comfortable position to be in, to have one of those things that’s made by a company that has a lot of data about you and is getting information from a sensor that can sense a lot of stuff, or can gather a lot of information. It’s not really a comfortable position, to have that thing in your home.

Whereas having a sensor that’s made by a smaller company that doesn’t have any other data about you, other than what comes through that sensor, and that is getting very specific data, like the weight of something, that’s totally fine. The point I’m getting to is, I think, if people are actually concerned about their privacy and do actually want to get value out of these systems, we will probably move to a world where we have more quarantined sensors made by lots of different companies and there’s quite a bit of fragmentation. The punchline is, I don’t think, if we actually care about our privacy, we’ll get to a world where one company makes all the stuff in our home or one company offers all the AI in our life. It’s actually lots of little bits of AI that are built on lots of little bits of data from lots of little sensors.

[00:34:11] DP: How does that square with the very clear scale advantages of AI that you talk so much about?

[00:34:21] AF: I do talk about them, in that some models do need quite a bit of data to be trained to a point of accuracy. A machine needs to see a million images or something to be able to predict, to a high degree of accuracy, that the million-and-tenth image is that thing. Sure, in some cases, you need that today. I think a lot of problems don’t require that much data. A lot of new methods are being developed that also don’t require that much data.

I think it’s just a point in time thing, which is that, at this point in time, we do need a lot of data to train a lot of models that are useful today. In the future, there are a lot more things that will be useful. There are a lot more models we can use to make predictions around them and there are a lot more sensors we can use. These all play in together. Depending on the problem you’re trying to solve, you can be quite specific about the data you need and develop a sensor to just get that data.

What we do today is we try to get information from data that’s not really related to the problem we’re trying to solve. We try to understand, for example, what’s on a shelf in a supermarket from a photo. If we just had a different sensor, like different weight sensors, if every product had its own little tag and whatever else (we do have these tags, they’re just really expensive), then we wouldn’t need any cameras. We wouldn’t try to jerry-rig it using computer vision systems. I think we’re just at a moment in time where that’s true, where there are returns to scale of data. I don’t think that will be true forever for all problems.

[00:36:03] DP: It also seems like there have been a lot of visions of basically creating conglomerates that have a lot of data that other companies can then tap into. Because it helps with the cold start problem where, if I’m competing against Google on problem X, well, I’m at such a data disadvantage that I can’t even get started. How are people trying to solve that problem?

[00:36:28] AF: I cover this so much in the book, which is all the weird and wonderful ways to get data without spending a whole heap of money, and doing it in a very aboveboard way. There are lots and lots of ways to do that. As you said, you can partner with other companies. You can say, “Well, I’ve got this data, you’ve got that data, they’re complementary.” I’ve got data about income, you’ve got data about gender. I’ve got data about weight, you’ve got data about smell. Using these two data sets, we can both understand more about the person, or about the product, respectively, in those two examples I just gave.

A lot of people do it through partnerships. A lot of people build sensors. A lot of people build out little side consumer apps to collect data. There are lots of different ways to get data. Again, there’s just an endless number of problems to solve in the world, prediction problems that is. Google’s not going to solve all of them. Amazon’s not going to solve all of them. In fact, they solve very few of them. They solve very, very general prediction problems like, how do I get from A to B using a map and incorporating traffic info and weather and all that stuff? It’s a very general problem.

They don’t solve the problem of, well, how quickly does this truck need to get from A to B, for the salmon, in your example, to still be good by the time it gets to B? That’s a very, very different problem, but it’s a problem that you only really need to solve for a couple of logistics companies in North America. Google’s not going to solve that.

I really, really don’t think it’s the case that these big companies are going to solve all these problems. I think it’s just the case that the problems they’re solving now do happen to have a scale advantage, or do happen to have some scale effect in the background, because they’re general computer vision problems or something like that.

[00:38:15] DP: Switching gears here into just who you are. It’s really easy to think of the things that you’re interested in away from work and the things that you’re interested in at work as being totally separate. But humans are pretty integrated as individuals, and you love continental philosophy. I want to ask why, and where is the residue of continental philosophy in how you think about artificial intelligence?

[00:38:42] AF: Oh, gosh. We can go down some very big holes there. They’re not rabbit holes. They’re full-blown meteor craters. We can get pretty lost. I like where you started that, which is why. To be really cynical about myself, and it’s okay, because I’m not being cynical about someone else: it’s just the education system I grew up in. There’s a whole separate way of understanding the world that, in this dichotomy, is very Eastern, that involves holding multiple understandings in your head at the same time, that involves much more non-linear thinking and whatnot.

I didn’t grow up in that world. I grew up in a world that is very Western, which is you’re taught to solve problems in a logical sequence, whether it’s a physics problem or a mathematical problem, or even understanding the course of events in history class. I grew up in that world. I think a lot of people did, because of the way Western industrialized education has proliferated throughout the world.

Therefore, my default reasoning systems are very sequential. My default value systems tell me that if something is reached rationally and logically, it’s probably good or right. It may not actually give me a better understanding of reality, but I think it’s good or right and, therefore, I conflate that with being closer to reality.

Anyway, I think I’m drawn to it because, cynically, I just grew up learning that way, learning using a lot of methods that actually were inspired by early continental philosophy, a lot of logical methods and whatever else. Now, to the good question in the second part of that, which is: where’s the residue in how I think about AI?

I think, tacking onto what I just said, it’s helpful to have a pretty strong background in more logical methods of reasoning when you’re trying to understand how AI works today, because a lot of the way AI works today is based on statistics: sequential calculations that happen very quickly at very large scale, but it’s still a whole bunch of sequential, rational calculations. I think it’s helped in a very straightforward sense, like understanding the statistics and mathematics behind it.

If you’re asking, how does it help me think about the values around building AI and bringing these products into the world and things like that? I think the residue is in understanding whether or not we should build this product on a utilitarian basis. It’s very much the case that my default position is, it would be unethical to not explore the potential of this technology because it offers so much potential improvement to our quality of life. Whether it’s getting us food cheaper, fresher, etcetera, or whether it’s giving us more time for enjoyment, because we’ve automated away something that we don’t have to do anymore. Or whether it’s giving us more enjoyment, because we have better video games powered by AI engines.

I think it’s repugnant to not explore the potential of this technology. Now, of course, we have to be super careful, as with any tool that can be used for evil ends. That’s the residue, I think, of a lot of Western continental philosophy, at least moral philosophy: having a utilitarian framework for approaching the levels of investment that should or shouldn’t be made in various ways to explore AI technology, or in AI.

[00:42:28] DP: I feel like, to use moral frameworks, it seems AI necessarily perpetuates a world that is really based on utilitarianism, which isn’t to say that utilitarianism is necessarily bad, but I feel AI reminds me, and I say this as an outsider, of the way that governments make decisions. I think I once read that the value of an American life is something like 9 million dollars. They had to quantify that in terms of decision-making. I don’t know if that’s still the same.

My point is that the way that, for example, you deal with your friends, the way that you deal with your family is very much not utilitarian, at least a lot of the time. If I look at the last 100 years of where we’ve gone, ever since John Stuart Mill and stuff, it seems like utilitarianism is the framework of a rational world that we end up with.

[00:43:26] AF: This is a pretty interesting point. I might take it in a direction that you didn’t intend but I think is very much linked to what you’re saying. Look, most AI models now require you to put weights on things. What’s the relative importance of this factor or that factor? That’s a very utilitarian thing to do, which is how much utility is in this, or that? What are the utils of this or that thing or this or that outcome?

Yeah. I think the way a lot of AIs work today probably does reinforce the preferences of those that design those systems, because a lot of those weights, at least initially, are hard-coded. Those preferences are hard-coded initially. Now, of course, over time the system learns from the people using it, and so the weights will change based on the expressed preferences of the people using the system.

For example, to oversimplify things, the designer of the Netflix algorithm could say “all happy movies are good and all sad movies are bad.” If people want to watch a lot of sad movies, then the system will eventually think that sad movies are good. At least initially, it does reinforce the preferences of designers. Now, that’s one type of machine learning. There are lots of other types of systems that we use to develop recommendations and predictions, etcetera, etcetera.
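
A toy version of that dynamic, with invented numbers: the designer hardcodes a prior, and each expressed preference nudges the weights until the users overrule the designer.

```python
# Toy recommender weights. Everything here is invented to illustrate the
# dynamic described above, not any real system's algorithm.
weights = {"happy": 1.0, "sad": -1.0}  # designer's prior: happy good, sad bad
LEARNING_RATE = 0.1

def record_watch(genre: str) -> None:
    """Each watch is an expressed preference; nudge that genre's weight up."""
    weights[genre] += LEARNING_RATE

# Users keep choosing sad movies despite the designer's prior.
for _ in range(25):
    record_watch("sad")

print(weights)  # 'sad' ends near 1.5: the users have overruled the designer
```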

I think, that might be true today, that it sort of reinforces this view of the world. Essentially, if you think about it, what’s a ratio? What’s rationality? It’s putting something into a form that is expressed or articulated numerically. If we want to build systems that can operate on a computational substrate, those systems have to be articulated or expressed in a language that is a numerical language that can be calculated. We’re going to have to rationalize down to that numerical articulation of things.

Now, of course, this only has to be one part of our life. For me, it’s not really part of my life. I don’t use these systems day-to-day, in terms of figuring out who I should contact, or what music I should listen to with my friends, or what recipes I should cook for my friends, or whatever. I don’t use any AIs at all. I ignore all of them. I’ve banned them all from my life, basically, because I don’t believe that’s how we do, or should, treat the people around us: reduced to these rational articulations of good, bad, and otherwise.

[00:46:00] DP: Yeah. I almost think of that like, how do you get more “used bookstores” in your life? Things that are wacky, where you go, “Huh, I never expected to find this.” One of the fun things about living in Austin is that the owner of my local bookstore now buys books for me. She only buys books at auctions. I’ll walk in and she goes, “Hey, I got you this book.” It’s really fun because, honestly, most of them aren’t that good. Every now and then, she finds something that sends me into some new space in a way that Spotify Discover Weekly certainly doesn’t.

[00:46:42] AF: Exactly. I think that’s a good question. How do you make room for that? Some people call it serendipity. You can call it randomness. You can call it lots of things. None of those things are really it, but you can use words like that to describe it. How do you make room for that in a world where your taste is molded by a system that is designed to narrow? How do you broaden? How do you maintain open-mindedness? We’re so used to having a lot of unpredictability in our lives. If you think about someone tending to a farm hundreds of years ago, they had no way of telling if it was going to be a good or bad season, whether the weather was going to be good or bad, whether they could even sell what they produced, whether there was going to be demand for it, whatever else.

We have a lot more predictability around those things now because we have predictive systems. That’s very good in the farmer’s case, probably. Because that predictability allows them to make decisions, like, “All right. I know I’m probably going to be able to sell most of my crop this year, so I’m going to invest in my house. I’ll invest in my house, and then my family will have somewhere better to live, and then maybe I can consider having another kid.” That’s a decision, a really significant decision you can make based on having more information about whether or not you can sell your crop this year.

Having so much predictability around some things, it’s just no fun. It’s fun to develop tastes around things. It’s no fun to have an entirely predictable life. I think we have to understand where we want more predictability and where we want less, and consciously make that decision.

[00:48:17] DP: That’s such a beautiful point. In The Bed of Procrustes, Taleb has an aphorism: “You are alive in inverse proportion to the density of clichés in your writing.” I think the Ash Fontana aphorism here is, “You are alive to the extent that you are surprised by something, to the extent that you have broken out of the swath of predictability.”

[00:48:44] AF: Exactly. Your Spotify example is one that resonates with me, because I don’t use Spotify. I never have. Because I like developing my own taste in music. I like going on an adventure that starts with: what is this song called? Who played the flute on this song, or who played the bass? Huh. Who else did they play with? What other bands did they play with? Huh. In what period were they most productive? What albums came out in that period? I’m going to listen to that album, start to finish. Then you listen to that album. You’re like, “Wow. That’s an amazing album.” That is so exciting and rewarding as a process.

It’s exciting because you didn’t expect to find an album that you would like so much. It’s rewarding because you can see how your effort led to the discovery of that thing. That all starts with letting yourself be open to the unpredictable, rather than going, “No, I want to sit down and have a really productive work session. It’s going to be productive because Spotify is only going to recommend music for those two hours that I like.” Maybe there’s some good predictability there, that you know that you’re going to enjoy that music for two hours, and so it’s going to motivate you to do the work you’re sitting down to do.

That’s an example of where predictability would be good, but also of where unpredictability would be good, in the music sense.

[00:50:10] DP: Yeah. I think the way that this manifests itself in the real world, in terms of what we decide to do, is: if you were to go out, and it’s a new place, you could have a night that’s a 10, or a night that’s a two. That’s a high-variance night. Or you could sit home, order in, watch Netflix: a guaranteed seven or an eight. I just think, over time, that gets slowly better and better and better. Over time, I just think that humans naturally fall towards low variance. I think a life well lived falls towards high variance. It’s almost a – I don’t know, I want to call it a cognitive bias, but it’s not quite that – something you have to shake yourself out of.

[00:50:54] AF: Some people refer to this as the explore-exploit trade-off. Let’s say you eat out 10 nights a month, which is a lot. Let’s say it’s 10 because the math is easy. 80% of the time, you go to the places that you know you like. You go to the old Greek tavern that’s been there for 50 years, and you know what you’re going to get, and it’s great.

Then 20% of the time, as in two nights a month, you try something completely new. It could be a total failure. It could be quite unenjoyable, or not food you like, whatever. They haven’t got their service game together because they’re new, or whatever. Or it could be something that makes it into the stable, makes it into your favorites. It’s that trade-off. There’s an articulation of this in discrete mathematics. It’s called the secretary problem, which is not a good name for it, which is: if you have some idea of how many choices you’re going to be able to make, at what point do you just pick the next best thing, as in, it’s better than all the previous things you’ve seen, such that you’re very likely to have picked the best thing?

If you take a deck of cards, 52 cards, at what point in turning over all those cards, if your goal is to pick the highest card, do you go, “It’s that one”? The answer is 1 over e, times 52: you let roughly the first 19 cards go by, then pick the next card that beats everything you’ve seen. There’s an articulation of this explore-exploit thing in discrete mathematics. People think about this in lots of different ways. I think, to your point, we have a tendency, again, not a bias, but a tendency, to exploit, because I think we’ve just evolved in a world where there were limited opportunities to eat and enjoy ourselves and survive. There were limited ways in which we could access the things we need to survive. We try to exploit where we can, which is why we love eating Snickers bars, because we never know where the next calorie is going to come from. We eat as many as we can when they’re in front of us. Yeah, I think we do have that tendency to exploit. We pick the same thing over and over again.
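
His “1 over e, times 52” is easy to check by simulation: skip roughly the first 52/e ≈ 19 cards, then commit to the first card that beats everything seen so far. A quick sketch:

```python
import math
import random

def pick_by_one_over_e(deck: list[int]) -> int:
    """Skip the first n/e cards, then take the first card beating all seen."""
    cutoff = round(len(deck) / math.e)  # 52 / e is about 19
    benchmark = max(deck[:cutoff])
    for card in deck[cutoff:]:
        if card > benchmark:
            return card
    return deck[-1]  # never beat the benchmark: stuck with the last card

random.seed(0)
TRIALS = 10_000
wins = 0
for _ in range(TRIALS):
    deck = random.sample(range(1, 53), 52)  # a shuffled 52-card deck
    wins += pick_by_one_over_e(deck) == 52
print(f"picked the highest card in {wins / TRIALS:.0%} of trials")  # about 37%
```

The win rate comes out around 37%, i.e. 1/e, which is why the rule is stated the way Ash states it.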

[00:52:53] DP: They are delicious. One of the things that has always surprised me about you is your relentless pursuit of cycling. When you were living in San Francisco, you would wake up super early and race over the Golden Gate Bridge and go bike the headlands. What inspired, or who inspired that pursuit of excellence? Not, “Oh, I want to be in shape.” The pursuit of excellence.

[00:53:25] AF: That’s a great question. It’s a really personal one, because it actually goes back to something very Venetian. I’m three-quarters Venetian, one-quarter Sicilian. The one line that I just always remember growing up is “finish the job, do it well,” which is what my dad used to always say to me. “Don’t do half a job. Don’t do things by halves. If you say you’re going to do something, finish it, and do a good job of it.”

Once you’ve done that, you can make the decision about taking on the next job, but just finish what you start and do it well. I think it’s that. It’s just this Venetian work ethic of really focusing on doing a good job. This manifests itself in lots of different cultures, not just Venetian culture, but in many aspects of Japanese culture: just getting very, very good at one thing. I’ve just found, over time, that’s where the fun is, too. The fun is in the last 20%.

It’s really fun to, for example, start cycling and ride a hundred Ks for the first time. That is super intimidating when you start cycling. Then when you do it, you’re like, “Oh, that’s really fun. I’m glad I did that.” It’s actually nowhere near as fun as the next level, which is hitting a certain wattage number and holding it and going, “Wow, I did that.” Or being able to ride up an absolutely gigantic mountain and see the view from the top and do it all off your own steam, or come back from a day where you’ve burnt 10,000 calories and feel completely fine inhaling 10,000 calories.

That’s really, really, really fun. I got to tell you, it’s not twice as fun as eating 5,000 calories. It’s 10 times as fun. I’ve just found, there’s a lot of joy in that last 20%. It was inspired by my dad constantly telling me, like, “Finish the job. Do it well.” It’s just been reinforced by my experience of getting to the detail.

[00:55:31] DP: What within cycling is a detail that’s fascinating to you, that is at the forefront of your consciousness, that I’d have no idea about?

[00:55:41] AF: It is the angle of the seat tube and the head tube. They’re the two tubes that point upwards. The head tube is the bit underneath the handlebars. The seat tube is, basically, the post you’re sitting on. Just thinking about what angle you’re at on the bike and how that affects your center of gravity on different grades of mountain, then how that affects your back position, and then how that affects how you’re breathing, and how that affects your hip angle and how much leverage you have to exercise through your legs, really gets to the core of cycling.

As a side point, the beautiful thing about a bicycle is that it allows you to get the most leverage you can possibly get from your body. It is the most efficient way to convert human energy into forward motion. The most efficient way. It gives you the most leverage. Call it six times leverage as a starting point. Those angles, where you get the most leverage on your leverage, are, I think, the most fertile variables in the design of a bicycle.

Now, there are lots of other elements in the design of a bicycle: the thickness of the tubes, the height of the bottom bracket, where the cranks are that turn around, how many gears you have, and all that stuff. I think about those angles a lot, and about modifying and adjusting them. This week, I’m making a bike that’s point one of a degree different from another bike I was thinking about making, but 1.7 to two degrees different from the bike I have now. I’m really excited to try it on different mountains. Because I think it’s going to be a completely different experience. I’m going to use a totally different amount of energy to get up them at the same speed. I’m thinking about all of that.

[00:57:24] DP: As you improve your cycling, how much of your assessment is analytical and how much of it is intuitive?

[00:57:33] AF: This is where I differ and probably why I’m still very slow, compared to some of my friends who are very, very fast. I’m not that analytical about cycling. Because for me, cycling is not just about the exercise. I love lots of different forms of exercise. I like lifting weights. I like walking. I like yoga. I like mobility exercise. Yesterday, I was doing exercise that was to do with hand-eye coordination. I love all of this stuff. That’s fun. Cycling is very good exercise, especially of the fat burning variety. It’s very good at training your metabolic efficiency.

That’s not what it’s about, though. It’s about so many other things. It’s about being in nature. It’s about socializing; it happens at a pace where you can talk and get to know people. It’s about racing and competition. It’s about mechanics and design. It’s about culture; there’s a lot of culture around cycling. It’s about experiencing other cultures, because you can travel long distances on a bike.

It’s six or seven things in one. Because it’s so many of those things in one, most of the ways I assess my enjoyment of cycling, or whether I should leave the house today and go for a ride, are not to do with some analysis I’ve done. Like, well, I did this many watts for this many minutes yesterday. Today, I’m really excited to leave the house because I’m going to aim for this. I’m happy or sad about my session because I did or didn’t hit that number. I’m not really like that.

Look, I have tried to do those sessions. I have had coaches that helped me structure my workouts so that I could be very analytical about them. The answer is no. I’m just not like that. I think about all those other things, like, how many new places did I discover? Who did I meet? What views did I enjoy? That’s how I think about it.

[00:59:18] DP: That’s awesome. One of the things that you once said to me is that people tend to do too much high-intensity training and not enough mid-intensity training. What is it, like, zone two, versus – I think zone two is where you want to spend more time. Is that right?

[00:59:37] AF: Yeah. Look, it all depends on your goals. To be controversial for a moment, I think this trend to doing three to five high-intensity sessions a week is not very healthy. A high-intensity session, for example, is something like a full-blown CrossFit workout, where you’re all on and then a little break, and then all on, and your heart rate goes really high and all that stuff. You release a lot of cortisol, or a really intense 45-minute Peloton class or whatnot, where your heart rate is getting up to close to your maximum heart rate, like a 80% to a 120% of your maximum heart rate.

I think that trend is pretty unhealthy, because you release a lot of cortisol. Cortisol suppresses your immune system; you get sick. What you’re also not doing in that time is training for metabolic efficiency. You’re not training your body to use lots of different types of energy, like fat and carbohydrates (or glycogen, to put it simply). You’re just training it to use sugar.

Then, you actually want to eat a lot of sugar afterwards, or before and after. You’re not training your body to use fat. It’s good to train your body to use fat, because you’ve got a lot of fat sitting in your body. You’ve got roughly 100,000 calories of fat, depending on how big you are, sitting in your body. You only have 1,000 to 2,000 calories of glycogen sitting around. You burn through that pretty quickly.

If you go for a long walk, or if you go a couple of hours, or even a couple of days, without food, you’ve got to be able to burn fat to keep going, show up, and function. Anyway, I think this trend of just doing high-intensity stuff is pretty unhealthy and can lead to lots of different problems. It’s very stressful for people. It’s very stressful to people’s immune systems. It doesn’t lead to a good range of health outcomes. Take avoiding diabetes, for example: it helps a little bit, for sure, but it doesn’t help that much with that. I think it reinforces this thing of thinking of exercise as a tool to achieve a goal, rather than thinking of movement as part of being a human being.

I think spending more of your week moving at all different levels of intensity is also just much better for your mental health, as a human being that has evolved to move. I can go on and on about it, but that’s a trend I’d like to see reverse a little bit.
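[To put rough numbers on the fuel-store point, here is a quick sketch of how long glycogen lasts at different intensities. The store sizes follow the figures quoted above; the burn rates and fat fractions are illustrative assumptions only.]

```python
# Rough fuel-tank arithmetic behind the zone-two argument.
# Store sizes follow the figures quoted above; burn rates and
# fat fractions are illustrative assumptions, not measurements.

GLYCOGEN_KCAL = 1_500    # ~1,000-2,000 kcal stored as glycogen
FAT_KCAL = 100_000       # ~100,000 kcal stored as fat

def hours_until_glycogen_empty(kcal_per_hour: float, fat_fraction: float) -> float:
    """Hours of exercise before glycogen runs out, given how much of
    the energy comes from fat (higher in low-intensity, zone-two work)."""
    glycogen_burn = kcal_per_hour * (1.0 - fat_fraction)
    return GLYCOGEN_KCAL / glycogen_burn

# High intensity: ~800 kcal/h, mostly sugar (say 10% from fat)
print(f"high intensity: {hours_until_glycogen_empty(800, 0.10):.1f} h")  # ~2.1 h
# Zone two: ~500 kcal/h, roughly half from fat
print(f"zone two:       {hours_until_glycogen_empty(500, 0.50):.1f} h")  # ~6.0 h
```

[Under these assumed numbers, a high-intensity session empties the glycogen tank in a couple of hours, while mostly fat-fuelled zone-two work stretches it several times longer; that is the metabolic-efficiency argument in miniature.]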

[01:02:00] DP: I think that one of the things I’ve realized, just running an education company, is that the things that actually facilitate learning are often different from the things people think facilitate learning, which are also different from what’s fun. It reminds me of what you’re saying here, where the simplest, most primal way of thinking about exercise is: it was really hard, therefore it’s good for me. The harder I push myself, the better.

A lot of people think about learning like that, too. It sucked, therefore I learned something. Or, I had to memorize something, therefore I learned something. I think you have all these subtle issues in the world, where what people perceive to be the best can often just be different from what’s actually the best.

[01:02:55] AF: Exactly. In the learning sense, there’s some wisdom there. There’s an analogy to exercise. It’s very well proven that having a traumatic experience around something helps you remember that thing very, very well. In exercise, having a very high-intensity session will engender an adaptation that can be really positive.

However, another big part of learning is letting something seep into your subconscious, into different parts of your brain. That just happens over a really long period of time, by sleeping, or walking, or just staring into space for a couple of hours. That doesn’t seem like much. Just like in exercise, a lot of this fat adaptation stuff I’m talking about, and zone two training, just happens on a really long brisk walk, which doesn’t seem like exercise to most people, or on quite a slow bike ride. There are lots of different modalities and intensities of thinking and moving. They all play a role in thinking and moving well, or learning and being healthy.

[01:04:02] DP: I want to ask you what it was like to be really early at a company that would end up being as influential as AngelList. I don’t know, there are a lot of things that you understand intellectually, and then you begin to understand experientially. I think one of the big ones is to have this felt sense of having been in a garage startup, and then have it be something that you read about in the New York Times, something that’s doing hundreds of millions of dollars in revenue. As you look back at that experience, how did it change you?

[01:04:43] AF: It’s a really good question, which is: what does it feel like to be there, and how did that feeling change you? What it feels like to be there is really excited and aligned and focused. When we were between five and 10 people at that company, everyone in the room knew why they were there and had a very strong sense of mission. They knew what they were going to do every day, because they were either working with one other person who also knew exactly what they were going to do, or they had complete freedom to do whatever they wanted to do.

We all just knew that this was going to be big. We all just knew that this was so useful to people, because we heard from them every day: “Wow, you changed the trajectory of my company; you changed the trajectory of my career.” We heard from people every day that this was so useful. Even though we weren’t dealing with many people at the time, we didn’t have many customers or users or whatnot, the level of satisfaction they expressed to us was so great. We knew there were lots of other people out there like them, so we were like, “Well, this is obviously going to be huge. It’s just a matter of time.”

There was this amazing feeling of being so aligned with the people around you, so aligned with the long-term mission, and this sense of a fait accompli. We just knew it was going to happen. That is just so motivating and great to be around. It’s this weird feeling of both being very at peace and steady in terms of knowing what you should do right now, but also so excited about what’s going to happen. In a sense, you couldn’t move quickly enough towards that goal, but you knew exactly what you had to do today to achieve it.

They’re all expressions of feelings, but you asked about feelings. That’s what it feels like to be at a place like that. It’s a sense of the inevitable in the long term, but a very strong sense of the practical and what you need to do today. What became obvious at AngelList was that one way you can help people self-actualize is to help them start a business and work for themselves. Because then, they don’t have someone else getting in the way of their self-actualization. They don’t have someone else’s goals being actualized instead of their own, the company’s goals rather than the individual’s.

I solidified my mission, which was to help as many people as possible get their company off the ground and start their own thing, so that they could then have an environment in which they could self-actualize, express their values and principles, and act according to those every day. That’s how it changed me. It really solidified that mission.

[01:07:17] DP: That’s a really beautiful answer. Well, thank you very much. This was lovely.

[01:07:22] AF: Thank you so much, David.