
Qasar Younis is the co-founder and CEO of Applied Intuition, a $15 billion AI company that adds intelligence to cars, tractors, planes, submarines, and other vehicles—essentially, Tesla or Waymo without the hardware. He was previously COO of Y Combinator, started his career as an engineer at GM and Bosch, and was born on a farm in Pakistan.
You decided to join Twitter recently, put out your first tweet.
Marc Andreessen quote tweeted it and said, "This is the best AI CEO nobody knows." Our best work is done alone and quietly.
Every minute you're writing something for public consumption, you're not focusing your very limited time that you have on your customers and your product.
You're building a lot of the future that we're going to be living in.
What does the next couple of years look like?
Us solving some of these impossible problems like cancer is directly going to be related to this AI boom.
Net suffering in humanity overall should go down significantly.
A thread that has emerged on this podcast is that AI is coming just in time to save us.
The real impact of AI in the next 5 to 10 years really is going to be in farming, mining, construction.
These industries, they need autonomy and it couldn't come soon enough.
If you look at farmers, the average age of a farmer is in their late 50s.
What does that mean in 10 years from now?
There's a lot of anxiety about what AI is going to do to the world.
The core root of fear is misunderstanding.
If you at home are very anxious about AI, the best thing that you can do is spend time to understand it, and you will quickly see the limitations.
Get to know it, then actively make the technology be used for good.
Today, my guest is Qasar Younis, co-founder and CEO of Applied Intuition.
You've probably never heard of Qasar or Applied Intuition.
This is the most important under-the-radar AI company and CEO that I've ever come across.
It's a $15 billion company that has been growing quietly over the last decade.
What they do is they add AI to vehicles like cars, tractors, planes, submarines, mining rigs, and a lot more.
18 out of the top 20 automakers are customers, as well as the biggest global construction, mining, and trucking companies.
Also the Department of Defense.
They're basically Waymo or Tesla, but without the hardware.
Qasar himself was born on a farm in Pakistan, grew up in Detroit, and started his career as an engineer at GM and then at Bosch.
He then went on to start a couple companies before starting Applied Intuition.
I love everything about this episode and I am so excited to bring it to you.
Don't forget to check out lennysproductpass.com for an incredible set of deals available exclusively to Lenny's Newsletter subscribers.
Let's get into it after a short word from our wonderful sponsors.
This episode is brought to you by Omni.
Many product teams today are in the process of debating how to ship AI analytics.
The hard part is obvious: having an LLM guess at SQL in production is a huge mess and just a bad idea.
Omni takes a different approach.
They have a semantic layer built in so that when you embed their analytics, the AI actually knows your business definitions, not just your raw tables.
You can test queries, validate the reasoning, and lock down permissions before anything hits production.
If you want AI analytics in your product without building the whole stack from scratch, check out omni.co/Lenny for a free 3-week trial.
Companies like Perplexity, dbt, and BuzzFeed use Omni to ship analytics their customers can trust.
That's omni.co/Lenny.
My podcast guests and I love talking about craft and taste and agency and product-market fit.
You know what we don't love talking about?
SOC 2.
That's where Vanta comes in.
Vanta helps companies of all sizes get compliant fast and stay that way with industry-leading AI, automation, and continuous monitoring.
Whether you're a startup tackling your first SOC 2 or ISO 27001, or an enterprise managing vendor risk, Vanta's trust management platform makes it quicker, easier, and more scalable.
Vanta also helps you complete security questionnaires up to 5 times faster so that you can win bigger deals sooner.
The result?
According to a recent IDC study, Vanta customers save over $500,000 a year and are 3 times more productive.
Establishing trust isn't optional.
Vanta makes it automatic.
Get $1,000 off at vanta.com/Lenny.
Qasar, thank you so much for being here.
Welcome to the podcast.
Thanks for having me.
You're basically building a lot of the future that we're going to be living in, and people may not even realize this.
And then kind of there's two sides to this.
On the one side, let me ask you this question.
If things go really well, what does the next couple years look like for people with the emergence of AI, with physical AI?
What are— what's a vision of the future?
Let me take the broader AI point and then the micro, the more specific one on physical AI.
On the macro level, think about this like the Industrial Revolution, right?
So if you're sitting, let's say, in the late 1800s, we can focus on a lot of bad things that happened because of the Industrial Revolution, right?
You have child labor, you have monopolies emerging, you have abuses, wars end up happening.
But it's an almost unimaginable present without the benefits we got out of the Industrial Revolution: broader access to healthcare like we've never seen before, access to material goods, things we take for granted like heating and cooling your home.
There's this great YouTube channel that focuses on letters from German POWs who are seeing America in the early '40s, and they're writing letters back to Germany about what they're seeing as prisoners of war.
And they're kind of blown away that the towns they roll by in these trains on the way to their POW camps are all lit up, or that there are cars everywhere.
80% of German towns in World War II did not have electricity.
And that's kind of a mind-bending thing, 'cause we just assume all this technology is, you know, equally distributed.
So the positive version is that these things that, let's say, folks who are wealthy or folks who have access to technology enjoy today become things everybody has access to.
Simply having somebody who's a coach to you, and having that coach tailored very specifically to you, not a generic, you know, ChatGPT that's giving fairly generic answers.
I think this is a very powerful thing.
I think us solving some of these impossible problems like cancer is directly gonna be related to this AI boom.
So I think net suffering in humanity, just like with the Industrial Revolution, overall should go down, and go down significantly.
And I'm a fundamental optimist in the view that technology will bring that positivity.
On physical AI specifically: when you have your own car, when you have your limbs and your senses and you can drive, you take these kinds of things for granted.
You jump in your car and you go to the store.
For somebody who maybe is disabled or somebody who doesn't have the money to afford a vehicle, access to mobility that's nearly free or is free is a big deal.
Just take that simple example of making self-driving cars free for everybody, and how that would change the planet.
If you live in Rwanda and you are 2 hours from the nearest hospital, that matters in a very, very real way.
And so I think a lot of, let's say, the negativity around AI comes from people who, frankly speaking, are living in a very, very good existence.
And when you live on the other edge of society, it's different. And I'm not some naive person who thinks there are no downsides to technology.
We can discuss that.
But I just see there's a lot more positive.
So when you ask that question, What's the next— forget 3 to 5 years, what's the next 20 years?
These things that we take for granted that are bad suddenly are not there.
And I think certain diseases, certain accessibility to basic, you know, services suddenly start going away.
One last example: take the fact that you can message people basically for free. For people old enough, this was not the norm.
We came from Pakistan, we couldn't even communicate back to Pakistan because the long distance, you know, was so expensive.
And so it was handwritten letters.
Today you can basically contact anybody on the planet basically for free.
There are obvious downsides to that, but there are lots of upsides, like being in touch with the people that you care about and you love, basically for free.
And so I think AI has the ability to bring this abundance to many, many more people at a near-free cost.
On the flip side of this, as you pointed out, there's a lot of anxiety about what AI is going to do to the world, to jobs, robots.
There are these videos coming out of China with these robots with nunchucks, and then there's the stock market.
I feel, you know what, I feel the nunchuck union is up in arms over that.
How dare they.
Yeah, but it's, you know, it's scary, and the market's reacting more and more to this sense of, oh wow, these companies are maybe not gonna survive long-term.
Again, being at the center of this and building a lot of the stuff that'll get us there, how do you envision the next couple years playing out?
Like, are you optimistic?
What keeps you optimistic?
Any advice for people, to help them stay, you know, calm through this period?
So there are two separate things: anxiety around a technical shift, and public investors reacting to specific stocks they've held. We have to separate those things.
So let's talk about them separately.
On the first one: you know, the core root of fear is misunderstanding.
And I think if you at home are very anxious about the impact of AI in some variant on your own job, the best thing that you can do is spend time to understand it, and you will quickly
see the limitations.
There are some great videos on YouTube where they're trying to get Gemini to understand what a cup is by just holding it upside down, and it really struggles, and then they do it with ChatGPT.
So it's like, if the revolution is coming, the AI overlords first have to understand the top and bottom of a cup.
And so you realize, you can see the video of nunchuck-wielding humanoids, which are pre-programmed, and that video cost $15 million to make.
Yeah, that is true.
It's not fake.
I'm not implying it's fake, but it's also not what your brain fills in the gaps with.
You see nunchuck robots and your brain fills in the gap: well, these are sentient beings going about of their own volition, rather than a bunch of motors that have been programmed to do a certain thing.
If you really wanna be impressed, you go to a car factory and we've been doing that for 25 years.
We have very, very advanced robots moving extremely fast to build things.
And why don't we have anxiety about the car factory, but we do have anxiety about the nunchuck robots? Because we understand the gap with a welding robot.
You see it and think, okay, that's a robot; it's been programmed to make this weld.
But we don't know the technology. We, just as individual human beings living in the world, don't know how that robot was made to do that nunchuck thing.
And so you substitute that with anxiety and fear.
And so I would really implore you to learn more about the technology, and you'll start seeing the edges.
Now, does that take away from the most fundamental thing that you're getting at, the string that you're pulling at, which is like, is society going to be fundamentally harmed?
And is this, you know, net-net bad for society?
I think in any technical shift, take the emergence of WhatsApp as an example, there are people who are damaged by it: literally companies that go away, but also humans who are hurt by the advent of that technology.
And so I think, as members of society and as leaders in society, we can move that in whichever direction.
Remove the word AI for a second; AI is such an emotional word because it's wrapped in these things you don't know, and so that fear deforms the conversation.
So let's just say technology. I think it's up to us to recognize that technology can be used for good and technology can be used for bad.
And I think that's where really the focus is.
So get to know it, and then actively make the technology be used for good as a participant, whether you're a founder or, all the way down, an individual employee at a large company.
Then on the second part of the question, about public investors: I don't have any particular research on this, but here's my guess at what's actually happened.
Beyond being an engineer, which is maybe my core identity, for lack of a better word, I also did an MBA at Harvard.
And, let's say, I didn't come from a very wealthy upbringing, so when I went to Harvard, that was the first time I saw that world, the world of people having private jets and stuff.
It was a really eye-opening experience for me.
But the real world I was exposed to was high finance.
And how high finance works.
And you might think, as I did from far away, that folks at hedge funds or at large public equities funds are extremely nuanced and thoughtful, that they're on whiteboards doing extremely deep, maybe even theoretical math to figure out whether they should buy or sell, you know, Figma.
And that's not actually how it works.
I mean, really, what these folks do, and in this specific case I think what's happening, is they buy and sell stock.
They are smart people and they do work hard.
It's not to take that away, but they don't have the fundamental edge that you would assume somebody who sits in a skyscraper in New York has.
And by the way, that's why retail investors have become such an active and kind of significant part of the market.
So those folks have gone to AI consultants, and to people who are literally developers at these firms, and they'll say something like, hey, why don't you build me this app in a week?
And then this consultancy will come back with an app which kind of looks like maybe Figma or some other web app.
And the hedge fund manager is like, well. And if the company were sitting there, they would say, no, no, this just looks like my app, but this is actually not my app.
It's not as deep.
It doesn't have all these things.
There are integrations with all these other systems.
But to the public investor, they're like, yeah, but it only took a few weeks or a month to build this.
It took you 500 engineers for a couple of years.
This AI thing could be real.
And the things I'm reading on X about like just vibe coding your way to replace, you know, billion-dollar companies, that might be the case.
And the market immediately prices in that risk.
And that's where that sell-off comes from.
That doesn't necessarily mean all of those companies are going away. Just within the last 24 hours, I talked to, I can't say his name, but a very, let's say, calibrated investor who said this is the time to buy, because these companies are not actually going away.
And so I think those two, anxiety within society and the sell-off, are very different things.
They're motivated by different things.
They're part of the larger AI narrative, but I wouldn't conflate those two things.
It's not that the hedge fund investor is thinking, I'm worried about society, sell ServiceNow.
It's different than that.
At least that's my impression.
This is the alpha we've been talking about.
Time to buy.
This is not investment advice.
Well, that's really good advice.
I think the real advice is to, to fight fear.
And I feel that anxiety, especially when I go to Michigan, outside of the Silicon Valley bubble. It's like: just try to learn a little bit about the technology that you're afraid of, and you'll start seeing some of the edges.
I love your point about how self-driving cars are essentially robots.
We don't call them that, but they're robots.
Absolutely.
And you see a nunchuck-wielding robot and panic, when a self-driving car doing bad things could already be very dangerous.
And so that's a really good reframe: just think of it as another robot, and it's been really good for us.
And by the way, with self-driving as an example, whichever way you slice the statistics that are available from self-driving companies, they're supremely, supremely safer than human drivers.
And I do believe in 20 or 30 years, not that much longer, we'll look back on this kind of like we think about child labor post-Industrial Revolution.
That was a normal thing; you would send kids who were in middle school to go work.
It happens in third-world countries today. There isn't a lot of emotion behind it; it's not considered exploitative because you have no choice.
And I think we'll look back in 25, 30 years and say: people were tired, under the influence, extremely stressed, going through traumatic life experiences, and then they'd jump into a car like that.
It is crazy.
And everyone should really emotionally think about this: just in the United States, over 30,000 people will die in the next year from these accidents.
It's like the old Stalin line: one death is a tragedy, a million is a statistic.
And we just let the statistic go over our heads, like, oh, it's 30,000 people.
But if you've ever talked to the family of somebody who went through a tragedy like a car accident, it's unbelievable.
And suddenly all the fear of AI robots goes away, and you really see that human impact, and you realize that actually, us driving doesn't make sense.
And it's not for any other reason than literally people die.
I've become a huge convert. I have a Tesla, and I just use self-driving all the time now.
Just a few months ago it got very good; it used to be nerve-wracking, and now it's like, wow, this is much better.
And you're not doing driving as a job.
Imagine if you're a commercial truck driver, or you work in a mine. A little bit of intelligence is a helping hand in that very dangerous task.
It's incredible.
And I think there's something about the human brain where, when you bring up that reality of self-driving trucks, immediately people are like, well, what about the trucking jobs?
Now, needless to say that we don't have enough people who wanna do that job.
So leave that fact to the side.
The fact to really focus on is that people die from trucking accidents.
We can't, you know, throw out the baby with the bathwater.
And so I implore everybody who thinks about AI broadly and physical AI specifically to always recognize that your monkey brain is programmed, from hundreds of thousands of years of living out in the wild and being in the cave, so that when you hear a rustle in the bush, you think it's a snake, because that's what our ancestors were programmed for.
So now when something new enters our psyche, your view isn't, well, if mines became autonomous, wouldn't we lose jobs?
It's like, those are awful jobs that people die in.
And the best evidence is that people don't want to work in them.
That's the best evidence: nobody's clamoring to go work in a mine in a remote area.
And so intelligence can help make that, you know, make that reality much, much better.
People are seeing AI advance in all these different ways on the software side.
They see all these models being released.
It's driving 100% of people's code now.
What's really cool about you is you see the hardware side of this.
And I think one of the biggest changes to our lives will probably be robots walking around doing things for us.
Do you have a sense of just how close we are to just robots around us day to day?
So the framing here matters again.
Think of it on a spectrum.
So there are robots around us already, like Roombas, you know; they clean your carpet while you're sleeping.
There's robots around you when you make a coffee.
That's an automated machine that is taking an input and doing a bunch of things based on what you need.
So what you're really talking about is how fast can you go up that spectrum to where you have a robot that can take on lots of tasks with little guidance.
And the way that I would think about this is, let's say we're sitting in, this podcast is happening not in 2026, but 2006.
And you're asking me the same question about mobile.
And you say, well, mobile is coming.
This is, remember, pre-iPhone, which comes out in '07.
Everyone has got those flip phones.
So we have mobile; it's not like we have nothing. Similarly, we have some robots around us already.
But the question is, and you ask me in 2006, when are we gonna get that Star Trek phone that can do everything?
And I think at that time I would say, because I don't even know the iPhone is coming a year later, I would say, well, Lenny, I don't, I don't know, maybe it's 1 to 5 years.
And in fact, 5 years later, Uber, WhatsApp, Instagram, and Snapchat are all products, and they're being consumed by many, many millions of people.
So what happens when you think about sitting in 2006 and why can't your brain figure out that Instagram is coming?
Instagram is very hard to even conceive of without phones that have an app store, have cameras on both sides, and are generally available, such that lots of people have them.
And the fact that people are comfortable being on social networks in 2006 is still an early thing.
This is pre-Twitter, and Facebook is not that big.
MySpace is, but it's not the same type of private communication.
And so the point I'm making is, I think it can come pretty fast, but the way and the form factor it'll come in is hard to pick.
Just like it's hard to figure out that Instagram is gonna happen, because the intelligence in that particular type of hardware, which will be generally available (that's a key phrase, generally available), is really gonna shape the use cases.
So I think like the most obvious use cases that will come early are going to be use cases where you get the most amount of bang for buck.
And the bang for buck is a car that drives itself, or a mining vehicle that is now intelligent.
And the reason is that all the engineering required to make this giant machine that moves dirt has already been done.
It's been done over the last, you know, 50, 60 years.
So then you're just putting a little bit of intelligence into it and leveraging everything else that the companies and kind of people have developed.
So I think, and I'm not just talking my own book here, I mean, we're a physical AI company, I continue to believe that our brain emotionally loves the humanoid concept because we're monkeys.
But more pragmatically, it's actually just about putting intelligence into things that already exist all around us.
And once that happens, new applications will emerge, which I think we'll start seeing in 5 to 7 years.
So let's, let's just move forward 5 to 7 years and let's see what reality exists.
And then maybe we can try to jump into the future from there.
I think generally speaking, every single car company on the planet right now is working on a product like Tesla's FSD.
Every single car company, without exception.
Many, many companies are working on versions of that that become fully autonomous with a cheap sensor suite.
The fundamental difference, just to keep it really simple, between the Tesla approach and the Waymo approach: the Waymo approach is lots of sensors, lots of compute, and maps.
The Tesla version is very few sensors, no high-fidelity maps (I'm just generalizing here), and cheaper compute, for lack of a better word.
And the Tesla version of the product, which in the industry is called an L2++ product, is gonna be available everywhere, because it's literally cheaper and it doesn't require HD maps.
The Waymo product functions better in a geographically constrained area.
So you fast forward 5 years, both of these types of technologies will be much more ubiquitous.
L2++ and L4 will be much more ubiquitous.
Not only in the Bay Area or in parts of China, but really globally.
There are companies working on this globally.
So now, I don't know if you remember, but nav systems used to be a big deal in cars.
You would pay thousands of dollars and nav systems are kind of the thing that everybody wanted.
We're at that moment for L2++ systems where like people are willing to pay thousands of dollars for a semi-automated vehicle.
That will not last long.
You're already seeing this happen in China, where the downward pricing pressure means that autonomous product, for lack of a better word, will become close to free.
So now you fast forward 5 to 7 years and every car has some level of autonomy.
So now you have to like mentally live in that reality that everybody who's buying a car, they just get FSD with it.
Now you start seeing a different world, because the average person isn't wondering, is self-driving in this car?
They use it all the time.
They don't wonder, does it have nav?
And so what happened with nav systems is that CarPlay emerges and Android Auto emerges, and it's very natural.
People are like, oh, I have my phone, I just plug it in.
And it didn't feel like a big revolution, but the CarPlay and Android Auto revolution is actually huge.
It brings free navigation and free applications to your car and it's fairly ubiquitous.
And so I think the next thing that happens in 5 to 7 years is then full autonomy becomes the thing that everyone expects.
And with all of that, you will see a clear decrease in injuries and deaths, because you have some intelligence helping you drive.
Now again, I'm using the consumer vehicle analogies just so people can understand it, but this is the same in construction, it's the same in mining, it's the same in defense, it's in
every one of these verticals.
There are these big physical machines that humans are interacting with, and teaming up with those machines is the future.
The productivity unlock from looking at a machine not as a sentient being, but almost as a physical agent of something you're trying to accomplish, opens up things that I think are very hard to imagine.
So I love, you know, things like Moltbot, and I love the, let's say, OpenClaw revolution that's happening, for lack of a better word.
But I think the big impact, that's still such a small part of society.
My barometer of impact is, you go to, you know, the Detroit airport, and you sit at a gate and look around: how many people here are using OpenClaw?
You might be the only person there who even knows what that is.
Whereas everybody else is living their lives there.
And so to them, the impact of AI is going to be in this physical world.
I see you also have a, you have some, yeah, there you go.
Check it out.
I'm a convert. OpenClaw.
There you go.
Perfect.
My computer's coming on the bot soon.
So I got some lobster claws.
Yes.
I think the real impact of AI in the next 5 to 10 years really is gonna be in farming, in mining, in construction, in self-driving trucks.
That's where you're gonna have a real impact.
Though I love the stuff that's happening on these platforms, it's still limited to, frankly, developers and a very, very small part of society.
I wasn't planning to spend so much time here, but this is extremely interesting.
And I think it's important for people to hear from folks like you about where things are heading, because as I said, everyone's just like, what is happening?
What is going to be my future?
The jobs piece is really interesting.
And a thread that has emerged on this podcast recently is that people are afraid AI will take their jobs, but in reality, AI is coming just in time to save us, because populations are declining, people are aging, and we need something to help us there.
I know this is something you and Marc have talked about, and you're really close to him.
Help us feel better about how AI isn't gonna take our jobs and is actually gonna save us.
Yeah, honestly speaking, these industries need autonomy.
And it couldn't come soon enough, frankly speaking.
This is not like, people are not fighting for those trucking jobs.
If you look at farmers, the average age of a farmer is in their late 50s, 58 or so.
What does that mean in 10 years from now?
That means many of those farmers are going to be retiring if they're not already retired.
And in 20 years, we have an even bigger problem.
Every vertical, by the way, is like that.
And this is my hypothesis here, but sometimes people say, oh, you know, McDonald's can't hire, or the local quarry can't hire.
And where are all the people?
The people are still here.
I think they just, the trade-off is just not worth it anymore.
In the 1980s and 1990s, doing the long-haul trucking job was what the family had to sacrifice for: the father not being there for days and weeks on end.
And today, that same working-class family can make that decision and say, you know what, I will drive for Uber or DoorDash instead, because I can turn that app off and pick up my kid, and I prioritize that.
That is where I think this intelligence revolution in the real world is really gonna fill those gaps in, rather than an entire industry suddenly being gone and fully automated.
I don't believe in that future, mainly because the reality of actually replacing an entire industry with robots is still too complex.
One day it will happen, but it's not happening anytime soon.
But the entire society will be different by that point.
And I think, again, use the Industrial Revolution as a good version of that.
To the earlier question: if I'm somebody who is not in the AI ecosystem and I have this anxiety, how would I deal with it?
Reading history books is a great way to really understand how society deals with this.
And there's a lot of literature, because the Industrial Revolution doesn't happen at, say, the dawn of Christianity, when not many people are writing and not many people are reading.
Lots of people have been writing and reading in the last 150 years.
And you can read both the people who are impacted by the Industrial Revolution, people who are benefiting from it.
And writ large, it's a very positive experience.
Again, that doesn't mean there aren't downsides.
We should mitigate the downsides.
But the thing that we can't do, and this is maybe specifically as America, or as the global population as a whole, is give in to this impetus to pump the brakes.
Again, don't say AI, say technology.
Pump the brakes on technology.
The issue then is the American economy really ends up stuttering, and that impacts the lowest end of the labor market way more than anybody else.
And so in the attempt to help the people who are the most marginalized, we actually hurt them the most.
And the statistics between Europe and America are pretty explicit: in the last decade, the American economy has grown at a much higher pace.
And that growth hasn't come from, you know, Detroit, Michigan.
That growth has come from Mountain View and it's come from Sunnyvale, it's come from the Bay Area.
And which is another way of saying it's because of new frontier technologies.
So putting the brakes on frontier technologies because we're afraid of unintended consequences will have very real consequences for the people we're trying to help the most.
And the reality is very, very fundamental.
In a future that does not take care of the average worker and the average person in America, we'll have much bigger problems.
So we need a solution that takes that into account.
But that solution isn't just pump the brakes, AI is bad, or frontier technology is bad, or technology is bad, or whatever, you know, whatever thing that you, you don't like.
I think that, that'll have really, really bad consequences.
One of the reasons that we don't pump the brakes is just fear of China and competition with China.
The nunchuck robots being a recent example of, oh shit.
And I have kind of a contrarian take on just how much of a threat China is and how they're approaching things.
The summary version is this: we recently read a book as a company, House of Huawei, which is just a really great, interesting book.
And, uh, Huawei is a really amazing company for the reason that it makes great technology.
But of the couple hundred thousand people who work at Huawei, about a quarter are members of the Communist Party.
And Huawei's goal is not to grow profits or shareholder value.
It's a private company.
It's really an extension of the state.
So literally, the name Huawei means China's ambition.
So imagine if you had a company called MAGA, and a quarter or half of the company belonged to a certain political party.
And they said, our goal isn't to make profits; our goal is just expansion. At that point, it's not even a company anymore.
It's something else.
Right.
And so I think we, specifically as Americans, when we think about China, incorrectly impart our understanding of markets and companies onto it.
So we think Huawei, since they make phones, they must be just like Apple.
It's like, no, no, no, actually they're not.
That's not like Apple at all.
And so I think the first thing I would implore everybody who thinks about China, especially with anxiety in America, is you're not comparing companies to companies.
This is not apples to apples.
This is very, very different.
And so imagine instead of thinking OpenAI is competing against, you know, DeepSeek, you say OpenAI is competing against the Chinese government.
Instead of Apple competing against Huawei, Apple's competing against the Chinese government.
And you can even remove the word Chinese government.
Government is the best word to define what this organization is, but it's not a for-profit, privately owned, independent group of people who are working on projects together to build
products to market.
So that's the first very important thing.
You cannot treat China like another America or another Europe or another whatever.
Number two is, if your goal isn't to make profits, you can do incredible research and it can be extremely compelling.
But like we've seen, if the system is not sustainable, that's not a sustainable company either.
Let me give a very stark example of that.
Chinese EVs are really lauded as being this exceptionally interesting product, right?
And you constantly get this stream of, I would say, fairly shallow analysis, which says look how good China is and look how bad we are.
Munich, Detroit, Tokyo, Seoul are the other epicenters for automotive globally.
There is a Chinese EV-like company in America.
It's called Rivian.
Makes great products, but they lose a lot of money making those products, and therefore the company is not very highly valued.
I think if you said top 50 or top 100 companies in the Bay Area, I'm not sure Rivian would even make that list.
And it's not that the products are bad or the people at Rivian are incompetent or they're not working hard.
It's just the business is a tough business.
The EV business in automotive is a tough business.
So how can we hold these realities?
So we say, look how amazing these Chinese EV companies are and look how bad the home team is.
It's just because the home team is being assessed for being a business.
It has to make profits and because it doesn't, it gets hammered by public investors.
The other thing is not even a company.
Now, if we did apples to apples and said America just has to build great EVs, Tesla and everybody else combined, and we don't care about profits, I think America would field some very good products, and there would be wow products.
So the comparisons are really, really off.
And I think that creates a misunderstanding.
I think then maybe the most philosophical question is: can China succeed, and does that mean America has to fail, or vice versa?
If you believe in open and free markets, you believe everybody can succeed in those markets.
And that's been proven for over 100 years.
And I think what we're experiencing right now is how does China play in that ecosystem?
Because I said open and free markets, and those are not open and free markets.
But that doesn't necessarily mean you have to have an antagonistic relationship.
It certainly doesn't mean that China is incompetent, and it certainly doesn't mean it doesn't warrant our attention and, let's say, our focus.
But it's also not a one-to-one comparison.
I think we should be very careful in implying it's a one-to-one comparison.
And by the way, that 5-minute explanation is never going to reach the average person sitting at an airport in Detroit, Michigan, waiting for their flight.
All they consume is "China bad."
And it's not like that.
It's not that simple.
It's not that simple.
It's way more nuanced.
This episode is brought to you by Lovable.
Not only are they the fastest growing company in history, I use it regularly and I could not recommend it more highly.
If you've ever had an idea for an app but didn't know where to start, Lovable is for you.
Lovable lets you build working apps and websites by simply chatting with AI.
Then you can customize it, add automations, and deploy it to a live domain.
It's perfect for marketers spinning up tools, product managers prototyping new ideas, and founders launching their next business.
Unlike no-code tools, Lovable isn't about static pages.
It builds full apps with real functionality, and it's fast.
What used to take weeks, months, or years, you can now do over a weekend.
So if you've been sitting on an idea, now is the time to bring it to life.
Get started for free at lovable.dev.
That's lovable.dev.
So you decided to join Twitter recently, put out your first tweet.
Your first tweet was just like, hello, I'm going to start tweeting.
That tweet got like 2 million views.
Elon replied to you.
Marc Andreessen quote tweeted it and said, this is the best AI CEO nobody knows.
Follow for free alpha.
Elad Gil, famed investor, describes you as the most successful, most quiet, company in AI.
And to me, this is really interesting because most founders are told, uh, build in public, build a following, be loud, get out there, talk all the time about what you're doing.
You did the opposite.
You were very under the radar, stayed quiet, build, build, build, and then decided later, okay, now it's time to talk about our story.
So I think this counter-narrative is really interesting, and I think will inspire a lot of founders to not feel like they have to do this.
What was your philosophy of staying quiet at first and then starting to talk?
Yeah, it's a great point.
So number one, it was intentional and I think if it was up to me, we would do that forever.
I think we're very much inspired by folks more like a Berkshire Hathaway and less like, you know, let's say a Silicon Valley darling.
And I'll tell you why I changed my views. But before some founders go and take that advice immediately without really thinking about it: I can do that because I'm known in the ecosystem.
You know, I know these folks personally.
And so I don't need to have a brand out there to get people to remember me and think about me.
My first two companies were a lot less known; that was before I came to YC.
So, you know, all of our company values can be reduced to these two words of radical pragmatism.
So before you take the advice, make sure it applies to your situation.
One of the reasons, and Naval, who's one of our investors and a friend, says this, is that fame itself is a tool, and it's powerful.
If you don't have a network and you can get a following, that's a fantastic way to recruit people to your company, to recruit investors to your mission, and then of course customers.
But for us, that wasn't a hard requirement 10-plus years ago.
The other thing is, you know the old saying about life: you do things and then you rationalize the things you've done, right?
So fundamentally, Peter and I, my co-founder Peter Ludwig, we don't get a lot of emotional satisfaction out of doing very public things.
And if I were to play armchair psychologist and get to the root of why, it goes beyond the rational view, which is: focus on your customers, focus on your product.
Every minute you're doing a podcast, every minute you're writing an X post, every minute you're writing something for public consumption, you're not focusing your very limited time on your customers and your product.
And ultimately that's the only thing that's gonna produce and yield results.
But the reality of the situation today, in 2026, is that even for a company like ours that's known, or somebody like me who's known in the ecosystem, you still want to get that broader message out.
And that's what I talk a little bit about on X.
So it is definitely contrarian, but it's not just contrarian for contrarian's sake.
It plays into a little bit of our own psychology.
And then, just to finish that thought: I grew up as an immigrant.
I came to the US from Pakistan when I was a kid.
I have a little bit of a weird name, and I grew up in Detroit, in Warren, Michigan specifically, for all those at home.
And when you feel that you're a little bit on the edge of society, that you're maybe not in the mainstream, and this resonates with some people, not with everybody, you feel very skeptical of the mainstream because you've been on the outside for so long.
And I think you can trace a bunch of founders' psychology to this feeling of being an outcast actually.
And so then you find yourself in a situation where you're the COO of YC, and the narrative of "I'm an outsider"... I don't know if there's anything more inside than being the YC COO, right?
So that reconciliation has also had to happen over my career, which is, maybe that's just kind of a weird thing.
And so when I talk to Marc Andreessen, who really pushed me to go online, or Elad, or whoever it is, their view is: leave your baggage and your trauma in the background, and let's think more pragmatically.
And the pragmatic thing here is whether I like to do these types of things or not, fundamentally, it helps get the message out.
And the message can be something very small and myopic, like what's happening in physical AI and machines becoming intelligent, or much larger, which is what's happening in society
through this fundamental change that we're going through.
I've had the rare privilege, or the experience, of seeing the full economic spectrum. I've really seen the extreme ends of both sides, and I truly mean that.
And so somebody like Marc, who is close to our company, says, well, those are ideas worth getting out, beyond just promoting your company or something like that.
And that I can get behind: the debate and discussion about ideas and what's happening to our society because of these technological changes.
And so here I am.
Amazing.
Okay, so there's a few threads I want to follow there.
One is you were, as you said, COO at Y Combinator.
You saw a lot of startups up close.
This is your third startup of your own.
Something that I hear you talk about is that successful companies almost always show traction very early.
A lot of founders are like, no, just keep fighting, and maybe we'll be the next Figma or Notion; 4 years in, we'll figure it out.
What's your experience there and what's your advice to founders who aren't seeing traction early?
Nuances.
I mean, if I was starting another company, I'd call it Nuance, right?
I think what you're saying is correct.
I continue to believe that.
I think good companies tend to have traction fairly early and then sustain it for a decade plus.
To the founder who's toiling: let's say you're listening and you're about 2 years into your company, and you're maybe having a tough time raising money and building that first product that consumers or businesses really love, either through retention or dollars.
Two years is the difficult time.
The heuristic I would use is: if the information I'm getting from the market is not pointing me toward a more and more specific path, I would consider resetting.
And what I mean by reset is, oftentimes, and this is wearing my YC hat, having seen hundreds and thousands of companies, it's the founding itself, literally the foundation upon which the house is built, that is not correct.
Imagine you built this house, and every time you put down a cup of water, it slides off the table and falls on the ground. Do you keep adjusting the table?
It's like maybe the foundation is actually wrong.
The whole house is off-kilter.
And that foundation might not only be your co-founders, it could be the market that you're in.
It could be the phase of life that you're in and the amount of effort that you're willing to put into that thing in order to make it successful.
So there are a bunch of reasons a company can fail, and you have to be able to somehow say, I don't know what the reason is.
I'm just going to have to hard reset here.
One thing I would tell founders, and Applied is creating a founder class in itself: people who've worked at Applied Intuition are now starting their own companies.
You know, we have 1,000-plus engineers, and over time they're starting their own firms.
And I say to all of them: just imagine that the first time you do a startup, for the first 3 years, it's a zero.
Just rid yourself of the expectation that it's going to be successful. You're a craftsperson.
If this were a woodworking podcast and you said the first table you built was wobbly, you wouldn't say, well, go work at Crate and Barrel.
You'd say, that's the first table.
We're going to keep at it.
Being a founder is its own muscle.
And you want to exercise that muscle.
But I think a lot of founders, especially early in their founding career, put such incredible pressure on themselves to make it great out of the gate that they miss the thing they're getting in that first round, which is learning and building that muscle.
And the second, third time... I don't think it's random that my third company is the most successful.
You see that more often than not.
There are funds which are almost exclusively focused on multi-time founders, right?
For this reason.
What I love about that advice is often the best ideas come from when you have low expectations, you're just playing around, you're just tinkering.
You're not like, I'm gonna build the next great, I don't know, Google.
It's just you having fun.
And that's like how I found this world that I'm in right now, this path.
And OpenClaw is a good example of that.
I think why that advice is so difficult is that if you hear this while you're in the proverbial war, you're like, what the hell are these people talking about, having fun?
This is hard.
I think this is hard.
And so you have to hold these contrasting or conflicting views in your head, which is like, it's deeply very, very important and you should give it your all.
And it's also not that important.
And that's a really hard thing to reconcile and keep in balance.
And the way you approached this company, staying quiet like that, I think helps a lot there.
Absolutely.
Yeah.
It's like, you know, even at YC when I became COO, Sam Altman was the president, and I told Sam, let's not announce this for like a year.
Because if the partners don't want me to be COO and it's not a successful thing, I don't have the pressure of public scrutiny asking, why were you COO for only 6 months?
And I think you have to be very honest with yourself as a founder and as a human being that those things matter.
What people think about you matter and it impacts yourself and having the spotlight on you.
You know, I always say it's very easy to pivot before you raise money and before you have employees.
Nobody cares.
The moment you raise money, and more importantly the moment you hire employees, it changes: employees join a very specific mission.
And you go and you walk into the office and there's 10 of them.
You say, guys, turns out this is wrong.
We're going on a different mission.
Imagine if this was war.
It's like, what the hell?
We just— we're attacking that hill and now we just say that hill is not important.
Like, how do you know the next hill is important?
And you as a leader lose a lot of credibility.
And it's not only the superficial matter of being a credible leader; there's the practical reality that when you're very, very public, the startup becomes your identity.
And then suddenly you're having to reconcile that that thing is actually not correct.
So we have these core values in the company.
And early in the company, we used to have this line which says: our best work is done alone and quietly.
And I deeply believe that.
And so founders, I would think of it that way.
But it's for pragmatic reasons.
It's not just because it's cool to be under the radar.
It just allows you to maybe work in a bit more peace.
I love these core values you've shared so far.
The last one, the best work is done alone and quietly.
I'm so on board with that.
Radical pragmatism is the other one you shared earlier.
Are there a couple more there?
These are gems.
Yeah, those are like, I would say, the meta values.
We have very specific,
let's say, operating principles.
And this is as tactical as advice I can give to founders.
Come up with your values when you're getting a little bit of traction, early enough that they still describe what's actually working.
And the way you come up with the values is not to ask, what values should we have, like philosophers.
No, no: you should figure out why you are being successful.
Literally write down the 5 to 10 things that are the reasons you are being successful, and those become your values.
And so we did that.
And so our first one was speed above everything.
It was about us being fast.
The second one is never disappoint the customer.
Then technical mastery, high output matters, all the way down to ones that are not obvious, like laugh a lot.
That's been a core value from the beginning of the company's history.
When you're working on intense things, you need the ability to stay grounded and keep perspective, and laughter and humor are also a way to give subtle feedback in a slightly different register: instead of "this sucks," you can say "it's not the best."
And so you're really creating the framework in which people learn how to behave with each other within the company.
And so today the values really serve us as guiding principles.
We do new team meetings: every week Peter and I meet all the new team members, and we're almost always talking about the values in some level of detail and depth.
Yeah, another value, half the work is follow-up, like just taking notes and following up.
That is the business.
It's not more complex than that.
Laugh a lot is my new favorite company value.
Yeah, sounds like a wonderful place to work.
And then on the last piece there, there's this book that just came out by Stripe Press about maintenance and how valuable and underappreciated the maintenance part of work is.
Absolutely.
Absolutely.
Yeah, I think if there's a takeaway from my philosophy here, it's where we started the conversation: being promotional has all these negative connotations, so I'm careful using that word, but why not be promotional?
It's because there are costs to everything.
And so if you can focus on the craft, on making the product really, really good, and on really listening to your customers, you have a much higher likelihood of success.
And then you can always then go and scale there.
Part of that is the thing you're talking about, maintenance.
Another version of it: my roots are in automotive engineering, and automotive engineering is actually an exercise in quality.
That's really what it is: you're building these very complex machines at scale.
People talk about rockets being really, really complex, but even at the highest capacity you only have to send up a rocket once every couple of days.
You're making a car every 30 seconds, you have to make it extremely cheap, and it's globally competitive.
So you really get into the nuance and minutiae of how a factory runs.
And a factory is about safety and maintenance.
There are not a lot of complex things.
When you break down what being operationally strong means, it's keeping an eye on a handful of things and making sure you're doing them really, really well.
And I'm one of those believers in the adage that a man who cannot command himself is not fit to command others.
That maintenance aspect is a part of that, right?
If you maintain yourself and your own work, you maintain your team, and you maintain the company, the products almost come out of that whole system.
And I think a lot of founders don't think about their company as a system or almost as a machine.
But I would implore you to do that because then you really focus on the craft of the machine and building the machine and making it more hygienic and making it more well-tuned.
Just like people who really love cars and really obsess about maintaining them: they will detail underneath the driver's seat.
As somebody who details my own cars, nobody's going to look at that.
But it's the same ethos of really caring a lot about the craft. And frankly, since you have a limited amount of time, it's hard to really care about X and also keep your company hygienic.
And, and there's different reasons at different points of your company that you should do different things.
But that's kind of a little bit of the ethos.
I love how it keeps coming back to just staying quiet.
Working, just working on the thing and not talking about it.
Your point, last point there makes me think of that, the score takes care of itself.
Yeah.
Classic.
Yeah.
So Joe Montana is actually one of our investors, and in our Series D announcement, the post was titled "the valuation takes care of itself."
We very much fall into that category.
Sometimes people will come to our office and say, oh, it's such a clean office, you guys must have a giant cleaning staff.
And actually, we clean our office ourselves.
Just like in Japanese schools, and as I mentioned, I lived in Japan, the students clean their own schools.
We have a cleaning zen every week, and everyone cleans the area around them.
And I think there's something important about this ethos of not getting so wrapped up in your own narrative of, I'm a Stanford software engineer and I do AI.
It's like: clean up your desk.
There are some basic things like that.
And I don't know what that larger philosophy is, but it is a philosophy that we kind of drive towards.
And, you know, our claim to fame, which is kind of a crazy reality, is that we've never spent any money we've raised in the history of the company, which almost sounds made up.
The company is almost 10 years old, with 1,000-plus engineers.
And so we're a functioning business without using the capital we've raised.
And I think it's somehow connected to us cleaning the office.
I don't, I don't know how.
Because you're saving all these cleaning costs.
It all makes sense.
Yeah.
Yeah.
Yeah.
We still have people who clean, but our employees are also aware of their surroundings.
And I think there's a direct line between be quiet and alone, clean your desk, and well-written software.
I don't know what that thing is, but it all falls in the same arc.
I know you also have a no shoe policy for that same reason to keep things clean.
Yeah, and it's also influenced by Japan.
I worked there, and we had a similar office setup.
The other way to think about this, and again, I'm just trying to impart everything I've learned to founders, because that information is so limited and everyone's kind of making it up, frankly speaking.
This is the alpha.
Yeah, yeah, yeah, exactly.
It's that I would implore you as a founder to really try to take the best of Japan, the best of Germany, the best of China, the best of Detroit, the best of Silicon Valley.
I think sometimes people take that Steve Jobs line, great artists steal, and really deform it.
What he's really talking about, the less magnanimous version of it, is: be humble and learn from everything around you.
And as a leader, be well-rounded.
There's a Charlie Munger line where he says, I've never met anybody very successful who doesn't read all the time.
And I very much fall into that category as well.
And if you unpack why that is, why does reading a physical book make you a better founder? Ask that question in the most direct way.
My ethos of reading is: read old books, don't read anything new.
Read old books, because time has filtered out a lot of the noise, so you get a lot of signal.
And in your life, a thousand books is maybe the best case; realistically, you're going to read probably 50 to 100, which is about what the average person manages.
So you're just not going to read many.
So don't read low-quality content.
There are true pillars of human ideas out there.
You consume those ideas, and then it's up to you to interpret how they reflect on the business you're leading or the technology you're developing.
I absolutely believe reading a book like The Autobiography of Malcolm X will make you a better founder.
And again, like the whole arc from the cleaning zen to clean code, it's not directly one-to-one related.
I think we always want these very simple if-then statements, but being a well-rounded founder, where you understand the society and the history around you, somehow makes you build a better product.
And I don't know how or why, but I think it absolutely is true.
I do see a connection there.
And then people like Charlie Munger, who is obviously not an AI founder, also believe that.
So I think there's some pattern there.
It's interesting.
It's the same metaphor with LLMs.
You feed in all this data, and somehow they become almost conscious.
How does that happen?
No one fully knows.
It's so interesting how similar you are to Marc Andreessen, in your way of thinking and the way you consume content.
We're both bald.
There's a thread here.
Here are the important ingredients to being really successful.
Yeah, I mean, Marc... we were fortunate enough to choose our investors, and that's a true privilege.
I didn't have that in my first company; we spent years and didn't raise a dollar.
So I certainly appreciate it.
But if I've ever had a mentor, Marc would fall into that category.
I've known him since before Applied, and we've debated and talked a lot.
And I think Marc is also like that, right?
He really consumes content actually outside of this little industry that we're in.
And then I think it makes him actually a better investor.
Yeah.
We'll point people to your website.
You have a list of the books that you recommend and love; it's very long and very much not what you often see.
I can't help but ask: are there a few books that have most influenced your thinking and your life?
Yeah.
Yeah, I've been very thoughtful about that list.
And the reason I use books like The Autobiography of Malcolm X as an example is that I know it's not at the top of everyone's list.
If I say High Output Management, classic Andy Grove, you guys know that.
Yeah, yeah, exactly.
It's partly the theatrics of entertaining you, but also giving you new information as a listener.
The books I'm currently reading are kind of a random slate.
I'm reading the vibe coding book that just came out; our whole company is reading that. It's a new book, which goes against the grain of my heuristic.
But The Emperor of All Maladies, the cancer book: fantastic book.
I'm almost done with it.
Like, I think it changes the way I think.
Like, you read that and it changes the way you think.
And I think that's the ultimate test.
When a piece of material changes your existing framing on life, this is, this is good.
In the LLM use case, this is somehow related, in the sense that diverse data makes your understanding of the world more rich and nuanced, and therefore it's better.
But yeah, so like I'm always inspired to give like more wacky kind of examples rather than the obvious ones.
But the obvious ones that if I really wasn't, you know, wasn't being theatrical, I think Sam Walton's book Made in America is an unbelievable book.
It's very, very good.
He wrote it on his deathbed.
My American Journey is also very good.
Colin Powell's book, it's not on my website, but it's really good.
I'm somebody who tries to connect some of these dots from us being cave people to now living in Silicon Valley, working as a venture-backed AI company founder.
And so books like Guns, Germs, and Steel are really at the top of that list. Fantastic book.
Or Collapse, also same author.
So yeah, but my point to the founders is: read the stuff.
You can still go to physical bookstores; read the stuff that is both old and well-regarded and that you know nothing about.
I often, when I'm trying to find the next book to read, like, I remember the way I picked up SPQR, which is the book on Roman history: I was like, I don't actually know a lot about Roman history.
I know the high-level stuff.
So like, think about all the ideas in the universe, from philosophy to history to, you know, Jainism to the rise of, you know, Japan as a feudal state, like areas you don't really know.
And then just find the best book in that space.
And I think you just start filling in the blanks.
And so often, that's kind of the way that I grok the ecosystem: like, what don't I know anything about?
And let me find the best piece of material in that.
And yeah, it works well.
I like that.
So Marc's philosophy is this barbell strategy: only today's news, like X, and books from 10 years ago.
I love that you're like, no, just like upside down T almost.
Just only— Yeah.
I mean, you know, Marc was a heavy influence in me getting on X.
So, you know, he's propagated that view.
The thing that I really do agree with him on is: as our company becomes a larger, more influential and impactful company in society, it is my responsibility as the co-founder and the CEO of the company to at least propagate my ideas, first to our AI founder community, then to the larger technology leadership, and then to the world at large.
And so that's part of it.
I think in that way, Marc has really taken it on. You know, you think about VCs: not long ago, you would never even know their names.
Yeah.
I mean, they were like, you know, PE guys or hedge fund managers.
You can't even think of names.
They're just blobs with ominous-sounding names, you know, Obsidian Corporation or something like that.
And it's only a16z and a couple of other folks, John Doerr, uh, who really kind of created the, Hey, I'm going to be the individual investor and I'm going to propagate a certain set
of ideas and that's going to create gravity within Silicon Valley and influence founders to then make certain types of companies.
And then of course, they invest in those.
On this thread about reading to find areas you disagree with and, you know, haven't thought about.
I know one of your approaches to management and one that may be a value is to encourage your leaders to listen to naysayers.
To not create this positive reinforcement cycle.
Talk about why that's so important, how you operationalize that.
So imagine, you know, I'm not the founder of the company and Peter's not the co-founder; it's just a generic company.
The ideal situation with a generic company is one where you can put in lots of different competing ideas.
The culture is one where you will shake those ideas out.
There's not an emotion in it.
Whoever brings the idea, the best idea wins.
So why can't companies do that?
Frankly speaking, a lot of times it's the founders, and the founders are told by, you know, popular media, and by the tribal outlook of how human beings experience life, that you have to have this hard view, and if everyone's not following you, then maybe you're a weak leader or something like that.
And we just don't believe in that philosophy.
And I think, maybe a more tactical way of saying that is, we take inputs from the environment: our customers, specifically our employees, our competitors, our investors, and what's happening in society at large.
And that impacts our strategy.
And, and I think it's one of the reasons we've been very, very successful.
We're not so arrogant to think that we just have the answers because we had the ambition to start a company.
And I think that permeates into a very specific culture.
And I think the culture that we've built is also not being contrarian for contrarian's sake, right?
One view I do have is that emotions are generally not helpful in making rational decisions.
They're almost like the opposite of it.
And sometimes passion and, you know, leadership are supposed to be, again, magnanimous or emotional.
And we just don't believe that.
And that's, I think, a bit more of our kind of Midwest roots showing, for Peter and me.
And so I think it's not that we say, hey, disagree with everything in the room.
What we specifically say is speak up, speak up.
Everyone has to speak up because that one person with their one experience, because they worked at, you know, Zoox or Waymo or wherever, Tesla, or they worked at a Chinese company,
whatever it is, that one idea they have in their head when the debate is happening about what we should do in,
you know, in space, like literally the space of space and something maybe we don't know much about, that one person's one idea, they have to feel comfortable sharing it even if they're
the most junior person, or they feel that they didn't get their way in the last debate, or they feel, you know, whatever anxiety they might have, they have to share that opinion
that, guys, this is actually the right idea, or this is the wrong idea.
And if you can create that environment where the best idea wins, you know, Gandhi has this line, truth is what stands the test of time.
We're trying to, you know, and I think there's become a little bit of a meme in the Bay Area, like truth-seeking, you know, as a culture.
But it kind of is like that.
We're trying to find the best idea.
Maybe truth is the wrong word; maybe it's the best idea.
Find the best idea and then let's go full bore against the best idea.
Let's maybe use a counterfactual.
Why do companies fail when they have great talent and they have, you know, seemingly all the same components set up that an Applied Intuition has?
It's because maybe the best ideas are not being surfaced, and certainly maybe they're not actually being adopted. Or, more often than not, when I think about companies that have been very successful, it's that they have momentum going in a specific direction.
And that momentum, that wall of sound, overwhelms any new sound that's emerging, which is, hey, the market's changing.
You just can't even hear that because there's all this momentum going in a particular direction.
A good example, I had front row seats for this when I worked at Google.
It was the era where Facebook was emerging.
And here's Google.
And people don't remember Google in the late 00s and early teens.
Google wasn't just a company, it was the apex predator of Silicon Valley.
Apple was just, you know, the MacBook Air had just come out.
Steve Jobs was heading in the right direction, but it was nothing like Apple is today.
Amazon AWS was still a young thing.
NVIDIA was teetering on the edge of bankruptcy.
I mean, all these giant companies that you think of, Microsoft was run by Ballmer, you know, Twitter was a small thing.
But Google was already this larger-than-life, number one company.
Everybody wanted to work at Google.
And there were not many companies with that stature.
And then in the periphery, this little company Facebook starts emerging, and Google, who has the best engineers on the planet and is making a billion in cash flow a month, tries to fight this little company.
And I remember Facebook at that time maybe had 1,000 people, and Google was, you know, 15x, 20x the size with a lot of cash flow.
And why couldn't Google fight Facebook?
It's because Google is not Facebook.
It's like that Confucian saying, like, how does a gorilla learn how to fly?
By not being a gorilla.
The way that Google would've won the social media wars was by being a social media company.
It's, it's just fundamentally not.
And so this happens in companies all the time: you're just going in one direction with momentum, consciously or unconsciously, because that's where all the employees are, that's what the culture is, that's what they work on.
And then something changes in the market and you just can't even move there.
And I think that also can happen uncharacteristically, surprisingly, at really small companies.
So where founders have a view, and it's like, that view is the view and that's what it's gonna be.
And actually, that can be just 10 degrees off from what was the correct path.
And the whole company's kind of led astray.
So they were in the right market, they might've even been solving the right problem, but they were just a little off.
And so we're so scared of failing and so scared of losing that I will humble myself and listen to other people when they say, hey, we're 5 degrees off course here.
And it's like, okay, let's maybe fix the course.
And then once that becomes your culture, then it's really hard to lose because everybody's not about fulfilling a preset path.
They're just about finding, you know, how to win.
This is exactly what I wanted to ask about.
Everyone listening to this either is like, oh yeah, we're very open-minded.
We're, we're absolutely gonna listen to everyone's opinions and decide rationally the right path.
In practice, it almost never happens, right?
Or they're just like, we're just, we know we're not good at this.
We're just like too nice to each other.
How do you, how do you do this at a company that isn't good at this?
Is it like, does it have to be the CEO top down in your experience?
Does it have to be part of the culture?
How do you operationalize this at a company that's not like yours?
You know, the, the middle way is typically the right way.
And it's hard to find the middle way because these are conflicting ideas.
The guardrails, or the posts you just set: one side is like, we're just gonna go, and the other side is, we're, you know, maybe almost unsure.
And once you do have that debate, you have to then confidently walk down that path.
And again, this is conflicting.
I just said be humble enough to listen to what's going on.
But then once that decision is made, you have to be decisive. In our values, that first value, you know, the speed value: the wording is, uh, move fast, move safe.
That's specifically the wording.
We assess our managers on adherence to those values.
Literally, we compensate and promote against those values.
They're not just like abstract values.
So the behavior we're actually looking at under speed is decisiveness.
So we're setting up a system that is looking at, you know, these conflicting kind of things.
One is like, be open, and the other one's make decisions quickly.
And you have to hold those in tension.
This is why you as the co-founder or founder of the company get paid the big bucks.
You gotta do that.
You have to know when to bluff and when to hold 'em and, you know, when to fold 'em, as they say.
So at some point, and that point comes sometimes faster than you think, you will not get any more information.
You have to make a decision.
So you're walking this very, very thin line.
Your point about emotions was extremely interesting, and I want to make sure people don't take away the wrong takeaway here.
Um, so what I actually found really helpful, which I think is aligned with what you were suggesting, is taking emotions out of it.
The way I've used this in my work is, uh, when you have to make a hard decision, pretend nobody's feelings would be hurt and emotions are not involved.
What would you do if nobody cared, if they're like, totally great?
So what would you do in that world?
And then that tells you, okay, that's actually the right thing to do.
And then it's, okay, how do I help people feel okay about this?
How do I deal with the downsides of this path?
Yeah, I think that's the obvious version of it.
I think maybe another version of thinking about it is: let's take another route.
You, as the leader or as the engineer who is getting direction, already have some preset view that this is my idea.
That's an emotional construct, and it's around ownership and the feeling of ownership.
So yeah, I really fall into that category.
Maybe most fundamentally, what is an emotion? An emotion is a set of reactions, a framework that's been imparted in your brain through life experiences.
And those life experiences have not been optimized for you to make a decision in a product review.
And so the more you can pull that away, a good heuristic would be: the same decision being made by multiple people in the company gets the same result.
So you're removing a little bit of that almost like filter.
You can almost think of that emotion as like a filter.
So I like to have the raw image come through, the raw decision come through, so we can consistently classify it again and again, not to get too abstract, but I don't know if that makes sense.
Yeah, it makes sense.
And, uh, I have one last question, but, uh, there's an interesting trend I've noticed with people talking about AGI.
The missing piece I've been hearing more and more is that emotions are potentially what create consciousness.
Michael Pollan has a new book out about consciousness, and his take is it's not just more intelligence, it's actually emotions that led to the consciousness.
I think it's, let's say, underestimating how complex human thought is to think that it's just inputs and outputs, or, let's say, association, for lack of a better word, of ideas, facts, words, letters.
It's not just associations.
And creativity is kind of a little bit of that as well.
Like the old saying: technical mastery is mastering the complex.
And I think computers do that really well.
And creativity is mastering the simple.
I'm sure I'm going to eat my words on this as like the best artist in like 3 years will be.
Yeah.
Yeah.
And so I think, and this again goes back to my philosophy of, like, consume broad inputs, but then try to remove that filter, see things as honestly as you possibly can, and create a culture in the company that is similar and doesn't put any weight on who the idea came from or where it came from.
But then ultimately as a leader, you decide.
And by the way, you gotta be right.
Like, that's the other thing that I think, with a lot of founders, we just don't emphasize enough.
Founders love to take credit for things.
Uh, it's just human nature.
Everybody does.
But the reality is you have to be right.
It's not enough to just start a company.
It's not enough to, you know, have this vision of the world.
You have to be right.
And the evidence is, is the company a sustainable standalone business?
Because we're talking specifically about, you know, venture-backed AI companies in Silicon Valley.
All of my, and I should have said this at the beginning, all of my advice is specifically for that narrow group, which is founders of venture-backed AI companies
in the Bay Area.
Speaking of that, last question.
I've been wanting to get to this because it's an interesting spicy take that you have.
As the last question, I know you have to run after this.
You have this view that a lot of CEOs in Silicon Valley don't actually have great taste.
I'm excited to hear what your experience there is and just what— Yeah, and I also want to be careful not to imply that I do have it.
I fall into that group.
I think, you know, it is true, for a couple of reasons, both taste in the most, let's say, artistic sense and in the most specific sense, like running a company and what the policy should be, like the HR policy for, you know, point X.
A lot of that is, I think, they're just not exposed to a lot of interesting, good things.
And that's been a theme in this whole conversation is like just, just get more and more exposure.
It's very unfortunate when I meet somebody who, and I'm not thinking of anyone in particular, so if this is you and you're one of my friends, I apologize, I'm not talking about you, but it's like, you know, you grow up in Cupertino, you go to Berkeley, and the first thing you do when you come outta school is you start a company, and then that's all you do, you know, for 20 years; you've never even been an employee.
And why I think that's so important: I spent over a decade working in large organizations, truly large, more than 100,000 employees, like a General Motors or a Bosch.
And when you're in the back alleys, the bowels, of those organizations, you learn how bad it is to be an employee.
Like the bureaucracy above you, leadership doesn't know what's going on, the industry, you know, your antiquated tools, all this stuff.
Why that's so important to experience as an individual is then when you become a leader, you're making policies and you're creating culture and you have to keep that in mind.
And a bunch of founders just never had, frankly, the fortune of being at the bottom of the totem pole.
And that's just one version of how to do it.
That obviously doesn't seem like, you know, consuming the photos of Bresson or, you know, Picasso or whoever it might be, but there's something similar about it.
You can sometimes meet founders and maybe a good heuristic here is like, there's some founders that would be good at lots and lots of things, actually, not just being a founder of an
AI company in the Bay Area.
And there's something about taste there, because really, what you're talking about is understanding humans and understanding life, and then being able to discern with some judgment what is good and what's not good.
Because that's really what we're talking about.
And so if your life experience is very narrow, you could still be good, you might still have the ability to discern what's good and what's not good.
But I think there's something about, like, if you've backpacked for a few years around the world, I somehow believe that's gonna make you a better founder.
It's like, you know, there's no peer-reviewed research that I can point to that says that.
So I think that's what I'm getting at.
There is some developing of taste.
Yeah.
Well, I feel like we have helped people build their taste, feed their model with more insights and different perspectives in this conversation.
I feel like we could chat for hours, but I know you got to run.
Yeah, I'm not sure if there are any real takeaways other than— Okay, zero.
We really, we really went everywhere.
I'm sorry if you had a particular line of questions you wanted to go down.
We went in all the perfect directions.
Okay, good, good.
Qasar, thank you so much for doing this.
Thank you so much for being here.
Final question, just where can folks find you online?
How can listeners be useful to you?
That's a great question.
I mean, I love to hear about books that I don't know.
So that's always good.
Some of my favorite books have been just randomly recommended to me.
So I'll take that.
Of course, I consume, you know, research as well.
And so if there's something particularly novel that's going on, obviously all the mainstream stuff, we as a company and me as an individual are gonna consume, but things that are a
bit off the beaten path, we're always looking for that.
But yeah, and then if you have a particular opinion about specifically our domain, physical AI and how AI is gonna impact mines, farms, you know, construction sites,
robotaxi, all of that stuff, I'm always interested to hear new, new opinions on that or even old opinions maybe with a different viewpoint.
So, yeah, you can see me online, of course.
I'm always around as well, and I'm always open to that feedback.
And you're on Twitter now.
There you go.
Yeah, exactly.
Yeah.
Follow me there.
Oh yeah, exactly.
That's the answer.
That's the call to action.
Channeling my inner Marc.
There you go.
Qasar, thank you so much for doing this.
Thank you for being here.
Yeah, thanks for having me.
It was a lot of fun.
Bye everyone.
Thank you so much for listening.
If you found this valuable, you can subscribe to the show on Apple Podcasts, Spotify, or your favorite podcast app.
Also, please consider giving us a rating or leaving a review, as that really helps other listeners find the podcast.
You can find all past episodes or learn more about the show at Lennyspodcast.com.
See you in the next episode.