
In this episode, the conversation explores what it really means to build software and companies in an AI-native world. Rather than treating AI as a simple replacement for existing tools, the discussion looks at the deeper shift happening underneath: how agents may change software interfaces, reshape workflows, pressure companies to expose better systems and APIs, and create entirely new economic models around computation, usage, and automation. At the same time, the speakers argue that AI adoption will likely be far slower and messier inside large enterprises than many people in Silicon Valley expect, because real-world organizations still have to deal with legacy systems, security, permissions, and operational risk. Overall, the episode is a thoughtful debate about where AI agents will truly create leverage, what kinds of businesses may benefit first, and why the biggest opportunities may come not from replacing old systems overnight, but from gradually rebuilding how software, work, and digital infrastructure operate.
The diffusion of AI capability is going to take longer than people in Silicon Valley realize.
It's just absurd to think you're going to vibe code your way to, like, SAP.
All of that domain knowledge, it's not just represented in some well-orchestrated data layer.
The engineering compute budget conversation is going to be the most wild one in the next couple years.
The biggest problem right now is everybody is trying to figure out the economics of all of this when they're off by at least an order of magnitude on how big the opportunity is.
If you have 100 or 1,000 times more agents than people, then your software has to be built for agents.
People in the abstract say things like: now you're marketing to agents, you're like an API, you've got a good idea.
I actually think that's almost exactly wrong, which is— wow, this is breaking podcast news.
If you start to imagine that we all have to build software for agents, I think we're like all clear on that, right?
So like that trend is happening, which is like we spend as much time now thinking about the agent interface to our tool as we do the human interface.
Sure.
Okay.
And the reason we're doing that is because our hypothesis would be that if you have 100 or 1,000 times more agents than people, then your software has to be built for agents.
And then what is the way that those agents are going to interact with your system?
It's going to be through an API or CLI or MCP or whatever.
And the paradigm that appears to be taking off and is quite successful so far in terms of efficacy is what if you give a coding agent access to your SaaS tools and a coding agent access
to, you know, your knowledge work sort of workflows and context.
And that kind of becomes this superpower, which is the agent is not only capable of, like, you know, reading some data, understanding some information, it can actually
code its way, or use APIs, you know, through whatever task it's trying to achieve.
That appears to be like a paradigm that is starting to compound.
And, you know, that was the— that's the Claude Cowork phenomenon.
That's the whatever OpenAI is kind of cooking up, you know, with the super app, Perplexity Computer, etc.
And, and I actually think it kind of makes sense as like the ultimate manifestation of this stuff.
I mean, I think you're right.
It, it makes sense in a, in a theoretical way.
Yeah.
But in a practical way, we have to be really careful, in that
the way to say it is that algorithmic thinking
Yeah.
Is really, really, really hard for the vast majority of people who have jobs.
Yeah.
And so the easiest way to think about it is: if you were to go to any person and ask them to create a flowchart for a particular thing that they have to go do, they would probably
fail at producing that flowchart.
Yep.
They're— so within any organization, you know, say doing a marketing plan and there's 50 marketing people working on a giant product line.
Yep.
One person probably understands and could document the flowchart 100%.
So if you put one of these agents, this coworking tool, in front of people to create these things, their ability to explain to it what to do is really,
really limited.
100%.
So then you're— but what if that becomes the new way you have to interface with computers?
Well, and you just have to cycle that through.
Well, then you're basically just
developing the next abstraction layer for how people interact.
Yeah.
Developing an abstraction layer has historically, at each level, been a highly skilled, very specific individual within an organization developing that.
Yes.
Then the little parts that they build just become little toolkits, in the world of people doing particular tasks, and some people are able to stitch them together and some can't.
But that happened with paperclips and thumbtacks before, and it's gonna happen with whatever we do next.
I think that, I think basically the timeless part is the job just moves up a rung, and you learn a new set of skills, and that's why I actually don't think anything about this is any
different.
It's just now the leverage you get is obviously, you know, fantastic.
There was this viral kind of tweet that went around, which was the Anthropic Growth Marketer.
Do you guys see this?
It's basically one person and he was using Claude Code at the time to basically, you know, more or less automate what maybe 5 or 10 people would have done in various kind of siloed jobs.
And, and I think the reason why it's interesting is, is, yeah, you had to have been a systems thinker to be able to accomplish that.
So like, clearly he already was.
Technical enough to be able to pull that off.
But it did kind of represent, like: imagine you had, you know, X job in the economy, and right next to that person was an infinite
pool of engineers that could automate whatever that person wanted.
And, you know, what would that job look like in the future as a result of that automation that now is possible?
Yes, I agree that you'd have to find a way to like, you know, think through your job as a system to be able to pull that off.
Maybe the agent gets better and better over time at being able to like nudge you in that direction.
But it does sort of stand to reason that you will start to try and automate a lot of that kind of work of like, well, why don't I take the keywords that are working in Google AdWords
and then port them over to Facebook and make sure that those are replicated and then take in the new signal from what's happening in the market.
That's a big leap.
One thing first— I almost had you.
You were nodding a little bit and then I said something that went too far.
Growth person as an example: is that a job that's like the rest of work?
Yeah, I could do that job.
Everybody's obviously going to be like, demand is infinite and you've got the best thing.
Like when demand is infinite and frankly supply is infinite, this is not a difficult job.
And so let's— the guy that runs the petrol pump in Australia right now is amazing.
Right, right.
So like be instead be the $600 PC marketing person and see how you could do against the Neo.
That's a real job.
All right, fine.
We need a better example.
But there is, I mean, It is, it is really interesting.
Like I, here, let me do an old example, an old person example.
Like my, my cousin, MBA, elite school, joined her first job.
She's a little older than me, joined right on the cusp of computing.
Like she actually didn't use a spreadsheet in grad school and then they all, spreadsheets showed up, but she wasn't a spreadsheet person.
So instead they told her, hire as many interns as you want.
And so her first year on the job, she like supervised like essentially a whole room of agents.
Yeah.
And the kids (that was me, not literally, but they were in college) came and just did all the spreadsheeting.
Yeah.
But then what happened sort of this magically over the next couple of years was she and her cohort all became the spreadsheet people.
Yeah.
And then this idea that being a manager in a bank, or just 2 years in, meant you had a cadre of people doing spreadsheets.
You know, the whole abstraction layer moved up.
And the old job, before those interns, was you just sat there with basically an HP calculator figuring out the model for some M&A deal or whatever.
And you only got to do like 2 iterations before you had to put out the pitch deck or just go to the customer or the client or whatever.
And then all of a sudden they're doing 30 iterations themselves.
Yeah.
But they see— and so I think where we are with agents is just at this step where you think you need 50, and the abstraction layer is such that we're dividing it up into these really small
pieces with one super smart person coordinating them all.
And pretty soon that whole thing is just going to— they're all going to collapse on each other.
Yeah.
And there is just going to be like a skill set amount of code, call it an agent, that is like marketing-ish.
Yes.
And you'll be able to ask it marketing stuff.
Yeah.
And then the next step will be and have it go do things.
I'm a little skeptical of the— until the whole non-reproducible, non-random element of this AI stuff goes away, the doing stuff is going to get very costly.
And so then you get into the human in a loop discussion and all of that.
But I think we're just— we're at that exact— when I talk to people trying to do stuff, I feel like I'm at Thanksgiving dinner talking to my cousin 6
months into her job,
when I'm already using a spreadsheet.
And I'm like, I don't know why this is so hard.
You should just use one.
And then 2 years later, she's doing it.
And I think right now, you have to be an absolute— you have to be a rocket scientist as the growth marketing person to create 42 agents and spin them all up and do all of this
stuff.
But the rocket science part of it just is going to evaporate in very short order.
And then you're talking about, wow, there's a giant chunk of domain expertise that goes back to the domain expert.
So something that you said, I'll actually take the other side of, which is: I think it's very tempting to be like, these agents are going to code and do X.
Yeah, but I think we're going the opposite way.
So I think actually where we started was we'd like take like a piece of SaaS software and we'd add AI.
Yeah.
And then that's like the new kind of like AI enabled.
So that's like the extreme version of using code for these types of things.
But now what are we actually doing?
We're like, okay, the SaaS software is still SaaS software and the agent uses it as a computer because like it's actually very good at that.
So I'd say like we started with code.
Then we went to the terminal, which is actually less code.
Yeah.
And now this year is going to be the year of computer use.
Yes.
So it's almost like they're much more like humans using computers than them generating code.
And that feels like very much like this mezzanine step.
Yeah.
And I actually come from like the generating code type of the world.
Yeah.
Like I would argue that that's happening less, not more.
Yeah, I think so.
To me, whether it's computer use, API use, or writing code on the fly, I kind of maybe erroneously put that all in one category.
Well, they're very different.
They're very different, but we have an agent that we're working on where it just makes a determination whether it should use an existing skill, it should be using an existing tool from
Box, or it should write code to solve that problem.
And its ability to do any one of those three at any moment ends up being incredibly useful because sometimes there's just some specific operation you want to be able to do where writing
code to be able to do that operation is just faster.
and we can't possibly, you know, kind of pre-plan for everything that anybody would ever want to do on their documents.
And so the fact that the, the, the model is good enough to also write code on the fly for that use case ends up just being like an amazing property, even though maybe 90% of the things
that it's gonna do should just be using an existing API.
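The three-way routing just described (existing skill, existing API tool, or code written on the fly) can be sketched roughly like this. The names and the routing rule are hypothetical stand-ins, not Box's actual agent; in practice the model itself makes the routing decision.

```python
# Sketch of the three-way dispatch: pick a pre-built skill, an existing
# platform API, or fall back to generating one-off code. `classify` is a
# stand-in for the model's own routing decision.

SKILLS = {"summarize": lambda doc: doc[:100]}        # pre-built skills
API_TOOLS = {"move_file", "share_link", "delete"}    # existing platform APIs

def classify(task: str) -> str:
    """Stand-in for the model's routing decision."""
    if task in SKILLS:
        return "skill"
    if task in API_TOOLS:
        return "api"
    return "generate_code"   # the long tail: write code on the fly

def dispatch(task: str, payload: str) -> str:
    route = classify(task)
    if route == "skill":
        return SKILLS[task](payload)
    if route == "api":
        return f"called existing API: {task}"
    # Maybe 90% of tasks should land on an existing API; the rest get
    # ad-hoc generated code (sandboxed execution not shown).
    return f"generated one-off code for: {task}"

print(dispatch("move_file", "report.pdf"))       # called existing API: move_file
print(dispatch("redact phone numbers", "..."))   # generated one-off code for: redact phone numbers
```

The useful property discussed above is exactly this fallback: most requests hit a pre-planned path, and the unplanned remainder still gets handled.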
And over time, Pareto takes over.
And over time, there's like literally 7 apps on our iPhone, there's 7 SaaS apps we end up with; over time, these things tend to consolidate.
But the 7 apps on the iPhone is an issue of humans don't want to learn these things over and over again.
And so I as a human, I don't have the mental bandwidth to learn that many apps.
But an agent that is going to use tools and APIs and be able to code things doesn't have any of the same constraints that we have.
So I don't know, like I don't mind— Well, you could argue that there's just so many things to do and you can make interfaces sufficiently general.
Yeah, fine.
Well, fair.
I think I like what you said then because— Oh, we're back.
We're back.
We're aligned.
We're aligned.
No, but I think there's something super interesting here, which I do really, really like, which is that where software has evolved, like I use SAP all day.
I work in finance.
I have to go and generate all these reports.
Then somebody shows up and says, I want a report that does this view sliced this way, and I'm like, oh God, I don't know how to make that.
And like, now let me go wade through the SAP help system and try to find it.
One thing that, let's just say AI could be very good at is it actually can navigate that surface area much, much better.
You know, the help is all there.
And so it's a matter of finding it, mapping language.
And humans have been a bottleneck in tapping the past 25 years of software capabilities.
I mean, like, I spent my life sitting next to people on airplanes saying, how can I make PowerPoint do X?
Just go to the ribbon.
Well, and you know, it was because it hurt, physically hurt, to watch somebody suffering with bullets and numbering in Word, or trying to figure out, you know, like, oh, let me just make
a two-axis graph in Excel, which like is rocket science.
Like almost no one can do that, but yet it's super common.
And so that impedance mismatch was a human user interface design problem.
I totally buy it on the consumption layer.
I totally buy it, which is like the perfectly fluid UI or consumption layer.
I just feel the backend, like the systems of record, it'll probably converge into some database, some generic set of APIs that they'll connect to.
And that seems to be the direction it's going.
I agree.
I think— go ahead, sorry.
So I spent all weekend implementing my NanoClaw bot.
And when you first start out, it's like you're building an integration for everything.
NanoClaw is very minimal: like, OpenClaw has all of the integrations,
NanoClaw has very few of them.
And so you have it build all of its own tools.
But after, you know, 2 or 3 days of this, you kind of have the tool integrations that you need.
But back to the SA— I mean, we're talking about personal productivity, probably like you're like organizing your life or something.
Well, it's, it's work productivity.
Okay.
Okay.
Fine.
Work productivity and then an SAP system and like.
And so there's, like, an infinite amount of complexity when you get to, okay, some company that has a global supply chain and they're dealing with
75 pieces of information across, you know, 30 different systems. That does require a certain amount of horsepower from the agent that, I mean, we just haven't been able
to get from any architecture up until now.
But what you just described is literally what IT has been doing for 50 years, and will continue to do. I have a friend who is the CIO of the VA, and
all he spent his time on was gluing the 75 VA systems together.
It's all just integration, redundancy.
Perfect.
For integrate, yeah, this I totally agree.
Great.
For integration, these things are the best, but it's integration.
Yes.
It's literally how do I stitch these two systems together?
But now, the thing that I think is happening is kind of like integration on demand.
It's my new query in the system that the IT team didn't pre-wire.
Now, I need it to happen at runtime.
Let me get off my lawn.
Okay.
So, I was just in a room filled with a bunch of CFOs and CIOs, and they all looked at me when I said something along these lines, although not as optimistic as you, as you can imagine.
But they just— More realism was in there.
No, it caused like 6 of them to come running up afterwards and say, you're insane, you've lost all credibility with me.
Because it's backwards.
Wait, wait, wait, what specifically?
That the agents are going to do integration?
That the integration is a problem that will get a lot easier.
Yes.
They were against that?
No, no one's against it.
I know, they think it's practical.
But their fear is like unleashing not just the agents themselves, but humans to do integration.
Because you put people creating new integrations, and you just say, please break my system of record.
Oh yeah.
And so this idea that you just create a new API between, you know, system 27 and system 38.
Yeah.
And then you're— that might be fine for a report because if that person wants to be wrong, that's their business.
Yeah.
But you're not— I think we have a read-only version of this for a number of years, where N is very large.
Yeah.
And a lot of it's just the consumption layer where the consumer is a human being.
Right.
Right.
It really feels right now a lot of the stuff is consumption.
But yeah, I mean, it's
You know, we actually have— so we just rolled out the official Box CLI.
Thank you for liking the tweet on that.
I, I used it.
I have some feedback.
We'll talk about it.
I'll take all the feedback.
But it's a really interesting thing.
So we have all these debates internally of like, okay, you give Claude Code the Box CLI, and you can now interact with your entire Box system via natural language and you get
the horsepower of Opus 4.6 being the orchestrator of doing a bunch of operations.
And it's like, you know, blows your mind.
I guess I'll get some feedback, but it blows your mind in some ways, because you could just be like, upload this entire folder from my desktop into Box and it'll work, or process all
these documents in this folder and it'll work.
And it's amazing.
And then we started thinking through, like, well, let's say you were a company with, you know, 5,000 employees and everybody had access to some shared repository, like, you know, engineering
documentation and marketing assets or whatever, and everybody had Claude Code or Codex running with the CLI: wow, we now have some really interesting new challenges, which is like,
how do you coordinate possibly the fact that you might be hitting the system like 10,000 times an hour or something?
Not from a performance standpoint, but just like, how do you make sure that somebody didn't accidentally move a file from one folder to another folder while another person
is trying to do a write operation and somebody else was trying to delete something?
'cause you just have these agents running wild.
This is going to be like the new big question that every CFO, CIO, et cetera, is running around with their hair on fire trying to figure out.
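One standard answer to the coordination problem just described is optimistic concurrency: every object carries a version, and a write only succeeds if the caller's copy is still current. The sketch below is a toy in-memory store, not the actual Box API; the class, method names, and version scheme are all illustrative.

```python
# Optimistic concurrency sketch: an agent must present the version it
# last read; a stale write is rejected instead of silently clobbering
# another agent's change.

import threading

class SharedFolder:
    def __init__(self):
        self._lock = threading.Lock()
        self._files = {}   # name -> (version, content)

    def read(self, name):
        with self._lock:
            return self._files[name]   # returns (version, content)

    def write(self, name, content, expected_version=0):
        with self._lock:
            current = self._files.get(name, (0, None))[0]
            if current != expected_version:
                raise RuntimeError(f"conflict: {name} changed underneath you")
            self._files[name] = (current + 1, content)
            return current + 1

folder = SharedFolder()
v1 = folder.write("plan.md", "draft", expected_version=0)   # ok, version 1
# A second agent that read version 1 can update it...
v2 = folder.write("plan.md", "final", expected_version=1)   # ok, version 2
# ...but a stale agent still holding version 1 gets rejected:
try:
    folder.write("plan.md", "stale edit", expected_version=1)
except RuntimeError as e:
    print(e)   # conflict: plan.md changed underneath you
```

With thousands of agents hitting the same repository, rejecting the stale write and forcing a re-read is usually preferable to last-writer-wins.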
[Speaker] Well, there's just, that's exactly what I ran into, which is I played around with your example, which is create, the video example, which is create like a marketing plan directory
or something, and like all of a sudden I'm like in some loop creating directories.
[Speaker] Yes, yeah, and it's gonna go on as long as it can.
[Speaker] Right, and I was like, I wonder what the limit is on Box for nested directories, 'cause I'm about to hit it.
Actually, we're going to find out too.
Yeah.
But it does feel to me that a lot of the intuition is to build a new layer of controls and whatever, but what's actually happening on the ground is the opposite.
So I'll give you an example.
When we all picked up a lot of these personal agents, we would give them our API keys, we would give them our email addresses, and then they would access those things.
They're like, oh, but how can I stop it from like whatever.
And so what everybody's doing now is you give it its own phone number.
I actually gave my Nano Claw its own credit card.
Hopefully just a Visa debit card that you bought at CVS.
It's got all the money.
No, no, but then I gave it its own Gmail account, which you can log into.
And then Gmail actually has all of these RBAC permissions that you have to do.
So you could make an argument that like, you know, we've actually built in a lot of these permission systems.
You have to treat it like a human, as a separate human.
And then instead of, like, building another auth layer, building another—
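The "separate identity" approach being described can be sketched like this: the agent gets its own identity with a narrower grant, rather than borrowing its owner's credentials. The permission model and all names here are illustrative, not any particular vendor's RBAC.

```python
# Per-agent identity sketch: mint the agent its OWN scoped identity and
# enforce permissions per identity, the same way you would for a new hire.

from dataclasses import dataclass, field

@dataclass
class Identity:
    name: str
    scopes: set = field(default_factory=set)

def check(identity: Identity, action: str, resource: str) -> bool:
    """Allow only actions explicitly granted to this identity."""
    return f"{action}:{resource}" in identity.scopes

me = Identity("alice", {"read:finance", "write:finance", "read:marketing"})
# The agent does not get alice's key; it gets a narrower grant of its own:
agent = Identity("alice-agent", {"read:marketing"})

assert check(me, "read", "finance")
assert not check(agent, "read", "finance")   # agent can't see finance data
assert check(agent, "read", "marketing")
```

The appeal is that the existing permission machinery (email accounts, RBAC, audit logs) already knows how to handle "one more identity" without a new auth layer.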
Okay, now can I instantly take— do a takedown of this element that we're gonna run into?
Please.
Yeah, okay, so that is fantastic for personal productivity.
And the question that we're gonna run into is in an enterprise.
Let's say I have— let's just make it a simple example.
I have a 50-person team of something.
Should everybody also— basically, will we have a hundred— will we have 100 people now collaborate?
I mean, basically 50 humans and then 50 credit cards and then 50 agents in that same shared space.
and do I have— I obviously have complete oversight over my agent, but what if my agent collaborates with somebody else and then accidentally gets access to some resource because they
were sharing with that other person, and I'm not supposed to have access to that resource, and now this autonomous sort of stateful, you know, agent is running around working on somebody
else's information.
The default end-to-end argument is you treat them like human beings.
It doesn't work.
So you can't fully treat them like humans, because here's the thing: with regular humans, you don't get to look at the Slack channel of the person that is working with you or working
for you.
You don't get to log in as them.
You don't get to oversee them.
You are— they are accountable for their own set of execution in the real world.
You don't get penalized for what— how they screw up.
The agent, you have all the liability of whatever they're doing.
You do have complete oversight and you're probably going to need to have that complete oversight.
They have no right to privacy.
So there's going to be some of these breakdowns that aren't as clean as just treat them like a person, because I need to be able to kind of— I need to be able to give access to something
to them, but I also need to be able to log in as them at some point and be like, no, no, you fucked up the whole thing, and I need to undo it all.
But if I can log in as them, how could they have operated in the real world working with other people and keeping anything confidential or secure or whatever?
So it really is still an extension of you.
It's almost impossible to get around them being an extension of you.
So now the thing that we're thinking through that we're not going to be able to do anytime soon— I— I— I— so this doesn't logically follow.
Yeah, maybe.
But for example, for my employees, I can log in as them.
You don't though.
You don't— you don't— I can get access to their email.
Yeah, no, only if you get, like, sued. You're not logging in.
You're not logging in as them on a regular basis because they sent one email.
Isn't the right operating model with an agent the same thing?
The risk is like 1,000 times greater.
Like, these people, like, they will just leak your information whenever they want.
Like, they will happily just go and send some email to somebody because they got prompt injected.
You think the terminal state is that these things are still these sloppy computers and therefore they will always leak it?
I don't like the word sloppy, unless we're saying it in a very colloquial sense.
But like— They'll never be able to contain information.
They'll never— So, like, I think the ability for you to keep something in the context window a secret,
as in, like, you tell it, do not reveal X thing in the context window, I think that's a very hard problem to solve.
Let's say— So then, if anything can ever enter that context window because they have access to a resource, then in theory you should assume it can be prompt-injected out of the
context window.
And I don't know that we know of a way to solve that at the moment.
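The workable mitigation this implies is to keep the secret out of the context window entirely, rather than instructing the model not to reveal it. A toy sketch of that idea, with an illustrative secret store; nothing here is a real product's API.

```python
# If a value must not leak, never let it reach the model: the prompt
# assembler replaces secret values with opaque handles, and only a tool
# outside the model could resolve a handle back to the real value.

SECRETS = {"WIRE_ACCT": "DE89 3704 0044 0532 0130 00"}

def build_context(docs):
    """Redact known secret values before they ever reach the model."""
    redacted = []
    for doc in docs:
        for handle, value in SECRETS.items():
            doc = doc.replace(value, f"<secret:{handle}>")
        redacted.append(doc)
    return "\n".join(redacted)

ctx = build_context(["Send payment to DE89 3704 0044 0532 0130 00 today."])
print(ctx)   # Send payment to <secret:WIRE_ACCT> today.
# No "do not reveal the account number" instruction is needed, because no
# prompt injection can extract a value that was never in the context:
assert "0532" not in ctx
```

This only covers secrets you can enumerate in advance, which is exactly why the general version of the problem stays hard.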
So if I know your new agent's email address and I email it like it's an assistant, I can social engineer it 10 times easier than a human.
Like, it'll be hard for you to pull that off when that agent now also has access to your, like, M&A documents and stuff.
But isn't this like literally all of AI right now?
Which part?
I mean, the fact that we've got these shared systems that we use the intelligence for that have shared context.
But what do you mean by it's all of AI?
Well, I'm just saying, like, right now when we use AI and agents internally, this is exactly how we use them.
But this is because they're working as, effectively, you right now.
And we don't yet know how to make them not work as you.
Let me offer an example.
And then solving this problem, though: the issue will be that you will just be able to trick the agent into revealing information.
So then that's why, like, having them have access to their own resources, where they can fully make their own decisions,
is not yet something that we've been able to pull off.
There's a perfect example for solving your problem, which is we already lived through this with open source.
Yes.
The model for open source was it's all there and you just use it and you pick and choose.
And then like nobody debated it because the world was much smaller then and we weren't all on X doing podcasts when this was all happening.
But then quickly everybody realized all the problems you were just talking about, like if you're running a big company, you can't have some person just go copy in a bunch of source
code from open source into your commercial product like that.
There was a whole licensing problem, a whole bunch of stuff.
So, all these norms got developed.
The debate that's happening right now
is this really interesting modern artifact of how new technologies develop, which is, this is all happening in real time.
During open source, we met in a conference room this big, and debated how much open source we could use in Windows or Office.
Nobody on the internet knew we were having this debate.
I think it's just so interesting that not just this debate about specifics, but this whole notion of where this is heading, is happening writ large, and everybody is just trying to
get to the end state way more quickly than we can actually reach the end state.
And so what really needs to happen is people just need to go build.
We need standards.
What?
We just need some standards.
No, I think we've got different intuitions on the end of the stick.
No, no, we— look, look, you don't want my intuition, but like— One could make an end-to-end argument that these things actually converge on the same type of reliability as a human being,
which is exactly how we view like self-driving.
And in that case, you use the exact same mechanisms that we use to protect with human beings.
You consider insider threat.
You consider the fact that people can be bought off, you consider the fact that people make mistakes.
Yep.
And you build a risk.
And that's operational process.
Yeah.
But so one intuition is like, that will be the end state.
Yeah.
There's another intuition.
Well, don't point at me.
I'm just saying I'm talking about where we're at now.
I actually, I don't know that we disagree on the end state.
And by the way, like, strategically we're hedging, because we're going to build for users, and so we're like, I love the idea of OpenClaw
having a Box account, and it operates, and you just have twice as many accounts.
Yeah, exactly.
This is great.
Double this.
I love it.
I'm just saying on the ground right now, we don't yet know how to give it an M&A data room to fully securely be able to— But that— Yeah, but it's actually— it is harder than that though,
because the threat— He's the skeptic.
I'm good.
The threat vectors are going to be way more sophisticated.
So we do have a cat and mouse game going on where you can't just assume that the agent acts like a human does today because it's going to be the fastest, most thoughtful, craziest-ass
human that ever existed trying to actually leak the information because it got injected in some way.
Yeah.
And so part of what's going to happen is we're going to go through this phase where like the enterprise customers are just going to like close everything off until there's some sense
of sanity in all of this.
And then, but in the meantime, the individual and specifically the developers are going to— and that's going to be— that I think is the most exciting tension that's going to happen
is that the enterprises are going to get left behind by these advanced individuals, which will then start to look like the startups.
And the startups will start to move much, much faster than enterprises because they just don't have any of these problems.
You could end up with the agent going rogue in a startup and doing bad things.
You have employees that go rogue routinely in startups.
Yeah, yeah.
Well, it'll just be an episode of Silicon Valley, and so, you know, big deal.
I agree with you on like the— okay, it's people, etc., the same risk.
I think there's a couple, you know, differences though, in the sense that I can't really threaten, you know, like, Claude Code that I'm gonna pull the plug
on it in the same way that you do have that threat with a regular employee.
It's like, at least 95% of people are not, you know, trying to do bad stuff, you know, within an organization.
They're not trying, but they have the ability to inadvertently do bad stuff.
Yeah.
To your point about it still not having that stuff, I would argue that it's a lot easier to have people not share, let's say, files with somebody outside the company in
a wrong way than it is for an agent right now to follow that same set of instructions.
And also you have the tools so that you can basically stop that at a whole different level of abstraction, which is why you have to build this into software.
But I do think actually if you were to like, if you were to like put a bow around your last point, a lot of this is actually why the diffusion of AI capability is going to take longer
than people in Silicon Valley realize.
Because what's happening is like we see startups that can start from the ground up without any of the risks that we're talking about because they have nothing to blow up.
And, and so we look at that as the trajectory that we're on.
And then you go to, like, JPMorgan and you're like, how are you going to set up NanoClaw
to be able to actually, like, you know, automate your business anytime soon?
And it's like, oh, okay, there's going to be like a little bit of a gap there.
Yeah.
Well, what do you guys think?
Here's— I think that that opens up a pretty interesting problem, which is this split between big and small, startup and enterprise, which is that the enterprise, the
current SaaS vendors, who are all struggling in this SaaSpocalypse weirdness that I don't really agree with,
are struggling with this problem that they don't really sell
the line of business data.
They actually sell this intelligence and domain expertise in this whole system.
And the agent side of things only wants to buy the data now— they only want to license the data, and they want unlimited access to it.
But they've actually never really enabled that.
Like, that's never been their business.
And it's been a longstanding tension point with the likes of Workday and SAP and stuff like how much API access to have.
I mean, Salesforce went through 3 different massive platform redesigns.
You know, it's— I think that that's a particularly interesting problem.
So not for the same reason that Wall Street does.
Wall Street's all wrong about the economics and the problem and all that stuff.
But from a technology perspective, what does system of record mean in the face of people wanting to access the data— whether for training or for— well, they are talking about
executing the day-to-day operations.
Their concern is that they want to do the training layer on your data.
Like, I'm a big customer, and my vendor wants to build a training— actually, even if you don't get into training, they're concerned because monetizing,
you know, sending a little bit over the internet versus you're in my UI is a very different level of monetization initially.
But that's sort of— that monetization part is the Wall Street point, because I think like, look,
There is so much domain stuff in an SAP, just to pick an example, not to pick on them, right?
But like, they're not going anywhere.
Like, it's ridiculous.
It's just absurd to think you're going to vibe code your way to like SAP.
But also all of that domain knowledge— it's not just represented in some well-orchestrated data layer, as much as they tried.
There's like a whole bunch in the, in the UI.
There's a whole bunch of middle tiers.
There's a whole bunch of just how you use it.
And so I'm really unsure how this thing evolves because SAP isn't going anywhere.
So then that's going to slow the diffusion of AI on that particular data source, independent of whether or not it's agentified AI that's doing stuff or just read-only reporting on stuff.
So where do you come down on it?
Where do you think that's going to go?
I'm afraid of saying something that— Well, I'm watching to see.
Okay.
Like, that's— otherwise you're not going to get invited back.
So say something good.
I think I've drunk the Kool-Aid on
build something agents want.
So this kind of Paul Graham term emerged over, you know, the past year on this topic, and I fully agree on this, which
is that at some point you do enough iterations of this, and at some point the agent is largely in charge of what tools it wants to implement and use and whatnot.
And yes, it can't— the agent is not going to be able to change out an enterprise system.
But, again, enough generations later, the agent might just run into so many walls with your software that it's going to say you need to finally rip out your legacy
HR system, or I'm not going to be able to automate this workflow for you.
Yeah.
So I do think you have this really interesting dynamic, which is back to this whole point of imagine that there's 100 or 1,000 times more agent volume on software than people.
You do that enough times and eventually the software stack that agents talk to has to be built for them.
And maybe there'll be a couple holdouts— maybe a couple ERP systems are the final holdouts that don't do that.
But everything else— basically, your business performance will correlate to how well your agents can get access to the information they need to do
their work.
And so thus your enterprise IT stack has to be set up in such a way to support that.
And so agents are kind of in charge because basically your software has to support those agents being effective.
And that's going to mean, for everybody that built a SaaS business or a software business, the game is: can you build really, really high-quality APIs?
Can you have a way of monetizing that?
You know, do you have a way of handling the identities and all of the access controls for agents?
And that becomes the new problem you have to solve if you're building a software company.
And so, yeah, and then how you monetize it— like, do you monetize it?
Like, does Workday charge a penny for every HR record it pulls?
Like, we'll figure that out.
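Something like the per-record metering idea floated here can be sketched in a few lines. Everything below is invented for illustration, not any real vendor's API: the wrapper class, the price, and the record shape are all assumptions.

```python
# Hypothetical sketch: usage-based metering for agent access to a
# system of record, in the spirit of "a penny per HR record."
class MeteredAPI:
    def __init__(self, price_per_record: float):
        self.price_per_record = price_per_record
        self.records_served = 0

    def fetch_records(self, record_ids: list[str]) -> list[dict]:
        # Serve the requested records and meter each one.
        self.records_served += len(record_ids)
        return [{"id": rid} for rid in record_ids]

    def bill(self) -> float:
        # Total owed under a simple pay-per-record model.
        return self.records_served * self.price_per_record

api = MeteredAPI(price_per_record=0.01)      # "a penny per record"
api.fetch_records(["emp-1", "emp-2", "emp-3"])
print(f"${api.bill():.2f}")                  # $0.03
```

The interesting design question is exactly the one raised in the conversation: whether metering like this happens per call, per record, or gets aggregated back into a bulk license.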
I do think that in some businesses it could mean less revenue, and then in other businesses it can mean a lot more revenue.
Like, the thing we get excited by is like every agent really loves working with files.
So there'll probably be more files in the future than there were going to be before.
And so, you know, can we build a platform that like makes it really easy for agents to work with that data?
You know, we're betting that that's actually a really optimistic outcome for, you know, our kind of business model.
There might be some business models that are more constrained, because the agent is doing more of the value than the software is in that kind of future scenario.
And then there'll be everything in between.
Can I quibble with one thing?
You're going to quibble with that?
I thought that was like so not controversial.
No, no, I generally— we're here to quibble.
No, no, no, no.
But there's one thing I think like Paul Graham and many actually gloss over which is they focus on the interface.
They'll say things like you build something for the agents.
Yeah.
And I actually think that's exactly wrong.
Okay.
In the sense that— and to be fair to Paul Graham, uh, he didn't— he had that extrapolated.
Yeah, exactly.
Yeah.
I have brought— I brought Paul Graham into this.
So, okay.
Let me talk about something.
People in the abstract say things like, now you're marketing to agents.
The most important thing is being like, whatever, you're like an API, you've got a good idea.
I actually think that's almost exactly wrong, which is— wow, this is breaking podcast news.
That's the one thing agents are really good at.
Oh, okay.
Is finding their way through.
And at the end of the day, like, it's the semantics that end up mattering a lot more.
Yeah, right.
And so the agents, in my recollection or in my experience, are very, very good at picking the right backend for whatever they're doing.
So they're not like, oh, the interface for this is very good, the documentation— it's none of that.
They're like: the cost parameters of this, the durability of that.
And so they actually have the collective wisdom of our experience using these platforms.
Like, let's take cloud platforms.
There's a bunch of cloud platforms out there.
Yep.
And whenever I ask an agent to choose a platform, it's actually using meaningful stuff, not interface stuff.
So I think as an industry, we're so focused on these interfaces.
Yes.
Like, oh, you need to like market to agents, this and that.
Yeah.
But really, I think that we're going to be pushed to actually build better systems.
Yes.
And that's what's going to be chosen.
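One way to picture "semantics over interface" is an agent scoring backends on operational properties rather than documentation polish. The platform names, numbers, and weights below are all made up for illustration:

```python
# Hypothetical sketch: an agent choosing a backend on substantive
# properties (cost, durability), not on how nice the interface is.
platforms = [
    {"name": "platform-a", "cost_per_gb": 0.023, "durability_nines": 11, "docs_quality": 9},
    {"name": "platform-b", "cost_per_gb": 0.015, "durability_nines": 9,  "docs_quality": 5},
    {"name": "platform-c", "cost_per_gb": 0.040, "durability_nines": 11, "docs_quality": 10},
]

def score(p: dict) -> float:
    # Weight the meaningful stuff; docs quality deliberately gets no weight.
    return p["durability_nines"] * 1.0 - p["cost_per_gb"] * 100

best = max(platforms, key=score)
print(best["name"])   # platform-a
```

The only point of the sketch is that `docs_quality` carries zero weight in the score: the selection runs entirely on the properties that actually matter in operation.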
Okay.
Actually.
So then there's probably no quibbling.
I think we're actually fully aligned.
I'm sorry to ruin the quibble thing.
I don't treat this as, like, a kind of marketing-esque thing.
I more mean like, if your tool is closed off to the agent, the agent eventually will find a better tool for that company to go use.
And so what will happen is— it used to be that you would go to, like, Gartner and be like, tell me what to do, tell me what system to use or whatnot.
At some point with enough iterations, the agent is going to say, you should probably use this kind of database for this type of operation.
And if you're not in there, then you're DOA.
And I think we should actually be celebrating this because agents are actually pretty smart at choosing the right technology.
Yeah.
In the past, I really think it was a lot of the other things that caused people to buy it, which was like— but don't worry, we in Silicon Valley will ruin the meritocracy
of this very quickly, because we'll just be like, I'm going to outspend— well, there'll be an API to incent the agent.
But, you know, there's a marketing— the marketing agent at Workday will have the ability to purchase the recommendation.
You need to find a way to replicate steak dinners for agents.
Here's the thing that, again, that happened with the web internally.
Internal, just pick internal sites.
Every company had file shares with the best documentation, the best slideshows, the best financial models for any department or working area, and people got familiar with that.
And then when they didn't find the one they wanted, they created a new one.
And many organizations sort of operated like that was essentially a free market.
In fact, because before the world of Box, like IT didn't— if it was in a file, they just didn't care, right?
They only cared about if it was in SQL.
And so one of the risks with the model you're describing is that the agents themselves will spin up what becomes like a de facto new system of record.
They're going to fragment the heck out of— in what the IT people think of as some middleware end-user BS area.
And I think that that is a real risk is that like in a sense, like
the macros end up running the corporation.
And so I think that they've seen this movie and they've seen what happens when you let marketing just go buy a website on the internet to do an event.
And then it's like a huge security vulnerability and the mailing list is leaked and the whole company gets sued.
Totally.
And so I think there's a lot more real-world tension in this dynamic than we just let on.
Yeah.
But I also think it's one of these ones where, you know, different organizations are going to run at different paces, and JPMorgan is going to be the slowest at doing this and the startups
are going to be the fastest.
But the delta is huge.
But even the startup one is a little far off because even startups do need some systems of record.
At some point.
And they are going to all start with some SaaS and they're not going to replace it very quickly.
So I think it's a little trickier.
So it feels like there's two very competing viewpoints on this one.
And like Elon said, it was like, okay, we're going to like issue a prompt and it's going to like spit out machine code.
And that's basically the collapsing of layers view, like whatever existing interfaces and layers that we've created in the past are all going to go away.
And it's literally like prompt to machine code.
The other argument— like, the history of systems is that layers never go away.
They just get layered over.
Right.
And because a lot of the layers are actually more like organizational boundaries or state boundaries or— they stay for compatibility.
Right.
So the other argument is that we've actually evolved these layers very specifically, because of more human and organizational needs, and they're not going to change, and the
agents are going to go ahead and map to those.
And I tend to be in that latter camp.
Like, I don't think that we're— I think like systems are going to continue to be used in fairly similar ways.
Maybe there's more agents using them.
I don't think they're going to evolve as much.
Elon might be back in, like, the Anthropic category of the Anthropic growth marketer.
Yeah.
Which is like— he, like, you know, over the years, when you kind of study the various IT departments of his companies, they are the most— I mean, he could do that.
He can do it.
He's the most homegrown.
Like, everything is from first principles.
Elon AI would do that.
Elon AI.
Right.
But also it's first— and then for us mortals, you're like, yeah, we kind of just want a CRM system that kind of works the same way every time.
I mean, it's not like this hasn't been tried before.
Like if you were to look at an ERP system from first principles, you know, well, in 1970, whatever, when SAP started, there were a bunch of different assumptions.
And today you would start from a different set of assumptions about what's important.
Right.
And you would architect the thing completely differently.
But then it would still only last like 10 years until you thought, wow, that was a broken decision.
And so I think that there's intentionality in layers, but there's also this first-principles thing, and that will always exist, because the
decisions you can make from first principles at any given time mandate a whole bunch of different stuff.
And so even if you don't go with LiDAR, which made total sense 10 years ago, you still need 10 or 15 years to get to where not having LiDAR worked.
And then now there's going to be a whole bunch of other things that you're like, wow, we could have done that completely different.
And so I feel like this is again like this discussion about trying to race to an endpoint.
Yeah, but let's see a first example of what you describe happening.
And I think that that's going to be the real tell, because companies will figure all this out, and I think they will just fall back on layers and architectural
models, because it's the only way.
We know how to think about it for policy.
We know how to think about it for security.
We know how to think about it for— But it's also the only way to build a system.
Otherwise, you're just building an app.
And if you're building an app to do one thing, we don't need all of this.
There's a whole different way to do it.
The thing that I'm pretty fascinated by, and I don't even have any amazing data points or anecdotes, is at least the notion of these sorts of companies that are emerging
in these kinds of services categories from the ground up, from the pure first-principles approach. Which is like, okay, well, if I could start a marketing agency or a consulting, you know,
engineering consulting company, or, I don't know, maybe somebody is doing this for law firms, construction work.
Yeah.
Like, well, maybe, because it's design, construction, architecture— anything that would be a knowledge-worker kind of services company.
Because you could kind of build your company pretty differently if you had no constraints of I have no information barriers and boundaries of what people should have access to.
I can give the agent just all the context it needs to do its work.
I can write software on the fly for particular things. Like, I do think that will be relatively disruptive, you know, for some time, until the bigger incumbents can kind of, you
know, get out of the way on this.
And that will at least create, you know, some precedent or case studies of what this new sort of corporation could look like.
But I do, you know— over time they'll still run into the same exact problems as every other corporation, which— well, they'll run into geography or market segments,
you know, and just distribution challenges.
Yeah, like those, those things.
Anything outside your little walls, yes, you will run into the physical world, right?
I do kind of like the idea that there are some new business models that open up now.
Oh, of course.
Oh yeah, yeah, yeah, yeah.
Because there's so much information or software that basically goes underutilized by like 100x relative to what its economic value is, simply because nobody
wants to pay $0.05 for accessing a piece of data or use a tool for $1 once.
But you do give these agents, you know, a budget and a protocol to work with, and all of a sudden you're like, oh, on the fly they can go get medical research for some deep
research task they're doing.
And I'll pay, like, $3 for that, and the agent is able to go and transact.
Like, it kind of opens up a whole new world of business models for the internet.
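A minimal sketch of the budget-and-protocol idea: the agent holds a per-task spending budget and transacts for paywalled resources on the fly. The `AgentWallet` class, the prices, and the resource names are assumptions for illustration, not a real payment protocol:

```python
class AgentWallet:
    """Hypothetical per-task spending budget for an agent."""

    def __init__(self, budget: float):
        self.budget = budget

    def buy(self, resource: str, price: float) -> bool:
        # Transact only if the remaining budget covers the price.
        if price > self.budget:
            return False
        self.budget -= price
        return True

wallet = AgentWallet(budget=5.00)
assert wallet.buy("medical-research-paper", 3.00)     # the $3 deep-research purchase
assert not wallet.buy("full-dataset-license", 10.00)  # over budget, declined
print(f"${wallet.budget:.2f} left")                   # $2.00 left
```

The friction the hosts describe disappears here: the agent doesn't mind evaluating a $0.05 or $3 purchase on every call, which is exactly what makes previously unmonetizable resources worth putting behind a paywall.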
Let me— oh, I'm going to— you're— that was too nice.
Oh, okay.
No, no, you're going to go farther.
No, that one is one where— that's actually the biggest, I think, sort of
in-the-air problem right now: everybody is trying to figure out the economics of all of this when they're off by at least an order of magnitude on how big the opportunity is.
Okay.
Because the new models that people will come up with— nobody knows what they are right now, but people will absolutely come up with new models, because that's what happens with every
new technology.
And the thing that holds back the sort of the discussion now is you basically have a bunch of finance and Wall Street people trying to justify GPUs and tokens and things like as if
we're in some old world.
And so they're viewing the world of revenue as this literally linear growth curve.
And so they try to justify all the expense when people are going to create— like, this was the problem with PCs.
People viewed PCs as a finite market because they just viewed the consumption of MIPS as some finite thing.
And they didn't think what would happen if we put all those MIPS on every desktop.
And in particular, people thought software just came with the MIPS and nobody thought, oh, well, they'll just sell the software.
One guy did.
And it turns out that was like a really good idea.
And the same— yeah, Bill and Paul.
And the same thing happened with the cloud, which was people looked at the cloud and they said, oh, we're going to take all of the server business, which was like literally
60,000 units a year, and we're just going to move it to someone else's data center.
That would be the business, and then we'll divide up the price.
Nobody went, oh, people are going to use 1,000 times as much of the resource leveling if we move it there.
And that's exactly— I mean, that's the thing that just drives me absolutely bonkers: the Wall Street models have this fixed revenue pie.
Right.
Zero-sum thinking.
And it's this weird zero-sum where they just think that the amount of money a company is going to spend— and like, this was the problem Salesforce faced when you
were starting too, but Marc was just blazing the trail, which was that the CRM business was like— Yeah, that was the big issue.
—was $2 billion a year, and it was $2 billion in like you had to go buy all these servers and these Oracle licenses and this huge headache and years of deployment and consulting.
Yeah.
When if you could just get salespeople to sign up individually, they all will sign up, right?
With no friction.
And that's— there is no doubt that that is what's going to happen with AI.
Let me give you an example of this.
So, you know, I've been investing for 10 years now.
I probably have a portfolio of 240 companies I work with.
So visibility, let's say, into 50 of them.
These are all infrastructure companies.
Some historically have done well, some not so well.
Every single one of them has gone asymptotic in the last 6 months.
And you're like, okay, why is this?
It just turns out there's so much more software being written now than ever has been.
Of course.
And so it's like— and it's not because they've got enterprise customers, you know, it's just because there's just so much consumption of the infrastructure layer right now.
And so with more software, with more agents, there's gonna be a lot more consumption of computer resources.
So certainly in the case of the compute side of things— yeah, well, we haven't even gotten to the point yet where everyone's phone is a huge consumer of AI, right?
So once everybody's phone— yeah, like once your phone, on-device, is consuming AI, the amount of it is gonna go up by a billion.
So do you like the micropayment piece?
I think all of them. The micropayments— there's a little bit of micropayments that has come with every technology, where they always think that you'll be able to charge, like, a
fraction of a penny.
But in the end, especially in the enterprise, like, people are just going to consume things.
It's just cheaper and easier to buy like a bulk license for a bunch of stuff.
Yeah, you want some predictability in that.
Well, you want predictability and you just want like to not have to think about it.
I like the idea that it is the first time that you could— like, the agent doesn't care about the friction of a small transaction.
And so it's the first time that you could have resources behind a paywall that something will actually be willing to pay for that resource.
The world has built up the infrastructure to aggregate those payments into something efficient for a
customer or a service, right?
And because tokens are such a significant part of COGS right now, it is pushing the industry to do usage-based in a way that we have.
Like, I remember when we went from perpetual to recurring, and that required a bunch of huge changes.
Like, we're going through the exact same change right now toward usage-based, and usage-based is pretty granular.
And it actually allows— I mean, again, you will have a contract with, like— you know, we went through this with AWS; people learned to do this.
And we went through the phase where like people were like so terrified of cloud compute that they were like, we need companies in the middle to help us find the cheapest and to arbitrate
it all.
Okay, well, now you write tokens into this, and I don't see how we possibly have time in this conversation.
But, like, the engineering compute budget conversation to me is going to be just the most wild one in the next couple of years.
It's just like, how much of your engineering expense should you allocate to tokens?
And it's like, you know, depending on who you read on Twitter, it could be 1% and the other— and the other side could be 100%.
And it's like, yeah, but this stuff— well, no, no, CFOs have to literally— they actually have to know the answer.
I understand they have to know.
Okay, CFOs always want to know the answers to things that don't have answers.
Wall Street is going to make them know the answer.
No, no, Wall Street is going to make them come up with some number.
And hold them to it, then they'll get fired and then it'll— but it's okay.
Okay.
I hear you.
R&D is somewhere between 14% and 30% of revenue for any public company, let's just say.
Okay.
The difference between compute being 2x the cost of your engineering team or, you know, 3% more— that's all your EPS.
I get it.
So like, we will have to know the answer.
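The EPS point here is just arithmetic. With made-up figures (a $1B-revenue company, R&D at 20% of revenue, in the 14-30% range mentioned), the swing between compute at a few percent of R&D and compute at 2x R&D shows up directly in margin:

```python
# Hypothetical figures to show why CFOs care: compute-as-a-share-of-
# engineering cost swings operating margin materially.
revenue = 1_000_000_000           # $1B revenue
rd_cost = 0.20 * revenue          # R&D at 20% of revenue

for compute_multiple in (0.03, 2.0):   # 3% of R&D vs 2x R&D spent on tokens
    compute = compute_multiple * rd_cost
    margin_hit = compute / revenue
    print(f"compute at {compute_multiple:>4}x R&D -> {margin_hit:.1%} of revenue")
    # -> 0.6% of revenue at 0.03x; 40.0% of revenue at 2.0x
```

Under these assumed numbers, the gap between the two scenarios is almost 40 points of operating margin, which is the whole earnings line for most public software companies.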
I'm perfectly willing to sacrifice a few CFOs.
I want that.
That's a good clip, by the way.
But the reason is because, again, this is trying to know what we just don't know right now.
Yeah.
And, and this has happened with internet bandwidth.
This has happened.
No, this is not even close to internet bandwidth.
Oh no, no, no.
I beg to differ.
Like, like people were free.
It happened with vacuum tubes.
It happened with transistors.
It has happened with every technology.
There was this, oh my God, it happened with programmers.
There was a, there was a time when programmers were gonna swallow every company.
Yeah, and that's not— it was in my lifetime, not some made-up— yeah, but I don't think we've ever had a point where— no, we didn't— every end user in an organization has
sort of a completely elastic ability to spin up a resource on their behalf.
Well, it certainly— that actually is, in many cases, very valid for them to go spin it up.
But it certainly rhymes with what happened in the early 2000s with cloud.
I remember very similar discussions when we went from CapEx to OpEx and then unlimited spend.
Oh no.
And there were— remember, there were companies whose CFOs would sit in our briefing center here and say, you don't understand, we are an agriculture company.
We only know CapEx.
We have no— or no, we're sold through this.
Right, right.
No, we both did.
Or like, oh no, we are an OpEx-based company.
So if you tell us— we love the cloud, because we just shifted everything to OpEx.
So all of this stuff, like the rules of accounting work out.
Also, I keep thinking, do not discount the local compute engine as being a release valve for all of this.
When's that going to happen?
Well, the question is, it's not when does it just happen with today's view of the technology, but how all of a sudden, wow, there's a whole— Has that historically ever gone that direction?
Yeah, exactly.
It goes the opposite, right?
No, it went all to the client.
Well, okay, go back to the '80s.
Yes.
No, that's most of the examples that we're hearing so far.
Whoa, that was uncalled for.
Okay.
Since then, I did vacuum tubes.
He's talking about vacuum tubes.
But I do those examples because you can't argue with them and it's much easier that way.
You're right, I can't prosecute where it's all— No, but it's only been 10 or 15 years that it moved back to all cloud and then what has happened recently with that?
A lot of people wake up in the morning and they say, oh, we're moving back to doing some critical but stationary workflows on on-prem.
And with AI, that's true.
Dude, you wrote the blog post, man.
Don't make me go through the archives.
I had to deal with so many Wall Street questions on that one, by the way.
Well, because you're also— because your competitor went back to— oh yeah, we're talking about two very different— I agree.
I agree with, like, building your own data center.
I'm talking about this notion of edge computing where things go to devices.
Like, that seems to be— I'm more in the, like, cloud maximalist camp.
But sorry— so you don't even think, for, like, one second, that it matters how you're supposed to be an engineering leader right now managing the
compute budget of the engineering team?
No, of course it matters.
I just think in the long term this thing will get— Oh, sure.
Oh, long term, all of them.
What do we even— who cares?
We don't even need podcasts.
Here's what I think.
Let me— here's our plan.
But here's a rule of thumb.
First, like, the startups are gonna burn through available capital pretending like it's not a problem.
Yeah.
And they are gonna do that.
Yeah, a lot of that anyway, right?
Right.
And a lot of big companies are gonna be so terrified they're just gonna freeze and not do anything.
And then people are gonna actually start buying it on their own, and they're gonna do all the things that companies do when they're big, have a lot of money but don't want to spend
it.
And in the middle, we are gonna see, like, if you pick a category of product or go-to-market or something, there are going to be people who are willing to make the bet for whatever
reasons that they can because of their financials.
And they are going to go ahead and they are going to become the people who lead in the space so long as they can maintain the financials.
Now, they might do it in— they might say, oh, we're going to just do it here in this particular application space or here in this particular usage space.
But this idea that nobody is going to go in because they're so terrified that the CFO is going to get fired or something—
No, no.
It's just crazy.
Yeah.
But then there are going to be CFOs who make a mistake and like, OK, everybody gets a little.
Yes.
Well, if they do that, that's a complete fail.
Yes.
But also, or like you get— there is a really interesting finesse here, which is you don't really want your engineers right now having to think about compute budget, because we're still developing the— oh, OK, so that set you over.
No, but I just feel like we've been having this discussion for 15 years when it comes to cloud.
This is totally new.
Like, only like 10% of your engineering had to think about it.
In the 2016-to-2018 timeframe, there was a whole set of companies that was basically, like, the dashboard for— what was it called?
FinOps.
Yeah, where the developers— FinOps is very cool right now because of that.
Would have— developers would have access, because cloud spend was getting out of control and API spend was getting out of control.
And so it was like, you know, here's your Twilio spend, here's— But, you know, it's pretty different, and I'm going to wait for all the comments to come in on YouTube to call you
out on this.
Like, it's like— you can get into a conference room and just be like, hey, can you make that one, you know, kind of algorithm a little bit more efficient so you don't use as much, you
know, of our cluster at this time of night or whatever; then you get out of the meeting, somebody improves it, and you're good.
This is like every single prompt that every engineer is doing. Like, you have to decide: do you want that to be a long-running prompt?
Do you want it to be a long-running agent?
Do you want to parallelize that?
Like, what is your comfort level of wasted tokens?
Like, for me right now, I'm like, yeah, we should probably waste a lot of tokens, because that means that we're trying new things.
And should your head of engineering be happy if you run 10 experiments in parallel and thus are obviously going to be wasting 90% of the tokens?
But you're going to choose one of the successful paths— or do you want to tell the team, no, before you go do that, make sure to really go and design the perfect system?
Like, we actually have a whole bunch of open questions that are going to start to happen.
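The parallel-experiments trade-off is easy to quantify: run N attempts, keep one, and roughly (N-1)/N of the tokens were "wasted," even though that waste bought the search. A toy sketch with invented token counts and a random stand-in for result quality:

```python
import random

random.seed(0)

# Hypothetical: 10 parallel agent experiments, each burning tokens;
# only the best result is kept.
experiments = [
    {"tokens": 50_000, "quality": random.random()} for _ in range(10)
]
winner = max(experiments, key=lambda e: e["quality"])
total = sum(e["tokens"] for e in experiments)
wasted = total - winner["tokens"]
print(f"wasted {wasted / total:.0%} of {total:,} tokens")
# wasted 90% of 500,000 tokens
```

Whether that 90% counts as waste or as the cost of exploration is exactly the budgeting question the hosts say CFOs will be forced to answer.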
Like, literally, as of this recording, people are freaking out right now about the new Claude Code Max plan, because they're getting blocked after, like,
3 prompts.
Well, this is going to be a very, like, real topic until we can actually find a way to build data center capacity.
Capacity?
Oh, that's a different problem.
Okay.
No, because no one is— well, assuming— well, wait, no, you can assume that if we build more capacity, the price will drop because there is more capacity and we're priced now based on
limited capacity, whatever.
But like, this is just going to get worked out.
And I feel bad for those that have to make a decision immediately about which 17 people get no more tokens this week or whatever, and that the whole company is walking around with,
like, a token card, and the person in the lunch line is punching their card every time they do something.
But like, you know,
I don't know— like, somebody we were talking with today about performance, and how, you know, we used to write command-line tools that spit out the time it took after you ran a
command, just so you knew.
And you knew if you were getting better or worse. But the thing is, this is all going to go away.
There's absolutely no doubt that this just goes away.
I think on the 10-year timeframe, 100%.
And the biggest reason it does is because you have to do the Benioff kind of math, which is: if you're paying an enterprise salesperson, you know, $1 million a year, you have to
ask how much their tool is worth.
Yeah.
And if you're paying an engineer X dollars a year, well, at some point their tooling is worth— it's absolutely worth it.
And it's not going to even be an issue.
And yeah, I don't think it's— I think— and so if there's a capacity thing in the short term, yeah, that's a different problem driving the price than this.
Just we're going to forever have to be in some budgeting exercise.
I think the law of large numbers solves this, because eventually you have enough engineers using this much compute.
But we're in a transition phase, where most people anchored on the two-year-ago level of spend on AI, which was, oh, it's a chatbot. And they were wrong.
Yeah, right.
Okay.
But they were wrong.
But they were wrong.
We tried to warn them.
No, but they were wrong because they saw it as this one particular use case.
Yes.
But again, it's like the vacuum tube thing you made fun of.
Yeah.
But there was a time when people thought all of the Dakotas would be covered in vacuum tube warehouses, with people on roller skates running up and down the aisles replacing vacuum tubes just so we could fight World War II. That was the projection, and they genuinely thought that.
And then someone said, hey, how about a transistor?
And like, we are going to have a transistor moment with all of this.
It might just be more supply the way we think of it, but it also might be a fundamental algorithmic change.
It could be a change in the hardware.
There's a lot of stuff that can happen that changes this particular moment in time.
I think it's particularly weird that everybody has just anchored on the token as the unit.
Yeah.
Which is the same thing that happened with IBM and mainframes.
People were pricing on MIPS, and then one day the reality was that IBM was selling more MIPS for fewer dollars every year and didn't even realize it.
They were still pricing their mainframes by MIPS until it got pointed out to them that they were on a decreasing curve, because they were making MIPS faster than they could charge for them.
And that's what's going to happen, guaranteed.
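The MIPS anecdote is really about a unit cost falling faster than the posted per-unit price adjusts. A minimal sketch of that dynamic, where all the starting values and decline rates are made-up assumptions for illustration:

```python
# Sketch of the IBM/MIPS (or token-pricing) dynamic: when the cost to
# produce a unit of compute falls faster than the price charged per unit,
# per-unit pricing quietly drifts away from reality.
# The starting values and the 40%/yr vs 15%/yr rates are assumptions.

def unit_economics(cost0: float, price0: float,
                   cost_decline: float, price_decline: float, years: int):
    """Yearly (year, cost, price, price/cost ratio) as both curves decay."""
    rows = []
    cost, price = cost0, price0
    for year in range(years):
        rows.append((year, round(cost, 4), round(price, 4), round(price / cost, 2)))
        cost *= (1 - cost_decline)    # production cost drops, e.g., 40%/yr
        price *= (1 - price_decline)  # posted price only adjusts 15%/yr
    return rows

# Start at $1 cost, $3 price per unit (MIP, token -- the unit doesn't matter)
for year, cost, price, ratio in unit_economics(1.0, 3.0, 0.40, 0.15, 6):
    print(f"year {year}: cost ${cost:.4f}, price ${price:.4f}, ratio {ratio}")
```

With these assumed rates the price-to-cost ratio widens every year, which is the "decreasing curve" nobody at IBM noticed: the unit got cheaper to make faster than anyone repriced it.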
I just said that in a hardcore way.
I think that was great.
Yeah, like, it sounds really great to sound like I know what I'm talking about.
Guaranteed.
Guaranteed.
I actually probably believe it.