In this conversation, Daniel Faloppa and Dan Gray explore the rapid rise of AI investment, the differences between AI infrastructure and application startups, the implications of general intelligence for business models, the impact of AI on work and creativity, and the competitive landscape between incumbents and startups. They also discuss the tension between hardware and software investment, the risks of overinvestment in AI and its opportunity costs, the importance of understanding market dynamics and moats, and the potential for hardware to provide sustainable competitive advantages. The dialogue emphasizes the need to recalibrate investment strategies and the role of innovation and productivity in driving value in the economy.
Listen on Spotify
Listen on Apple Podcasts
Listen on SoundCloud
Takeaways
AI investment has accelerated significantly since the launch of ChatGPT.
The current AI investment landscape resembles the dot-com boom.
AGI discussions often shift, making it hard to define progress in AI.
Current AI models exhibit a level of general intelligence.
The future of AI may lead to fewer jobs but increased productivity.
Distribution is becoming a key competitive advantage in AI.
NVIDIA is profiting immensely from the AI boom.
Startups must innovate in distribution to compete with incumbents.
The best creative outputs will still require human involvement.
Incumbent companies may struggle to leverage their advantages effectively.
Coding new things is easier compared to the past.
Europe excels in hardware innovation but lags in AI software.
Investment in AI software may overlook hardware opportunities.
A competitive landscape with many LLM companies is safer.
Capital deployment can accelerate innovation significantly.
Overinvestment in AI raises questions about opportunity costs.
Productivity growth is key to increasing overall value.
Constrained resources will continue to drive prices up.
Understanding moats and competitive advantages is crucial for startups.
The future of work may be transformed by AI and hardware advancements.
Chapters
00:00 The Rise of AI Investment
09:14 Understanding AI Infrastructure vs. Startups
18:22 The Future of AI: General Intelligence and Business Models
26:47 The Impact of AI on Work and Creativity
34:22 Incumbents vs. Startups in the AI Landscape
36:20 The Evolution of Coding and Data Utilization
39:02 Hardware vs. Software: The Investment Dilemma
42:45 The Impact of AI on Capital and Innovation
48:00 The Risks of Overinvestment in AI
53:53 Productivity, Value, and the Future of Work
01:00:57 Navigating the Future: AI, Hardware, and Market Dynamics
Transcript
Daniel Faloppa (00:00)
So, we're doing the AI podcast that we talked about for a while. I think, just to set the scene, we could look at what's happening in AI startup investment right now. How are things evolving? And I know you follow a lot of the literature, what's written about this, and the data. What's happening?
Dan Gray (00:20)
I think probably everybody exposed to this at all knows by now that it's accelerated rapidly in the last couple of years, basically since the launch of ChatGPT. But, you know, actually it's kind of funny. It goes back — it's a field that's existed for a few decades, and VC investment in AI has been happening for a long time, but then you had this LLM explosion with ChatGPT and
just a tremendous amount of money being invested in this type of AI. I've got a few charts we could look at quickly to help illustrate that trend.
Okay, can you see that? Right, perfect. This is up until the 26th, 2023 — I think that's a mistake on PitchBook's part, it should be January 2024. But it's really to show that, up until the beginning of this year, that tremendous acceleration in the number of deals and the amount of capital
is kind of crazy. It's reminiscent of the dot-com boom, for example.
then there’s the Stanford AI index report shows a similar thing. You could see obviously investment was going into AI before chat GPT, before this moment, but at a much lower rate.
this goes back a bit further to 2013. this chart particularly is showing the investment by region. And what’s most interesting to me is it was the US more than anyone else that in 2023 exploded. again, it’s related to OpenAI, XAI, Cohere, Anthropic, they’re all based over there. Yeah.
Daniel Faloppa (02:08)
Grok, Perplexity,
yeah.
Dan Gray (02:11)
Yeah, absolutely. All the giants, particularly the LLM infrastructure or LLM model companies, the ones that are absorbing all of the money to build the data centers, do the processing, buy the NVIDIA chips and so on — they're pretty much all based in the US. I mean, of course, there's a lot in China too, but the ones in China, I guess, are getting funding from different sources.
Daniel Faloppa (02:31)
and to the right.
not in the charts. They don't make it into the PitchBook data.
Dan Gray (02:41)
Yeah,
yeah. And again, a slightly different perspective: rather than geographic area, this is looking at sector. There are lots of shifts, and strangely you actually see a lot of sectors falling a little bit from 2022 to 2023, except for AI infrastructure, which exploded between those two years.
It's night and day. It went from one of the smallest sectors to by far the largest in one year, which is crazy.
Daniel Faloppa (03:16)
And I think
we saw a little bit of a contraction of the hype, or at least people starting to ask questions, maybe in Q2, Q3, before o1 — and then, yeah, this year, right? And after that we basically went back, forgot about those questions, and we're again on full speed ahead, it seems.
Dan Gray (03:29)
of this year. Yeah.
Yeah, absolutely. There was already evidence as of the first quarter of this year, as you were saying, of a bit of a drop in the level of interest. It does seem to have reversed a little bit in more recent data, so I wouldn't read too much into that. But certainly the exuberance of last year — I'm not sure we'll see it return to that level, but maybe.
Daniel Faloppa (04:00)
indeed.
and we know that — what is it, 95% of the Y Combinator cohort is doing AI, or at least claiming to, but it's likely that at least 80% actually is. The whole funding world, or at least whatever makes the news, is revolving around AI, right? So maybe the first topic is, you know:
Is that a good thing? And why is it happening?
Dan Gray (04:40)
I think it’s also important to draw a line between, this is where a lot of the data fails, like LLM infrastructure, the big companies making the models, the ones that are absorbing the crazy amounts of capital, and then all of the other much smaller startups who are using those models to do stuff. you know, some of those are quite big as well, but like, it’s kind of a different game, really.
Daniel Faloppa (05:06)
Yeah, for me, there is a parallel in how things are behaving, but then a big difference in the fundamentals when you look at the SaaS wave, right? The SaaS wave had low valuations at first, capital was accumulating and starting to get into all these companies, and then a ton of them, right? And I'm talking here about the application layer, not really the…
Dan Gray (05:18)
Mm-hmm.
Daniel Faloppa (05:33)
Like, let’s say that the LLM have enabled like a new type of business model or a new type of thing, right? Kind of like SaaS did, right? And then SaaS saw all that growth. But it looks like that’s the trajectory. Everybody’s after that. But the differences for me are huge. Like the SaaS was replacing a cost center for companies that had to have infrastructure in prem and
and everything and it was ending up in Capex and they didn’t want it in Capex. So all that really helped ton of revenue for SaaS companies plus the fact that revenue is recurring. Nobody really understood that at the beginning. What type of risk does that create? And these companies were created when software was a lot more expensive. So you had to raise five million at the beginning.
know, 5 million to actually bring the innovation to market in a reasonably decent way.
Dan Gray (06:34)
Mm-hmm.
Daniel Faloppa (06:34)
What's happening now, it seems to me, is all these verticals trying to apply LLMs to a problem that is fairly small, in a sense. We're trying to apply the SaaS model to problems that are fairly small, right? But we don't have any of that money switching out of CapEx. There's no different business model. It's just a different feature. And there is…
Dan Gray (06:59)
Mm-hmm.
Daniel Faloppa (07:00)
The game is known, there is no more defensibility. If you make software that is fairly easy to build, especially now with AI, two months later you're going to have five competitors. God knows about these waves of developer tools — this week it's Cursor, next week it's… I forgot what the new one is called, Windsurf or something like that. That, for me, is really hard
to do. These companies and these investors, I feel, are being attracted by the endgame, the AGI, the big payoff, but then we are not considering how to defend those markets. We still have this paradigm of winner takes all, but if the software can be rebuilt in a week — or at least the core feature that gets you popular
can be rebuilt, maybe in a better way, even in months — that is not a sustainable competitive advantage. So I have my doubts, and I know you have yours, about this type of company.
Dan Gray (08:05)
Well,
you mentioned AGI. This may be something I've mentioned to you in the past, but it's always been funny to me since the beginning of this wave. First of all, AGI appeared as a term, I think, to try and help people understand the difference between today's AI and AGI. And then there's ASI, which Sam Altman is now referring to. It all
feels a lot like moving goalposts, because if you go back a couple of years, they were talking about how general AI is coming soon — it's within a few years rather than a few decades, that kind of thing.
And if you go back to 2014, 2015, when I was at Data Economy and we were working with companies doing neural network type stuff, precursors to large language model type things, nobody called that stuff AI. You would have been kind of left out a little bit if you did, because it was machine learning, it was neural networks, it was possibly small language models
or medium language models, but nobody called it AI. AI was this thing where machines learn to reason and think like humans, and that's it — the sci-fi concept. There was kind of a purist way of thinking about it. But today we have so much confusion about what it means, so many different definitions. And it all feels like it's because that's the North Star they're using to try and bait people's enthusiasm.
Daniel Faloppa (09:37)
Yeah, no, I mean, I agree. I don't know — for me, you know, I am very optimistic about these things. The ability that even ChatGPT-4o and the other models have today to perform tasks, and the way that we use it in our work even today for multiple things — checking text and improving text for interfaces,
making better follow-ups. So that's already replacing some work, and I know copywriting was the first thing it did. In terms of the terminology, to me it really feels like the difference between painting and photography. Or let's say digital cameras versus analog cameras. At some point, digital cameras were getting better and better.
Like I’m sure there wasn’t maybe a name for it, but somebody had the name for like when digital is going to be better than analog. But then nobody noticed specifically the day when that happened. But then in like two years, digital cameras had 80 % market share and it reversed in two years. So don’t know, to me, like, the specific terminology doesn’t really matter. It’s more like…
What can you do with it? And what are the implications for people, for companies, for investors, for
Current AI is already fairly general, in my opinion. It doesn't have arms, so it can only manipulate digital stuff, but it can read and understand and more or less operate on any website — be it a doctor's website or a construction booking platform, anything. It can understand what it is, it can tell you
things about it, it can operate it — maybe still a little bit janky on that, but it's happening. So in my opinion, it's fairly general.
You know, again, yeah, what does it allow you to do?
Dan Gray (11:40)
Yeah, it’s just like,
then like the question is like, okay, that you could qualify that as general intelligence. Like, is it we used to know AI? I’m like, maybe that’s not a very helpful definition, but you think about for example, and like how valuation is essentially looking into the future the potential of companies to try and understand their worth.
So you’ve got to think about like, if you’re fundraising for open AI or Anthropoc or whoever, what does their business model look 10 years out? And they’re telling us we’re going to have like true AI and that’s what they’re building their story on. Is it actually realistic? Can you actually do that with LLMs or does it need fundamentally different architecture? Which is what I think, know, Elia who left open AI, I think that’s his belief, which is why he’s departed.
if that’s the case, then there’s a huge amount of more risk in OpenAI’s story, because now they don’t have the architecture to do what they’re kind of hinting that they were getting towards.
Daniel Faloppa (12:42)
Yeah, I don’t know. And that’s right? I don’t know. For me, the opportunity there is that if you don’t get there, right, with whatever we have today and maybe the next six months, like refine the tools that they can use websites, they can use things, you have…
the knowledge worker tractor. You know, you go from like the carrying the thing to a tractor, right? You can increase your productivity by a lot. If one single company does it and it adds, I don’t know, 100 trillion of productivity, then how much is that company worth, right? And so that’s, think, a thesis that I support personally,
I don’t support the fact that it’s gonna be one single company doing that. it looks like everybody is betting on the fact that it’s gonna be one single company. To me, because of how quote unquote easy these things are, and we need to thank the open source community for that, we need to thank open research community, like all those kind of things.
Dan Gray (13:44)
Mm-hmm.
Daniel Faloppa (13:47)
how easy, quote unquote, they are and how easy, quote unquote, software is becoming, I don't see how one company is going to dominate and we're not going to have a hundred gazillion little companies, which is, in my opinion, better for everybody. If one dominates, it's because it gets to this ethereal AGI, or whatever they call it, superintelligence, and
it can evolve at such a pace that the other ones cannot even compete. And of course, that sounds a bit doomsday, but… Exactly. Yeah, indeed, indeed. And investor returns and things like that are secondary at that point. I do see, you know, the best-case scenario, maybe the optimist scenario, being that this is a
Dan Gray (14:14)
Mm-hmm.
Yeah, at that point we have bigger worries than a monopoly, probably.
Daniel Faloppa (14:33)
tractor for knowledge workers. And then we get into sort of macroeconomic theory, or capitalism itself, but we hopefully get to a place where resources take even less time and work, right? And then all of us can maybe work less, or things like that, because so much is done by other systems.
Dan Gray (14:59)
I mean, that’s a good question. One of the notes I have written down to discuss was like, you imagine that AI, generative AI, whatever you want to call it, if it makes everything, like take the movie industry with like Ben Affleck’s talk that was released earlier today, the video that’s gone around and fairly popular, talks about how it will provide tools.
will save off a lot of time in many of the processes and components of building a movie. it does that, it cuts down the time involved and then the outcome of that is it reduces costs. The end result is you have fewer dollars being circulated. So maybe then you could say those dollars can go into producing more movies and more content, but there’s a ceiling to that. Is that overall better?
don’t know. then a degree to which people need to be in work, so people need work to do. And if you start cutting out loads of work, this is kind of where you get into the political side of it. And yeah, it gets very messy.
Daniel Faloppa (15:51)
Yeah.
Yeah, I think it’s interesting
though. It gets messy, but it’s very interesting. the Ben Affleck thing, thing that I read into it, he was saying like AI is gonna simplify things so that instead of making one season per year, you can do two seasons, right? For me, that…
equation is like instead of making one season per year, you’re going to make 2000 seasons. Right. That’s that’s the difference between the analog camera and the digital camera. Right. you had to you could make 20 pictures, you had to bring them to being developed by somebody and then you would have them printed. And of course, you have limited physical space. Right. Whereas now at some point I wanted to check like how many pictures have been taken of the pyramids.
Dan Gray (16:42)
Mm-hmm.
Daniel Faloppa (16:48)
Like, how many trillions of pictures of the same exact thing, probably all slightly worse than whatever best picture has won the National Geographic award, right? So that best picture is still going to be one, but there are going to be 20 trillion pictures. The same, I think, is going to happen with movies and things like that. And when you bring down the cost of making a movie so much, what can you do with that format, right?
Dan Gray (16:54)
Yeah.
Daniel Faloppa (17:14)
Maybe you can do personalized movies that make every single person feel like they're the protagonist, right? No matter their gender, no matter their… a specialized movie for each person, or their language, obviously, or their understanding of the world. I don't know, right? That's what I'm saying: even with current tools, or an evolution of the current tools, the changes
for me can be incredible in a lot of places.
Dan Gray (17:44)
really like that’s we could diverge there and into a whole rabbit hole about like creativity and AI and what AI is really capable of doing. Like I’m definitely more of a skeptic than you are on that. I definitely take Affleck’s side that humans will always be involved and like specifically that the very best of those fields, whether it’s like movies or TV shows or books, are going to be like
visionary person at the top directing everything that happens below.
Daniel Faloppa (18:17)
Sure, that I do believe as well. And it might be that those ones rise so much to the top — like actors these days are, you could argue, a lot more famous than…
because we have a globalized market, because, you know, the really iconic ones are very few and they are the same ones for the whole world. It could be like that also for a specific creator in a specific field, right? Because, out of everybody else, the specific talent is going to rise to the top. So I do agree with that.
Dan Gray (18:42)
Mm-hmm.
Daniel Faloppa (18:56)
It could be that we get into so many specific niches, and we generate smaller hierarchies, and then everything is a bit better distributed. That's also a pretty okay option. For me, the argument that people have on the work side is… we're looking at it from the point of view that everybody needs to work,
like we need to find work for everybody, rather than the more basic version of that, which is that everybody needs to survive and have what they want, in a sense — so what they need and what they want, and we can talk about where the line is. But as more and more of the less and less basic needs are satisfied, because things get cheaper,
Dan Gray (19:33)
Mm-hmm.
Daniel Faloppa (19:48)
we can work less and less, in a sense. We don't all need to work, or if we work, we can work in the sense of doing some research and things like that. This is not possible if a single company captures the whole value and the only people that are allowed not to work are the shareholders and the people that were part of the capital of that company. So the idea of a universal basic income, or a technology, productivity-based tax —
which of course has its own prisoner's dilemma problem, nobody's going to do it first — these types of things to redistribute gains would, I think, be, again, the most optimistic. This is the most optimistic scenario that in my mind can happen. I don't know how realistic it is, that's for sure.
Dan Gray (20:21)
Mm-hmm.
You
in the near term, let’s say, there’s, don’t think it’s today. I don’t think it’s going to be even within the next two or three years, but you have a ton of new vertical AI solutions that are available in pretty much any industry you can think of. And they displace old SaaS solutions. And probably they do it initially by like lowering costs to get people on, but then ultimately probably the costs are going to rise to broadly what they were replacing to eat up all those budgets.
And then you just have like, it’s almost like a.
change really. I think SaaS represented a bigger change in the realities of the businesses and the economics of the businesses than AI will in the next five, maybe to 10 years.
Daniel Faloppa (21:20)
Yeah, could be. It could be. It's interesting. I think at the level of costs and prices, I wouldn't assume that the new tools are going to cost the same, are going to replace the old ones and get into those budgets and so on. For example, I was already seeing some developers making their own Notion just to avoid paying 10 euros a month for Notion, or however much it is.
You know, some simple things you can make yourself — and it's so much simpler with AI — host them yourself, and save money that way. Yeah.
Will it be bigger than SaaS? That's a good question. That's a good question. And how much of those gains can you capture? Already SaaS, in my opinion, managed to capture some gains because pricing got anchored at the wrong levels — you know, a lot of these tools that are 10 euros, 15 euros. And it's a little bit the same right now for Substacks. Some Substacks, no matter how good they are…
Like the value that you get from one single Substack for $10 per month, compared to a subscription to, I don't know, the Wall Street Journal or your media outlet of choice — it's an incredible difference in pricing, right? And it's almost, in my opinion, all anchoring. There is no specific reason, right? Of course, nobody is becoming a trillion-dollar Substack, so…
Dan Gray (22:42)
Mm-hmm.
Daniel Faloppa (22:48)
You know, not it’s probably has some fundamental mathematics underneath it to just sustain the single writer. but yeah, the price anchoring is huge. And in SaaS, I think it was also huge. The first company started like this 10, 15 euros a month. But then certain should be more expensive like Spotify. Spotify hasn’t managed to to make a dime in the past 10 years.
because the cost of what they’re selling is higher. Other websites still pricing at 10 euros per month or something like that are making huge margins because what they sell is a lot cheaper. So I don’t know. I think there is a lot of mispricing in the world and a lot of encoding of stuff.
Dan Gray (23:35)
It’s interesting to think about in a world where you can build your own notion and suddenly there are competitive modes are almost non-existent for products. Potentially it becomes a much more service driven thing where almost all the value that is offered by those companies is now in what they actually provide to the user on an ongoing basis as a service, whether it’s updates or support or additional features, whatever it may be.
even then, prices will probably have to come down as well. Those companies providing those services will also have to use AI to find efficiencies to live with lower prices. yeah, that’s an interesting one. I hadn’t thought about that.
Daniel Faloppa (24:19)
Yeah, and then the next question is: what are the moats, right? Because the cost of software development — going through making a very complicated product that takes a long time to build — is not there anymore, right? So data, networks — moats like that are…
Dan Gray (24:26)
Mm-hmm.
Mm-hmm.
Daniel Faloppa (24:41)
still very interesting. And I think one thing that a lot of new startups are not thinking about is distribution. Because if you manage to lock in the distribution and be first in the mind of your customers for a specific reason — for example, Perplexity with that brand of AI search — that's a fairly good moat. Like the…
Dan Gray (24:58)
Mm-hmm.
Daniel Faloppa (25:03)
advantage that distribution has — the power that distribution is going to have versus production — is shifting, right? Twenty years ago, it was all about: can you make the website at all? And if you made it, people would find it, because you were the only one able to make a website like eBay, right? Now it's the opposite. There are a billion eBays, but there is only one MrBeast, right? He has the distribution
Dan Gray (25:21)
Mm-hmm.
Daniel Faloppa (25:30)
to get to the audience. So yeah, it's a big difference.
Dan Gray (25:35)
That reminds me — a few people, even going back a couple of years now, have been talking about AI as an incumbent strategy, because Adobe, Microsoft, Salesforce, those kinds of companies, they have the data and they have the distribution. So for them, it's a huge value add that's relatively easy: they have everything they need to train the models and everything they need to put it in the hands of the consumer.
As a startup, you don't necessarily have either of those things. You can use an open source model or whatever, but it has limitations in terms of specialization. So that adds to this argument of whether or not AI — going back to thinking about it as an investment trend — is just driving money into the arms of incumbents. Thinking then, obviously, of NVIDIA as well,
Daniel Faloppa (26:14)
Yeah. Yeah.
Dan Gray (26:32)
which has made an insane amount of money selling the chips that power all of this.
Daniel Faloppa (26:36)
Yeah.
Yeah, it does feel a little bit like those gold rush moments where the only people making money are the people selling the pickaxes, you know — at least for now. And I do think also — we were talking today about Apple Intelligence, and it seems so far that the
AI, yeah, it's been made, but it's not very useful. It might even be sold to customers, but nobody's really happy about it. Copilot too.
Dan Gray (27:09)
The Adobe tool
is quite nice. Adobe Firefly — I use that every now and then. It's nice that it's integrated into the software directly, you know. It's my only real experience with it as an incumbent tool.
Daniel Faloppa (27:24)
Yeah, no, but it’s true. They do hold the cards when it comes to advantages. I don’t know if they’re playing them very well for now. But again, if somebody figures it out, right, if a startup figures out how to make a better Apple intelligence or whatever it may be. then the incumbent can copy that strategy, because again, copying things is easier.
Dan Gray (27:31)
Mm. Yeah.
Daniel Faloppa (27:48)
coding new things is easier compared to the past, and they do hold the actual moats, the data and the distribution. So how can that startup make it? It's a very good question. I mean, a lot of this applies even to sales, right? Sales and customer support and things like that. I can easily see Salesforce or Intercom having
Dan Gray (27:59)
Mm-hmm.
Daniel Faloppa (28:13)
a wealth of data that, you know, a startup cannot really have. So even if a startup comes up with a better UX, or that key to customer adoption that right now seems to be a bit elusive, Salesforce can copy that, right? So yeah, it's a good point. It's a good point.
Dan Gray (28:30)
Mm-hmm.
Daniel Faloppa (28:34)
that’s, be honest, I’m really puzzled about like all this investment in AI software, because like, me right now, the modes are in hardware. And we are, you know, and we’re talking a lot about Europe and how behind it is in AI software. And I fully agree. And that’s a shame.
you know, if it wasn’t pretty, we would really be screwed because at least it’s pretty. got tourism, right? But, if it wasn’t pretty then, then, but aside from that, so, so like, that’s extremely true, but I think what we are underestimating or at least what we are playing with the wrong strategy on is like, we’re very good in hardware, like Europe as a whole, right? patents, research, even pharma.
as long as it doesn’t require too much money, because when it does, then it moves to the US. But engineering talent and… Yeah, yeah, yeah, it is. But the research, the core research is very, very good. you know, if we wanted to play, like those things have a mode, right? You make a patent for like 20 years, have a mode. You also have modes of…
Dan Gray (29:30)
kind of a limit for hardware too.
Daniel Faloppa (29:49)
just building an ecosystem that works, that is at the cutting edge of a specific field. You see it here in the Netherlands with semiconductors, and aerospace a little bit as well — of course, SpaceX-type things now. So, you know, there are things that I think we can play. And I don't understand why all this money is going into AI software applications instead.
I understand the GPUs — I understand that part is easy, infrastructure investments. If you think that general AI is going to be used, then you can invest in GPUs, you get 10% return per year, fine. But as a startup investment, the second-layer companies that solve a niche are, in my opinion, fairly easy to outcompete afterwards, and they don't have a very serious moat.
How AI can accelerate hardware research — that's moving a little bit more, but it's very strange. If you look at the Y Combinator cohort, it's all software still.
Dan Gray (30:51)
I think a little bit of that is reflected in, if you go back and look at the rise of software, it took venture capital a while to understand software and the risks of software. And if you apply that to AI, the software side — OpenAI is basically a SaaS company, kind of — is very easy for VCs to understand, compared to the hardware side. For example, there's
Etched in the US, which is building a new kind of platform, a new architecture for compute — I think it's some kind of optical compute instead of… They've done very well; marketing-wise they're quite a big name, they got a big fund on board and got loads of attention. I know of at least three other companies doing something very similar, some of which are based in the UK, and
they're just two PhDs working on it at the Imperial College London incubator or something like that, and nobody's looking at them. Nobody understands it. Nobody seems to care. They don't even seem to understand that they need to put themselves out there in the same way. It's just such an eccentric world, this idea of new hardware for compute, and VCs have to try and understand it.
Daniel Faloppa (31:55)
Yeah.
Dan Gray (32:12)
They need so much education.
Daniel Faloppa (32:14)
Yeah, yeah. And it's their mandate as well, right? They raised the capital in 2021 or 2022, when software was at the peak, and that's what they promised their own investors they would invest in — you're going to get into the software trend and so on. And now AI is fairly trendy, at least in public opinion. So the logical thing for them is to invest in a bunch of AI verticals and AI startups.
But yeah, I do believe the opportunities are in software — sorry, not software — in hardware, in medical, in things that can build a moat. It's very strange. And yeah, the two PhDs at Imperial College London, who will need hundreds of millions to bring a product to market because it's a hardware product — they're going to have a hard time, or they're going to go to the US.
Dan Gray (33:09)
They’re gonna
move. Yeah, I strongly suspect they'll end up moving. It's a funny contrast, though. If you look at the dot-com boom, you had this — roughly speaking, it was a four-year period where the hype was the greatest, or from the beginning of the ramp-up, let's say
Daniel Faloppa (33:11)
Yeah.
Dan Gray (33:36)
'94, '95, maybe '96, where a lot of very good investments were made. And then you had '97 through to 2000, where the heat was a lot greater, there was much more hype, prices were inflated, and most of those ended up working out very badly for investors. Essentially, that was because — as you expect with the power law — you had a huge number of companies that got investment, and they all concentrated down into a few winners, as you expect.
That's totally normal. It's almost like with AI it's the reverse, because they bet early on a few winners and they've concentrated an insane amount of money into those few winners, when actually, maybe with open source, it's all going to be diversified away. There are going to be more companies doing this and making money out of it than perhaps they can see.
Daniel Faloppa (34:27)
Yeah,
yeah. The second option is, in my opinion, a lot safer for the world in general, right? If we have a thousand LLM companies and they're all competing, it is a lot safer. Yeah, it's interesting. It's interesting to see. One thing that is unprecedented, which is very interesting to see for me, is that
Dan Gray (34:37)
for sure.
Daniel Faloppa (34:53)
we are able to deploy insane amounts of capital when there is a reason for it. Whether the capital allocation is correct or not, the fact that we have that capability is already quite interesting. And we can just accelerate a lot of innovation because that is possible, right? And that's also standing on so many
previous steps, right? You need to have a legal system, you need to have functioning capital markets, you need to have an understanding of valuations, of many things. So that's already pretty interesting. And it's good to see that if somebody has that idea, then capital is almost not the main constraint anymore.
You know, and we touched upon it once — maybe I'm just optimistic today, but that's the positive of it, right? So it would be nice — if these large language models really require so much capital and we do want to have a competitive field, we need to figure out how to raise
that type of capital for more than one company. Because otherwise we create a capital moat for that first company, which is probably, you know, what the US is really good at doing as well. But then there needs to be competition. I do think it's beneficial if we have competition at that level. And, you know, thank God the incentives aligned for Meta to spend however much they spent training those models so far.
But it cannot be left to chance, right? China is investing for sure, but Europe is incredibly left behind, and now with regulation even more so. I think it's a bit of a shame.
Dan Gray (36:36)
Mm-hmm.
I think so, but just to take the slightly more cynical side of it.
However many billions — I've lost count of how many billions — have been invested into OpenAI up to now, all money that could have gone to other startups doing other things as well. The question is: is the investor view of OpenAI at all realistic? And that goes back to the question about the degree to which they've used the general AI or AGI hype to get to where they are, the
degree to which they built a lot of hype early on.
The premise on which I think people understood OpenAI is
that it had some edge where it would be very difficult to compete against them — when actually, if you have the open source model from Meta, you can build similar products pretty quickly. The model is no longer the advantage, and the model is what has cost so much. So how much of that money is now looking a bit foolish? And where could all that money have gone instead? That bothers me slightly.
Daniel Faloppa (37:36)
Yeah, Yeah, no. Yeah.
Yeah. And how much of it is still being invested just to save face on the previous money, right? So that's another big, big question, because a lot of companies are in too deep. And especially if you see the Apple Intelligence review from CNET that we talked about, and how little real-world benefit it has so far — of course, you know, so far — how
Dan Gray (37:57)
Yeah, absolutely, absolutely.
Mm-hmm.
Daniel Faloppa (38:19)
much of that money is in AI just to save face? It's a good question — and how much of it is removed from other…
Yeah, and the opportunity costs of investing in, you know… My partner was coughing for the whole week, and we were thinking: we have rockets that are able to land, but we still don't know how to fix a cough, right? So it's a weird thing to think about.
Dan Gray (38:30)
Yeah.
You
Daniel Faloppa (38:51)
know that theory that investment drives innovation? Sometimes it's the other way around, which is probably how it should be. But sometimes investment does drive innovation, and if you have hundreds of billions on AI, something will happen. So it's a very good question. Undecided yet, I think.
Dan Gray (39:10)
Yeah.
For sure, the incentives make it very difficult — you can't take what you see at face value, you can't take it as the truth. There are so many incentives that underlie this activity and drive behavior. Like, as you said, if you invested in OpenAI two rounds ago, you
are 100% going to support the narrative that today it deserves to raise more at a higher valuation, and in two years' time even more at an even higher valuation — even if you as an investor have kind of given up, if you no longer really believe it, you're still going to be supportive. And you can see that. A good example of this maybe is the BG2 podcast, Bill Gurley and Brad Gerstner —
Daniel Faloppa (39:50)
Yeah, you’re stuck in the bet.
Dan Gray (40:03)
Altimeter Capital, Brad’s firm invested in the most recent round for open AI. And immediately there’s like a very clear line drawn between the two of them. Bill is the cynic like me and Brad is the optimist like you, but now Brad has this like financial incentive as well. So it’s a bit more friction and, and know, he, he now is kind of like compromised a little bit by that position. Maybe he’s right. And it’s a great investment in which case, you know,
Well done, but like the way it changes incentives is kind of crazy.
Daniel Faloppa (40:35)
But that one is a small commitment, in a sense, right? Because they've probably done another 50 investments, or it's a small part of his identity, right? But if you think about Microsoft — how much of the market value of Microsoft is today contingent on the success of OpenAI, right?
Dan Gray (40:47)
Mm-hmm.
Daniel Faloppa (40:57)
We're in the double-digit percentage points, right? And a double-digit percentage of the enterprise value of Microsoft is the fortunes of thousands of people that work there, their executives and so on, right? So all those people have an incredible incentive — almost like "I had a successful life" versus "I had a bad life", almost, right? Hopefully they have other things, but they really staked their reputation
Dan Gray (41:07)
Mm-hmm.
Daniel Faloppa (41:26)
on these things going well, right? And so if you staked it with 10 billion and you still have 80 billion in cash in the bank, what are you going to do, right? You keep the investment up until it's for certain not a winner, and you build a house of cards.
Dan Gray (41:44)
Yeah. And then on top — another layer of craziness on top of that — is the fact that Microsoft isn't investing money, it's investing credits. And then potentially those credits can be reflected by Microsoft as revenue. I don't know if they do that, but in theory. So then Microsoft is the largest shareholder and the largest source of — not funding necessarily, but resources —
influencing the valuation and the fundraising in a way that just skews everything beyond understanding. Like you can’t read what’s happening anymore.
Daniel Faloppa (42:18)
Or we just have to think harder and make more podcasts. Which I think is possible, you know.
We get stuck in models that don't work anymore, right? And if you think about investment in compute and how it makes sense — kudos to them for thinking about a new model. And if you think about a new model, you make it up when everybody's stuck in the previous one, and you can again anchor it to the previous one and then make an incredibly outsized return — then all the kudos to you.
Which is also why I think it's pretty interesting what's happening with these incredibly large bets. I think it's the first time — well, maybe there were previous times, right? When the East India Company was risking half of its fleet to conquer a different territory in the hope of finding gold under a mountain or something. That's
possible, right? And the East India Company was the Netherlands, in a sense — it was like 90% of the stock market and a lot of the GDP, right? But in the end, I think what a lot of people are missing is what creates value overall in the world. Money is just a measurement. It's just what we use to exchange services, and it's more convenient
Dan Gray (43:29)
Mm-hmm.
Daniel Faloppa (43:51)
than shipping a billion apples to somebody. What grows the actual value — if you keep resources constrained — is just productivity, and the number of people. So again, you can grow the number of people, which has its own implications. You can find new oil deposits, and that adds a little bit,
Dan Gray (44:06)
Mm-hmm.
Daniel Faloppa (44:16)
but really the growth of value in the world is around productivity: what can we do with the inputs? And what gives it value is what humans consider valuable. If all that productivity goes to, I don't know, making the perfect printer that nobody uses, then you're not really creating value. And that's why I think, you know, we
will plateau at some point, in terms of what the average human needs. We're already seeing it in microchips, right? The average person doesn't need the speed and the capabilities that the phone has today. Which, in my opinion, is great — we don't have to chase forever, right? If it doesn't make us happier, safer,
Dan Gray (45:03)
Mm-hmm.
Daniel Faloppa (45:11)
fed — whatever your measure of value is for human life — if it doesn't help that, then don't buy it. An interesting statistic I was looking at: in the 1700s, you had to work something like a day for an hour of candlelight. So if you wanted to read, or if you just wanted to interact with your
kids during the night, you had to work an incredible amount — maybe it was an hour of work for 10 minutes of light, but an incredible amount. Right now, it's something along the lines of: you work an hour and you can keep a candle lit for three years. That's exactly the point. As you get into a world where you don't need any more candles, you can start reducing how much you work.
Dan Gray (45:52)
Mm-hmm.
Daniel Faloppa (46:04)
If you can — if you have a robot that takes care of the house, and a robot that grows your produce, and everything is done by technology with fairly few materials and a few people managing it and engineering the next version — everybody else can, you know, spend two hours a month doing what they consider work, and then spend the rest of
their time doing something else, because their basic needs are satisfied, right? And again, the line of "basic" moves, right? 200 years ago, you would never have thought of a microwave as a basic need, but now it's fairly basic, right?
Dan Gray (46:47)
That's not reflected in things like house prices or energy costs, though. That's an area where it is arguably harder. I don't want to get into comparing prices across generations and inflation and everything, but it's arguably harder today to buy a house than it was 50 years ago, 100 years ago.
Daniel Faloppa (47:08)
But it is because that’s a constrained resource, right? So like the population, 100 years ago or so was like half, like it’s crazy how much the population has changed in the world. you look at cities, believe 10 or 15 % of Madrid moved to Madrid in the last 10 years, right? So yeah, so the urbanization, so then.
Dan Gray (47:19)
Mm-hmm.
Daniel Faloppa (47:32)
the price in cities is skyrocketing, but in other places it's the same or less, especially inflation-adjusted. But again, that's a constrained resource. So yeah, the only things that are going to keep increasing in price are constrained resources, right? Land, obviously, materials — and you can see that, right? You can buy a 50-inch TV for the price of a table,
Dan Gray (47:41)
Mm-hmm.
Daniel Faloppa (48:00)
which is very weird, right? Because the wood costs as much as all the engineering and plastic and everything that went into the TV — it's very strange. But yeah, so materials are constrained, land is constrained. And if, with technology, we increase the productivity of a piece of land — now we can fit a thousand people in a skyscraper compared to a little hut,
Dan Gray (48:02)
Yeah.
Daniel Faloppa (48:27)
we need less land for agriculture and things like that. Plus, hopefully, population is getting to a peak. So things could be better. And then digital goods are helping a lot of people, and they are very, very cheap to produce — books and knowledge and movies and things. So that's the optimistic version, maybe, again.
Dan Gray (48:49)
Definitely there are a lot of resources currently constrained that shouldn't be — housing is one that is close to home for me. Also, energy is one, and that goes back to AI as well. You know, Google and Microsoft are both trying to either get deals with, or possibly even acquire, nuclear plants in the US to power their operations, which is fairly unbelievable. And I think we will only see more and more of that.
You know, the UK is looking at small modular reactors, possibly from Rolls-Royce, putting those all over the country. So perhaps we're moving out of an era of artificial constraint.
Daniel Faloppa (49:35)
we’re diminishing the constraints on a lot of things and hopefully we’re getting also, well, it’s from like Keynes and stuff like the era of abundance in a sense where your needs are satisfied and you can pull back a little. hope that’s the direction we’re heading to.
Dan Gray (49:59)
And maybe you have to wrap your head around that and around the potential impact of AI, both together. That's the only way you can start to understand what the next 10 to 20 years are going to look like, because each individually doesn't really give you the whole picture.
Daniel Faloppa (50:18)
No, no, it doesn’t because AI per se doesn’t satisfy any need apart from companionship. you know, that’s hopefully like a small, a small portion, but its own, right. But it does, it does accelerate everything else. So if it accelerates productivity so much, then we can get to that scenario hopefully a lot sooner. This is assuming that we don’t.
all the gains from this productivity gain in one company. And that’s actually for me a pretty big point.
Dan Gray (50:46)
Mm-hmm. Yeah.
Daniel Faloppa (50:52)
I mean, OpenAI is losing more and more of its moral qualities as the years go by. But one of the things that they also always mentioned is this idea of basic income and capping the profits up to a point — which now they want to remove. But in theory, that was all good in this direction. If you look at…
Dan Gray (51:12)
think they’re removing
that stuff because they see the end of the line for their dominance. It’s less clear now, so they’re having to be bit more cutthroat.
Daniel Faloppa (51:21)
don’t know. mean, I can imagine the pressures, outside pressures on principles when you get to that size and importance are incredible. So yeah, yeah. It’s easy to do that when you’re like three people in a garage and like, yeah, we really believe in our principles, you know? But I hope they continue. Yeah, I wanted to say something else, but I forgot.
Dan Gray (51:30)
Mm-hmm. Immense. Mm-hmm.
I knocked you off course there.
Daniel Faloppa (51:50)
No, no, no, no. It’s good.
Dan Gray (51:52)
The biggest worry for me, falling slightly on the cynical side: we came out of ZIRP, and that investment has collapsed and basically gone to nothing. All the metaverse stuff disappeared. Going back a little bit before that — you know, creator economy, micromobility, rapid delivery — all these trends that produced one,
maybe two big winners but were largely capital destructive. And then in the first quarter of 2022, Y Combinator wrote this letter to all their startups saying get to default alive, things are going to be terrible —
at that point, AI came along and saved everything, you know, scooped everybody up, a huge amount of interest again. And my worry is: if this ends up being a trend that destroys capital — even if it leaves important infrastructure, which it will, and I think that's great; think about railway mania and canal mania as bubbles that left important infrastructure, compared to
tulip mania or NFTs, which left much less, let's say — but if it ends up being just net destructive of capital, I don't know what the fallout is going to be. Venture capital is going to be destroyed. Who will want to invest in it? Like a second huge betrayal of LPs.
Daniel Faloppa (53:04)
Yeah, yeah, fair.
Mm.
Yeah.
Dan Gray (53:21)
It’s like,
that’s what worries me. Like, where does this leave startups if it, it kind of collapses.
Daniel Faloppa (53:28)
Yeah. And then if you get into a proper funding winter, how much do you slow down innovation, and research? Yeah, it's a good question. It's a real potential issue. Yeah.
Dan Gray (53:46)
We felt the chill in 2022 and fortunately it got turned around by this enthusiasm.
Daniel Faloppa (53:49)
Yeah. No, in that sense, yeah.
Yeah. In that sense, for me, we need to wake up about moats and the defensibility of competitive advantages, and stop being anchored to what was a competitive advantage two years ago, which is not anymore — not at all. Not even capital is one, roughly speaking. And we really need to recalibrate on
that and really think about investments in that way. If we keep on betting — yeah, that's a good point, it's a good point — if we keep on betting on software as it was two years ago, when now it can be recreated much faster and each niche is five times more competitive, let's say five times less winner-takes-all, then we could have very, very sore
LPs and a lot less funding in the next decade or so. Yeah.
Dan Gray (54:47)
I think an example of that is the drift of some VCs towards hardware in the last couple of years, because they see that — in a sense, software has large margins early on, it's cheap to build, cheap to operate, but then as you scale and grow and fight with competition, which has all those same benefits,
your margins get compressed, competition is much harder, and a sustainable advantage is much more difficult. That's the problem I think we've had with the companies produced in the last decade or so. Hardware, on the other hand — those moats are much more sustainable, the marginality is better, they have better long-term prospects. I think that's what some VCs are seeing now and are kind of returning to. But that has all the same issues with incentives, like…
Daniel Faloppa (55:21)
Yeah.
Yeah.
Dan Gray (55:36)
software companies are much easier to measure the progress of in the short term, much easier to report on in terms of markups and valuation growth and investment performance. But you kind of have to put that aside and focus on the long term, which I think the best VCs do.
Daniel Faloppa (55:40)
Yeah.
And that stuff doesn’t matter, right? It’s not like, yeah, we can, we can reflect on it. We can figure it out. But like, what matters is like, are you delivering something that makes people’s lives better? Or are you, you know, investing in no technical innovation and just like hoping that the marketing will somehow make this a winner take all company in a very, in a niche that can even be a big niche.
Because that’s also how it feels like when during the doctor.com boom and stuff, Like the innovation focusing on product research and things like that was happening before. And then like people thought, okay, we understand the model is just an internet service. We just need to make sure that it wins, right? And then the innovation goes down and the spending like marketing or anyways like
trying to make that story work becomes a lot more important. And then is no benefit for the final customer. And those gains are not there. it’s a good point.
Dan Gray (56:51)
Mm-hmm.
I wish the lesson that Y Combinator took from this, instead of having a cohort just stacked with generative AI companies, was maybe to update their motto from "make something people want" to "make something humanity needs" — that would be better.
Daniel Faloppa (57:19)
Yeah, make something physical. I do believe that the inertia they have in software could be an issue for the next 10 years, right? Because you build a brand, you build skills, you build an ecosystem, you have standard terms, you have SAFEs, right? You are so committed to the software side
Dan Gray (57:28)
Mm-hmm.
Daniel Faloppa (57:45)
as Y Combinator that, if there really is a shift in this direction, it could even be a chance for another player to become the Y Combinator of hardware, or of different types of moats. I'm still unsure — for example, on biotech and pharma and things like that, I don't think Y Combinator has the same leadership that it has in software. There isn't an equivalent, at least I'm not aware of an equivalent, in fields
that are different. And they picked the largest wave so far, so that's for sure. But yeah, I'm curious to see where that goes as well.
Dan Gray (58:22)
For sure, I think you’re absolutely right about their inertia problem. And probably that’s where that inertia is most concentrated. But also it’s absolutely everywhere because virtually every bit of advice, every bit of startup wisdom is through the lens of software and, know, SaaS type growth and SaaS like multiples or SaaS type acquisition metrics. And I just imagine like I never really thought about it.
this before, but like if you’re a hardware founder out there, what does all the content online about startups look like to you? Mostly irrelevant, if not misleading. Must be kind of frustrating.
Daniel Faloppa (59:04)
Yeah, well, we got a call yesterday from a friend of mine. They're working on an electric airplane. And I was like, yeah, you know, you can do projections three to five years out on the platform and do the valuation. He's like, yeah, we don't really forecast any revenue for the next eight years. Huh, okay — that might be a non-trivial valuation to perform, right?
Dan Gray (59:26)
you
Yeah. But not a bad one, potentially. Like, yeah.
Daniel Faloppa (59:33)
but no, no, no, not exactly. Yeah.
Yeah. And the thinking needs to happen at this sort of first-principles level. One thing that I'm glad is happening more and more is debt for SaaS, right? Because that's what you need. If you have subscribers, if you have regular revenue,
and if you're growing 20, 30% per year, you can do venture debt. And if the industry is extremely competitive — think about booking websites for X, booking some service or something like that, there are a ton of those services — you can work with loans on those. You don't have to follow this unicorn paradigm.
And again, the competitive advantage is there — you have locked-in customers and things — but it's not the Facebook competitive advantage. It's not the winner-takes-all-over-the-world type of advantage. And that's fine. There is a lot of value to be created by those companies with the right structures on capital, which is one of the resources they need. But we need to recalibrate the whole
Dan Gray (1:00:41)
Mm-hmm.
Daniel Faloppa (1:00:53)
the whole thinking — get out of the idea that software is the only thing that happens and the only thing that has returns, because, yeah, there are doubts that it actually will.
Dan Gray (1:01:04)
Yeah, for sure. I think so. The next three or four years are going to be very messy, I think.
Daniel Faloppa (1:01:12)
100%. And the extremely positive view, which I think we should adopt because why not, right, is that we can accelerate so many different things thanks to what we found out in terms of basic research on transformers and LLMs. Of course, language is one thing, but the same idea can be applied to images, to data, to…
They're doing it in molecular research and biology. So that can accelerate so much that, yeah, everything is on the table. And I was speaking with another friend of mine who said he was missing the cowboy years of SaaS, when everything was an opportunity and people were daring things. This change is different, but
I think it's of a very comparable magnitude. It could be smaller, it could be bigger, but we are there. SaaS brought a big change; this has the opportunity to be the same. And especially, in my opinion, in the direction of the tractor for the knowledge worker — what that can do is transformative for almost any industry.
80, 90 % of employment now is knowledge workers.
Dan Gray (1:02:24)
I think we have — or at least, you as the optimist, maybe you're sold on that idea. I think I still need to see it a little bit more. You know, can the models get a bit more competent? Can they become more specialized without losing too much capability? I think there's a little bit more to be proved there.
Daniel Faloppa (1:02:43)
We shall see.
Dan Gray (1:02:45)
Yep.
Daniel Faloppa (1:02:46)
I think that’s a good place to leave this. you for listening if you have gotten to this point and talk to you next time.
Dan Gray (1:02:55)
Well done if you did.