Did OpenAI End the Software Industry With One Release?
A recap of some conversations on AI I've been involved in recently.
Being the CEO of a SaaS scale-up massively investing in AI, I'm making some decisions regarding our infra, and often talk with peers facing similar choices.
This is a recap of recent conversations I've had, including whether we SaaS application builders should build or buy our AI infra, and where the opportunity in AI lies for us.
TLDR: while OpenAI's announcement might seem like a surprising move from infra to apps, it leaves us with a lot of questions regarding the company's vision, its focus on building a safe AGI (is that still the main goal, or is it to dominate the world?!) and its roadmap. I doubt it's the end of software or that OpenAI will monopolize AI-first app distribution, but it might be their ambition. Yet it does raise worries about being captive of OpenAI as a platform, and it forces application builders to rethink their build vs. buy strategy.
Hope this helps, feedback welcome.
The shock wave
Last week's OpenAI release was an inflection point. As Bill Gates noted, it caused “a shock wave in the tech industry”. Bill’s main goal is to sell us on the new OpenAI product: GPTs.
However, the real shock stems from OpenAI becoming “like a black hole that will suck anything monetizable across a data event horizon”.
Or as Thomas Wolf, one of Hugging Face's co-founders and its chief science officer, put it on LinkedIn last Sunday: “Weekend take: ran the numbers and Sama is now crushing startups with OAI at roughly the same rate he was growing them with YC.”
In infrastructure and apps, people are afraid that OpenAI’s new plans equate to ending software.
What’s changed?
OpenAI had gained the lead in LLMs almost a year ago with their ChatGPT release, catching everyone off guard with their astonishing technology.
However, in the first half of 2023, at least 30 other LLM teams, including Anthropic, were closely following OpenAI's lead and had secured billions in funding.
Just three weeks ago, Anthropic's CEO said on a podcast that he planned to invest $1 billion in 2024 to train a model ten times more powerful than GPT-4, which itself probably cost around $100m to train. And that's credible: he had just raised $6.5b from Google and Amazon, in the mix of equity and debt that is becoming market practice to finance compute.
The LLM space clearly felt like a race to the bottom, and until a week ago, the prevailing question was how OpenAI could maintain its competitive edge.
The common view was that as everyone focuses on enhancing these models and investment continues to pour in, we'd see increasingly high-quality models at lower and lower costs. Race to the bottom.
Because pure software technology, no matter how powerful, offers little protection as a competitive moat on its own. Whenever a new innovation boosts LLM performance, other teams adopt it within a week, diluting any long-term competitive advantage.
An LLM may have billions of parameters, but the science behind it has only a limited number of dimensions. Not enough to enable significant long-term differentiation.
Between talent mobility, researchers' incentive to publish fast, and Hugging Face expediting the whole process, maintaining a technical advantage is particularly challenging. Especially given the worldwide focus and the size of the prize.
Innovation is replicated within months, sometimes days, in that space.
The main exception is the dimension of compute and data resources invested, which essentially equates to financial resources and can’t be replicated as fast as a new set of model weights.
But it has decreasing returns, and anyway, money is pouring all over the AI ecosystem.
In that context, how would OpenAI defend its current leadership? It was not clear.
The shock
Typically, dividing tech into 'infrastructure' and 'apps' helps us see them as two distinct areas, each with its own history, skills, business models, customers, culture and usual business dynamics.
They are two different worlds.
Infrastructure, for example, involves servers and data hosting. The cloud business, largely dominated by Amazon AWS, Google Cloud, and Microsoft Azure, falls into this category.
Apps focus on using infrastructure to develop software beneficial for end users, businesses, and consumers.
In both cases, it's commonly understood that leading is more about distribution than the technology itself. To be a long-term, large-scale leader, you need not just a good product, but also to establish various mutually reinforcing network effects in distribution. A great product will then follow.
Back to OpenAI: it seemed to be an infrastructure company. LLMs process data in the cloud and also, somehow, host the data they were trained or fine-tuned on. The question of whether an LLM can host data, and how exactly, is interesting, but I'll leave it for another day.
So: the LLM space felt similar to the cloud industry. This led us to speculate about the likely evolution of the space, based on our knowledge of the dynamics among cloud giants like Google Cloud, AWS, and Azure in recent years.
Typically, from there, most people had the intuition that incumbents had an advantage and would end up dominating the LLM market — similarly to Google, Amazon and Microsoft dominating the cloud while none of them was a native cloud company.
The shock came with last week's OpenAI release and OpenAI basically saying: I will own apps too and look at how I’m already creating network effects. I will own it all!
This was unexpected. OpenAI is now engaging in both infrastructure and apps, surprisingly early in its history, with an initial infra product that still appears to be maturing.
While the analogy was with Microsoft, Google and Amazon, it’s now with Apple.
With OpenAI moving into apps and making moves like Apple, many of us need to completely reassess our understanding of the dynamics in the AI space.
In addition, for those involved in AI, last week's shock included the realization that 80% of the AI startups they had invested in, believed in, or saw great potential in, were rendered obsolete by just one OpenAI update.
And that felt a lot like an Apple keynote too. Every Apple conference is an occasion for dozens of apps to be disrupted, or at least to see their TAM divided by 5. Remember password manager apps?
So it was not just unexpected, it also felt brutal. Including the damage done to early app builders, early OpenAI adopters and fans, who saw their apps and startups become obsolete overnight.
The combination of this brutality toward its own community and the unexpected move to apps is likely contributing to some confusion, if not, in some cases, panic.
Let's relax: that fight has been fought for the past 30 years, and OpenAI is currently at least a year away from an app platform.
If you need to relax even more and take some distance, here is a great video of Steve Jobs discussing the same topics 31 years ago, while he was iterating on them, referring to events in software from the 60s and 70s to explain how he was thinking about them in the 80s.
To finish on this, the shock also came from OpenAI announcing that it would help companies train models on their own data with the Custom Models program. Here too, many companies had started to build LLMs thinking they would differentiate from OpenAI simply by building smaller models, easier to customize and run, and more performant for specific kinds of tasks. Well, they were wrong. OpenAI wants the smaller-custom-model piece of the LLM cake too.
They want it all.
Apple is amazing, and OpenAI wants to replicate it
I own and use two Macs, a MacBook Pro, an iPad, an iPhone, and always keep a backup iPhone updated in case I lose the first one. I’m not even sure how much I spend on Apple Cloud subscriptions and other app fees.
Recently, my friend Thomas Otter shared that for the past 15 years, each time he bought a new Apple device, he also purchased the same amount in Apple shares. That’s wise, and made him a lot of money.
Apple is amazing. It’s the largest company in the world, for a good reason. It nailed vertical integration from hardware to apps through operating systems, while taking a cut on each app.
By unveiling a GPT application store in a conference clearly inspired by Steve Jobs' style, OpenAI is following in Apple’s footsteps.
But why so early, when they are leading and while their star LLM infrastructure product is still maturing? Why not focus on making that work really well first?
Many tech leaders, some speaking publicly and others privately, believe that the OpenAI board's conversation was probably framed along these lines: well, that LLM tech is going to generate trillions of dollars of valuation — let's keep it all for ourselves. We must own app distribution and network effects; it can't wait. We might even build all apps ourselves; we'll figure out later whether our AI can do that too. In any case, some players are already aiming to build AI-first platforms. Let's put them to sleep without delay.
Given that impression, of course these tech decision-makers feel the need to reassess their AI bets. While investing in AI aggressively is the obvious choice, the question everyone's grappling with is whether they should build or buy the infra (buy from OpenAI, or host, run, and maybe train or fine-tune your own LLM).
And that question just got a lot more complicated.
Because OpenAI is not just infra to build apps, it might well be competing with your app.
Some are even questioning whether non-AI-first software still has a future and a non-negative valuation, and whether AI-first software can actually be developed anywhere other than on OpenAI.
Let’s deal with these 2 questions: build or buy? And does software still have a future?
Build or buy?
For app builders, the dilemma is whether to leverage OpenAI, or to build by leveraging open-source pre-trained LLMs — including finding servers to host them and, in some cases, retraining these models with their own data.
To be clear: with OpenAI or Claude you could start coding on top of LLMs in a few seconds, versus needing to hire a full-time team with other options.
Before last week’s announcement, the decision was straightforward: in light of the LLM race to the bottom we discussed earlier, let’s just wait and see! Soon enough, we’ll have a toggle allowing us to switch from one LLM to another like between Azure and AWS. LLMs will be commoditized!
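To make that "toggle" idea concrete, here is a minimal sketch of what commoditized LLMs would let you do: hide the provider behind one function. This is my own illustration, not production code; the self-hosted endpoint (SELF_HOSTED_LLM_URL) and its response shape are hypothetical, and the OpenAI call reflects the v1 Python SDK as I understand it.

```python
# Minimal sketch of the "commoditized LLM" toggle: hide the provider behind
# one function so the app could, in principle, switch backends later.
import os

import requests
from openai import OpenAI


def complete(prompt: str, provider: str = "openai") -> str:
    """Return a completion for `prompt` from the chosen provider."""
    if provider == "openai":
        client = OpenAI()  # reads OPENAI_API_KEY from the environment
        resp = client.chat.completions.create(
            model="gpt-4-1106-preview",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content
    if provider == "self_hosted":
        # Hypothetical endpoint for an open-source model you host yourself
        # (e.g. behind a serving layer like vLLM); the response shape is an
        # assumption, not a real API contract.
        resp = requests.post(
            os.environ["SELF_HOSTED_LLM_URL"],
            json={"prompt": prompt, "max_tokens": 256},
            timeout=60,
        )
        return resp.json()["text"]
    raise ValueError(f"Unknown provider: {provider}")
```

The bet, in that world, was that the interesting state would live in your app, not at the provider.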
Although OpenAI's recent release accelerates development by including exactly what developers need while also enhancing feature capabilities, it's important to recognize the trade-off: by building on OpenAI, you're now generating value that will be harder to transfer elsewhere compared to a simple switch.
You’re now building in an OpenAI proprietary environment and language. You’re more captive. And you’re captive of a potential competitor.
But hey, good news: OpenAI divided prices by 3! It's a nice consolation to help you forget the step you hadn't planned to take toward a long-term commitment to OpenAI.
It’s all very smart and aggressive from them.
Do you want to be captive in a golden jail that has more powerful features and cheaper prices, and that has, through network effects, gathered a vast community of Assistants, code libraries, and a GPT marketplace? I'm not sure there is a happy ending for app builders in that world.
During a dinner last week where 20 European CEOs discussed these implications, I asked whether they believed that in 5 years, 80% of the text-to-text LLM market would be dominated by 3 companies, or by a larger number, closer to 20, 50 or even more. We took a vote and it was 50/50.
However, someone who had come from SF, quite credible and informed IMO, said: there will only be one.
For those who are experimenting on the OpenAI platform, last week’s experience was a bit confusing. If you were creating GPTs, their IP (prompts and instructions) was accessible to anyone using them. And no monetization. Nor any timeline.
Does this mean OpenAI wants us to create things on their platform, giving them access to our ideas? Then, they’ll use the most successful applications and make them a native feature of their own platform?
It’s probably not the plan. But maybe it is.
OpenAI's communication doesn't compare to Apple's on similar topics: it's very confusing. They are sending mixed signals to the community here. At least make some developers rich before you start cutting some of their apps!
I think they're doing a lot and very quickly, which makes their comms unclear and confusing.
But for now at least, I don’t see anyone making the decision to build on OpenAI for the long term. We need more clarity.
The end of software?
In the last 6 months, many ambitious AI startups have emerged. Most are a different version of a cloud-based, AI-first agent platform, each with a different color (collaborative, multi-model, verticalized, etc.).
It seemed the right pitch to complement and not compete with OpenAI.
But most of these value propositions have been wiped out by last week’s OpenAI release.
These AI-first startups now need to rethink their value prop and strategy.
It looks like OpenAI wants to develop everything on their own.
Now, it's unclear what parts of the AI-first internet they plan to build themselves versus what they want to partner on.
Here again, clarifications regarding OpenAI's vision and roadmap would help promising startups like Lindy and Dust figure out where to build in order to complement OpenAI, move faster and create more value in AI.
Is there a scenario where OpenAI eats all software? Yes.
For a start, Assistants can probably be coded to write other Assistants, starting with all the Assistants dedicated to integrations, making OpenAI collaborative and integrated with everything.
And from there, only a few steps are missing for OpenAI to become the ultimate internet AI-first OS, on which everything is built in natural language and to which all data is inevitably attracted.
And these steps are not too hard to roadmap for any software engineer with some business culture.
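To make the first of those steps concrete, here is a rough sketch of an "assistant factory": code that asks a model to draft instructions, then registers a new Assistant with them. It assumes the Assistants API behaves roughly as announced; the names, prompts and choice of tools are mine and purely illustrative.

```python
# Rough sketch of an "assistant that writes assistants": one call drafts the
# instructions, a second call registers a new Assistant with them.
# Based on the Assistants API as announced; prompts and names are illustrative.
from openai import OpenAI

client = OpenAI()


def spawn_integration_assistant(service_name: str):
    """Create a new Assistant specialized in integrating with `service_name`."""
    # Step 1: have a model draft the system instructions for the new assistant.
    draft = client.chat.completions.create(
        model="gpt-4-1106-preview",
        messages=[{
            "role": "user",
            "content": (
                "Write system instructions for an assistant whose only job is "
                f"to help users integrate their app with {service_name}."
            ),
        }],
    ).choices[0].message.content

    # Step 2: register the new assistant with those drafted instructions.
    return client.beta.assistants.create(
        name=f"{service_name} integration assistant",
        instructions=draft,
        model="gpt-4-1106-preview",
        tools=[{"type": "code_interpreter"}],
    )


new_assistant = spawn_integration_assistant("Salesforce")
print(new_assistant.id)
```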
Antitrust laws are probably an execution risk, but I guess they’ll leverage Bill Gates’ experience.
But that’s probably not the only obstacle.
Can OpenAI really do all that?
It seems unlikely that a young startup like OpenAI can 1/ fight and win the LLM infrastructure innovation game, 2/ build the AI-first internet OS and 3/ keep identifying all end-user use cases that have network effects and make sure they all happen first on their platform.
So, at first sight at least, it does not seem possible that OpenAI executes well on all these fronts.
But… OpenAI is said to have reached an $86b valuation, and Altman is rumored to privately claim it will become the tech company that burns the most money ever.
In other words: we've never seen what can happen at a startup where the best people in Silicon Valley operate together, including Reid Hoffman, a former head of YC, Bill Gates and a few others; basically a lot of the Silicon Valley bosses, with a cumulative 60 years of culture on how these things play out, who probably see this as the opportunity to deliver their masterpiece in software innovation, strategy and execution.
OpenAI can raise a round of $30b USD in the blink of an eye if it needs to, and is already leading on the most pivotal technological breakthrough of all time.
Never say never; the future won’t look like the past, and the parameters here are completely new, at a completely new scale. So maybe they can.
But if it wasn’t for that, the feedback for OpenAI would be: you’re trying to do too much, in too many directions, while nothing is yet clearly achieved. It negatively impacts your communication to your power user community. You seem to be losing focus.
However, in my personal mental model, I will now assign a non-zero probability to a version of the future where OpenAI owns all layers of software. That can happen if they execute really, really well on all fronts (LLMs, the OS for AI apps, owning some part of the AI apps business), above any execution level that has ever existed, and if (as seems to be the case) competitors only bring a knife to what is going to be the most competitive and strategic of all tech fights.
Because OpenAI's competitors seem to believe it's all only about LLM performance; they don't have much software culture around winning distribution, and they don't seem to have a strategy. How can they hope to win? Yet, by design there's room for exactly one Android of AI, so at least one of them will win something. Hugging Face?
The Opportunity
Building the infrastructure for AI-first apps.
LLMs are key enablers, but there's a lot left to do to build the AI-first infrastructure: we need systems for developing, deploying and testing AI-first apps, managing user rights, dealing with data access, interfacing with users, interfacing with other apps and agents, distributing AI-first apps, etc.
From my first-hand experience leading a team that has been developing AI applications (like this and that) on top of LLMs for almost a year now, I can tell you an agent marketplace does not resolve most of it.
This is the opportunity for builders right now, and it shows that OpenAI's announcement is not an overnight capture of the AI-first application market.
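To give a feel for what "most of it" means, here is a deliberately simplified sketch of what has to wrap a single LLM call in a real SaaS application. None of it comes with a model or a marketplace; every name below is hypothetical, and the retrieval layer is reduced to a placeholder.

```python
# Deliberately simplified sketch (not our actual stack) of what surrounds a
# single LLM call in a real SaaS app: user rights, data access, audit logging.
# An agent marketplace covers none of this.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class User:
    id: str
    allowed_sources: set[str]  # data sources this user may query


def fetch_context(source: str, question: str, user: User) -> str:
    # Placeholder for a permission-aware retrieval layer (search, RAG, etc.).
    return f"(documents from {source} visible to {user.id})"


def answer(user: User, question: str, source: str, llm, audit_log: list) -> str:
    # 1. User rights: never let the model see data the user cannot see.
    if source not in user.allowed_sources:
        raise PermissionError(f"{user.id} cannot query {source}")

    # 2. Data access: fetch only the context this user is entitled to.
    context = fetch_context(source, question, user)

    # 3. The LLM call itself is the easy part (`llm` is any provider callable).
    reply = llm(f"Context:\n{context}\n\nQuestion: {question}")

    # 4. Audit trail: who asked what, when, against which data source.
    audit_log.append({
        "user": user.id,
        "source": source,
        "question": question,
        "at": datetime.now(timezone.utc).isoformat(),
    })
    return reply
```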
Regulation, a DDoS attack on smaller startups
Last, a quick word on regulation.
Here is a fascinating video of Bill Gates saying his biggest mistake ever was being distracted by the Microsoft anti-trust lawsuit, instead of being focused on winning the battle for mobile.
He laughs that it's $400 billion he lost, and that could have been sitting in Microsoft's valuation instead of Google's, if he hadn't made that mistake.
You might think: avoiding being dismantled should still have been a life-or-death priority for Microsoft. But here is Bill Gates saying it was a distraction.
Regulation is a distraction, when you are competing in a winner-takes-all game.
It’s Bill Gates saying it.
Of course AI builders are flattered to be speaking to the highest authorities. It makes anyone feel important. But it’s a waste of time and focus.
In that area, I think the opportunity is to follow Bill Gates' advice and disengage from the conversation with authorities. Countries that drag their startups into endless regulatory conversations are actually not helping them.
Conclusion: don’t panic!
There are more opportunities than threats with AI. While there’s no use repeating I’m an optimist all day, I do believe we’re going to build some very special things with these new technologies!