Did OpenAI End the Software Industry With One Release?
A recap of some conversations on AI I've been involved in recently.
As the CEO of a SaaS scale-up investing heavily in AI, I'm making decisions about our infra, and I often talk with peers facing similar choices.
This is a recap of recent conversations I've had, including whether we SaaS application builders should build or buy our AI infra, and where the opportunity in AI lies for us.
TLDR: while OpenAI's announcement might seem like a surprising move from infra to apps, it leaves us with a lot of questions about the company's vision and roadmap. I doubt it's the end of software, or that OpenAI will monopolize AI-first app distribution, but that might well be their ambition. It does, however, raise worries about being captive to OpenAI as a platform, and it forces application builders to rethink their build vs. buy strategy.
Hope this helps, feedback welcome.
The shock wave
Last week's OpenAI release was an inflection point. As Bill Gates noted, it caused “a shock wave in the tech industry”. Bill’s main goal is to sell us on the new OpenAI product: GPTs.
However, the real shock stems from OpenAI becoming “like a black hole that will suck anything monetizable across a data event horizon”.
Or as Thomas Wolf, Hugging Face co-founder and Chief Science Officer, put it on LinkedIn last Sunday: “Weekend take: ran the numbers and Sama is now crushing startups with OAI at roughly the same rate he was growing them with YC.”
In infrastructure and in apps alike, people are afraid that OpenAI's new plans amount to the end of software.
What’s changed?
OpenAI took the lead in LLMs almost a year ago with the release of ChatGPT, catching everyone off guard with astonishing technology.
However, in the first half of 2023, at least 30 other LLM teams, including Anthropic, were closely following OpenAI's lead and had secured billions in funding.
Just three weeks ago, Anthropic's CEO said on a podcast that he planned to spend $1 billion in 2024 training a model ten times more powerful than GPT-4, which itself probably cost around $100M to train. And that's credible: he had just raised $6.5B from Google and Amazon, in the mix of equity and debt that is becoming market practice for financing compute.
The LLM space clearly felt like a race to the bottom, and until a week ago the prevailing question was how OpenAI could maintain its competitive edge.
The common view was that, with everyone focused on improving these models and investment continuing to pour in, we'd see increasingly high-quality models at lower and lower costs. A race to the bottom.
Pure software technology, no matter how powerful, offers little protection as a competitive moat on its own. Whenever a new innovation boosts LLM performance, other teams adopt it within a week, diluting any long-term competitive advantage.
An LLM may have billions of parameters, but the science behind it has only a limited number of dimensions: not enough to enable significant long-term differentiation.
Between talent mobility, researchers' incentive to publish fast, and Hugging Face expediting the whole process, maintaining a technical advantage is particularly challenging, especially given the worldwide focus and the size of the prize.
Innovation is replicated within months, sometimes days, in that space.
The main exception is the dimension of compute and data resources invested, which essentially equates to financial resources and can’t be replicated as fast as a new set of model weights.
But it has diminishing returns, and anyway, money is pouring into the AI ecosystem.
In that context, how would OpenAI defend its current leadership? It was not clear.
The shock
Typically, dividing tech into 'infrastructure' and 'apps' helps us see them as two distinct areas, each with its own history, skills, business models, customers, culture and business dynamics.
They are two different worlds.
Infrastructure, for example, involves servers and data hosting. The cloud business, largely dominated by Amazon AWS, Google Cloud, and Microsoft Azure, falls into this category.
Apps focus on using infrastructure to develop software beneficial for end users, businesses, and consumers.
In both cases, it's commonly understood that leading is more about distribution than the technology itself. To be a long-term, large-scale leader, you need not just a good product, but also to establish various mutually reinforcing network effects in distribution. A great product will then follow.
Back to OpenAI: it seemed to be an infrastructure company. LLMs process data in the cloud and also, in some sense, host the data they were trained or fine-tuned on. The question of whether an LLM can host data, and how exactly, is interesting, but I'll leave it for another day.
So: the LLM space felt similar to the cloud industry. This led us to speculate about the likely evolution of the space, based on our knowledge of the dynamics among cloud giants like Google Cloud, AWS, and Azure in recent years.
From there, most people had the intuition that incumbents had an advantage and would end up dominating the LLM market, much as Google, Amazon and Microsoft came to dominate the cloud even though none of them started as a cloud-native company.
The shock came with last week's OpenAI release and OpenAI basically saying: I will own apps too and look at how I’m already creating network effects. I will own it all!
This was unexpected. OpenAI is now engaging in both infrastructure and apps, surprisingly early in its history, with an initial infra product that still appears to be maturing.
Where the analogy used to be with Microsoft, Google and Amazon, it is now more with Apple.
With OpenAI moving into apps and making moves like Apple, many of us need to completely reassess our understanding of the dynamics in the AI space.
In addition, for those involved in AI, last week's shock included the realization that 80% of the AI startups they had invested in, believed in, or saw great potential in, were rendered obsolete by just one OpenAI update.
And that felt a lot like an Apple keynote too. Every Apple conference is an occasion for dozens of apps to be disrupted, or at least to see their TAM divided by five. Remember password manager apps?
So it was not just unexpected, it also felt brutal, including the damage done to early app builders, early OpenAI adopters and fans who saw their apps and startups become obsolete overnight.
The combination of this brutality toward its own community and the unexpected move into apps is likely contributing to some confusion, and in some cases panic.
Let's relax: that fight has been fought for the past 30 years, and OpenAI is currently at least a year away from a real app platform.
If you need to relax even more and take some distance, here is a great video of Steve Jobs discussing the same topics 31 years ago, referring to events in software from the 60s and 70s to explain how he was thinking about them in the 80s.
To finish on this, the shock also came from OpenAI announcing that it would help companies train models on their own data through the Custom Models program. Here too, many companies had started to build LLMs thinking they would differentiate from OpenAI simply by building smaller models that were easier to customize and run, and more performant on specific kinds of tasks. Well, they were wrong. OpenAI wants the smaller-custom-model piece of the LLM cake too.
They want it all.
Apple is amazing, and OpenAI wants to replicate it
I own and use two Macs, a MacBook Pro, an iPad, an iPhone, and always keep a backup iPhone updated in case I lose the first one. I’m not even sure how much I spend on Apple Cloud subscriptions and other app fees.
Recently, my friend Thomas Otter shared that for the past 15 years, each time he bought a new Apple device, he also purchased the same amount in Apple shares. That’s wise, and made him a lot of money.
Apple is amazing. It’s the largest company in the world, for a good reason. It nailed vertical integration from hardware to apps through operating systems, while taking a cut on each app.
By unveiling a GPT application store in a conference clearly inspired by Steve Jobs' style, OpenAI is following in Apple’s footsteps.
But why so early, when they are in the lead and their star infra product, the LLM itself, is still maturing? Why not focus on making that work really well first?
Many tech leaders, some speaking publicly and others privately, believe that the OpenAI board’s conversation was probably framed along these lines: well, that LLM tech is going to generate multiple thousands of billions of dollars of valuation — let’s keep it all for ourselves. We must own app distribution and network effects, it can’t wait. We might even build all apps ourselves, we’ll even figure out later if our AI can build the apps itself too haha. In any case, there are some players already aiming to build AI-first app platforms and agent platforms. Let’s put them to sleep without delay.
Given that impression, of course these tech decision-makers feel the need to reassess their AI bets. While investing aggressively in AI is the obvious choice, the question everyone is grappling with is whether to build or buy the infra (buy from OpenAI, or host, run, and maybe train or fine-tune your own LLM).
And that question just got a lot more complicated.
Because if OpenAI is more than infrastructure (with a publicly available demo chat), then it might end up competing with the app or platform you are building on top of its models.
Some are even questioning whether non-AI-first software still has a future and a non-negative valuation, and whether AI-first software can actually be developed anywhere other than on OpenAI, by OpenAI.
Let’s deal with these 2 questions: build or buy? And does software still have a future?
Build or buy?
For app builders, the dilemma is whether to build on OpenAI (or a less performant competitor), or to build on open-source pre-trained LLMs, which means finding servers to host them and, in some cases, retraining them on your own data.
To be clear: you can start coding on top of LLMs in a few seconds with OpenAI or Claude, versus needing to hire a full-time team with the other options.
Before last week's announcement, the decision was straightforward: given the LLM race to the bottom we discussed earlier, just wait and see! Soon enough, we'll have a toggle allowing us to switch from one LLM to another, just as we can between Azure and AWS. LLMs will be commoditized!
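To make that concrete, here is a minimal sketch, in Python, of what such a toggle could look like: the application depends on a tiny interface, and the concrete backend is chosen by configuration. The OpenAI call uses the official openai SDK; the SelfHostedBackend endpoint and the answer() helper are purely illustrative assumptions, not anything a vendor actually ships.

```python
# Minimal sketch of a provider "toggle": the app talks to one interface,
# and the concrete LLM backend is chosen by configuration.
from dataclasses import dataclass
from typing import Protocol


class LLMBackend(Protocol):
    def complete(self, prompt: str) -> str: ...


@dataclass
class OpenAIBackend:
    model: str = "gpt-4"

    def complete(self, prompt: str) -> str:
        # Uses the official openai Python SDK (pip install openai).
        from openai import OpenAI
        client = OpenAI()  # reads OPENAI_API_KEY from the environment
        response = client.chat.completions.create(
            model=self.model,
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content


@dataclass
class SelfHostedBackend:
    # Hypothetical endpoint for an open-source model you host yourself;
    # the URL and response shape are illustrative only.
    url: str = "http://localhost:8000/generate"

    def complete(self, prompt: str) -> str:
        import requests
        response = requests.post(self.url, json={"prompt": prompt}, timeout=60)
        response.raise_for_status()
        return response.json()["text"]


def answer(backend: LLMBackend, question: str) -> str:
    # Application code only depends on the interface, not the vendor.
    return backend.complete(question)
```

The catch is that the more you rely on vendor-specific features (Assistants, threads, retrieval, tool use), the less of your app fits behind an interface this thin, and that is exactly where the lock-in starts.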
Although OpenAI's recent release accelerates development by shipping exactly what developers need and enhancing feature capabilities, it's important to recognize the trade-off: by building on OpenAI, you're now creating value that will be much harder to move elsewhere than a simple model switch.
If the GPT Store becomes successful and evolves into the production-ready, B2B, AI-native application store, building in AI may come to require the same kind of relationship with OpenAI as building in mobile requires with Apple.
If you're building in a proprietary OpenAI environment and language, you're more captive. And you're captive to a potential competitor.
But hey, good news: 1. as we said, they don't have a clear moat at this point; 2. OpenAI cut prices by a factor of three! A nice consolation to help you forget the step you hadn't planned to take toward a long-term commitment to OpenAI.
I think it’s all very smart and aggressive from them, probably also precisely because they don’t have a moat.
It will be interesting to monitor the efforts they put into making the GPT store more than a gadget. Will it become a true app distribution channel like the App Store or like Microsoft, and especially in B2B? It’s not obvious imo.
If they do, it's a real question whether, as an app builder, you want to be captive in a gilded cage that has more powerful features and cheaper prices, and has gathered, through network effects, a vast community of Assistants, code libraries, and a GPT marketplace. I'm not sure there is a happy ending for app builders in that world.
During a dinner last week where 20 European CEOs discussed these implications, I asked whether they believed that in 5 years, 80% of the text-to-text LLM market would be dominated by 3 companies, or by a larger number, closer to 20, 50 or even more. We took a vote, and it was 50/50.
However, someone who had come from SF, quite credible and informed IMO, said: there will only be one.
For those who are experimenting on the OpenAI platform, last week's experience was a bit confusing. If you were creating GPTs, their IP (prompts and instructions) was accessible to anyone using them. And there was no monetization, nor any timeline for it.
Does this mean OpenAI wants us to create things on their platform, giving them access to our ideas? Then, they’ll use the most successful applications and make them a native feature of their own platform?
It’s probably not the plan. But maybe it is.
OpenAI's communication doesn't compare to Apple's on similar topics: it's very confusing, and they are sending mixed signals to the community here. At least make some developers rich before you start cutting off their apps!
The end of software?
In the last 6 months, many ambitious AI startups have emerged. Most are a different version of a cloud-based, AI-first agent platform, each with a different flavor (collaborative, multi-model, verticalized, etc.).
It seemed the right pitch to complement and not compete with OpenAI.
But most of these value propositions have been wiped out by last week’s OpenAI release.
These AI-first startups now need to rethink their value prop and strategy.
It looks like OpenAI wants to develop everything on their own.
Now, it's unclear what parts of the AI-first internet they plan to build themselves versus what they want to partner on.
Here again, clarifications regarding OpenAI's vision and roadmap would help promising startups like Lindy and Dust figure out where to build in order to complement OpenAI, move faster, and create more value in AI.
Is there a scenario where OpenAI eats all software?
And from there, only a few steps are missing for OpenAI to become the ultimate AI-first internet OS, on which everything is built in natural language and toward which all data is inevitably attracted.
The Opportunity for 2024
Building the platform for deploying AI-first apps and agents to businesses.
LLMs are key enablers, but there's a lot left to do to build the AI-first infrastructure: we need systems for developing, deploying and testing apps, managing user rights, dealing with data access, interfacing with users, interfacing with other apps and agents, distributing AI-first apps, and more.
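To make this concrete, here is a minimal sketch, in Python, of just one of those pieces: enforcing user rights on data access before anything reaches the model. All the names here (User, DocumentStore, run_agent) are illustrative assumptions, not an existing API; the point is simply that this logic lives in your infrastructure layer, not in the model or in a GPT store.

```python
# Minimal sketch of one infra piece named above: per-user data access
# control applied before any document reaches the LLM context.
from dataclasses import dataclass, field


@dataclass
class User:
    id: str
    allowed_sources: set[str] = field(default_factory=set)


@dataclass
class DocumentStore:
    # source name -> list of documents
    documents: dict[str, list[str]] = field(default_factory=dict)

    def fetch(self, user: User, source: str) -> list[str]:
        # The access check lives in the infra layer, not in the prompt.
        if source not in user.allowed_sources:
            raise PermissionError(f"{user.id} cannot read {source}")
        return self.documents.get(source, [])


def run_agent(user: User, question: str, store: DocumentStore, llm) -> str:
    # Only documents the user is allowed to see are ever put in the context.
    context: list[str] = []
    for source in user.allowed_sources:
        context.extend(store.fetch(user, source))
    prompt = "\n".join(context) + f"\n\nQuestion: {question}"
    # llm is any backend exposing complete(), as in the earlier sketch.
    return llm.complete(prompt)
```

Multiply this by deployment, testing, observability, and billing, and you get a sense of how much platform work remains around the models themselves.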
From my first-hand experience leading a team that has been developing AI applications (like this and that) on top of LLMs for almost a year now, I can tell you an agent marketplace does not solve most of it.
This is the opportunity for builders right now, and it shows that OpenAI's announcement is not an overnight capture of the AI-first application market.
Regulation, a DDoS attack on smaller startups
Lastly, a quick word on regulation, which has been on my mind these past few days.
Here is a fascinating video of Bill Gates saying his biggest mistake ever was being distracted by the Microsoft anti-trust lawsuit, instead of being focused on winning the battle for mobile.
He laughs that it cost him $400B that could have been sitting in Microsoft's valuation instead of Google's, had he not made that mistake.
You might think that avoiding being broken up should still have been a life-or-death priority for Microsoft. But here is Bill Gates saying it was a distraction.
Regulation is a distraction, when you are competing in a winner-takes-all game.
It’s Bill Gates saying it.
Of course AI builders are flattered to be speaking to the highest authorities. It makes anyone feel important. But it’s a waste of time and focus.
In that area, I think the opportunity is to follow Bill Gates's advice and disengage from the conversation with authorities. Countries dragging their startups into endless regulatory conversations are not actually helping them.