
AI business models

Alessio De Filippis • Jan 30, 2023

The enormous processing capacity needed to run modern AI systems is shaping their technical development and their business models. OpenAI needs billions of dollars to keep ChatGPT running.


To a user firing up OpenAI’s chatbot hoping to generate automated haikus about the American Revolution or recipes for Spam casserole, the product’s basic interface and instantaneous answers can seem simple, even magical.

On the other side of those queries, though, an immense amount of work is going on. OpenAI’s ChatGPT chatbot requires far more computing power to answer a question than Google needs to respond to a web search. The startup’s current offering is good enough to inspire speculation about a world in which it and programs like it take over some disruptive proportion of the work that only humans can do today. But even if that’s where the economy is headed, getting there is beyond the average startup’s capacity.


Generative artificial intelligence products have many hurdles to overcome before fulfilling the wildest hopes and fears that they’ve inspired since OpenAI introduced ChatGPT in November. The service has suffered regular outages; according to OpenAI, the problem stems from the technical challenges of running any suddenly popular website, rather than from the computing power needed to run its AI models. ChatGPT can also give incorrect information, and in its current form it doesn’t have sufficient information to answer questions about recent events. These are thorny issues that it and its competitors will likely be grappling with for years.

But the challenge of computing power in particular is likely to shape the development of the field, and potentially the products themselves. As organizations such as OpenAI seek to turn a profit, they may have to start charging for services that are now free. Some companies could look for ways to make more targeted products with computing needs that aren’t as intensive. And the cost of computing is already influencing which entities will have influence over the AI products that seem set to shape the future of the internet.


On Jan. 23, Microsoft Corp. announced a multiyear investment in OpenAI. A person familiar with the deal, requesting anonymity to give non-public business details, put its value at $10 billion. Much of that value lies in Microsoft gaining the right to almost half of OpenAI’s financial returns in exchange for giving OpenAI access to computing power on Microsoft’s cloud network, Azure. Other general-use AI systems are similarly tied to one of the large cloud computing companies, even when the organizations building the models are independent.


Clement Delangue, chief executive officer of Hugging Face Inc., which runs a repository of open source AI models popular with startups, says the industry runs the risk of “cloud money laundering.” This term describes what happens when startups have access to enough money and subsidized computing power to sidestep assessments of whether it makes sense to use certain techniques to solve particular problems. This dynamic should be avoided, he says, “because it creates kind of unsustainable use cases for machine learning.” He says it’s important to have precise calculations of what generative products actually cost to run.

One thing is clear: These systems are monumentally expensive. The first step to building one is sucking up huge volumes of data—text, photos or art—from across the internet. After that, such data is used to train an AI model. For the biggest models, this process runs into the millions of dollars even before considering the cost of specialized engineers, such as language experts, says Rowan Curran, an analyst at Forrester Research. And the more data a system is built on, the more computing power it likely needs to answer a query. Each new question has to run through a model that includes tens of billions of parameters, or variables, that the AI system has learned through its training and retraining.
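A rough back-of-envelope sketch shows why every answer is costly. The reply length, the GPU throughput, and the common approximation of about two floating-point operations per parameter per generated token are all illustrative assumptions here, not OpenAI’s own figures:

```python
# Back-of-envelope estimate of the compute behind a single chatbot reply.
# The ~2 FLOPs-per-parameter-per-token rule of thumb for transformer
# inference is a rough approximation, not an official figure.

def flops_per_reply(params: float, tokens: int) -> float:
    """Rough forward-pass cost: ~2 FLOPs per parameter per generated token."""
    return 2 * params * tokens

params = 175e9   # a GPT-3-scale model: 175 billion parameters
tokens = 500     # assumed length of one chatbot answer

total = flops_per_reply(params, tokens)
print(f"~{total:.2e} FLOPs per reply")  # ~1.75e+14 FLOPs

# On a GPU sustaining an assumed 100 teraFLOP/s on this workload,
# that single reply occupies the card for well over a second:
gpu_flops_per_sec = 100e12
print(f"~{total / gpu_flops_per_sec:.2f} GPU-seconds per reply")
```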

The GPT-3 system that ChatGPT is built on uses 175 billion parameters, which expands its versatility while also making it especially power hungry. Many of the most popular models on Hugging Face have about 10 billion parameters, Delangue says. Stability AI’s Stable Diffusion, an open source rival to DALL-E, OpenAI’s image generator, has about 1 billion. But subsequent versions could be larger, says Tom Mason, Stability AI’s chief technical officer. “I think there’s a trend this year that the models are getting bigger,” he says. At the same time, he says that people in the field are all working on improving the efficiency of the underlying technology in ways that could offset this increase.
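To make those parameter counts concrete, here is a hedged sketch of what it takes just to hold each model’s weights in memory. The two-bytes-per-parameter figure and the 80 GiB accelerator size are assumptions about typical serving setups, and real deployments need considerably more memory than the raw weights:

```python
# Rough memory needed just to hold each model's weights, assuming
# 16-bit (2-byte) parameters. Parameter counts are from the article.

models = {
    "GPT-3": 175e9,
    "typical Hugging Face model": 10e9,
    "Stable Diffusion": 1e9,
}

BYTES_PER_PARAM = 2  # fp16; fp32 weights would double these figures

for name, params in models.items():
    gib = params * BYTES_PER_PARAM / 2**30
    print(f"{name:>28}: ~{gib:,.0f} GiB of weights")

# GPT-3 at ~326 GiB cannot fit on a single 80 GiB accelerator and must
# be sharded across many, while a 1-billion-parameter model fits
# comfortably on one consumer-grade GPU.
```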

In December, OpenAI Chief Executive Officer Sam Altman tweeted that ChatGPT’s average cost per query was “probably single-digit cents per chat.” A Morgan Stanley analysis put it at 2¢. That’s about seven times the average cost of a Google search query, according to Morgan Stanley, and it can quickly add up in products that operate on such a large scale.
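A quick sketch of the arithmetic makes the point. Only the 2¢ figure comes from the Morgan Stanley estimate; the query volume below is an assumption chosen purely for illustration:

```python
# How "single-digit cents per chat" compounds at consumer scale.
# The query volume is a made-up illustration, not a reported figure.

cost_per_query = 0.02          # Morgan Stanley's 2-cent estimate
queries_per_day = 10_000_000   # assumed daily query volume

daily = cost_per_query * queries_per_day
print(f"${daily:,.0f} per day")          # $200,000 per day
print(f"${daily * 365:,.0f} per year")   # $73,000,000 per year
```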

A spokesperson for OpenAI says the company is making progress on improving its efficiency, and that it continues to seek a balance between distributing its technology as widely as possible and finding a path to commercial viability.

If everyone uses massive, general models rather than smaller, more specific ones, the computing demands are probably not sustainable right now, Delangue says. But some companies are already looking for an opening by creating models to serve a specific market. “One way for startups to go is to identify their area of specialization and focus their training models only on the relevant data,” says Preeti Rathi, general partner at Icon Ventures Inc. in Palo Alto. Icon has invested in Aisera, a company creating a system specifically targeted at helping resolve customer service tickets.
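As a hedged illustration of that specialization route, the sketch below fine-tunes a compact open source classifier on nothing but a company’s own support tickets. The model choice, file name, and label count are placeholders for illustration, not a description of Aisera’s actual system:

```python
# A minimal sketch of the "small, specialized model" route: fine-tune a
# compact open source model on domain data only, using the Hugging Face
# stack the article mentions.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL = "distilbert-base-uncased"  # ~66M parameters vs. GPT-3's 175B

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL, num_labels=5)  # 5 assumed ticket categories

# Hypothetical CSV of support tickets with "text" and "label" columns.
dataset = load_dataset("csv", data_files="support_tickets.csv")["train"]
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, padding="max_length"),
    batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ticket-router", num_train_epochs=3),
    train_dataset=dataset,
)
trainer.train()  # routes tickets at a tiny fraction of a general model's cost
```

A model this size can be trained and served on a single GPU, which is exactly the economic appeal of the targeted approach.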

Particularly in the short term, other startups will build products using general models made by OpenAI, Alphabet Inc.’s Google or Stability AI, then customize them or add domain-specific data to target specific markets, says Navrina Singh, CEO at startup Credo AI, which is working on governance systems for new AI applications. Others are looking to design products that won’t rely on the biggest tech companies.
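A minimal sketch of that customization pattern is to prepend domain data to each query rather than retraining the general model. The knowledge base, prompt wording, and hand-picked topic lookup below are illustrative assumptions, though the endpoint matches OpenAI’s public completions API:

```python
# Hedged sketch: wrap a general-purpose model with domain-specific data.
# A real system would retrieve relevant documents automatically instead
# of the toy topic lookup used here.
import os
import requests

KNOWLEDGE_BASE = {  # stand-in for a company's own documents
    "refunds": "Refunds are processed within 5 business days "
               "via the original payment method.",
    "shipping": "Standard shipping takes 3-7 business days.",
}

def answer(question: str, topic: str) -> str:
    # Ground the general model in our documents instead of retraining it.
    prompt = (f"Answer using only this company policy:\n"
              f"{KNOWLEDGE_BASE[topic]}\n\n"
              f"Customer question: {question}\nAnswer:")
    resp = requests.post(
        "https://api.openai.com/v1/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"model": "text-davinci-003", "prompt": prompt,
              "max_tokens": 100},
        timeout=30,
    )
    return resp.json()["choices"][0]["text"].strip()

print(answer("How long until I get my money back?", "refunds"))
```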

Large cloud companies are eager to work with startups that are hungry for computing power, in part because they have the potential to become long-term customers. Amazon’s cloud unit in November unveiled a partnership with Stability AI. Google has a ChatGPT-like system called LaMDA that it hasn’t released publicly, and the Wall Street Journal has reported that it has held talks to invest $200 million in Cohere, an AI startup that creates language software developers can use for things such as chatbots. “There’s somewhat of a proxy war going on between the big cloud companies,” says Matt McIlwain, managing director at Seattle’s Madrona Venture Group LLC, which invests in AI startups. “They are really the only ones that can afford to build the really big ones with gazillions of parameters.”


After an extended period of technological innovation during which a handful of companies consolidated their dominance of the internet, some people see AI developing in a way that will only strengthen their grip. There have been calls for regulation of the emerging field, and some countries and universities are setting up publicly owned supercomputers as alternatives. BLOOM, the largest open source rival to GPT-3, was trained on a French public supercomputer called Jean Zay. Delangue has said that a French research tax credit should be expanded to cover the costs of computers for machine learning research, and he has urged other countries to take similar action.

The tech titans have positioned themselves well to avoid being disrupted, says Forrester’s Curran. “We’re in the beginning stages of what could possibly be a huge explosion of ideas and creativity around business creation,” he says. “But the big players are all doing lots of work here. So the chance that they would be totally blindsided by a startup isn’t huge.”

***


Alessio De Filippis, Founder and Chief Executive Officer @ Libentium.


Founder and Partner of Libentium, developing projects mainly focused on Marketing and Sales innovations for different types of organizations (Multinationals, SMEs, startups).


Cross-industry experience: Media, Telecommunications, Oil & Gas, Leisure & Travel, Biotech, ICT.

