Silicon Valley generative AI startup Inflection AI has raised $1.3 billion to compete with OpenAI’s ChatGPT. Its AI personal assistant, Pi, will be available both directly to the public and via an API. Pi is a “kind and supportive companion,” created to provide “fast, relevant and helpful information and advice,” according to the company. The app launched in May.
There’s no official figure for how much OpenAI has raised in total, but Microsoft has reportedly invested as much as $10 billion, and over the last few years OpenAI has raised more than $600 million on top of that. Anthropic, a rival to both OpenAI and Inflection, has raised roughly the same amount as Inflection.
Where is the bulk of the money going?
It’s compute. ChatGPT was trained and now runs on Microsoft’s Azure cloud, and Anthropic trained and runs its Claude LLM in Google’s cloud. In contrast, Inflection plans to build its own supercomputer to train and deploy Pi.
Inflection’s supercomputer produced the top-ranked entry in last week’s MLPerf benchmark for training GPT-3, and the machine is still under construction. Once completed, the installation will include 22,000 Nvidia H100 GPUs, making it the biggest AI cluster and one of the most powerful computing clusters in the world. All this for chatbots. Even Nvidia’s own AI supercomputer Eos, a monster 4,600-GPU machine currently being brought up, will eventually be dwarfed by Inflection’s cluster.
Why would an AI software company invest in hardware?
The simple answer: bigger is better. LLMs are still growing while compute remains scarce. For some LLMs intended for scientific use, training is the main computational cost, but when models are deployed at the scale consumer applications demand, inference compute grows without bound. Foundation models and fine-tuning are expected to bring training costs down to a manageable level for consumer LLMs, but there is no similar fix for inference. Massive AI computing machines will be needed.
A back-of-the-envelope calculation suggests that 22,000 H100 GPUs could cost approximately $800 million, the bulk of Inflection’s latest funding round. That figure does not include the rest of the infrastructure: real estate, energy, and everything else that makes up the total cost of ownership (TCO) of the hardware. If $800 million sounds like a lot, consider that a recent analysis from SemiAnalysis estimates ChatGPT costs about $700,000 per day to operate. At that rate, it would take approximately three years to burn through $800 million.
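The arithmetic behind these figures can be sketched in a few lines. Note that the per-GPU price below is simply implied by the article’s round numbers, not a quoted Nvidia list price:

```python
# Back-of-the-envelope sketch of the figures above.
GPU_COUNT = 22_000
CLUSTER_COST_USD = 800e6  # estimated cost of the GPUs alone

# Implied price per H100 (derived, not an official price)
implied_price_per_gpu = CLUSTER_COST_USD / GPU_COUNT
print(f"Implied price per H100: ${implied_price_per_gpu:,.0f}")  # ~$36,364

# SemiAnalysis estimate: ChatGPT costs about $700,000 per day to run.
OPEX_PER_DAY_USD = 700_000
runway_years = CLUSTER_COST_USD / OPEX_PER_DAY_USD / 365
print(f"Years to spend $800M at that rate: {runway_years:.1f}")  # ~3.1
```

At roughly $36,000 per GPU and a $700,000-per-day operating burn, the numbers line up with the article’s “approximately three years” figure.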
We don’t know the exact size of Inflection-1, the LLM that Pi is built on, but Inflection has said it is in a similar class to GPT-3.5, the successor to the GPT-3 model (175 billion parameters) that OpenAI’s ChatGPT is built upon. Inflection also places Meta’s Llama (60 billion parameters) and Google’s PaLM (540 billion parameters) in the same compute class as Inflection-1, even though they differ greatly in size and scope (PaLM can write code, for example, which Inflection-1 was not designed to do).
The more capabilities an LLM has (multiple languages, code generation, reasoning, mathematical understanding), and the more accurate it is, the more compute it demands. The organization with the most powerful LLM may “win,” but the organization deploying the largest LLMs at the greatest scale will certainly be the one with the most computing power available. That’s why a 22,000-GPU cluster owned and operated by a single firm is so important.
It’s clear that the cost of deploying generative AI today lies mostly in the compute required: running an LLM at consumer scale takes a substantial amount of money.
As we explore the capabilities and potential of LLMs, the importance of large computing clusters like Inflection’s will only grow. Unless there is a shift to less expensive hardware from AMD, Intel, or any number of startups, the price of computing is unlikely to fall. That’s why we’ll probably keep seeing billions of dollars spent on chatbots.