AI
Recursive Superintelligence Inc., a startup that hopes to develop self-improving artificial intelligence models, launched today with $650 million in funding.
Alphabet Inc.’s GV fund and Greycroft led the round. They were joined by Nvidia Corp. and Advanced Micro Devices Inc.’s venture capital arm. Recursive says the investment values it at $4.65 billion.
The company was founded earlier this year by former Salesforce Inc. Chief Scientist Richard Socher, who previously launched You.com Inc., a provider of application programming interfaces that AI models use to perform online research. You.com received a $1.5 billion valuation last year.
According to the New York Times, Recursive’s initial team comprised Socher and six other staffers. The company now has more than 25 employees in San Francisco and London. They’re working to build so-called recursive self-improving superintelligence, or an AI model that can discover new knowledge similarly to human scientists.
Current neural networks can’t perform basic research in a fully autonomous manner. As a result, Recursive’s first priority is to build an AI model that can improve its own code base. The company hopes that such a model would be capable of discovering how to develop an AI that is as effective as humans at scientific tasks.
The company’s AI will search for ways to improve itself by carrying out simulations “in an open-ended process of automated scientific discovery.” Recursive says that the model will develop experiment ideas, test them and then validate the results. The company will develop guardrails to prevent the software from producing risky output.
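Recursive hasn't published implementation details, but the propose-test-validate cycle it describes can be illustrated with a minimal sketch. Everything below is hypothetical: the function names, the random scoring stand-in and the always-pass guardrail are assumptions for illustration, not Recursive's actual design.

```python
import random

def propose_experiment(history):
    """Hypothetical: generate a candidate self-improvement idea."""
    return {"id": len(history), "change": f"variant-{random.randint(0, 999)}"}

def passes_guardrails(experiment):
    """Hypothetical safety filter; a real system would inspect the proposal."""
    return True

def run_simulation(experiment):
    """Hypothetical: score the candidate in a sandboxed simulation."""
    return random.random()  # stand-in for a measured benchmark score

def discovery_loop(iterations=5):
    """Open-ended loop: propose, screen, test, record, keep the best."""
    history, best = [], None
    for _ in range(iterations):
        exp = propose_experiment(history)
        if not passes_guardrails(exp):
            continue  # risky proposals are discarded before they run
        score = run_simulation(exp)   # test the idea
        history.append((exp, score))  # validate and record the result
        if best is None or score > best[1]:
            best = (exp, score)
    return best

best = discovery_loop()
```

The key structural point is that the loop is open-ended: each iteration can draw on the recorded history of past experiments, so discoveries compound over time.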
According to Recursive, the experiments carried out by its AI model will focus on improving not only its code but also its harness. A harness is a set of auxiliary programs that AI providers use to enhance the output of their algorithms. Furthermore, Recursive’s system will search for ways to improve its training and inference infrastructure.
OpenAI Group PBC is already using its recently released GPT-5.5 model to that end. The company splits each inference request into so-called chunks and spreads those chunks across multiple graphics card cores to speed up processing. Until recently, the number of chunks involved in the workflow was fixed. According to OpenAI, GPT-5.5 developed a more efficient parallelization method that boosted token generation speeds by more than 20%.
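The shift from a fixed chunk count to an adaptive one can be sketched in a few lines. This is not OpenAI's method, which hasn't been disclosed; the splitting scheme and the toy cost model (parallel time equals the largest chunk plus a per-chunk overhead) are illustrative assumptions.

```python
def split_into_chunks(tokens, n_chunks):
    """Divide a request into n roughly equal chunks for parallel cores."""
    size = -(-len(tokens) // n_chunks)  # ceiling division
    return [tokens[i:i + size] for i in range(0, len(tokens), size)]

def simulated_latency(chunks, per_chunk_overhead=1.0):
    """Toy cost model: parallel time = largest chunk + scheduling overhead."""
    return max(len(c) for c in chunks) + per_chunk_overhead * len(chunks)

def best_chunk_count(tokens, max_chunks=8):
    """Instead of a fixed chunk count, search for the fastest split."""
    return min(range(1, max_chunks + 1),
               key=lambda n: simulated_latency(split_into_chunks(tokens, n)))

n = best_chunk_count(list(range(100)))
```

The trade-off the search captures is real even if the numbers are not: more chunks shrink the largest unit of work but add scheduling overhead, so the optimal count depends on the request rather than being a constant.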
Some companies are using AI to enhance not only their inference workflows but also the underlying hardware. Recursive investor Alphabet, for example, designs its TPU accelerators with the help of a neural network trained on chip blueprints. The creators of the system recently launched a startup called Ricursive Intelligence Inc. to make similar technology available for other companies.
Recursive didn’t disclose what machine learning methods will power its self-improving AI. Rival Ineffable Intelligence Ltd., which also hopes to develop models that can discover new knowledge, is using reinforcement learning, a training technique commonly used in large language model projects.
“We will start with AI research itself but eventually hope to expand its aperture to physics, chemistry and especially pre-clinical biology,” Socher wrote in a post on X. “AI will be to biology what calculus was to physics — a new language and way of thinking that deals with complex systems and helps us understand and engineer them better.”