Last year, OpenAI held a splashy press event in San Francisco during which the company announced a bevy of new products and tools, including the ill-fated App Store-like GPT Store.
This year will be a quieter affair, however. On Monday, OpenAI said it's changing the format of its DevDay conference from a tentpole event into a series of on-the-road developer engagement sessions. The company also confirmed that it wouldn't release its next major flagship model during DevDay, instead focusing on updates to its API and developer services.
"We're not planning to announce our next model at DevDay," an OpenAI spokesperson told TechCrunch. "We'll be focused more on educating developers about what's available and showcasing dev community stories."
OpenAI's DevDay events this year will take place in San Francisco on October 1, London on October 30 and Singapore on November 1. All will feature workshops, breakout sessions, demos with OpenAI product and engineering staff, and developer spotlights. Registration will cost $450, with applications closing on August 15.
OpenAI has in recent months taken more incremental steps than monumental leaps in generative AI, opting to hone and fine-tune its tools as it trains the successor to its current leading models, GPT-4o and GPT-4o mini. The company has refined approaches to improving its models' overall performance and preventing those models from going off the rails as often as they previously did, but OpenAI, at least according to some benchmarks, has lost its technical lead in the generative AI race.
One of the reasons could be the increasing challenge of finding high-quality training data.
OpenAI's models, like most generative AI models, are trained on massive collections of web data, much of which creators are choosing to gate over fears that their work will be plagiarized or that they won't receive credit or compensation. More than 35% of the world's top 1,000 websites now block OpenAI's web crawler, according to data from Originality.AI. And around 25% of data from "high-quality" sources has been restricted from the major data sets used to train AI models, a study by MIT's Data Provenance Initiative found.
Should the current access-blocking trend continue, the research group Epoch AI predicts that developers will run out of data to train generative AI models between 2026 and 2032. That, along with the fear of copyright lawsuits, has forced OpenAI to ink costly licensing agreements with publishers and various data brokers.
OpenAI is said to have developed a reasoning technique that could improve its models' responses on certain questions, particularly math questions, and the company's CTO, Mira Murati, has promised that a future OpenAI model will have "Ph.D.-level" intelligence. That's promising a lot, and there's high pressure to deliver. OpenAI is said to be hemorrhaging billions of dollars training its models and hiring top-paid research staff.
Time will tell whether OpenAI can deliver while dealing with the many, many controversies that plague it.