ChatGPT might well be the most famous, and potentially valuable, algorithm of the moment, but the artificial intelligence techniques used by OpenAI to provide its smarts are neither unique nor secret. Competing projects and open-source clones may soon make ChatGPT-style bots available for anyone to copy and reuse.
Stability AI, a startup that has already developed and open-sourced advanced image-generation technology, is working on an open competitor to ChatGPT. “We are a few months from release,” says Emad Mostaque, Stability’s CEO. A number of competing startups, including Anthropic, Cohere, and AI21, are working on proprietary chatbots similar to OpenAI’s bot.
The impending flood of sophisticated chatbots will make the technology more abundant and visible to consumers, as well as more accessible to AI businesses, developers, and researchers. That could accelerate the rush to make money with AI tools that generate images, code, and text.
Established companies like Microsoft and Slack are incorporating ChatGPT into their products, and many startups are hustling to build on top of a new ChatGPT API for developers. But wider availability of the technology may also complicate efforts to predict and mitigate the risks that come with it.
ChatGPT’s beguiling ability to provide convincing answers to a wide range of queries also leads it to sometimes make up facts or adopt problematic personas. And it can assist with malicious tasks such as producing malware code or material for spam and disinformation campaigns.
As a result, some researchers have called for deployment of ChatGPT-like systems to be slowed while the risks are assessed. “There is no need to stop research, but we certainly could regulate widespread deployment,” says Gary Marcus, an AI expert who has sought to draw attention to risks such as disinformation generated by AI. “We might, for example, ask for studies on 100,000 people before releasing these technologies to 100 million people.”
Wider availability of ChatGPT-style systems, and the release of open-source versions, would make it more difficult to limit research or wider deployment. And the competition between companies large and small to adopt or match ChatGPT suggests little appetite for slowing down; instead, it appears to incentivize proliferation of the technology.
Last week, LLaMA, an AI model developed by Meta—and similar to the one at the core of ChatGPT—was leaked online after being shared with some academic researchers. The system could be used as a building block in the creation of a chatbot, and its release sparked worry among those who fear that the AI systems known as large language models, and chatbots built on them like ChatGPT, will be used to generate misinformation or automate cybersecurity breaches. Some experts argue that such risks may be overblown, and others suggest that making the technology more transparent will in fact help others guard against misuses.
Meta declined to answer questions about the leak, but company spokesperson Ashley Gabriel provided a statement saying, “While the model is not accessible to all, and some have tried to circumvent the approval process, we believe the current release strategy allows us to balance responsibility and openness.”