Friday, April 26, 2024

An open-source ChatGPT rival was just launched by the Stable Diffusion team

The newest challenger to OpenAI’s ChatGPT comes from the company behind the popular AI image generator Stable Diffusion. Stability AI developed the open-source chatbot, known as StableLM, to democratize access to advanced language models.

Stability AI recently announced the alpha version of StableLM, noting that it is smaller and more efficient than most competing models. StableLM uses just three billion to seven billion parameters, roughly 2% to 4% of the size of ChatGPT’s 175-billion-parameter model.

Just as Stable Diffusion is a more accessible image generator that third-party developers can extend, StableLM aims to be the same kind of free, open-source solution for AI chat, available to all.

Thanks to training on a new, experimental data set built on EleutherAI’s “The Pile,” StableLM can carry on conversations and write code with respectable performance. Stability AI notes that this data set contains 1.5 trillion tokens, roughly three times the size of The Pile itself. ChatGPT’s underlying model was trained on a similarly broad corpus but underwent further refinement afterward, including reinforcement learning from human feedback to help reduce flawed results. ChatGPT has advanced considerably since its public release, and it is widely regarded as the leader in AI chat.

A highly efficient AI model is critical to Stability AI, since it wants StableLM to run on lower-cost systems and less powerful GPUs. You can install and run the alpha version of StableLM today. The instructions are in the GitHub repository, along with a notebook detailing how to run it on a computer with limited GPU capabilities.
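
For readers who want to experiment locally, here is a minimal sketch of loading a StableLM alpha checkpoint with the Hugging Face transformers library. The checkpoint name, half-precision setting, and generation parameters are assumptions based on Stability AI’s public Hugging Face releases, not instructions copied from the StableLM repository; consult the official repo and notebook for the supported setup.

```python
# Minimal sketch: run a StableLM alpha checkpoint with Hugging Face transformers.
# The model name and settings below are assumptions, not the official instructions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "stabilityai/stablelm-tuned-alpha-7b"  # assumed checkpoint name

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,  # half precision to fit on smaller GPUs
    device_map="auto",          # spread layers across available devices (needs accelerate)
)

prompt = "Write an apology letter for breaking a friend's phone."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```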

The easiest way to try StableLM is the Hugging Face demo page. Since the model just launched and demand will likely be high, response times could be slow, and because this is an alpha release, the results won’t be as good as those of the final release.

For example, when I asked StableLM to help me write an apology letter for breaking someone’s phone, it told me I did what I was supposed to do. The AI somehow misunderstood and thought I gave a gift rather than damaged a phone.

Stability AI includes a disclaimer about the results, since StableLM is a pretrained large language model with no additional fine-tuning. It doesn’t use reinforcement learning the way ChatGPT does, so the responses “might be of varying quality and might potentially include offensive language and views.”

It’s unknown whether the upgraded StableLM models on the way can compete with ChatGPT. At the moment, StableLM is clearly a work in progress. The same was true of another open-source challenger, ColossalChat.

This isn’t the end of the story, however. Stability AI says larger models with 15 billion, 30 billion, and 65 billion parameters are in progress and should help refine the results, and a 175-billion-parameter model is planned for the future. Given the limited model sizes available now, StableLM is off to a good start.

The open-source nature and lightweight implementation of the StableLM alpha are meant to let developers start building applications right away. There is enough potential for growth and improvement that this new AI chatbot is worth keeping an eye on.
