Mistral AI

Mistral AI recently released Mixtral 8x7B, an open-source model its creators describe as a sparse mixture-of-experts (SMoE) network. It has 46.7 billion total parameters, but sparse routing activates only about 12.9 billion of them per token, so it matches or outperforms proprietary models such as GPT-3.5 on most benchmarks while keeping inference much faster. Mixtral handles a 32k-token context window, supports five languages (English, French, Italian, German, and Spanish), and performs strongly at code generation; an instruction-tuned variant is also available for instruction-following tasks.
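
To make the "sparse" part concrete, the sketch below shows the general idea behind top-2 expert routing: a small router scores each token, only the two highest-scoring expert MLPs run for that token, and their outputs are combined by the routing weights. This is an illustrative PyTorch toy with made-up dimensions, not Mistral's implementation; the 8-expert, 2-active layout mirrors Mixtral's published design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoELayer(nn.Module):
    """Toy sparse mixture-of-experts layer (top-2 routing over 8 experts)."""

    def __init__(self, dim=512, hidden=2048, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(dim, num_experts)  # gating network
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.SiLU(), nn.Linear(hidden, dim))
            for _ in range(num_experts)
        )

    def forward(self, x):                                  # x: (tokens, dim)
        scores = self.router(x)                            # (tokens, num_experts)
        weights, chosen = scores.topk(self.top_k, dim=-1)  # 2 experts per token
        weights = F.softmax(weights, dim=-1)               # normalize over the chosen 2
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = chosen[:, slot] == e                # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

# Only 2 of the 8 expert MLPs run for any given token, which is why the
# active parameter count per token is a fraction of the total.
layer = SparseMoELayer()
tokens = torch.randn(4, 512)
print(layer(tokens).shape)  # torch.Size([4, 512])
```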

Developers can access the Mistral and Mixtral models through Mistral’s API or by deploying them themselves with the open-source vLLM toolkit or another LLM serving framework. The model weights are available on Hugging Face.
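
As a sketch of the self-deployment path, the snippet below loads the Mixtral instruct weights from Hugging Face with vLLM and runs a single generation. It assumes vLLM is installed (`pip install vllm`) and that you have enough GPU memory for the model; the tensor-parallel setting is illustrative and should be adjusted to your hardware.

```python
from vllm import LLM, SamplingParams

# Load Mixtral from its Hugging Face repo; sharding across 2 GPUs here
# is an example setting, not a requirement of the model.
llm = LLM(
    model="mistralai/Mixtral-8x7B-Instruct-v0.1",
    tensor_parallel_size=2,
)

params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["Write a haiku about sparse experts."], params)
print(outputs[0].outputs[0].text)
```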