Microsoft has launched Phi-3 Mini, a new lightweight AI model featuring 3.8 billion parameters, marking the beginning of a series of smaller, more efficient artificial intelligence systems.
This release is the first of three planned models, with Phi-3 Small and Phi-3 Medium set to follow, featuring 7 billion and 14 billion parameters, respectively. The Phi-3 Mini is now available on Azure, Hugging Face and Ollama platforms.
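For those who want to try the model, the Hugging Face route is straightforward. The sketch below is a minimal example, not official usage: it assumes the repository name microsoft/Phi-3-mini-4k-instruct and a recent version of the transformers library, so check the model card for the exact identifier and requirements.

```python
# Minimal sketch: running Phi-3 Mini with Hugging Face Transformers.
# The repository name below is assumed; confirm it on the model card.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="microsoft/Phi-3-mini-4k-instruct",  # assumed model ID
    device_map="auto",        # use a GPU if one is available, otherwise CPU
    trust_remote_code=True,   # the Phi-3 repo may ship custom model code
)

# Generate a short completion from a plain text prompt.
output = generator(
    "Explain in two sentences what a small language model is.",
    max_new_tokens=96,
)
print(output[0]["generated_text"])
```

On a laptop, the Ollama route is typically a single command such as `ollama run phi3` (model tag assumed), which pulls a quantised build suited to local hardware.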
Designed to handle complex tasks at a smaller scale, Phi-3 Mini offers capabilities comparable to those of larger language models such as GPT-3.5 while requiring less computational power, making it well suited to personal devices such as smartphones and laptops.
Phi-3 Mini builds on the success of its predecessor, Phi-2, with stronger coding and reasoning abilities. Microsoft trained the model using a novel "curriculum" approach inspired by the way children learn, generating simplified content, akin to children's books, to teach the model foundational knowledge and reasoning.
Despite its strengths, Phi-3 Mini is not intended to compete with much larger models such as GPT-4 on breadth of general knowledge. Instead, it is optimised for specific applications with smaller data sets, an approach many businesses find appealing for its lower operational costs and stronger performance on specialised tasks.
Microsoft's foray into compact AI models with Phi-3 Mini positions the company as a strong competitor in an AI market with growing demand for efficient, specialised models.