Because the Phi-3 models are compact and open source, they can be modified, run on less powerful hardware, and used to build applications that execute locally
Intel is collaborating closely with Microsoft to ensure that the new Phi-3 models are actively supported on Intel hardware
Intel also contributed to DeepSpeed, Microsoft's easy-to-use deep learning optimisation suite, and extended its automatic tensor parallelism support for Phi-3 and other Hugging Face models
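To make the tensor parallelism point concrete, here is a minimal sketch of sharding a Phi-3 checkpoint for inference with DeepSpeed's automatic tensor parallelism; the checkpoint ID is the public Hugging Face one, while the two-device split and prompt are illustrative assumptions, not details from the announcement:

```python
# Launch with: deepspeed --num_gpus 2 this_script.py
import deepspeed
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-4k-instruct"  # public Hub checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)

# DeepSpeed infers the parallel layout automatically and shards the
# model across devices (the "automatic tensor parallelism" noted above)
engine = deepspeed.init_inference(
    model,
    tensor_parallel={"tp_size": 2},  # split across two devices (assumption)
    dtype=torch.float16,
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(engine.module.device)
outputs = engine.module.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```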
The Phi-3 models’ compact size makes them well suited to on-device inference, enabling lightweight, locally run applications on AI PCs and edge devices
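As a rough illustration of on-device use, the smallest variant can run entirely on a local CPU through Hugging Face transformers; the checkpoint ID is the public Hub one, and the prompt and generation settings are assumptions for the sketch:

```python
from transformers import pipeline

# Run the smallest Phi-3 variant fully on the local CPU
generator = pipeline(
    "text-generation",
    model="microsoft/Phi-3-mini-4k-instruct",
    device_map="cpu",
    torch_dtype="auto",
)

messages = [{"role": "user", "content": "Why do small models suit edge devices?"}]
result = generator(messages, max_new_tokens=64)
# The chat-style pipeline returns the full message list; the last entry
# is the assistant's reply
print(result[0]["generated_text"][-1]["content"])
```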
Finally, customers can get started quickly and easily with Phi-3-mini and Phi-3-medium through Azure AI
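A hedged sketch of what getting started through Azure AI might look like with the azure-ai-inference Python SDK; the endpoint URL and key are placeholders for values from your own deployment, and the prompt is illustrative:

```python
import os
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import UserMessage
from azure.core.credentials import AzureKeyCredential

# Endpoint and key come from your own Azure AI deployment (placeholders here)
client = ChatCompletionsClient(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["AZURE_INFERENCE_KEY"]),
)

response = client.complete(
    messages=[UserMessage(content="Give me three facts about small language models.")],
    max_tokens=128,
)
print(response.choices[0].message.content)
```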
Phi-3 models outperform models of the same size and the next size up across language, reasoning, coding, and math benchmarks, making them the most capable and cost-effective small language models (SLMs) available
Phi-3-vision is a 4.2B parameter multimodal model that combines language and vision capabilities
Phi-3-mini, a 3.8B parameter language model, is available in two context lengths (4K and 128K)
Phi-3-small, a 7B parameter language model, is available in two context lengths (8K and 128K)
Phi-3-medium, a 14B parameter language model, is available in two context lengths (4K and 128K)
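For reference, each size and context-length pairing maps to its own instruct checkpoint on the Hugging Face Hub; the IDs below are believed correct at the time of the announcement, but verify them on the Hub before use:

```python
# Instruct checkpoints for each (size, context length) pairing
PHI3_CHECKPOINTS = {
    ("mini",   "4k"):   "microsoft/Phi-3-mini-4k-instruct",
    ("mini",   "128k"): "microsoft/Phi-3-mini-128k-instruct",
    ("small",  "8k"):   "microsoft/Phi-3-small-8k-instruct",
    ("small",  "128k"): "microsoft/Phi-3-small-128k-instruct",
    ("medium", "4k"):   "microsoft/Phi-3-medium-4k-instruct",
    ("medium", "128k"): "microsoft/Phi-3-medium-128k-instruct",
}

# e.g. pick the long-context mini variant
model_id = PHI3_CHECKPOINTS[("mini", "128k")]
```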
The Phi-3 models can also be deployed anywhere and are optimised for inference on NVIDIA GPUs and Intel accelerators
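One way to target Intel accelerators is the optimum-intel OpenVINO integration; this is a minimal sketch of that library's standard export flow applied to Phi-3 as an example, not the specific optimisation path the announcement refers to:

```python
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "microsoft/Phi-3-mini-4k-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# export=True converts the PyTorch checkpoint to OpenVINO IR on the fly,
# so inference then runs on Intel CPUs, GPUs, and NPUs through OpenVINO
model = OVModelForCausalLM.from_pretrained(model_id, export=True)

inputs = tokenizer("Edge inference matters because", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=30)[0]))
```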
Phi-3-vision, the first multimodal model in the Phi-3 family, can reason over real-world imagery and extract and reason about text in photographs
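A hedged sketch of that text-from-image capability via transformers follows; the checkpoint ID and `<|image_1|>` prompt format come from the model's public Hub card, while the image URL is a placeholder:

```python
import requests
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "microsoft/Phi-3-vision-128k-instruct"
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, trust_remote_code=True, torch_dtype="auto"
)

# Placeholder image URL; substitute any photograph containing text
image = Image.open(requests.get("https://example.com/receipt.jpg", stream=True).raw)
prompt = "<|user|>\n<|image_1|>\nWhat text appears in this image?<|end|>\n<|assistant|>\n"

inputs = processor(prompt, [image], return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(output[0], skip_special_tokens=True))
```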
Phi-3-small and Phi-3-medium outperform both language models of the same size and considerably larger ones