How ONNX Runtime Is Changing Microsoft AI with Intel

Extending AI inference from cloud servers to Windows PCs makes applications more responsive

These advancements continue to power Office features such as the neural grammar checker, ink form identification, and text prediction

ONNX Runtime enables machine learning models to scale across various hardware configurations and operating systems

ONNX Runtime is continuously refined by Microsoft, Intel, and the open-source community

ONNX Runtime Mobile runs models on mobile devices using the same API as cloud-based inferencing

ONNX Runtime Inference leverages hardware accelerators and runs in web browsers, on cloud servers, and on edge and mobile devices

Delivering an optimal on-device AI experience requires ongoing hardware and software optimization