The Future of Chips: Silicon Valley's Genius AI Assistance

The work shows how businesses in highly specialized industries can use internal data to train large language models (LLMs) that serve as productivity-boosting assistants.

When examined under a microscope, a cutting-edge processor such as the NVIDIA H100 Tensor Core GPU (above) resembles a carefully planned city, built from tens of billions of transistors connected by streets 10,000 times thinner than a human hair.

A Vast Perspective for LLMs

The paper describes how NVIDIA engineers trained a proprietary LLM, named ChipNeMo, on the company's internal data for internal use.

The LLM was used to generate and optimize software and to assist human designers.
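The paper itself details NVIDIA's training pipeline; as a loose illustration of the data-preparation step that such domain adaptation typically requires, the sketch below splits raw internal documents into fixed-length overlapping windows, the usual input unit for continued pretraining of a causal language model. All document contents, window sizes, and names here are hypothetical, not taken from the ChipNeMo work.

```python
def chunk_corpus(documents, window=16, stride=8):
    """Split whitespace-tokenized documents into overlapping
    fixed-length windows for continued (domain-adaptive)
    pretraining of a causal language model.

    Real pipelines operate on subword tokens rather than words;
    plain whitespace splitting is used here to keep the sketch
    dependency-free.
    """
    chunks = []
    for text in documents:
        tokens = text.split()
        # Slide a window of `window` tokens forward by `stride` each step.
        for start in range(0, max(len(tokens) - window + 1, 1), stride):
            chunks.append(" ".join(tokens[start:start + window]))
    return chunks

# Hypothetical internal document (60 tokens of repeated placeholder text).
corpus = ["internal design spec " * 20]
chunks = chunk_corpus(corpus)
print(len(chunks))  # → 6 windows of 16 tokens each
```

Overlapping windows (stride smaller than the window) ensure that sentences spanning a chunk boundary still appear intact in at least one training example.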

During early testing, a prototype chatbot that answers questions about GPU architecture and design helped many engineers quickly locate technical documents.
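The article does not describe how the chatbot retrieves documents; as a minimal, self-contained sketch of the general idea of keyword-based document lookup, the snippet below scores documents with a simple TF-IDF sum. The document names and contents are invented for illustration and bear no relation to NVIDIA's actual corpus or implementation.

```python
import math
from collections import Counter

def tokenize(text):
    return [w.lower().strip(".,") for w in text.split()]

def build_index(docs):
    """Map each term to the documents containing it, with term counts."""
    index = {}
    for doc_id, text in docs.items():
        for term, count in Counter(tokenize(text)).items():
            index.setdefault(term, {})[doc_id] = count
    return index

def search(query, index, n_docs):
    """Rank documents by the summed TF-IDF of the query terms."""
    scores = Counter()
    for term in tokenize(query):
        postings = index.get(term, {})
        if not postings:
            continue
        # Rarer terms get a larger inverse-document-frequency weight.
        idf = math.log(n_docs / len(postings))
        for doc_id, tf in postings.items():
            scores[doc_id] += tf * idf
    return [doc_id for doc_id, _ in scores.most_common()]

# Hypothetical internal documents (illustrative only).
docs = {
    "mem_arch.txt": "GPU memory hierarchy and cache architecture notes",
    "sm_design.txt": "streaming multiprocessor design and warp scheduling",
    "power.txt": "power delivery and thermal design guidelines",
}
index = build_index(docs)
print(search("warp scheduling design", index, len(docs)))
# → ['sm_design.txt', 'power.txt']
```

Production assistants of this kind typically pair dense (embedding-based) retrieval with an LLM that summarizes the retrieved passages, but the ranking principle is the same: surface the documents most specific to the query.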

Personalized AI Models Utilizing NVIDIA NeMo