NVIDIA is bringing a variety of rendering, simulation, and generative AI innovations to SIGGRAPH 2024.
The research centres on physics-based simulation, increasingly lifelike AI-powered rendering, and diffusion models for visual generative AI.
These projects will feed into tools that businesses and developers can use to build complex virtual environments, characters, and objects.
Diffusion models can help artists, designers, and other creators quickly generate visuals for storyboards or production.
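To make the idea concrete, here is a minimal NumPy sketch of the forward (noising) process that underlies diffusion models: data is gradually corrupted with Gaussian noise over many steps, and the generative model is trained to reverse that corruption. The schedule values and names below are illustrative assumptions, not NVIDIA's published work.

```python
import numpy as np

# Forward (noising) process of a diffusion model, in closed form.
# Illustrative toy sketch: schedule values are common textbook defaults.
rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 0.02, T)        # linear noise schedule
alphas_cumprod = np.cumprod(1.0 - betas)  # cumulative signal retention

def q_sample(x0, t):
    """Sample x_t ~ q(x_t | x_0): a mix of scaled data and Gaussian noise."""
    a_bar = alphas_cumprod[t]
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(a_bar) * x0 + np.sqrt(1.0 - a_bar) * noise

x0 = rng.standard_normal((8, 8))   # stand-in for an image
x_mid = q_sample(x0, 500)          # partially noised
x_end = q_sample(x0, T - 1)        # nearly pure noise
print(float(alphas_cumprod[-1]))   # tiny: almost no signal remains at t = T-1
```

A trained diffusion model learns the reverse of this process, denoising step by step from pure noise back to a plausible image, which is what lets creators generate storyboard visuals from scratch.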
In a separate set of papers, NVIDIA presents new methods for simulating diffraction effects, which are used in radar modelling to train self-driving cars.
Beyond visible light, the model could also simulate longer wavelengths such as radio waves, radar, and sound.
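A small example shows why wavelength matters for diffraction. The textbook single-slit Fraunhofer formula puts the first diffraction minimum at sin(θ) = λ/a, so the angular spread grows with wavelength: automotive radar bends around obstacles far more than visible light does. This is the standard physics formula, not NVIDIA's method; the aperture and frequency values are illustrative.

```python
import numpy as np

# Single-slit diffraction: first minimum at sin(theta) = wavelength / aperture.
# Longer wavelengths spread over much wider angles.
def first_minimum_angle(wavelength, aperture):
    """Angle (radians) of the first diffraction minimum."""
    return np.arcsin(wavelength / aperture)

aperture = 0.10      # 10 cm opening, in metres (illustrative)
light = 550e-9       # green light, ~550 nm
radar = 0.0039       # ~77 GHz automotive radar, wavelength ~3.9 mm

print(first_minimum_angle(light, aperture))  # microradians: light barely spreads
print(first_minimum_angle(radar, aperture))  # ~0.039 rad: radar visibly bends
```

That several-thousand-fold difference in angular spread is why radar simulation cannot simply reuse visible-light rendering and needs diffraction modelled explicitly.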
This work also improves the performance of denoising techniques and reduces visual artefacts in the final output.
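For context, here is a toy NumPy illustration of the denoising problem in Monte Carlo rendering: a low-sample render is very noisy, and a denoising filter trades that noise for smoothness. The box filter below is a minimal stand-in; production renderers use AI denoisers, and nothing here reflects NVIDIA's actual technique.

```python
import numpy as np

# A 1-sample-per-pixel "render" of a flat grey image, plus a naive denoiser.
rng = np.random.default_rng(1)
true_image = np.full((32, 32), 0.5)                     # flat ground truth
noisy = true_image + 0.2 * rng.standard_normal((32, 32))

def box_denoise(img):
    """Average each pixel with its 8 neighbours (edge-padded 3x3 box filter)."""
    p = np.pad(img, 1, mode="edge")
    h, w = p.shape
    return sum(p[1 + dy : h - 1 + dy or None, 1 + dx : w - 1 + dx or None]
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0

smooth = box_denoise(noisy)
print(noisy.std(), smooth.std())  # denoised image has much lower variance
```

Averaging reduces pixel variance but blurs detail; the appeal of learned denoisers is recovering a clean image from few samples without that loss of sharpness.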