
Nvidia's latest AI research will make it easier to build 3D objects out of 2D images

Jun 21, 2022 Hi-network.com

Nvidia on Tuesday is debuting new research into 3D rendering that could one day make it easier for graphic artists, architects, and other creators to quickly build 3D models out of 2D images. It's the sort of research that will help Nvidia and other major companies lay the foundations for an easily accessible metaverse. 

Rendering 3D worlds is a complicated technical challenge. One technique known as inverse rendering involves reconstructing 3D scenes from a handful of 2D images. It uses AI to approximate how light behaves in the real world. 
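Conceptually, inverse rendering can be framed as an optimization loop: guess scene parameters, render them forward into an image, compare against the observed photographs, and adjust. The toy sketch below illustrates that loop with a single made-up parameter (overall brightness); real systems such as NeRF or 3D MoMa optimize millions of parameters with a far more sophisticated renderer, so everything here is an illustrative assumption, not Nvidia's method.

```python
import numpy as np

# Toy "scene": one unknown parameter (brightness) recovered from an
# observed image. The forward renderer and the scene are deliberately
# simplified stand-ins for a real differentiable renderer.

rng = np.random.default_rng(0)
base = rng.random((8, 8))          # fixed scene content (known here for simplicity)
true_brightness = 2.5
observed = true_brightness * base  # the "photograph" we start from

def render(brightness):
    """Forward renderer: scene parameters -> image."""
    return brightness * base

brightness = 1.0                   # initial guess
lr = 0.1
for _ in range(200):
    residual = render(brightness) - observed
    # Gradient of the mean-squared image error w.r.t. the brightness parameter.
    grad = 2.0 * np.mean(residual * base)
    brightness -= lr * grad

print(round(brightness, 3))        # converges toward the true value, 2.5
```

The same guess-render-compare loop, scaled up, is what makes inverse rendering so computationally expensive and why AI acceleration matters.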

Nvidia recently showcased its work with neural radiance fields (NeRF), which uses AI to speed up the process of inverse rendering. The downside of NeRF models is that they leave you with images you can't easily edit -- that's not great for a designer creating something new. 

Nvidia's new research addresses that problem. The new Nvidia 3D MoMa pipeline reconstructs images with three features: a 3D mesh model, materials, and lighting. The 3D mesh model is built out of triangles. The materials are 2D textures that are overlaid on the 3D mesh like a skin. Lastly, 3D MoMa estimates how a scene is lit.
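The decomposition described above can be pictured as a container holding the three recovered components. The field layout below is a hypothetical illustration of that idea, not Nvidia's actual data format or API:

```python
from dataclasses import dataclass
import numpy as np

# Hypothetical container mirroring the three outputs the article describes:
# a triangle mesh, 2D material textures, and an estimate of scene lighting.

@dataclass
class ReconstructedAsset:
    vertices: np.ndarray         # (V, 3) float vertex positions
    triangles: np.ndarray        # (T, 3) int indices into vertices (the mesh)
    albedo: np.ndarray           # (H, W, 3) 2D texture overlaid on the mesh like a skin
    light_direction: np.ndarray  # (3,) estimated dominant light direction

# A single triangle with a 2x2 white texture and overhead lighting:
asset = ReconstructedAsset(
    vertices=np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], dtype=float),
    triangles=np.array([[0, 1, 2]]),
    albedo=np.ones((2, 2, 3)),
    light_direction=np.array([0.0, 0.0, -1.0]),
)
print(asset.triangles.shape)  # (1, 3)
```

Because each component is stored separately, any one of them, say the albedo texture, can be swapped out without touching the others, which is what makes the output editable.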

The finished product is directly compatible with the graphics engines and modeling tools that creators already use, meaning creators can modify and experiment with their models -- changing materials or lighting effects. 

Nvidia is presenting the inverse rendering pipeline this week at the Conference on Computer Vision and Pattern Recognition in New Orleans. To demo it, Nvidia's research and creative teams have created 3D models of jazz band instruments. They started with 100 images each of five different instruments. After 3D MoMa created 3D mesh representations of each, the team took the models out of their original scenes and imported them into the Nvidia Omniverse 3D simulation platform to edit. 

They changed the materials on the instruments, replacing the trumpet's original plastic with other materials like gold and wood. Then they demonstrated how the different materials reflect light differently.
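Why does swapping a trumpet's plastic for gold change its reflections? A classic Blinn-Phong specular term makes the intuition concrete: a higher shininess exponent concentrates the highlight, as on polished metal, while a lower one spreads it out, as on dull plastic. The exponents and vectors below are illustrative assumptions, not values from 3D MoMa:

```python
import numpy as np

def specular(normal, light_dir, view_dir, shininess):
    """Blinn-Phong specular intensity for unit-length input vectors."""
    half = light_dir + view_dir
    half = half / np.linalg.norm(half)
    return max(float(np.dot(normal, half)), 0.0) ** shininess

n = np.array([0.0, 0.0, 1.0])                    # surface normal
l = np.array([0.3, 0.0, 1.0]); l /= np.linalg.norm(l)  # light direction
v = np.array([0.0, 0.0, 1.0])                    # viewer looking straight down

plastic = specular(n, l, v, shininess=10)   # broad, soft highlight
gold = specular(n, l, v, shininess=100)     # tight, mirror-like highlight
print(gold < plastic)  # True: the sharper highlight falls off faster off-axis
```

Changing only the material parameters, while mesh and lighting stay fixed, reproduces exactly the kind of edit the demo shows.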


Hot Tags: Artificial Intelligence, Innovation

Copyright © 2014-2024 Hi-Network.com | HAILIAN TECHNOLOGY CO., LIMITED | All Rights Reserved.