Nvidia's New AI Turns Real-Life Video into 3D Renderings
We’ve often wondered while playing games or experiencing virtual reality: how could this be made closer to the real world? Nvidia might have an answer. The company has developed an AI that can turn video into a virtual landscape.
Nvidia set up a demo zone at the NeurIPS AI conference in Montreal to show off this technology. The company used its own supercomputer, the DGX-1, powered by Tensor Core GPUs, to convert videos captured from a self-driving car’s dashcam. This setup made it possible to convert theory into tangible results.
The research team first extracted a high-level semantic map from the video using a neural network, then used Unreal Engine 4 to generate colorized sketch frames from it. In the last step, Nvidia’s AI converts these representations into photorealistic images. Developers can easily edit the end result to suit their needs.
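At its core, this pipeline conditions a generative model on per-pixel semantic label maps rather than on raw video. A common preprocessing step for such conditional models is expanding the label map into a one-hot encoding. The sketch below illustrates that step only; the class IDs and function name are hypothetical, and Nvidia's actual implementation differs:

```python
import numpy as np

# Hypothetical semantic classes for a dashcam scene (illustrative only).
CLASSES = {0: "road", 1: "car", 2: "building", 3: "sky"}

def one_hot_semantic_map(label_map: np.ndarray, num_classes: int) -> np.ndarray:
    """Expand an (H, W) integer label map into an (H, W, C) one-hot array.

    Conditional generative models of this kind take a per-pixel encoding
    like this as input and synthesize a photorealistic frame from it.
    """
    h, w = label_map.shape
    one_hot = np.zeros((h, w, num_classes), dtype=np.float32)
    # Fancy indexing: for each pixel (i, j), set channel label_map[i, j] to 1.
    one_hot[np.arange(h)[:, None], np.arange(w)[None, :], label_map] = 1.0
    return one_hot

# A tiny 2x3 "frame": sky/building/car on top, road along the bottom row.
frame = np.array([[3, 2, 1],
                  [0, 0, 0]])
encoded = one_hot_semantic_map(frame, num_classes=len(CLASSES))
print(encoded.shape)  # (2, 3, 4): one channel per semantic class
```

Each frame of the dashcam video would be encoded this way before being handed to the generator, which is what lets developers repaint a scene simply by editing the labels.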
“Nvidia has been creating new ways to generate interactive graphics for 25 years – and this is the first time we can do this with a neural network,” Bryan Catanzaro, Vice President of Applied Deep Learning at Nvidia, said in a statement. “Neural networks – specifically, generative models – are going to change the way graphics are created.”
He added that this technology will help developers and artists create virtual content at a much lower cost than before.
This is particularly exciting for game developers and virtual reality content creators, as they can explore new possibilities by drawing on standard video footage. However, this technology is still in development and requires a supercomputer, so we might have to wait a while before we see it on our consoles and desktops.
You can read more about Nvidia's research here.