
Nvidia Reveals New AI Algorithm that can Turn 2D Images into 3D

While there are plenty of current applications for turning 3D objects into a 2D perspective, there is very little that can successfully reverse that process. Put simply, if you want an object to be 3D, you have to render it in 3D. Once that hard work is done, turning it into a 2D image is a pretty straightforward process. Well, probably not that straightforward, but you get the idea.

Nvidia, however, has just announced that a brand-new AI under development has successfully been able to recreate 3D images from a single static 2D picture.

Using pictures of birds, the AI was able to successfully replicate the images from various angles. More than that, it was also able to recreate their textures to an impressive degree.

What Does Nvidia Have to Say?

Now, the science behind this is amazingly complicated. We’re talking about mathematical symbols and equations of which I have literally zero understanding. I’ll, therefore, let Nvidia themselves tell you about it.

“In traditional computer graphics, a pipeline renders a 3D model to a 2D screen. But there’s information to be gained from doing the opposite. A model that could infer a 3D object from a 2D image would be able to perform better object tracking, for example.

NVIDIA researchers wanted to build an architecture that could do this while integrating seamlessly with machine learning techniques. The result, DIB-R, produces high-fidelity rendering by using an encoder-decoder architecture, a type of neural network that transforms input into a feature map or vector that is used to predict specific information, such as the shape, color, texture and lighting of an image.

It’s especially useful when it comes to fields like robotics. For an autonomous robot to interact safely and efficiently with its environment, it must be able to sense and understand its surroundings. DIB-R could potentially improve those depth perception capabilities.”
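To give a rough feel for the encoder-decoder idea the quote describes, here is a minimal sketch in plain NumPy. This is absolutely not Nvidia's actual DIB-R code; all layer sizes, weight shapes and property names here are made up for illustration. An encoder compresses an image into a compact feature vector, and separate decoder "heads" map that shared vector to different predicted properties of the scene.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(image, w_enc):
    # Flatten the 2D image and project it to a compact feature vector.
    return np.tanh(image.ravel() @ w_enc)

def decoder(features, heads):
    # Each "head" maps the shared feature vector to one predicted property.
    return {name: features @ w for name, w in heads.items()}

# Toy 8x8 grayscale "image" and randomly initialised weights
# (a real model would learn these weights from training data).
image = rng.random((8, 8))
w_enc = rng.normal(size=(64, 16)) * 0.1
heads = {
    "shape":    rng.normal(size=(16, 3)),  # e.g. 3 shape parameters
    "color":    rng.normal(size=(16, 3)),  # e.g. an RGB estimate
    "lighting": rng.normal(size=(16, 2)),  # e.g. light direction angles
}

features = encoder(image, w_enc)
predictions = decoder(features, heads)
for name, value in predictions.items():
    print(name, value.shape)
```

The point of the shared feature vector is that one pass over the image feeds every prediction head at once, which is what lets a trained network answer "what shape, colour and lighting produced this picture?" in a single shot.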

What Do We Think?

The key promise of this new DIB-R (differentiable interpolation-based renderer) technology is that a process which could formerly take AI algorithms weeks of ‘training’ can now essentially deliver ‘depth perception’ on any object within milliseconds.
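The "differentiable" part of the name is the clever bit: because every step of the renderer can be differentiated, the pixel-level error between a rendered guess and the real photo can be pushed back onto the 3D model itself. Here is a deliberately tiny illustration of that principle (a hand-rolled toy, not DIB-R's method): the "renderer" is just a weighted interpolation of two vertex attributes, and plain gradient descent adjusts those attributes until the rendered pixel matches the target.

```python
import numpy as np

# A toy "differentiable renderer": one pixel's value is a weighted
# interpolation of two vertex attributes (barycentric-style weights).
weights = np.array([0.3, 0.7])

def render(vertex_values):
    return weights @ vertex_values

target_pixel = 0.5                    # what the "photo" shows
vertex_values = np.array([0.0, 0.0])  # current 3D model attributes

# Because render() is differentiable, the squared pixel error can be
# pushed back onto the 3D attributes with plain gradient descent.
for _ in range(200):
    pixel = render(vertex_values)
    grad = 2 * (pixel - target_pixel) * weights  # d(error)/d(vertex_values)
    vertex_values -= 0.5 * grad

print(render(vertex_values))  # converges toward the target, 0.5
```

Scale that idea up from one pixel and two attributes to a whole image and a full mesh with colours, textures and lighting, and you have the rough shape of how a differentiable renderer lets a network learn 3D structure from 2D pictures.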

The bottom line is that while this might sound a little dry, it could offer some amazing potential. Some kudos should definitely go out to Nvidia!

For more information, you can check out the official Nvidia blog via the link here!

What do you think? Do you think this is impressive? What applications can you see it having? – Let us know in the comments!

Mike Sanders

