NVIDIA is looking to take the sting out of creating 3D virtual worlds with a new model called GET3D. The model can generate characters, buildings, vehicles, and other types of 3D objects, and it does so quickly: NVIDIA notes that GET3D can produce around 20 objects per second using a single GPU.
The researchers trained the model using synthetic 2D images of 3D shapes taken from multiple angles. NVIDIA says it took just two days to feed around 1 million images into GET3D using A100 Tensor Core GPUs.
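To make that training setup concrete, here is a minimal sketch, in Python with NumPy, of the standard multi-view capture idea: virtual cameras are placed at evenly spaced angles around an object, and each camera pose drives one rendered 2D image. This illustrates the general technique, not NVIDIA's actual pipeline; the camera count, radius, and helper function are assumptions made for the example.

```python
# A sketch of multi-view synthetic data capture: compute camera poses
# evenly spaced on a ring around an object. Only the camera math is
# shown; each pose would be handed to a renderer to produce one image.
import numpy as np

def look_at(eye, target=np.zeros(3), up=np.array([0.0, 1.0, 0.0])):
    """Build a 4x4 camera-to-world pose matrix looking from eye toward target."""
    forward = target - eye
    forward /= np.linalg.norm(forward)
    right = np.cross(forward, up)
    right /= np.linalg.norm(right)
    true_up = np.cross(right, forward)
    pose = np.eye(4)
    pose[:3, 0] = right
    pose[:3, 1] = true_up
    pose[:3, 2] = -forward  # OpenGL convention: camera looks down -Z
    pose[:3, 3] = eye
    return pose

# 24 cameras on a circle around the object, all aimed at the origin.
radius, num_views = 2.5, 24
poses = []
for i in range(num_views):
    angle = 2 * np.pi * i / num_views
    eye = np.array([radius * np.cos(angle), 0.5, radius * np.sin(angle)])
    poses.append(look_at(eye))
```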
The model can create objects with “high-fidelity textures and complex geometric details,” NVIDIA’s Isha Salian wrote in a blog post. The shapes that GET3D makes “are in the form of a triangular mesh, like a papier-mâché model, covered with a textured material,” Salian added.
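For readers unfamiliar with that representation, the snippet below builds the smallest possible example of it: a single-triangle mesh with UV coordinates mapping a texture image onto the surface. It uses the open-source trimesh and Pillow libraries as stand-ins; this is not GET3D's code, and the vertex and texture data are made up for illustration.

```python
# A minimal textured triangular mesh: geometry (vertices + faces)
# plus a texture image draped over it via UV coordinates.
import numpy as np
import trimesh
from PIL import Image

# One triangle: three 3D vertices and one face listing their indices.
vertices = np.array([[0.0, 0.0, 0.0],
                     [1.0, 0.0, 0.0],
                     [0.5, 1.0, 0.0]])
faces = np.array([[0, 1, 2]])

# UV coordinates map each vertex to a point on the texture image,
# the "textured material" covering the papier-mâché-like mesh.
uv = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
texture = Image.new("RGB", (64, 64), color=(180, 40, 40))

mesh = trimesh.Trimesh(
    vertices=vertices,
    faces=faces,
    visual=trimesh.visual.TextureVisuals(uv=uv, image=texture),
)
print(mesh)
```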
Users should be able to quickly import the objects into game engines, 3D modelers, and movie renderers for editing, since GET3D generates them in formats those applications support. That means it could be much easier for developers to create dense virtual worlds for games and the metaverse. NVIDIA cited robotics and architecture as other use cases.
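As a rough sketch of that interchange step (again using trimesh rather than GET3D, with a placeholder mesh and illustrative filenames), exporting a mesh to a widely supported format like glTF/GLB or OBJ is a one-liner; the article does not specify which formats GET3D emits, so these are just common examples.

```python
# Export a mesh to standard interchange formats that game engines,
# 3D modelers, and renderers can import directly.
import trimesh

mesh = trimesh.creation.box()        # placeholder for a generated object
mesh.export("generated_object.glb")  # binary glTF, textures embedded
mesh.export("generated_object.obj")  # Wavefront OBJ, another common choice
```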
The company said that, after being trained on a dataset of car images, GET3D was able to generate sedans, trucks, race cars, and pickups. It can also produce foxes, rhinos, horses, and bears after being trained with animal images. Unsurprisingly, NVIDIA notes that the larger and more diverse the training set fed into GET3D, “the more varied and detailed the result will be.”
With the help of another NVIDIA AI tool, StyleGAN-NADA, it is possible to apply multiple styles to an object with text-based prompts. You can apply a burnt-out look to a car, turn a model of a house into a haunted house, or, as a video showing off the technology suggests, apply tiger stripes to any animal.
The NVIDIA research team that created GET3D believes that future versions could be trained on real-world images rather than synthetic data. The team also hopes to train the model on several types of 3D shapes at once, instead of focusing on one category of objects at a time.