The on-stage demo showed off rotations for a range of images, from largely symmetrical dragons, horses, and bats to more complex shapes like a sketch of a bread basket or a living cup of fries (complete with arms, legs, eyes, and a mouth). In each case, the machine-learning algorithm does an admirable job inferring unseen parts of the model from what's visible in the original 2D view, extrapolating a full set of legs for a side-view horse or the bottoms of the Fry Man's shoes, for instance.