Here's some test footage I threw together using Google DeepDream's API to see its viability as a comp/effect for a short film. To make this effect work, you have to separate your footage into individual stills, run each frame through the generator, then bring the results back into your timeline. This one second of footage is made up of 24 stills that I extracted from the original footage and reassembled in Adobe Premiere.
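If you want to script the split/reassemble part instead of exporting stills by hand, here's a minimal sketch, assuming ffmpeg is installed and your clip runs at 24 fps. The file names and the `run_deepdream()` stub are placeholders for whatever generator or tool you're actually using, not part of any official API.

```python
import subprocess
from pathlib import Path

SOURCE = "clip.mov"          # original footage (assumed 24 fps)
FRAMES_DIR = Path("frames")  # extracted stills land here
OUTPUT = "dreamed_clip.mov"  # reassembled result

def run_deepdream(frame_path: Path) -> None:
    """Placeholder: run one still through your DeepDream generator of choice.

    Left as a stub because there's no single standard tool for this step.
    """
    pass

FRAMES_DIR.mkdir(exist_ok=True)

# 1. Split the footage into numbered stills (one PNG per frame).
subprocess.run(
    ["ffmpeg", "-i", SOURCE, str(FRAMES_DIR / "frame_%04d.png")],
    check=True,
)

# 2. Run each still through the generator.
for frame in sorted(FRAMES_DIR.glob("frame_*.png")):
    run_deepdream(frame)

# 3. Reassemble the processed stills back into a 24 fps clip.
subprocess.run(
    [
        "ffmpeg", "-framerate", "24",
        "-i", str(FRAMES_DIR / "frame_%04d.png"),
        "-pix_fmt", "yuv420p", OUTPUT,
    ],
    check=True,
)
```

From there you can drop the reassembled clip straight back onto your Premiere timeline.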
A few learnings:
Texture and movement are key. Low-key lighting helps create texture in camera where there might otherwise not be any. This helps the generator "dream" or interpret faces, data, etc., which would be harder to detect on well-lit, smooth surfaces. This footage does NOT have enough movement in my opinion. It's very uninteresting.
The DeepDream generator is very unpredictable. You can't control the movement or output of the stills once you run them through it, so a clip with good movement and good results is really just luck.
Keying out the subject would make this better, since the generator affects EVERYTHING in the frame. If you want ONLY the subject to be affected, it's going to take more work (I had a feeling that would be the case). You would need to properly prep the footage and comp the results back into a plate.
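As a rough idea of that comp step, here's a minimal per-frame sketch assuming you've already keyed the subject and exported a grayscale matte for each frame (white = subject, black = background). The file names are hypothetical; in practice you'd do this in your compositing app, but the math is the same.

```python
import cv2
import numpy as np

plate = cv2.imread("plate_0001.png").astype(np.float32)      # original frame
dreamed = cv2.imread("dreamed_0001.png").astype(np.float32)  # generator output
matte = cv2.imread("matte_0001.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0

# Use the matte as a per-pixel alpha so the dream effect lands only on the subject,
# while the background stays clean from the original plate.
alpha = matte[..., np.newaxis]
comp = dreamed * alpha + plate * (1.0 - alpha)

cv2.imwrite("comp_0001.png", comp.astype(np.uint8))
```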
All in all, this may NOT work for what I want it to do, but it's fun tinkering with these sorts of things. And I encourage everyone to keep up your creative endeavors and focus less on immediate results. In an era where "content is king", please make it good. Or at least, interesting...