You are viewing a single comment's thread from:

RE: Dividing the concrete jungle to help the animals connect - Part I

in #science • 7 years ago (edited)

I really like this post as a conversation starter. The question you ask about stopping the concrete jungle is one that resonates with me on a daily basis. I think a lot about VR in this context. I have seen some very basic things in VR that have really blown my mind, for example, the "California Redwoods" environment in SteamVR Home. This was my genuine reaction to seeing it for the first time:

(Hopefully the embed starts at the 222-second mark; otherwise, skip ahead to 3:42 in that video.)

I've been trying to collect the highest-resolution nature footage I can, in the hope that we will soon have better algorithms for using 2D pictures to texture 3D scenes. I believe this is important because people need to be exposed to nature in order to appreciate it, yet we also cannot expose 7.5 billion people to sensitive habitats just to make them want to protect those places.


Hey, thanks for stopping by and for adding this cool video too! 2D-to-3D is really cool technology. Have you looked into structure from motion (SfM)? To my knowledge, it is the best way to do photogrammetry, recovering 3D structure from a number of different photos... check it out!

Cheers and welcome to the conversation!

Structure from motion
Structure from motion (SfM) is a photogrammetric range imaging technique for estimating three-dimensional structures from two-dimensional image sequences that may be coupled with local motion signals. It is studied in the fields of computer vision and visual perception. In biological vision, SfM refers to the phenomenon by which humans (and other living creatures) can recover 3D structure from the projected 2D (retinal) motion field of a moving object or scene.
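At the heart of an SfM pipeline, once camera poses have been estimated from feature matches, is triangulation: recovering a 3D point from its 2D projections in two views. Below is a minimal numpy sketch of that one step using the linear (DLT) method, with synthetic cameras and a synthetic point standing in for real matched features; the camera intrinsics and poses here are made-up illustration values, and a real pipeline would estimate them rather than assume them.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two views.
    P1, P2: 3x4 camera projection matrices; x1, x2: 2D image points."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize

def project(P, X):
    """Project a 3D point into an image via a 3x4 camera matrix."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two synthetic cameras: identity pose, and the same camera shifted along x.
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])  # assumed intrinsics
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.5, -0.2, 4.0])  # a 3D point in front of both cameras
X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.allclose(X_hat, X_true, atol=1e-6))  # True
```

With exact, noise-free projections the DLT system has an exact null vector, so the point is recovered to machine precision; with real matched features the same least-squares formulation gives the best linear estimate, which bundle adjustment then refines.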

Thanks @wikitextbot! You are one of my favourite bots (after @haikubot of course).

I am familiar with the general idea but hadn't heard it worded like that. The academic work I do is influenced by much earlier neural models of direction and orientation, like Sporns (1989):

It's quite a complicated business, though, and the papers I've seen that claim to automate the process in some way are quite intense, e.g. Li, Y., Pizlo, Z., & Steinman, R. M. (2009). A computational model that recovers the 3D shape of an object from a single 2D retinal representation. Vision Research, 49(9), 979–991. http://doi.org/10.1016/j.visres.2008.05.013

That looks really cool! "Academic work you do" - cool. Maybe you should do a post on this work? @steemstem is a good place to look for similar and like-minded individuals. Cool video too and hopefully I can find some time to check out the paper too. Looking forward to seeing more of your photography!

You're awesome :)

Thanks! Enjoy the wild ride here...