After recent experiments with 360 video I have worked out a quick methodology, so I can work quickly without coming back a few months later and asking: how the hell did I do that?
So basically the rule of thumb is: take the 360 source video or stills from the iPhone directly to the laptop, NOT from the 360 camera directly. The image should be in Equirectangular format and look like the above. This is Monoscopic 360 video, not the Stereoscopic 3D used in more high-end 360 cameras. A standard 360 video is just a flat equirectangular video displayed on a sphere. Think of Monoscopic like the face of a world map wrapped onto a globe, whereas Stereoscopic 3D can add another level of immersion by adding depth between the foreground and background. Stereoscopic means you have two copies of the video, one for the left eye and one for the right eye.
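The "world map on a globe" idea is literal: a player takes each pixel of the flat equirectangular frame and places it on a sphere around the viewer. As a rough sketch of that mapping (the frame size and function name here are just illustrative, not from any particular player):

```python
import math

def equirect_to_sphere(u, v, width, height):
    """Map an equirectangular pixel (u, v) to a unit-sphere direction.

    u in [0, width) spans longitude -180..180 degrees,
    v in [0, height) spans latitude 90..-90 degrees.
    """
    lon = (u / width) * 2.0 * math.pi - math.pi      # -pi .. pi
    lat = math.pi / 2.0 - (v / height) * math.pi     # pi/2 .. -pi/2
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return (x, y, z)

# The centre of the frame ends up directly in front of the viewer (+z):
print(equirect_to_sphere(960, 540, 1920, 1080))  # → (0.0, 0.0, 1.0)
```

That is also why the poles of the image look so stretched in the flat frame: the top and bottom rows all collapse to single points on the sphere.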
The image should NOT look like the above image; if it does, it won't look correct when played via a VR app in the Google Cardboard headset. The spatial injector code only needs to be applied if it's to be uploaded to YouTube.
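For that YouTube step, the usual tool is Google's open-source Spatial Media Metadata Injector, which writes the spherical-video metadata YouTube looks for into a copy of the MP4. A rough sketch of the command-line usage (file names are placeholders; check the repo's README for current flags):

```shell
# Fetch Google's spatial-media tools (includes a GUI and a CLI).
git clone https://github.com/google/spatial-media.git
cd spatial-media

# -i / --inject writes a new MP4 with the 360 metadata added;
# the original file is left untouched. Upload the injected copy.
python spatialmedia -i my360video.mp4 my360video_injected.mp4
```

This is only for YouTube; the local VR-app route below plays the plain MP4 fine without any metadata.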
Drag the video above to see it in 360 (view in the Chrome browser). This is a 360 video taken on my Ricoh Theta S 360 camera, plus a rotating 3D object from Blender, both imported into Final Cut Pro X and exported as an MP4.
The exported MP4 is then uploaded back to the iPhone and opened in a free VR app. One that's been working well for me is HOMiDO player: open the app, hit 'video player', click the folder icon, find your MP4 on the iPhone, then click 'choose' and it compresses the video for 360 playback. Slot the iPhone into your VR headset and Bob's your mother's brother, you are immersed inside the video.
Note to Self:
- Strange to see a 3D scan from Blender rotating inside my studio.
- Need to source some good 360 source material, maybe from some woods/forest?
- Ricoh Theta S video quality is very soft/low quality, so try to work around that with better lighting perhaps?
- Do some proper stitching/editing experiments so there are no joins/lines and the imagery looks OK across the whole 360 sphere.