I got my hands on a Ricoh Theta S 360 camera. The still photos are very sharp; unfortunately, the video quality is quite soft.
The different stages of the file are interesting.
Video 360/Spherical screengrab.
The Theta software is buggy and will only convert the video from .m4v to .mp4 depending on whether you bring the footage in via the camera or via the app. Also, if you bring it into Final Cut Pro or Premiere to edit, you then have to add metadata to the file so YouTube knows it's meant to be a 360 video.
Hit play and then drag the cursor for a 360 orbit view (view in a Chrome browser).
Note to Self:
- Quality of the 360 video above on YouTube is very soft and jittery; other 360 videos using the ‘Spatial Media Metadata Injector’ seem twitchy too.
A sod of clay reveals itself as object.
I want to see the Lat. I want to see the Long. LatLong.
Unfurl for me.
Video vignette. Track used is Grey by Brixton-based electronic musician Gaika.
Round and round and round you go.
Quick scans of a colleague's head to see the quality.
I wondered previously if it was possible to get Arduino to talk to Processing and vice versa. I had rigged up a quick experiment that got them talking and displaying a simple message, but it always seemed to cause problems with the serial ports.
So the aim here: create a visual graph in Processing, generated by triggering a sensor on the Arduino.
When I play the Processing sketch, a purple graph is created in the display window. As I move my hand up and down over the Arduino sensor, the graph displays peaks and troughs representing the sound triggered. The numbers generated by the Arduino are between 0 and 1023.
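The heart of the Processing side is just rescaling each incoming serial value (0–1023 from the Arduino's analog input) to a pixel height before plotting it. A minimal sketch of that scaling step in Python — the 300 px graph height is an assumption, and the serial-reading plumbing is left out:

```python
def scale(value, in_min, in_max, out_min, out_max):
    """Linearly map a value from one range to another,
    like Processing's map() or Arduino's map()."""
    return (value - in_min) * (out_max - out_min) / (in_max - in_min) + out_min

# Arduino analogRead() values run 0-1023; map them onto an
# assumed 0-300 px graph height before drawing each point.
reading = 512                                   # example raw sensor value
bar_height = scale(reading, 0, 1023, 0, 300)    # roughly half the graph height
```

Each new reading shifts the graph along one pixel and plots `bar_height`, which is what produces the peaks and troughs as the hand moves over the sensor.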
Note to Self:
- Visually this graph looks fairly horrible, so the next step is to manipulate the Processing sketch so it looks aesthetically appealing
- I also thought Blynk would easily be able to display any info/numbers generated via Arduino, but the app only has a bog-standard graphical method of display, not very creative. The Blynk app has also started to charge since Jan 1st 2016, so I think it's best used as a virtual button to trigger any Arduino events if need be in the future.
Last few very quick Arduino experiments…
1 little piggy went to the market, 1 little piggy stayed at home…
A photoresistor sensor: as the hand moves closer to the pile of hundreds and thousands, the sensor registers the change in light level and acts like a theremin.
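The theremin effect is just another mapping: the photoresistor's 0–1023 reading is turned into a pitch. A minimal sketch of that logic — the frequency range and the direction (more light = higher pitch) are assumptions, not measured from the actual rig:

```python
def reading_to_pitch(reading, low_hz=120.0, high_hz=1500.0):
    """Map a 0-1023 photoresistor reading onto an audible frequency
    range, theremin-style. Assumed: more light (hand further away)
    means a higher pitch; the Hz range is a guess."""
    reading = max(0, min(1023, reading))            # clamp to sensor range
    return low_hz + (reading / 1023.0) * (high_hz - low_hz)
```

Feeding each new reading into a tone generator at `reading_to_pitch(reading)` Hz gives the continuous slide between notes as the hand hovers over the pile.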
Piggy friends were just lying around the studio, so I gave them some hundreds and thousands to feed on as they hunt for the sensor in the pile.
Note to Self:
- Sensors: Have been mucking about with various sensors…proximity, light etc…so I now kind of know which ones are useful to me going forward.
- Need to push into a more focused action research state.
- Things are now bits and pieces/loose ends, digital sketches; it would be good to have a few ‘finished’ pieces for my own kick…however small.
An experiment with sound: a slo-mo musical scale played through a paper speaker in leaves. I was getting restless dwelling so much on context, process and methodology, and simply wanted to create some ‘work’, however small. A mini piece, with a view to then developing a larger project based on it called ‘The Old Ohm Tree’, which would feature maybe five speakers and a lot more wooden boughs!
The slo-mo musical scale gives it an eerie autumnal feel. ‘Oíche Shamhna’ (Irish for Halloween night) will soon be upon us, and the leaves are falling off the trees.
I want to harvest this information and represent it online. I am waiting to receive an Arduino shield that will help connect me via the Blynk app.
Note to Self: Old Ohm Tree.
- Boughs: Source x5 other Silver Birch wooden pieces and cut to fit.
- 3D rendered base: Install Cinema 4D(?) and sketch out a 3D object for the white base.
- 3D Printing: Does this need to be modular/not all 1 piece for a 3D printer?
An experiment to trigger and manipulate audio using the FaceOSC tool.
Pure Data is used to route the audio file into any audio software, e.g. Ableton Live or GarageBand; then, whilst moving the face (eyes, nose and mouth), the sound files can be manipulated.
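FaceOSC broadcasts gesture values as OSC messages (addresses like `/gesture/mouth/height`), and the Pure Data patch essentially routes each address to an audio parameter. A minimal sketch of that routing in Python — the address names follow FaceOSC's conventions, but the value ranges and the volume/pitch assignments here are assumptions, not the actual patch:

```python
def route_face_message(address, value):
    """Map an incoming FaceOSC message to an (assumed) audio parameter.
    Returns a (parameter, scaled_value) pair, or None if unhandled."""
    if address == "/gesture/mouth/height":
        # Mouth opening assumed to run roughly 0-10; clamp to a 0.0-1.0 volume.
        return ("volume", max(0.0, min(1.0, value / 10.0)))
    if address == "/gesture/eyebrow/left":
        # Eyebrow raise passed straight through as a pitch-bend amount (assumed).
        return ("pitch_bend", value)
    return None  # everything else is ignored by this sketch
```

In the real setup Pure Data does this dispatch with `[routeOSC]`-style objects and sends the scaled values on to Ableton Live or GarageBand.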
This is the same thing done via OSCulator.
The audio source files are files I quickly put through a Korg Kaoss Pad and messed with.
Note to Self:
- Revisit: This is an unfinished experiment which should have taken just 20 mins. I had two versions of it open, one in Pure Data and one in OSCulator, DUH! Hence it wasn't doing what I wanted it to.