Tag: Interface

Arduino to Processing Revisited

I have revisited an experiment from a few weeks back: creating a visual graph in Processing, generated by triggering a sensor in Arduino. The graph didn’t look aesthetically pleasing, so I tweaked the Processing code to change the visualisation of the data.

proc-4

The visuals still don’t look as I want them so I have introduced some new code in the Processing sketch.

proc-6

This changes the look of the data generated by the Arduino sensor and is closer to the look I wanted to achieve.

proc-3

proc-1
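For the record, the kind of tweak involved is roughly along these lines (a plain-Java illustration rather than my actual Processing sketch, and the easing factor is my own illustrative choice): instead of plotting each raw sensor reading directly, the displayed value eases towards the latest reading, which smooths the jagged look of the graph.

```java
// Illustrative sketch of smoothing raw 0–1023 sensor values before drawing
// them. Plain Java here; in Processing this logic would live inside draw().
public class EasedGraph {
    static final float EASING = 0.1f; // illustrative value: lower = smoother

    float eased = 0;

    // Ease the displayed value towards the latest raw reading.
    float update(int rawReading) {
        eased += (rawReading - eased) * EASING;
        return eased;
    }

    public static void main(String[] args) {
        EasedGraph g = new EasedGraph();
        // A sudden spike from 0 to 1023 is approached gradually,
        // so the plotted line flows rather than jumps.
        for (int i = 0; i < 5; i++) {
            System.out.println(g.update(1023));
        }
    }
}
```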

Note to Self:

  • Do some more versions.

Cardboard Reality

I finally got around to getting my hands on a Google Cardboard VR headset. Cardboard is a low-cost, easy-to-get virtual reality viewer that transforms a phone into a basic VR headset.

cr-1

Film Auteur Werner Herzog had been ranting on about VR [1],

“What reality is the cockroach at my feet in the kitchen experiencing? It is not my reality, we only share the same space.”

so I wanted to see what all the fuss was about.

Are Virtual Reality and Augmented Reality really a new 21st-century art form? With not too much bother I assembled the Cardboard viewer; it’s a cheap alternative to the myriad of other viewers out there…Durovis Dive, Homido, Samsung Gear VR, Carl Zeiss VR One, Cmoar, OSVR, Fibrum, HTC Vive, Sony Morpheus, Oculus Rift DK1 etc. The Oculus Rift and Sony Morpheus (which is more game oriented) looked the most promising to me.

Anyway, back to my poor man’s version, the Cardboard…I downloaded a few apps that looked interesting.

cr-2

First up was Björk’s ‘Stonemilker’, which was good, especially the way she seemed to jump out from her own body and into a new position. I managed to ‘break it’ by trying to take a screenshot whilst it was playing, but this was interesting in itself, as it threw up some code and ‘inner workings’ as to how it might be made. Stonemilker was directed by Andrew Thomas Huang and produced by VRSE.com. They seem to be ahead of the pack in the VR game; it is run by Chris Milk, a former video artist.

The next three pieces I watched were all produced by VRSE: ‘Take Flight’, a marvellous short excursion into the heavens above New York City; ‘Evolution of Verse’, which takes you face to face with a foetus in the womb, quite amazing; and finally ‘Catatonic’, a creepy wheelchair ride through an insane asylum. All three were fairly amazing, with you, the viewer, immersed directly in their environment. Catatonic was the most unsettling: when you swivel 360 degrees and look up directly at the orderly pushing your wheelchair, you realise it’s not an orderly anymore!

cr-3

So how does film-making for VR differ from traditional film-making? Some main points below:

  • You can’t frame a 360 shot
  • There are no cuts
  • Death of the Close-up?
  • The character can know you are there, and be right beside you
  • The main protagonist sacrilegiously cuts through the fourth wall, and makes a direct connection with you via eye contact
  • You must try and draw the viewer’s eyes to the different places they can look at and explore
  • Scale. Object sizes aren’t always in real-world ratios. Sometimes certain scale ratios are based on what feels right, rather than what would be mathematically correct
  • Focus on movement that matters, so that movements are computed in real time to adjust to the viewer’s perspective
  • Sound. Directional binaural sound
  • Don’t overload your rendering machines. Oculus headsets show images at 90 frames per second, which is a huge computational burden. To reduce the load in a CG production, mathematical bounding boxes are calculated around objects; if the viewer isn’t looking at something in particular, then it doesn’t render
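That last point – skipping anything the viewer isn’t looking at – is essentially visibility culling. A minimal sketch of the idea (my own illustration, not Oculus code): compare the viewer’s forward direction with the direction of an object’s bounding box, and only render it if the angle between them falls within the field of view.

```java
public class CullingSketch {
    // Returns true if the object direction lies within half the field of view
    // of the viewer's forward direction (both given as 2D vectors).
    static boolean inView(double fx, double fy, double ox, double oy, double fovDegrees) {
        double dot = fx * ox + fy * oy;
        double cos = dot / (Math.hypot(fx, fy) * Math.hypot(ox, oy));
        return Math.toDegrees(Math.acos(cos)) <= fovDegrees / 2;
    }

    public static void main(String[] args) {
        // Viewer facing +x with a 90° field of view: an object slightly
        // off-centre ahead is visible; one directly behind is skipped,
        // so the renderer never wastes frames on it.
        System.out.println(inView(1, 0, 1, 0.2, 90));  // visible
        System.out.println(inView(1, 0, -1, 0, 90));   // culled
    }
}
```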

The Cardboard was the best $10 I ever spent.

Note to Self:

  • Not sure how this impacts on my making, if at all, as the technology seems to be customised or else priced out of reach.
  • There is a great 360-degree camera called ‘Neo’ by a company called Jaunt, but the cameras aren’t even for sale; they envisage leasing them out to interested parties in the future.

Arduino to Processing

I wondered previously if it was possible to get Arduino to talk to Processing and vice versa, and had rigged up a quick experiment that got them talking and displaying a simple message, but it seemed to always cause problems with the serial ports.

Arduino Sketch

So, the aim: to create a visual graph in Processing, generated by triggering a sensor in Arduino.

arduino-proc-1

Processing Sketch

When I play the Processing sketch, a purple graph is created in the display window. As I move my hand up and down over the Arduino sensor, the graph displays peaks and troughs representing the sound triggered. The numbers created by Arduino are between 0 and 1023.

arduino-proc-2
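The core of the Processing side is just a re-mapping of those 0–1023 serial values onto the height of the display window – the job Processing’s built-in map() does. A plain-Java sketch of that step (the window height of 400 is my own illustrative choice, not the value from the actual sketch):

```java
public class GraphMapping {
    // Map an Arduino reading (0–1023) to a y coordinate, with 0 at the bottom
    // of the window and 1023 at the top (Processing's y axis grows downwards).
    static float readingToY(int reading, int windowHeight) {
        return windowHeight - (reading / 1023.0f) * windowHeight;
    }

    public static void main(String[] args) {
        int h = 400; // illustrative window height
        System.out.println(readingToY(0, h));    // bottom of the window
        System.out.println(readingToY(1023, h)); // top of the window
    }
}
```

Drawing a line to each new y value as the readings arrive is what produces the peaks and troughs.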

Note to Self:

  • Visually this graph looks fairly horrible, so the next step is to manipulate the Processing sketch so it looks aesthetically appealing
  • I also thought Blynk would easily be able to display any binary info/numbers generated via Arduino, but it only has a bog-standard graphical method of display, not very creative. The Blynk app has also started to charge since Jan 1st 2016, so I think it’s best used as a virtual button to trigger any Arduino events if need be in the future.

Digital Scrying

I have always been fascinated by the 16th-century tale of Dr. John Dee and the alchemist Edward Kelly (not to be confused with the Camberwell tutor of the same name!) and their seven-year obsession with ‘conversing with Angels’ in an ancient Enochian language that led them through dark portals and ultimately to their own downfall. Channel 4 featured a programme on it in 2002 [1], and in a recent online lecture on this course the Dean of Camberwell touched on it, mentioning a failed project and seeing John Dee’s obsidian scrying mirror in the British Museum.

digital-scrying-1

Obsidian is a naturally occurring black glass and has been used throughout the ages by many cultures. John Dee used it in his translucent ‘scrying mirror’ in conjunction with his ‘crystal gazers’ to attempt to foretell the future. Many black surfaces can be used for scrying, even dark water.

Charlie Brooker created the TV series ‘Black Mirror’ [2], which is set in the very near future. It examines, through speculative fiction, contemporary society and the unanticipated consequences of new technologies. The term Black Mirror refers to the shiny black screen of our tablet and mobile devices.

digital-scrying-3

I see a parallel between Dee’s 16th-century scrying interfaces and present-day Virtual Reality (VR). Imagine Dee the ‘intelligencer’, or seeker of knowledge, with his dubious sidekick Kelly donning Oculus Rift headsets and immersing themselves in the ‘magic of presence’. Conceptually, both are parallel worlds to ours: one created visually within the mind, the other digitally created and immersive.

digital-scrying-2

Maybe I am reaching here…VR used as some means of digital prophecy? The headset supplanting the crystal gazer of old? But both Dee’s journeys into other realms and the Rift’s advanced display technology, combined with its precise, low-latency constellation tracking system, enable the sensation of presence – the feeling as though you’re actually there. Both the scrying mirror and the Rift headset have, as objects/divine interfaces, an otherworldly ritualistic quality to them.

As Oculus say on their own website…“The magic of presence changes everything” [3].

Blynk & Miss

I have revisited this Blynk gateway experiment from October.

blynk-3

Blynk is an app with a digital dashboard where you can build a graphic interface to virtually control the Arduino microprocessor. I set up a simple Arduino sketch: an LED being triggered by a button. After setting up a virtual button in Blynk, you get sent an authorisation token code via email, which is pasted into the Arduino sketch; this then enables you to trigger the LED simply by pressing the virtual button.

The problem was getting the app to talk physically to the Arduino board: I had to get either a wireless or an Ethernet shield to act as an online gateway. It would have been better to do it via WiFi, but the European WiFi boards were sold out and the Ethernet boards were cheaper, so I opted for that instead.

ethernet

As my studio is on a shared building network, to route it I had to generate special IP addresses and do a lot of fiddling about with Ethernet and CAT cables, but after reinstalling the Arduino Blynk libraries I finally got it to work. The virtual button triggers the LED on the Arduino board via the web/app.

Note to Self:

  • This is handy if I want to further develop some proper Mobile Locative Media installations.
  • Get a WiFi shield and set it up to use that instead.
  • I had rigged up the Arduino board to talk to the Processing app in another experiment, which worked, but it seems to mess up all the Arduino ports and libraries.

Hundreds & Thousands Theremin

Last few very quick Arduino experiments…1 little piggy went to the market, 1 little piggy stayed at home

theremin-1

A photoresistor sensor: as the hand moves closer to the pile of hundreds and thousands, the sensor measures the changing light level (a rough proxy for distance), and the rig acts like a Theremin.
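The Theremin behaviour boils down to re-mapping the photoresistor reading onto a pitch range – essentially Arduino’s map() in one line, feeding tone(). A plain-Java sketch of that mapping (the 120–1500 Hz range is my own illustrative choice, not the values from the actual sketch):

```java
public class ThereminMapping {
    // Re-map a light reading (0–1023) onto an audible frequency range,
    // like Arduino's map(reading, 0, 1023, lowHz, highHz).
    static int readingToHz(int reading, int lowHz, int highHz) {
        return lowHz + (int) ((long) reading * (highHz - lowHz) / 1023);
    }

    public static void main(String[] args) {
        // Hand far away (bright, high reading) vs hand close (dark, low reading):
        // the pitch slides across the range as the light level changes.
        System.out.println(readingToHz(1023, 120, 1500)); // brightest -> highest pitch
        System.out.println(readingToHz(0, 120, 1500));    // darkest -> lowest pitch
    }
}
```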

Piggy friends were just lying around the studio, so I gave them some hundreds + thousands to feed on, as they look for the sensor in the pile.

theremin-2

theremin-3

Note to Self:

  • Sensors: Have been mucking about with various sensors…proximity, light etc…so I kind of know now which ones are useful to me going forward.
  • Need to push into a more focused action-research state.
  • Things are now bits + pieces/loose ends, digital sketches; it would be good to have a few ‘finished’ pieces for my own kick…however small.

Lives driven by Data

Media theorist Lev Manovich said:

“19th Century culture was defined by the novel,
20th Century culture by Cinema,
The culture of the 21st century will be defined by the interface.”

lee-1

His media visualisation techniques compress massive amounts of data into smaller, observable ‘media landscapes’. Rather than searching through metadata, we’re then able to find relevant information in a way that’s more compatible with the way humans process information. This is particularly valuable in giving us the ability to observe where patterns of structure and colour may exist.

For me, a lot of digital works in general are conceptually sound, but the final outcome is not always aesthetically pleasing.

Artist Aaron Koblin expands on this: he takes vast amounts of data – and at times vast numbers of people – and weaves them into interesting visualisations. From elegant lines tracing airline flights to landscapes of cell-phone data, his works explore how modern technology can make us more human.

aaron-1

We can set parameters and choose what to pull out of the data glut. Below is a visualisation of data from the U.S. Federal Aviation Administration, processed to create animations of flight traffic patterns and density. The outcome is colour-coded by type: altitude, make and model.

aaron-2
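Colour-coding a parameter like altitude is, at bottom, a linear interpolation between two colours. A minimal sketch of the idea (my own illustration, nothing to do with Koblin’s actual pipeline): blend from blue for low altitude to red for high in RGB.

```java
public class AltitudeColour {
    // Linearly interpolate between two packed RGB colours by t in [0, 1].
    static int lerpColour(int rgbLow, int rgbHigh, double t) {
        int r = (int) Math.round(((rgbLow >> 16) & 0xFF) * (1 - t) + ((rgbHigh >> 16) & 0xFF) * t);
        int g = (int) Math.round(((rgbLow >> 8) & 0xFF) * (1 - t) + ((rgbHigh >> 8) & 0xFF) * t);
        int b = (int) Math.round((rgbLow & 0xFF) * (1 - t) + (rgbHigh & 0xFF) * t);
        return (r << 16) | (g << 8) | b;
    }

    public static void main(String[] args) {
        int low = 0x0000FF;  // blue for low altitude
        int high = 0xFF0000; // red for high altitude
        // A flight at 60% of the maximum altitude gets a 60/40 red-blue blend.
        System.out.printf("#%06X%n", lerpColour(low, high, 0.6));
    }
}
```

Every plotted flight path picks its colour from this ramp, which is what makes patterns in the data glut visible at a glance.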

Note to Self:

  • I will continue my reading/research into the representation of information gathered, and how best to present it once captured.