• The Conductive Ink Challenge

    The Challenge

    Conductive Ink Challenge Badge

    I was recently asked if I wanted to take part in a competition organised by Newark Canada and presented with VHS (big thanks to Tom :-) ); a challenge focused on a very interesting and versatile product: conductive ink. I received a pen in the mail and started to investigate what I could do with it. The Conductive Ink Pen essentially lets you draw lines, or tracks, that you can pass electricity through. Here is the project I decided to submit for the challenge. I’ll also explain how I went about building it and how these concepts can be applied to other projects.

     

     Rudolph the Conductive Nosed Reindeer


    When constructing this project I wanted to make the conductive ink the star: really show off what it can do, but also take it a little out of the norm. From an artistic point of view I really like the way the silver conductive ink is highlighted on the white stone, even more so when it is lit; beauty in simplicity. I also wanted to bring people closer and let them connect with the piece, so I created a touch interface that starts the sequence.

    Building the Project

    The piece is made up of 4 main parts:

    1. Base Model

    2. Surface mounted lights

    3. Microcontroller

    4. Conductive Ink

    The Base Model:

    As Christmas is nearing, there are plenty of ornaments being stocked at local craft shops. After an afternoon looking for just the right subject I came across a reindeer and the idea was obvious. I glued it to a base and that was all I needed to get going.

    Surface mounted lights:

    Surface mount devices, or SMDs for short, are components mounted on the surface of a circuit board, as opposed to the traditional through-hole method. I decided to use these tiny lights because I didn’t want to drill holes into the reindeer and I wanted to keep the process simple. I mounted one red SMD LED (light emitting diode, essentially a tiny light) on his nose and two more on the front corners of the base, giving some mood lighting to the model.

     The Microcontroller:

    To control the whole sequence I used a small microcontroller, an Arduino. It looks after detecting the touch, animating the lights and triggering the audio to play on the computer. The code for this project is based on the code from my earlier experiments with capacitive sensing with conductive ink, with the addition of the light animation.
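
    As a rough illustration of how those pieces fit together, here is a minimal sketch of the idea (the pins, threshold and timings below are placeholders, not the exact values used in the piece): when the capacitive reading from the ink line crosses a threshold, the nose and mood-lighting LEDs fade up and a serial message is sent that the computer can listen for to start the audio.

    #include <CapacitiveSensor.h>

    // Placeholder pins: send on 3, receive on 4 (the conductive ink touch line),
    // nose and mood-lighting LEDs on PWM pins 9 and 10.
    CapacitiveSensor touchLine = CapacitiveSensor(3, 4);
    const int nosePin = 9;
    const int moodPin = 10;
    const long touchThreshold = 1000;   // assumed value; tune to your ink surface

    void setup()
    {
        pinMode(nosePin, OUTPUT);
        pinMode(moodPin, OUTPUT);
        Serial.begin(9600);
    }

    void loop()
    {
        long reading = touchLine.capacitiveSensor(30);

        if (reading > touchThreshold)
        {
            Serial.println("PLAY");                      // the computer listens for this to start the audio
            for (int level = 0; level <= 255; level++)   // fade the LEDs up
            {
                analogWrite(nosePin, level);
                analogWrite(moodPin, level);
                delay(5);
            }
            delay(3000);                                 // hold while the audio plays
            for (int level = 255; level >= 0; level--)   // fade back down
            {
                analogWrite(nosePin, level);
                analogWrite(moodPin, level);
                delay(5);
            }
        }
    }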

    Conductive Ink:

    The main star of this show is the conductive ink; it was used to connect all the parts together. There are many great things about this product: the pens are easy to use and can be applied to almost any material and any shaped surface, but the best thing is that you can use it as part of the artwork. The ink used to light the nose runs down the left and right sides of the reindeer's face and front legs and out to the sides of the base. For the touch sensor I used a single line from the antlers, down its back and across the background of the base.

     

    Other Applications

    Conductive ink is such a versatile and easy to use product, from teaching basic electronics concepts to complex circuits, flexible boards and interactive projects. Capacitive sensing, as used in this project, can be applied to many applications, from detecting a user’s touch or proximity to creating paper-thin controls as I showed in a previous post. The potential uses for conductive ink are vast; grab a pen and start experimenting with switches, sliders, interactive circuits and great looking art.


  • Intel’s Perceptual Computing Camera

    Intel - Perceptual Computing

    At GDC 2013 I spoke at the "Natural, Intuitive, and Immersive Gaming With Intel Perceptual Computing" developer session and at the Intel Booth, introducing the emerging field of Physical Technical Art, an area that focuses on human-computer and human-human interactions and physical workflows during game development and play. One of the main interaction devices I covered in my talk was the Intel Perceptual Computing Camera. In this article I’m going to talk about my experiences working with this hardware and how I used the provided SDK to port an iOS game to its perceptual interface.

    What is Perceptual Computing

    Traditionally, when interacting with a computer you only had a handful of basic input systems: keyboard, mouse or trackball, joystick, or digitising tablet. Over the years this has exploded to include speech recognition, touch and multi-touch interfaces, eye and face tracking, motion and gesture tracking, and even scratch detection, to name a few.

    Intel’s Perceptual Computing initiative focuses on the research and development of software and hardware encompassing speech recognition, face identification, finger and hand gesture tracking and augmented reality for consumers, in turn improving the natural user interface, or human-machine interface.


    Intel Developers Session

    Intel Booth Talk on porting "The Bowling Dead"

    The Camera

    Intel Gesture Camera

    Intel Perceptual Camera

    The Intel Perceptual Computing camera is actually a Creative camera: a USB 2.0 HD webcam with a dual-array microphone and a depth sensor designed for close-range interactions. Unlike the Kinect it doesn’t require any additional power, and it’s quite small, as you can see in the first image. The camera does some image and sound processing on board, but most of the magic is in Intel’s SDK. It’s great for laptops and desktops, but not so accessible to lower powered systems such as the Raspberry Pi or Arduino.


    How it works

    The Creative camera is built from a number of parts, including an HD camera that captures 24-bit color at 30fps as well as stereo microphones; typically what you would find in a standard HD web camera. The other camera is an 8-bit infrared camera, with an infrared-emitting light right next to it, which can capture up to 60fps at 320x240 pixels.

    The HD color camera works as a normal camera would, although it has an infrared filter over the lens to stop any light produced by the IR emitter flooding the color image. The infrared camera is a little different; it works in tandem with the light. It uses a concept called Time of Flight, or ToF, which uses the known speed of light to measure distance. This requires some precision electronics to time an IR flash to each pixel on the IR sensor. With this timing an image can be created as a gradient of distance, as you can see in the grayscale image below.
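
    The arithmetic behind ToF is straightforward: light covers roughly 0.3 mm every picosecond, and the pulse travels out to the surface and back, so the distance is half of the round-trip time multiplied by the speed of light. The toy function below illustrates the principle (it is not the camera's actual firmware):

    // Toy illustration of the Time of Flight principle, not the camera's firmware.
    // Light covers ~0.2998 mm per picosecond; the pulse travels out and back,
    // so the distance to the surface is half the round trip.
    float tofDistanceMillimetres(float roundTripPicoseconds)
    {
        const float lightSpeedMMPerPS = 0.2998;
        return (lightSpeedMMPerPS * roundTripPicoseconds) / 2.0;
    }

    // e.g. a measured round trip of ~3336 ps puts the surface about 500 mm away.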

    Anomalies can appear in the image if the surface you are illuminating doesn’t reflect the light correctly (or at all), or sits outside the sensor’s timing range. Glass is a great example of this: it is virtually invisible to the sensor, and we can use that to our advantage; I’ll go into a little more “depth” later. The IR light is similar to the light used in a remote control, and as with a remote you can see it with a mobile phone camera or any other camera that doesn't have an IR filter on it.

    The dual microphones not only offer high quality audio capture at 48kHz, they also provide positional information that can be used to identify who is talking in the image, or to pick up other gestural cues such as clicking or tapping. The PCSDK also includes a speech recognition system for voice commands and dictation.


    Parts of the Camera


    Color and Depth Images

    With the information collected via the color and infrared sensors, the Intel PCSDK can algorithmically detect hand positions and gestures, detect faces and facial motion, and produce 3D point clouds of the viewed scene, as you can see in the images below.

    The PCSDK is a simple to use but very powerful API which takes the hard work out of the computation and image manipulation required to track and detect these gestures. Information about each finger, palm or gesture is easily queried whether you are using it in a standalone app, incorporated into a presentation system such as Cinder, in a web app, or in a game engine as I did with The Bowling Dead port.

     


    Hand & Face Tracking and Point Cloud Images

    Comparable Technologies

    There are a number of similar hardware devices to the Intel Camera, each having their own strengths and weaknesses. I have included some of the major hardware devices that are currently popular and in use.

    Kinect 1&2

    Kinect - Xbox 360 / Kinect - Xbox One

    The Kinect for PC has been a staple for hackers for a number of years now, with applications ranging from full body motion capture systems to 3D scanners. The resolution of the image it provides is quite low, however, and the hardware’s age is starting to tell. The Kinect 1 camera uses a slightly different technique for constructing its 3D scene, namely structured light: the system projects an IR pattern (the structured light) over the scene and reconstructs depth from the distortion of that pattern.

    With the second generation of Kinect currently shipping with the Xbox One, a PC version will soon be available. This new hardware has switched to the ToF system with an HD sensor, which will greatly improve its depth quality and overall tracking. One of the major downsides to the Kinect is its form factor: the hardware is designed as a tabletop device that requires external power in addition to the USB connection, not as an ultra portable device. Access to the Kinect 2 API is still restricted but will be released soon.

    Leap Motion


    The Leap Motion is constructed from two cameras and three IR LEDs and runs completely off USB. Measuring only 3” long, this unit is extremely responsive, fits nicely in front of your keyboard, and will soon be integrated within one. With its cameras pointing directly up it is able to capture fingertips and hand motion smoothly and at a high frame rate. It is best suited to interaction with palms facing down towards the camera. The recent SDK 2.0 brings improvements to hand and finger tracking, with better occluded-finger tracking and hand and finger labeling.

    Intel Camera

    Creative Camera

    Overall, the Intel PCSDK is the powerhouse behind this developer hardware. It provides a great deal of information about the subject, with hand, face and gesture tracking along with voice recognition. Its medium form factor means it can be taken easily wherever you go, and it runs completely off USB.

    Now that the developer hardware has been out for a year, Intel has announced and demoed at CES 2014 and IDF Shenzhen 2014 that within the year we will see a greatly miniaturized device embedded in laptops and tablets. This is going to have a phenomenal effect on how we interact with our hardware. I’ll certainly be looking forward to testing out the new incarnation of this device.

    Compact PC Camera

    Experiments

    Over the months leading up to GDC 13 I was able to test out a couple of features of the SDK. I was particularly interested in seeing how a touch based game would adapt to the Intel Camera.

    The Bowling Dead - iOS to Perceptual Computing SDK conversion

    The Bowling Dead is an iOS game that uses a touch-swipe mechanic to bowl various bowling balls, some explosive, at the ever increasing zombie horde marching towards you in an alleyway. If a zombie reaches you the game switches to a frantic melee and you are required to remove the zombie before it devours you.

     

     

    Hand Tracking in Maya

    Using the SDK I decided to test whether hand tracking would work within Maya. It was relatively easy to get it all hooked up and operational: I used a combination of C# with Python.NET, and it was surprisingly fast. This could be extended to allow the manipulation of other objects or brushes within the scene, or used as a general motion capture interface.

     

    Head Tracking in Maya (Perspective coupled Head Tracking)

    In this video you can see perspective-coupled head tracking. This system drives the camera in planar space based on the movement of the user’s head, producing perspective parallax, which is a component of depth perception. In addition to this technique the user could wear anaglyphic glasses (typically red and blue lenses) or polarized glasses, adding stereo vision, which is key to producing immersion.
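
    As a rough sketch of the mapping (illustrative only, not the actual Maya hookup): the head position reported by the tracker, normalised across the camera image, is simply scaled into a planar offset applied to the render camera each frame, so near objects shift against far ones as the head moves and you get the parallax cue.

    // Minimal sketch of perspective-coupled head tracking (illustrative only).
    // The tracked head position, normalised to the -1..1 range across the camera
    // image, is scaled into a planar offset applied to the render camera each frame.
    struct Vec2 { float x; float y; };

    Vec2 headToCameraOffset(Vec2 headNormalised, float maxOffsetX, float maxOffsetY)
    {
        Vec2 offset;
        offset.x = headNormalised.x * maxOffsetX;   // slide the camera horizontally with the head
        offset.y = headNormalised.y * maxOffsetY;   // and vertically, keeping it aimed at the scene
        return offset;
    }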

     

     

    In later blogs I'll be delving a little more into the details on how I ported the game from iOS to the Intel Perceptual Computing SDK.

     

    Stay Tuned!

     


  • Conductive Ink – Capacitive Sensing Test

    Earlier this week I got a conductive ink pen for a competition that I am currently entered in. I had a bit of a muck around with what I could do with it, and the first test I tried was a capacitive sensing control. The control is a simple slider that drives an LED’s brightness. Capacitive sensing is a technique that uses the capacitance of the body to alter the charging time on an input pin: the larger the contact surface, the longer the charge time becomes. Knowing this, we can map the charge time to a scale and thus to the LED brightness.

    Capacitive sensing circuit using a conductive ink surface with a 1 megaohm resistor and a 100pF capacitor.


    The code below is based on the CapSense tutorial from the Arduino Playground, with some additions: I’ve added three-sample smoothing and some low-end noise reduction. The LED brightness is then mapped from the values coming directly out of the CapacitiveSensor library used in the tutorial.

    #include <CapacitiveSensor.h>

    CapacitiveSensor cs_3_4 = CapacitiveSensor(3,4);
    int led = 9;
    long total;
    long total_01;
    long total_02;
     
    void setup()
    {
        // turn off autocalibrate on channel 1 - just as an example
        cs_3_4.set_CS_AutocaL_Millis(0xFFFFFFFF);
     
        Serial.begin(9600);
        pinMode(led, OUTPUT);
    }
     
    void loop()
    {
        long start = millis();
     
        //Buffer Samples
        total_02 = total_01;
        total_01 = total;
        total = cs_3_4.capacitiveSensor(30);
     
        // Average (Smooth values over 3 Samples)
        total = (total + total_01 + total_02) / 3;
     
        // Remove Bottom end noise
        total -= 300;
     
        if(total < 0)
            total = 0;
        else if(total > 11000)
            total = 11000;
     
        // Update LED brightness (map the 0-11000 reading down to the 0-255 PWM range)
        analogWrite(led, total/43);
     
        // check on performance in milliseconds & print total (debug)
        Serial.print(millis() - start);
        Serial.print("\t");
        Serial.println(total/43);
     
        // arbitrary delay to limit data to serial port
        delay(10);
    }

    The Conductive Ink Pen worked quite well and was easy to apply; I didn’t need much to get a good surface down on the paper to act as a contact. You can see in the video that I made the conductive surface in the shape of a triangle: this is what provides the variation in charge time. Because the finger stays in constant contact with the paper, you vary the amount of contact area to drive the change.

    I have a couple more sensor ideas to try out in the next few weeks and I’ll keep you posted. If you want to try some yourself, head over to Newark and pick up a pen; there are some pretty freakin’ cool circuits you can make.

     


  • Super Ultra Deadrising 3 Display

    Over the past 6 months I’ve been working on the expansion pack "Super Ultra Deadrising 3' Arcade Remix Hyper Edition EX Plus Alpha", and as part of the project I helped Jason Buchwitz build an in-house display for it. Here’s a little sample of what the sign can do: starting with just an idle animation, every minute or so it enters an attract mode where the sign lives up to the name of the game… massively over the top and completely awesome! Check out the video below, and afterwards you can read about how the display was put together.

     

     

    The sign is built mainly of foam core with graphics printed and glued to its surface. To illuminate the sign I used some 36mm square 12V digital RGB LED pixels.

     

    These lights are individually addressable (4 LEDs per square) and can produce the full RGB gamut. I chained a couple of these strings together and fed them around the display, starting around the logo and then behind the flames. The lights are driven by a controller chip (the WS2801) and take data via SPI, the Serial Peripheral Interface. This data feed is sent from an Arduino Due.
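
    The full code and wiring will come in a later post, but as a hedged sketch of the idea: a driver library such as Adafruit's WS2801 library lets you set each pixel's colour and clock the data out over SPI. The pixel count and the simple chase pattern below are placeholders, not the actual show sequence.

    #include <SPI.h>
    #include <Adafruit_WS2801.h>

    // Placeholder pixel count; constructing with just a length uses hardware SPI.
    const int pixelCount = 50;
    Adafruit_WS2801 strip = Adafruit_WS2801(pixelCount);

    void setup()
    {
        strip.begin();
        strip.show();                            // push all-off to the chain
    }

    void loop()
    {
        // Stand-in idle animation: sweep a single orange "flame" pixel along the chain.
        for (int i = 0; i < strip.numPixels(); i++)
        {
            strip.setPixelColor(i, 255, 80, 0);  // r, g, b
            strip.show();
            delay(40);
            strip.setPixelColor(i, 0, 0, 0);     // turn it back off for the next step
        }
    }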

     

     

    This 84 MHz ARM microcontroller looks after the sequencing of the display and can also control audio playback. The whole display is powered from two 12V power supplies for the lights and a 5V power adapter for the Arduino. I'll be going into a little more detail about the code and wiring setup in future posts... Stay tuned!

     


  • Netduino Powered Pumpkin

    It’s getting closer to that time of the year when people walk around at night wearing ghastly clothing; no, it’s not Talk Like a Pirate Day, it’s Halloween. As part of the festivities at work we have a yearly best costume and pumpkin carving competition, so I thought I would share our team’s 2nd place winning pumpkin family.




    The Pumpkin Family



    Yes, the pumpkin was souped up a little from the traditional pumpkin; the competition rules allowed for props to be used, so I decided to take that to the limit. In a last minute rush the night before the carving, I wired, programmed and sequenced the hardware.

    The setup consisted of a Netduino (a microcontroller programmed in C#) for the brains, 4 LEDs (light emitting diodes) for the eyes, 2 servos (motors that let you set the rotation of an arm between 0 and 180 degrees) for the eyes and eyebrows, and a little speaker to play the theme tune.

    Once all the hardware was built, I created a very simple program to play the theme tune, sequence the movement of the servos and toggle the LEDs. The first thing I wrote was the tone generator: to play a note I pass in the note (say, F#), look up its frequency in a table, and set the speaker output pin to oscillate at that frequency. Next I created an array of notes and durations; these form the basis of the sequencer. With the music playing (thanks go to my wife, who converted the sheet music to notes and durations for me), I created the servo and LED tracks and set their timing.
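
    The Netduino code itself was written in C# and isn't reproduced here, but the same idea translates directly: a note-to-frequency lookup plus parallel note and duration arrays that the sequencer walks through. Below is an illustrative Arduino-style sketch of that structure (the melody shown is a placeholder, not the actual theme).

    // Illustrative Arduino-style sketch of the tone generator and sequencer idea
    // (the original was C# on the Netduino and is not shown here).
    const int speakerPin = 8;                    // placeholder output pin

    // Frequency lookup for a handful of notes, in Hz (e.g. F#4 = 370).
    const int NOTE_FS4 = 370;
    const int NOTE_A4  = 440;
    const int NOTE_B4  = 494;

    // Placeholder melody and note lengths, not the actual theme tune.
    const int melody[]    = { NOTE_FS4, NOTE_A4, NOTE_B4, NOTE_A4 };
    const int durations[] = { 250, 250, 500, 500 };          // milliseconds
    const int noteCount   = sizeof(melody) / sizeof(melody[0]);

    void setup()
    {
        for (int i = 0; i < noteCount; i++)
        {
            tone(speakerPin, melody[i], durations[i]);       // oscillate the pin at the note's frequency
            delay(durations[i] + 30);                        // small gap between notes
            // ...the servo and LED "tracks" would be stepped here on the same timeline
        }
    }

    void loop() { }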


    Netduino & wiring



    The only thing left to do was create an event handler that fires off the sequence when I press a button on the Netduino. The quality of the music was quite gritty due to not having any smoothing electronics in place, but I kinda liked the classic sound, so I left it.

    The eyes in the following image were bought at a dollar store and mounted on some wire as pivots; the servos and control arms were then mounted to some cardboard for easy installation.


    Inside the pumpkin

    So I hope this quick rundown of how I created the Pumpkin Family electronics has inspired you to go and make your pumpkin a little more high-tech.

    Happy Hacking!