Interactive Electric Painting

Jess had this idea to make an interactive painting: create a picture using, in part, electrically-conductive paint, then string wires from the paint to a computer so that touching the paint would trigger sounds.

The implementation of this idea was a smash at the Wellington Maker Party, and we got a lot of questions about how we did it. This article will quickly detail each step we took to make this unique piece of multimedia art.

[Photo: the finished painting]

The Painting

Jess did all the painting. She went through a few stages of design:
first, she sketched out a really basic idea in her notebook. When she
was happy with that, she went and bought a canvas; she kept it cheap
and just got a basic all-purpose canvas. In theory, we could have just
found a big box, cut off one side, slathered it with some white paint,
and painted onto that, but we knew the painting would be used and abused by hundreds of small children, so we chose to purchase a sturdy canvas from a store.

On the canvas, her first step was to re-do her sketch, in pencil, in
full size. Once she was happy with that, she broke out some acrylic
paint and mixed some custom colours. She used the acrylics for most of
the piece, and a special conductive paint from BarePaint for the
triggers.

With the conductive paint, she created two types of contact points:
ground contacts and live contacts. This allowed us to demonstrate to
users the basic concepts behind electrical conductivity: if you place
your hand on a live contact point, the electrical charge continues
through your body toward the ground, and if you then place your other
hand on a ground contact point, the charge travels through your body,
into the ground cable, and on to the computer waiting for a signal,
thus completing the circuit and triggering a sound.

The Wires

We were doing this on the cheap, and also are big supporters of
recycling and re-using, so we found some cables and wires that some
people were throwing out and brought them home. We cut the ends off of
the cables and peeled back the casing. This exposed the inner wiring,
which we stripped just enough so that the bare copper was exposed at
either end.

This was frightfully delicate and dangerous work, because
unfortunately most products these days are not designed to be
de-constructed. This isn’t just wasteful; it makes it really difficult
to re-use parts. The bottom line is, if you’re going to cut cables and
strip them, get an adult to help you. Seriously, do not try this
without an adult; I am an adult, and I nearly cut my finger not once
but three different times. Even Jess and I are always sure to work as
a pair, so that if one of us does get hurt, the other one is there to get
help. Plus, it’s just more fun as a team.

Jess hot-glued the cables to her painting, with the copper wire
touching the conductive paint. For added conductivity, she painted
another layer over the wire.

Looking back, we think that next time we will either try a sponge or a
roller brush for the conductive paint, or else try some other kind of
material, because the paint wasn’t quite as conductive as we had
hoped. Then again, it did work as planned, so we don’t have any real
complaints. However, the paint is quite expensive, so we are looking
into other options for future projects.

The Makey Makey

Since Jess already owns a Makey Makey board, we used it as the
receptor for our wiring. In theory, we could have also used the GPIO
of a Raspberry Pi to receive all the signals, but since we had the
Makey Makey and wanted to show it off, we used it.

Each wire from each live contact point got connected to a unique point
on the Makey Makey. It didn’t really matter where, just as long as
each wire got its own connection. I think I counted 13 or 14 unique
inputs on the Makey Makey, and we ended up using 10 of those.

The ground wires all got connected to the ground connections of the
Makey Makey. Touching just the ground or just a live contact point
would not register, but touching both the ground and a live contact
point at once completed the circuit and triggered the Makey Makey.

When the Makey Makey receives a signal, it sends it over USB to a
Raspberry Pi, where it shows up as an ordinary keyboard or mouse press.
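
That means the Pi sees the Makey Makey as just another input device.
To find out which /dev/input node it landed on (that varies from
system to system), here is a quick sketch using the python-evdev
module, which the main script below also uses:

#!/usr/bin/env python
# List every input device the kernel knows about, so you can
# spot the event node that belongs to the Makey Makey.
# (Run as root if the device nodes are not readable.)
from evdev import InputDevice, list_devices

for path in list_devices():
    dev = InputDevice(path)
    print(path, '->', dev.name)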

The Raspberry Pi

Because we needed a fairly robust system for sound playback, I used
Fedora Linux (the Raspberry Pi remix, nicknamed “Pidora”) on this Pi, although Debian Linux (“Raspbian”) would be an acceptable choice as well. There are obviously other flavours of Linux for the
Pi, but I would stick with Fedora or Debian unless you really want to
sit around and tweak sound systems and IRQs. I don’t usually mind
doing just that, but in this case, I wanted to concentrate on making
the painting work, so I let the Fedora people figure out the low-level
OS stuff for me.

The Code

The code to make the sounds happen is a really simple little Python
script that I called alluvial.py. It requires two external Python modules: evdev and pygame.

The script does three things:

  1. It starts up and imports the evdev and pygame modules.
  2. It assigns some variables so that it can hold all the different
    sounds that we want to trigger. The sounds are played by the pygame module.
  3. It listens, via the evdev module, for signals coming in from USB.

This is the code (note that the indentation, shown here as four spaces per level, is significant in Python):

#!/usr/bin/env python

import pygame
from evdev import InputDevice, categorize, ecodes
from select import select

# The Makey Makey shows up as an ordinary USB input device
dev = InputDevice('/dev/input/event0')
#print("debug", dev)
pygame.mixer.init()

SOUNDS = '/usr/local/bin/alluvial/'

UP = pygame.mixer.Sound(SOUNDS + 'UP.ogg')        # Guitar
RIGHT = pygame.mixer.Sound(SOUNDS + 'RIGHT.ogg')  # Guitar
DOWN = pygame.mixer.Sound(SOUNDS + 'DOWN.ogg')    # Guitar
LEFT = pygame.mixer.Sound(SOUNDS + 'LEFT.ogg')    # Guitar

SPACE = pygame.mixer.Sound(SOUNDS + 'SPACE.ogg')  # Pno A

LMB = pygame.mixer.Sound(SOUNDS + 'LMB.ogg')      # Pno B
RMB = pygame.mixer.Sound(SOUNDS + 'RMB.ogg')      # Pno C

W = pygame.mixer.Sound(SOUNDS + 'W.ogg')          # Pno D
A = pygame.mixer.Sound(SOUNDS + 'A.ogg')          # Pno E
S = pygame.mixer.Sound(SOUNDS + 'S.ogg')          # Pno F
D = pygame.mixer.Sound(SOUNDS + 'D.ogg')          # Pno G

F = pygame.mixer.Sound(SOUNDS + 'F.ogg')          # Drum

while True:
    # Block until the Makey Makey has events for us
    r, w, x = select([dev], [], [])

    for event in dev.read():
        if event.type == ecodes.EV_KEY:
            APRESS = categorize(event)
            #print("debug - you pressed something", APRESS)
            # The string form of the event looks like
            # "key event at 1234.56, 103 (KEY_UP), down",
            # so field 4 is the raw keycode and the last field
            # is the key state
            ADICT = str(APRESS).split(" ")

            # key events married to sounds here
            if ADICT[-1] == 'down':
                print(ADICT[4])
                if ADICT[4] == '103':
                    UP.play(loops=0)
                elif ADICT[4] == '105':
                    RIGHT.play(loops=0)
                elif ADICT[4] == '108':
                    DOWN.play(loops=0)
                elif ADICT[4] == '106':
                    LEFT.play(loops=0)

                elif ADICT[4] == '57':
                    SPACE.play(loops=0)

                elif ADICT[4] == '272':
                    LMB.play(loops=0)
                elif ADICT[4] == '273':
                    RMB.play(loops=0)

                elif ADICT[4] == '17':
                    W.play(loops=0)
                elif ADICT[4] == '30':
                    A.play(loops=0)
                elif ADICT[4] == '31':
                    S.play(loops=0)
                elif ADICT[4] == '32':
                    D.play(loops=0)
                elif ADICT[4] == '33':
                    F.play(loops=0)

It’s a really basic script; in fact, it is the same kind of script you
might use in a simple platformer game (or similar) written in Python:
set up an infinite loop that listens for a specific key press, and
when it detects that key press, perform some action.
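
For comparison, here is what that same pattern looks like in plain
pygame; this is just an illustrative sketch (it opens a small window,
which alluvial.py never needs, because the Makey Makey is read through
evdev instead):

import pygame

# A minimal platformer-style event loop: wait for a key, react to it.
pygame.init()
screen = pygame.display.set_mode((200, 200))  # pygame wants a window for key events

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
        elif event.type == pygame.KEYDOWN and event.key == pygame.K_g:
            print("g pressed - this is where you would perform some action")

pygame.quit()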

It’s worth noting that I trigger the sounds on the DOWN event, rather
than the UP event. I don’t remember why I did that, but it works, and
it means the sound fires the moment you make contact rather than when
you let go.

It’s also worth noting that in order to trigger the sounds, I use the
raw keycode rather than something like

if event.key == ord('g'):
    print(ADICT[4])
    G.play(loops=0)

The reason I did this was to try to keep my code as raw as possible,
on the theory that the rawer it was, the less delay there would be
between a person touching the paint and the sound triggering. It would
have been even better to write this application in C++ or something
faster than Python, but Python has the advantage of being a lot easier
for beginner programmers to understand, and a few milliseconds of
delay seems a small price to pay for pretty code.
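
Incidentally, if you want the raw keycode without the string-splitting,
python-evdev can hand it to you directly: categorize() returns a
KeyEvent object whose scancode and keystate are plain attributes. A
minimal sketch of the same listen-and-match loop written that way
(assuming the same device path as in alluvial.py):

from select import select
from evdev import InputDevice, categorize, ecodes

dev = InputDevice('/dev/input/event0')

while True:
    r, w, x = select([dev], [], [])
    for event in dev.read():
        if event.type != ecodes.EV_KEY:
            continue
        keyev = categorize(event)             # wrap the raw event as a KeyEvent
        if keyev.keystate == keyev.key_down:  # trigger on DOWN, as alluvial.py does
            print(keyev.scancode)             # the raw keycode, e.g. 34 for the G key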

The Sounds

Piano sounds, a drum sound, and guitar sounds are all available from
freesound.org, a great Creative Commons repository of sound effects,
samples, and sample banks. I converted them to Ogg Vorbis files to
keep them small.
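
If you have ffmpeg installed, the conversion is a one-liner per file
(the filename here is just a placeholder; -q:a 4 is a middling quality
setting that keeps the files small):

ffmpeg -i UP.wav -c:a libvorbis -q:a 4 UP.ogg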

Making it an Appliance

Since the Pi is a computer with a full OS, when you power it on, you
have to log in and start whatever application it is that you want to
run. We wanted this painting to be an appliance, or in art terms, “an
installation”, meaning that we wanted to plug it in, wait a few
seconds for it to boot, and then start using it without ever having to
hook up a monitor or keyboard.

To achieve this, I did a few things.

    1. First, I optimised the boot time of the Pi by stripping out all
      the unnecessary software that normally starts during boot. For
      instance, I did not need networking or fancy video drivers or any
      graphical environment, so I removed all of those components. The
      Pi now booted to a text login screen. It took maybe 5 seconds
      from being plugged in to being fully operational.

      It still wanted me to log in, however, and I didn’t want to have
      to do that. I just wanted the Pi to boot up and start the Python
      script.

      Normally on Fedora or Debian, you would enable auto-login through
      your desktop’s session manager. But I had scrapped the desktop,
      so I needed auto-login for my text console. To do that, you use
      raw systemd, the init subsystem that gets Fedora and Debian from
      a useless paperweight to a fully functioning, self-aware (well,
      more or less) computer. Specifically, you need to modify:
      /usr/lib/systemd/system/getty@.service

      In that file, there is a block of code that looks a little something like this:


      [Service]
      Environment=TERM=linux
      ExecStart=-/sbin/agetty --noclear %I 38400
      Type=idle

      but you need to comment out one line and add a new auto-login line:


      [Service]
      Environment=TERM=linux
      ####ExecStart=-/sbin/agetty --noclear %I 38400
      ExecStart=/sbin/agetty --autologin root --noclear %I 38400 linux
      Type=idle

      The line you have added, bit by bit:

      ExecStart = the command to run when this getty unit starts
      /sbin/agetty = opens a tty (text console) port
      --autologin root = tell agetty to log in automatically as root
      --noclear = tell agetty not to clear the screen first
      %I 38400 linux = which tty to open, at what baud rate, and with
      what terminal type

      Notice that we are logging in as root. Normally I would never do this, ever. However, this Raspberry Pi is an appliance without so much as a networking stack, and it is constantly being monitored. If this was going to be put in an art gallery, left unattended, I might re-think this, but for now it’s basically an un-embedded embedded system, so I really had no problem running as root.

      Now the Pi booted with auto-login. Now to get the alluvial.py script
      to auto-start.

    2. On traditional UNIX systems (such as a BSD or something like
      Slackware), an application could be started at boot by placing it
      in rc.local. Systemd does not use rc.anything, and while I had
      read online that compatibility with rc.local was being
      maintained, I could not get that to work. So, I wrote a systemd
      service script, which is a lot easier than it sounds.

      I created a file at /etc/systemd/system/alluvial.service and put
      this code into it:
      [Unit]
      Description=Alluvial Startup
      [Service]
      Type=idle
      ExecStart=/usr/local/bin/alluvial/alluvial.py

      [Install]
      WantedBy=multi-user.target

      Enable the service with the usual systemd incantation:


      systemctl enable alluvial.service

    3. The last thing I had to do was place my alluvial.py script and
      all of its assets (the sound files) into some logical location.
      As you can probably tell by the code I’ve already pasted in, that
      location was /usr/local/bin/alluvial but really you could put it
      anywhere, or you could send the sounds to /usr/local/share/alluvial
      and the executable to /usr/local/bin, or whatever. I was lazy and
      put everything into /usr/local/bin.
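
Wherever the files end up, two small details matter: the script has to
be executable, because the systemd unit runs it directly through its
shebang line, and the sound files have to sit wherever the SOUNDS
variable in the script points. With everything dumped into
/usr/local/bin/alluvial, that meant one extra command:

chmod +x /usr/local/bin/alluvial/alluvial.py

You can also test the whole service without rebooting; the
daemon-reload makes systemd re-scan for the new unit file:

systemctl daemon-reload
systemctl start alluvial.service
systemctl status alluvial.service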

The Case

The case was printed out on a 3D printer by our friend Wolf at the Wellington Makerspace. You can find files to print many different styles of Pi cases on Thingiverse or you can 3D model your own.

Results

Reboot the Pi, and eight seconds later, it is up and running with the
alluvial.py script patiently listening for signals. Connect some
speakers, touch different parts of the painting, and the sounds are
played. It’s like magic, except a lot more technical.
