Project Supernova — https://vimeo.com/161542312

Making pinball fun to watch (and play!)

Behind the scenes: Project Supernova

Erik Klimczak
Perficient Digital Labs
6 min read · Apr 7, 2016


From the day our Williams Firepower pinball machine arrived, we’ve been brainstorming ways to make it more engaging. Pinball is fun to play, but it’s not so fun to watch. Nevertheless, people love to hang around a pinball machine. Seriously, it’s like a chunk of gravity: it pulls people in better than any water cooler. But the spectator experience hasn’t been great, and we wanted to change that. We also wanted to make a machine built in the ’80s more relevant in an Xbox world.

Read on to learn more about the project, or check out the final result on Vimeo.

Interactive graphics triggered by sensors on the machine

The concept

Pinball machines have always been about two things: engineering and design. Whichever machine attracts (and retains) the most customers through a combination of gameplay and intriguing art will stay on the floor at the arcade or in a collector’s basement.

Sticking to those principles, we wanted to physically expand the interaction experience outside of the playfield and onto the nearby walls. We wanted to make it fun for everyone in the room, not just the player.

Early concept drawings

It might not be obvious at first, but the pinball machine is telling a story, and as a player you’re a part of it. The story is told with late-70s-era sci-fi art (inspired by Angus McKie) and complemented by a myriad of twinkling lights and 8-bit sound effects.

We wanted to re-imagine the story of Firepower and tell it with modern storytelling tools. After studying the machine’s artwork, we derived a storyline consisting of three scenes: a flight scene, a wormhole, and a cityscape. The goal was to create a visual journey that complements each of the three balls in a unique way.

Creative process

Designing for a 15-foot projection and a pinball machine isn’t your everyday task, so we took an iterative approach to refining our concept and artwork.

Finding inspiration

The graphics on the backglass depict an X-wing-ish / Death Star-like battle featuring explosions, lasers, and a black hole. If you look closely, there’s also a neat halftone effect on all the graphics. All of these elements informed the visual language we created for the project.

Artwork found on the Firepower pinball machine

We wanted to pay homage to the original graphics on the machine, so we used similar graphic elements and the stylistic textures commonly found in 1970s and ’80s film. Then, to give it a modern touch, we introduced a cooler, eerier color palette and the hyperreal feel that is common in today’s video games.

Artwork from the final project

We designed the scenes using a combination of Cinema 4D and WebGL, along with tools like Photoshop, Illustrator, and Sketch.

Interaction design

One insight we gleaned from seeing the projection at scale was the relationship between the screen and the player’s line of sight. Since the player’s head tilts downward during gameplay, the “usable” screen real estate becomes the areas immediately to the left and right of the machine. Luckily, the fix was easy: we simply recomposed our scenes to place elements in the player’s peripheral vision.

Projection graphics were optimized for the player and the audience

Playing a game with the projected graphics was an intense experience and added a whole new dimension to the gameplay. But it wasn’t enough. To take it to the next level, we knew we needed to control the lighting and sound effects through actions on the table.

Final artwork from the projection

Technology highlights

We’ve used frameworks like openFrameworks and Cinder for installations like this in the past, but this time we were curious whether a full-screen web browser running WebGL on top of a Node.js backend would be up to the task. As you can see, it was! Here are some of the tech highlights:

Sensor integration

Our first task was to get the pinball hardware talking to our server. To accomplish this, we used two separate detection techniques: piggybacking on existing lamps and switches (for on/off states), and wiring up our own switches triggered by the higher-voltage mechanical interactions under the playfield.

Various sensors on the machine triggered visual effects in the projection

Wiring all of these sensors to an Arduino and talking to it through the server’s serial port gave us an easy way to detect events happening on the machine (like a ball being plunged or a pop bumper being hit). Once the server was notified of a hardware event, it fired off a command via Socket.IO to the web interface to ‘shoot lasers’ or ’cause explosions’.
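
To give a flavor of the plumbing, here’s a minimal sketch of that relay, assuming the serialport and socket.io npm packages (whose APIs vary by version) and an Arduino sketch that prints one event name per line. The event and effect names below are made up for illustration:

```javascript
// server.js — relay Arduino events to the browser (a sketch, not our exact code)
const SerialPort = require('serialport');        // npm: serialport
const Readline   = SerialPort.parsers.Readline;
const io         = require('socket.io')(3000);   // npm: socket.io

const port   = new SerialPort('/dev/ttyACM0', { baudRate: 9600 });
const parser = port.pipe(new Readline({ delimiter: '\n' }));

// hypothetical mapping from raw hardware events to front-end effects
const effects = {
  PLUNGE:     'launch-ship',
  BUMPER_HIT: 'explosion',
  TARGET_HIT: 'shoot-lasers'
};

parser.on('data', (line) => {
  const effect = effects[line.trim()];
  if (effect) io.emit('effect', effect);         // broadcast to the WebGL page
});
```

On the browser side, a matching socket.on('effect', …) handler kicks off the corresponding animation.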

Plunging the ball causes the ship to launch

Optical character recognition (OCR)

At this point we still needed a way to switch scenes between balls and track scores. Unfortunately, the hardware sensors couldn’t easily read a player’s score and ball count. We had OpenCV experience from a past project and decided to try that in combination with OCR to read the scores from the backglass via webcam.

To our delight we found Node-ready wrappers for OpenCV (computer vision) and Tesseract (OCR). This let us keep the frontend and backend code in the same project, written primarily in JavaScript.

After some minor install and configuration headaches we successfully got OpenCV running in Node and generating clean, high-contrast images perfect for character recognition:

We used OpenCV to process the webcam video and extract the player scores and ball count
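
For a rough idea of what that cleanup step looks like, here’s a sketch using the node-opencv wrapper. Method names vary by wrapper version, so treat the specifics (and file names) as assumptions rather than our exact pipeline:

```javascript
// preprocess a webcam frame into a high-contrast image for OCR (sketch)
const cv = require('opencv');                    // npm: opencv (node wrapper)

cv.readImage('webcam-frame.png', (err, frame) => {
  if (err) throw err;
  frame.convertGrayscale();                      // color carries no signal here
  const binary = frame.threshold(128, 255);      // hard threshold -> crisp digits
  binary.save('display-clean.png');              // handed off to the OCR step
});
```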

One hurdle we ran into was reading seven-segment displays. Tesseract is designed to scan full pages of text for things like digitizing print media; it works well for common fonts like Arial or Times New Roman, but not for seven-segment displays. A little more research led us to a library called SSOCR, which was exactly the tool we needed.
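
Since SSOCR ships as a command-line tool, one straightforward way to use it from Node is to shell out to the binary. A hedged sketch (the digit count and file names are illustrative, not necessarily how we wired it up):

```javascript
// read a seven-segment display image with the ssocr CLI (sketch)
const { execFile } = require('child_process');

function readDisplay(imagePath, digits, cb) {
  // -d sets the expected digit count; adjust it to the display being read
  execFile('ssocr', ['-d', String(digits), imagePath], (err, stdout) => {
    if (err) return cb(err);
    cb(null, parseInt(stdout.trim(), 10));
  });
}

readDisplay('player1-display.png', 6, (err, score) => {
  if (!err) console.log('Player 1:', score);
});
```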

Using computer vision and SSOCR to read scores off the machine in realtime

As you can see from the figure above, once we pulled it all together it worked like a charm! This part of the project seemed particularly useful, so we broke it out into its own starter project for anyone to use. You can take a peek at the code on GitHub here:

https://github.com/truthlabs/SuperNova-OCR

Custom pixel shaders

Once we had the scenes rendering and animating in WebGL, the result felt a little dull and lacked “excitement”. Some of the colors didn’t translate well to the projector, and overall the scene needed more finesse to achieve the visual quality we wanted. So we turned to custom pixel shaders.

We used pixel shaders to create a more cinematic look and feel

Many of the objects in the scene have custom fragment and vertex shaders applied. These shaders are dual-purpose: 1) they create a cinematic look, and 2) they respond to the machine’s hardware events.
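
To illustrate the idea (these are toys, not our production shaders), here’s an interactive material sketched with Three.js — an assumed stand-in, since any WebGL setup works the same way. A hardware event spikes an intensity uniform, and the render loop lets it decay:

```javascript
// toy "interactive" shader: hardware events flash the material (sketch)
const material = new THREE.ShaderMaterial({
  uniforms: {
    time:      { value: 0 },
    intensity: { value: 0 }   // spiked by pop-bumper hits, decays each frame
  },
  vertexShader: `
    void main() {
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }
  `,
  fragmentShader: `
    uniform float time;
    uniform float intensity;
    void main() {
      vec3 base  = vec3(0.08, 0.15, 0.35);                 // cool, eerie tint
      float glow = intensity * (0.5 + 0.5 * sin(time * 10.0));
      gl_FragColor = vec4(base + glow, 1.0);
    }
  `
});

// assuming the same Socket.IO client as in the sensor sketch above
socket.on('effect', () => { material.uniforms.intensity.value = 1.0; });

function tick(t) {
  material.uniforms.time.value = t / 1000;
  material.uniforms.intensity.value *= 0.95;               // fade back out
  requestAnimationFrame(tick);
}
requestAnimationFrame(tick);
```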

Putting it all together

We had a blast (literally) working on this project. We’re big fans of pushing web technologies to their limits, and after pumping thousands of 3D objects to the screen at 60fps, we walked away feeling confident in Node.js and full-screen WebGL for installation work. Below is a video of the final output in action.

The final output

Enjoy the post? Show us some ♡ below.
