UXFest and My Talk on Second Screen Experiences

October 11th, 2013 by Chris Allen

[Photo: Chris presenting at UX Fest]

Last week I presented at a really well-attended and inspiring conference here in Boston called UXFest, run by design firm Fresh Tilled Soil. I spoke to a packed room of UX enthusiasts who were interested in what I had to say about the direction of user experiences in video games and how these designs can play out in other industries. I got quite a few requests to post my slides from the talk, but given that I tend to take the approach of one image per slide with little-to-no text, that simply wasn’t going to work. For example, a slide that shows nothing but an iPhone followed by a photo of a remote-controlled helicopter wouldn’t make much sense without some context. So with that, here’s a summary of my talk in blog form.

Kiwi Katapult Revenge Case Study by Intel

July 29th, 2013 by Adam Doucette

Earlier this year, Intel invited Infrared5 to compete in its Ultimate Coder Challenge: Going Perceptual, a groundbreaking contest that provided participants with an Ultrabook device, Creative Interactive Gesture Camera development kits, a still-evolving perceptual computing SDK, and all of the support needed to let their imaginations run rampant. With its focus on sensor-based input, Brass Monkey technology seemed a natural complement to perceptual computing, but the question was how to mesh the two. William Van Winkle recounts our team’s experience of this challenging but fulfilling opportunity. As always, we welcome all questions and feedback. Enjoy!

http://software.intel.com/en-us/articles/infrared5-case-study


Infrared5 Developer Steff Kelsey featured on Intel Developer Zone Blog

June 20th, 2013 by Adam Doucette

Recently our own developer extraordinaire Steff Kelsey (@thecodevik1ng) was featured on the Intel Developer Zone blog with his post, Masking RGB Inputs with Depth Data using the Intel PercC SDK and OpenCV. Steff and our team here at Infrared5 took part in Intel’s Ultimate Coder Challenge, building Kiwi Catapult Revenge with the Intel Perceptual Computing SDK. Check out Steff’s article below as he walks through the approaches used, what to look out for, and some example code.

Steff Kelsey Intel Blog Post


Design Occlusion is Killing Your Game Design

June 5th, 2013 by Adriel Calder

Last March, while at the annual Game Developers Conference in San Francisco, I attended numerous enlightening talks focusing on the different ways one can approach game design.  The one that stood out to me the most was titled “Design Occlusion is Killing Your Creativity,” presented by Dylan Cuthbert of Q-Games.  Cuthbert’s talk focused mainly on his time working for Shigeru Miyamoto on Star Fox for the SNES and the lessons he learned as a young game developer working with an established figure in the industry.  In addition to this, Cuthbert was also faced with understanding and appreciating the differences between British and Japanese approaches to design. “We were a very cocky sort of British programmers…sort of taught ourselves everything and we were suddenly thrown into this Japanese environment….and we were kind of in awe and also kind of in shock at the same time about the process.”



Wrapping Up: IR5’s Team Looks Back On Kiwi Katapult Revenge

April 23rd, 2013 by Rosie

This week marks the wrap-up of Intel’s Ultimate Coder Challenge, a seven-week showdown between the world’s top developers and agencies. Intel’s hand-picked teams were given seven weeks to work with Intel’s new perceptual computing SDK to create a cutting-edge experience. The team at Infrared5 worked to put together Kiwi Katapult Revenge, a one-of-a-kind game that leverages Brass Monkey’s controller and head tracking technology to create an incredible interactive experience.

I spoke with each of our team members to hear about the ups and downs of creating this game under such a tight schedule.

What was the most fun and exciting thing about being a part of this competition?

Elena: The whole time that we were making this game I was amazed at the pace we were able to keep. Maybe it was the thrill of the contest or the fact that we had such a short deadline, but we were able to cram what I thought would take months into a few weeks. I think the most fun aspect of it was the creativity we were able to have with the game. The encouraging feedback from the judges and other team members was really nice to hear too. It made working on the art for the game exciting as we kept improving it.

Chris: One of the most exciting things about this challenge was how our team collaborated on the concepts that were first brought to the table. I had come into it thinking of something very specific. I had envisioned a post-apocalyptic driving/shooting game.

The thing was, though, the art team had been thinking of something very different. They wanted to create a paper world, made up of items in the style of pop-up books and paper dolls. How could a Mad Max style game fit into this vision? I’m not quite sure how it came up, but I had seen several articles via Twitter that had gone viral about a New Zealand ecologist who wanted to eradicate all the domestic cats in his country.  Apparently the cats were killing off native species of flightless birds like the kiwi. I brought up the concept of using this as our theme, and it was something that resonated with everyone. Plus it would work perfectly with the art style that Rebecca and Aaron wanted to create. And as an added bonus, the flat paper style had the advantage of really accentuating the 3D effect of peering around objects in the game. A little more brainstorming, a weekend writing up a crazy story, and Kiwi Katapult Revenge was born. In many ways, looking back, it’s amazing how much of the core mechanics stayed exactly the same as what I had originally planned.

Steff: For me, the most exciting part was working with the Intel and OpenCV libraries and the new camera. I definitely got kind of a charge the first time I got a good image from the depth camera.

John: One of many fun features is seeing Kiwi in the rearview mirror.  The mirror itself is an organic shape, so we used a render texture on the face of the mesh and this worked quite nicely.  However, dealing with where the fire and lasers actually shoot from in reality, as opposed to how they look coming out of the model of the kiwi in the mirror, was a bit of a challenge.  We had to set up a separate camera for the render texture of course, but we also had to assign separate features to show the fire coming out of the mouth and lasers out of the eyes so that it would look consistent.  On top of all of that, we had to then deal with the parenting and rotation of the head now that Steff’s code was coming in to deal with the main camera’s view matrix.  I think the result looks convincing and the game is certainly fun!

Aaron: Being a fulfillment house, it’s not very often that we get to just roll with our own idea. Even though it wasn’t exactly a blank check, it was fun to explore a style not often used in modern 3D games.

Rebecca: The most exciting aspect was getting to have complete creative freedom from the concept through to the final product. I really enjoyed being able to have the paper cut-out style implemented on a project after so many discussions with Aaron about that idea. I am just so happy with the end results. The creative team worked positively together to ensure that this style worked well and looked amazing. Really proud of it!

Like all truly exciting projects, Kiwi Katapult Revenge came with its share of questions and struggles. I asked the team to tell me what the most challenging part of their experience was.

Elena: The most challenging aspect for me was diving straight to work with only some idea of how things were going to turn out. I certainly learned a lot more about 3D work and Unity than I had known previously. The project required a lot of flexibility as things kept changing. In the end, I’m still happy with how it came out.

Chris: I think the biggest challenge was retaining maximum flexibility while keeping some semblance of a process in place. As any game designer knows, what you think will be fun is often not the case, and you have to go through a highly iterative process of trying things out. Doing that in a short seven week time frame posed many challenges, especially when we were dealing with cutting edge technology like the perceptual computing depth camera and computer vision algorithms. I do think our team managed quite well even though the experience was incredibly chaotic. I also admit that my style of always pushing and never quite being satisfied was probably a bit difficult to work with. In the end though, we made an incredible game. It’s something that’s fun to play, and has quite a bit of depth for being built in such a short time period. The game is far from complete however, and we look forward to building on it further in the near future.

Steff: When we had to dump all the code in Unity dealing with the camera because our C# port of OpenCV was not going to work and I had to essentially restart the project from scratch in Visual Studio. That was daunting!

John: The game has some unique control features that include flight with relative head rotation for firing/aiming.  In and of itself, this is basic gaming, but combined with the perceptual camera code, which worked with the main camera’s view matrix, we had our work cut out for us.  On top of that, since the typical renderer skybox feature didn’t work when you changed out the view matrix in Unity at runtime, we had to create a simple cube that follows the player around on the x and z axes (simple and effective). Thankfully, we didn’t hit too many other roadblocks because of the direct access to the main camera’s view matrix.
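(For the curious, the follow-cube trick John mentions boils down to something like this minimal Unity C# sketch; the class and field names are illustrative, not taken from the project’s code.)

```csharp
using UnityEngine;

// A large inverted cube stands in for the skybox and shadows the player on x and z,
// so the "sky" never recedes even though the camera's view matrix is being overridden.
public class FollowSkyCube : MonoBehaviour
{
    public Transform player;   // the kiwi / main camera rig (hypothetical reference)

    void LateUpdate()
    {
        // Follow on x and z only; keep the cube's own height so the horizon stays put.
        transform.position = new Vector3(player.position.x,
                                         transform.position.y,
                                         player.position.z);
    }
}
```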

Rebecca: The most challenging aspect was working on a complete game in such a short time frame. We only had seven weeks from beginning to end. In that time, we had to create a game concept, design a 3D world, create numerous characters, develop a full-featured game and integrate perceptual computing and Brass Monkey. It was a large task and the team had a few bumps along the road. However, we all persevered and managed to get it done. Lots of lessons learned. :)

Here at Infrared5, we would like to congratulate everyone that participated in the Ultimate Coder Challenge. It has been an amazing ride!

Infrared5 Participating in the Intel Ultimate Coder Challenge

April 18th, 2013 by admin

Perceptual Computing Challenge Winner to be Announced Next Week

Boston, MA – April 18, 2013 – Infrared5, an innovative Boston-based interactive studio, today
announced that the company is one of only seven developer teams participating in the Intel® Ultimate
Coder Challenge, and the only East Coast team.

The seven teams used the latest Intel Ultrabook™ devices, the Intel Perceptual Computing Software
Developers Kit (SDK), and Creative® Interactive Gesture Camera to build a special application prototype
in just seven weeks. The Ultimate Coder teams, including the Infrared5 team, blogged about their
experiences online and a panel of experts will crown the Ultimate Coder on April 24.

The Intel Ultimate Coder Challenge is designed to promote the use of perceptual computing. Perceptual
computing creates more immersive software applications by incorporating 2D/3D object tracking, user
gesture control, facial analysis and recognition, eye control, and voice recognition. Perceptual computing
is more than just the use of touch screens; it is a new area of pioneering development.

“The Intel Ultimate Coder Challenge provides a great opportunity for participants to innovate around
perceptual computing, and learn from peers in a few short weeks,” said Bob Duffy, community manager
for the Intel Developer Zone at Intel.

“We’re very comfortable with pioneering applications of technology,” said Rebecca Smith Allen, Infrared5
CEO and part of the Challenge team. “Perceptual computing is a new frontier we’re excited to explore.”

“The combination of the immersive properties of Brass Monkey and the power of perceptual computing
allowed our team to give the judges a unique experience that will hopefully earn us the Ultimate Coder
title,” said Chris Allen, the project leader. Allen is CEO of Brass Monkey, a next-generation browser-based
game system created by the Infrared5 team.

In this game, the player uses a phone to fly a character around a 3D world. When game players turn
their head, the perceptual computing camera tracks what the player is looking at, causing the scene to
shift. This effect allows the player to peer around obstacles, giving the game a real sense of depth and
immersion.


Infrared5 Ultimate Coder Week Seven: The Final Countdown

April 10th, 2013 by admin

This is the final week of the contest, and we think that we’ve got a pretty solid game out of the six weeks we’ve been at this. This week we will focus on adding that little bit of polish to the experience. We know that Nicole suggested not adding more features, but we simply couldn’t resist. Our character Karl really needed to be able to breathe fire properly, and for the player to truly experience this we felt it necessary to control it with your voice. So, to breathe fire in Kiwi Catapult Revenge you can choose to yell “aaaaaahhhhh” or “firrrrrrrre”, and the bird will throw flames from his mouth. This feature also lets the player shoot lasers at the same time, doubling your firepower. Beware of timing though, as currently there’s a slight delay. We plan on optimizing this feature as much as we can before our Friday deadline.
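For anyone wondering how yell-to-breathe-fire can be wired up, here is a minimal sketch using Unity’s Microphone API. The threshold, buffer size and method names are guesses for illustration, not the game’s actual code.

```csharp
using UnityEngine;

// Trigger fire breath when the microphone loudness crosses a threshold.
public class YellToBreatheFire : MonoBehaviour
{
    public float loudnessThreshold = 0.25f;   // illustrative value
    private AudioClip micClip;
    private const int SampleWindow = 1024;

    void Start()
    {
        // null = default microphone; loop so the clip keeps recording.
        micClip = Microphone.Start(null, true, 10, 44100);
    }

    void Update()
    {
        if (GetMicLoudness() > loudnessThreshold)
            BreatheFire();
    }

    float GetMicLoudness()
    {
        int pos = Microphone.GetPosition(null) - SampleWindow;
        if (pos < 0) return 0f;

        float[] samples = new float[SampleWindow];
        micClip.GetData(samples, pos);

        // Root-mean-square of the most recent window as a simple loudness measure.
        float sum = 0f;
        foreach (float s in samples) sum += s * s;
        return Mathf.Sqrt(sum / SampleWindow);
    }

    void BreatheFire() { /* enable the flame effect, apply damage, etc. */ }
}
```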

We’ve been playing with “mouth open” detection from the perceptual computing camera as well, but the funny thing is that it might have been a lot of work for not much gain. We found that using audio detection was a bit more interesting, and mouth detection still needs more polish to really work well. That said, Steff gave a little overview of the technique we are using for mouth detection in this week’s video.

We also have some words from our protagonist Karl Kiwi, shots from GDC, footage of fire breathing via voice and more insight from our team on how things have gone during the Ultimate Coder contest.

Optimizations Galore

The game we’ve produced is really taxing the little Lenovo Yoga Ultrabook we were given for the contest. The integrated graphics and the overall low power of this little guy don’t allow us to do too much while running the perceptual computing algorithms. What runs great on our development PCs really kills the Ultrabook, so now we are optimizing as much as we can. Last week we showed some of the techniques Steff came up with to fine-tune the computer vision algorithms so that they are highly optimized, but we didn’t talk much about the other things we are doing to make this game play well on such a low-powered device.


Unity is a great IDE for creating games like this. It’s not without its struggles, of course, but considering what we’re trying to accomplish, it has to be the best development environment out there. The world (New Zealand) is constructed using Unity’s terrain editor, and the 3D assets were created largely in Blender (we love open source software!). We’ve been able to gain performance with tools like Beast lightmapping that allow us to have a rich-looking scene with nice baked shadows and shading. We’ve had some decent hurdles with multiple cameras for the main view, the uniSWF GUI and the rearview mirror (render to texture to accommodate the organic shape of the mirror object), but we’ve been able to handle this just fine. Most of the optimizations so far have concerned draw calls and lighting issues. We typically build apps for iOS/Android, so we tend to keep code/assets lean from the get-go. Still, we’ve got a bit more to go before we hand Kiwi Katapult Revenge to the judges.


This week’s Web Build

We are excited to share with you the latest build of the game. Since this is a web build it won’t allow you to use the perceptual computing camera, but we are building an installer for that piece which will be delivered to the judges at the end of the week. With this build you can still fly around using your phone as the controller via the Brass Monkey app for Android or iOS, and you can breathe fire by yelling. There are new updates to the UI and environment, and overall the only things missing from this build are power-ups and the perceptual computing camera input. We will have three kinds of power-ups for the final build of the game that we deliver on Friday.


Check out the playable demo of Kiwi Catapult Revenge!

What’s Next for Karl Kiwi and the Intel Perceptual Computing SDK?

We really had a great time using the new hardware and SDK from Intel and we are definitely going to keep building on what we have started (remember, we did promise to release this code as open source). We have some optimization to do (see above). And looking back at our plan from the early weeks of the competition, we were reaching for robust feature tracking to detect if the mouth was open or closed, the orientation of the head, and the gaze direction right from the pupils. All three of the above share the same quality that makes them difficult: in order for specific feature tracking to work with the robustness of a controller in real time, you need to be confident that you are locked onto each feature as the user moves around in front of the camera. We have learned that finding trackable points and tracking them from frame to frame does not enable you to lock onto the targeted feature points that you would need to do something like gaze tracking. As the user moves around, the points slide around. How to overcome the lack of confidence in the location of the trackable points? Active Appearance Model (AAM).


So, next steps are to see what kind of a boost we get on our face detection and head tracking using the GPU methods built into OpenCV. Haar cascades, feature detection and optical flow should all benefit from utilizing the GPU. Then, we are going to implement AAM with and without face training to get a good lock on the mouth and on the eyes. The idea behind implementing AAM without face training (or a calibration step) is to see how well it works without being trained to a specific face, in the hope that we can avoid that step so people can just jump in and play the game. With a good lock on the face features using AAM, we will isolate the pupil locations in the eyes and calculate whether the gaze vector intersects with the screen and voilà! Robust gaze tracking!


Where can you play the game once it’s done? Well, we haven’t yet decided how we want to distribute the game, and there are a ton more features that we would like to add before we are ready for a production release. We are considering putting it on Steam Greenlight. So, with that, you will have to wait a little while before you can play Kiwi Katapult Revenge in all its glory. Let us know your thoughts: how would you distribute such a crazy game if it were up to you?


Parting Words

This has been a great experience for our whole team, and we want to thank Intel for having us be a part of the contest. We really like the game we came up with, and are excited to finish it and have it available for everyone to play. Thanks too to the judges for all the time and effort you’ve given us. Your feedback has only made our game better. Now best of luck choosing a winner! For those of you that have been following us during the contest, please do stay in touch and follow us on our own blog. From the team at Infrared5, over and out!

Infrared5 Ultimate Coder Week Six: GDC, Mouth Detection Setbacks, Foot Tracking and Optimizations Galore

April 3rd, 2013 by Chris Allen

For week six, Chris and Aaron made the trek out to San Francisco to the annual Game Developers Conference (GDC), where they showed the latest version of our game Kiwi Catapult Revenge. The feedback we got was amazing! People were blown away by the head tracking performance that we’ve achieved, and everyone absolutely loved our unique art style. While the controls were a little difficult for some, that allowed us to gain some much-needed insight into how to best fine-tune the face tracking and the smartphone accelerometer inputs to make a truly killer experience. There’s nothing like live playtesting on your product!



Not only did we get a chance for the GDC audience to experience our game, we also got to meet some of the judges and the other Ultimate Coder competitors. There was an incredible amount of mutual respect and collaboration among the teams. The ideas were flowing on how to help improve each and every project in the competition. Chris gave some tips on video streaming protocols to Lee so that he will be able to stream over the internet with some decent quality (using compressed JPEGs would have only wasted valuable time). The guys from Sixense looked into Brass Monkey and how they can leverage that in their future games, and we gave some feedback to the Code Monkeys on how to knock out the background using the depth camera to prevent extra noise that messes with the controls they are implementing. Yes, this is a competition, but the overall feeling was one of wanting to see every team produce their very best.


The judges also had their fair share of positive feedback and enthusiasm. The quality of the projects obviously had impressed them, to the point that Nicole was quoted saying “I don’t know how we are going to decide”. We certainly don’t envy their difficult choice, but we don’t plan on making it any easier for them either. All the teams are taking it further and want to add even more amazing features to their applications before the April 12th deadline.


The staff in the Intel booth were super accommodating, and the exposure we got by being there was invaluable to our business. This is a perfect example of a win-win situation. Intel is getting some incredible demos of their new technology, and the teams are getting exposure and credibility by being in a top technology company’s booth. Not only that, but developers now get to see this technology in action, and can more easily discover more ways to leverage the code and techniques we’ve pioneered. Thank you Intel for being innovative and taking a chance on doing these very unique and experimental contests!


While Aaron and Chris were having a great time at GDC the rest of the team was cranking away. Steff ran into some walls with mouth detection for the breathing fire controls, but John, Rebecca and Elena were able to add more polish to the characters, environment and game play.



John added on a really compelling new feature – playing the game with your feet! We switched the detection algorithm so that it tracks your feet instead of your face. We call it Foot Tracking. It works surprisingly well, and the controls are way easier this way.



Steff worked on optimizing the face tracking algorithms and came up with some interesting techniques to get the job done.


This week’s tech tip and code snippet came to us during integration. We were working hard to combine the head tracking with the Unity game on the Ultrabook, and ZANG we had it working! But, there was a problem. It was slow. It was so slow it was almost unplayable. It was so slow that it definitely wasn’t “fun.” We had about 5 hours until Chris was supposed to go to the airport and we knew that the head tracking algorithms and the camera stream were slowing us down. Did we panic? (Don’t Panic!) No. And you shouldn’t either when faced with any input that is crushing the performance of your application. We simply found a clever way to lower the sampling rate but still have smooth output between frames.


The first step was to reduce the number of times we do a head tracking calculation per second. Our initial (optimistic) attempts were to update in realtime on every frame in Unity. Some computers could handle it, but most could not. Our Lenovo Yoga really bogged down with this. So, we introduced a framesToSkip constant and started sampling on every other frame. Then we hit a smoothing wall. Since the head controls affect every single pixel in the game window (by changing the camera projection matrix based on the head position), we needed to be smoothing the head position on every frame regardless of how often we updated the position from the camera data. Our solution was to sample the data at whatever frame rate we needed to preserve performance, save the head position at that instant as a target, and ease the current position to the new position on every single frame. That way, your sampling rate is down, but you’re still smoothing on every frame and the user feels like the game is reacting to their every movement in a non-jarring way. (For those wondering what smoothing algorithm we selected: Exponential Smoothing handles any bumps in the data between frames.) Code is below.
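The original listing isn’t reproduced here, but a minimal sketch of the sample-then-smooth idea in Unity C# looks roughly like this; the class, the field names and the GetHeadPosition() placeholder are illustrative stand-ins, not the game’s actual code.

```csharp
using UnityEngine;

// Sample the (expensive) head-tracking input only every framesToSkip frames,
// but ease toward the latest sample on every single frame so the camera
// projection still moves smoothly.
public class HeadSmoothing : MonoBehaviour
{
    public int framesToSkip = 2;       // sample the camera every Nth frame
    public float smoothing = 0.15f;    // exponential smoothing factor (0..1)

    private Vector3 targetHeadPos;     // last sampled head position
    private Vector3 currentHeadPos;    // eased position used by the game each frame

    void Update()
    {
        // Throttled head-tracking sample.
        if (Time.frameCount % framesToSkip == 0)
            targetHeadPos = GetHeadPosition();

        // Exponential smoothing toward the latest sample on every frame,
        // so there are no visible steps between samples.
        currentHeadPos += smoothing * (targetHeadPos - currentHeadPos);

        ApplyHeadPosition(currentHeadPos);
    }

    // Placeholders for the perceptual-camera lookup and the projection update.
    Vector3 GetHeadPosition() { return targetHeadPos; }
    void ApplyHeadPosition(Vector3 headPos) { /* update the camera projection here */ }
}
```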

Feeling good about the result, we went after mouth open/closed detection with a vengeance! We thought we could deviate from our original plan of using AAM and POSIT, and lock onto the mouth using a mouth-specific Haar cascade on the region of interest containing the face. The mouth Haar cascade does a great job finding and locking onto the mouth if the user is smiling, which is not so good for our purposes. We are still battling with getting a good lock on the mouth using a method that combines depth data with RGB, but we have seen why AAM exists for feature tracking. It’s not just something you can cobble together and have confidence that it will work well enough to act as an input for game controls.
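To make the cascade-within-a-region idea concrete: our detection code lives in C++, but the same approach sketched against the OpenCvSharp C# bindings looks roughly like the snippet below. The cascade objects, parameter values and names are illustrative only.

```csharp
using OpenCvSharp;

public static class MouthFinder
{
    // Find a face first, then search for the mouth only inside that region of interest.
    // Returns the mouth rectangle in full-frame coordinates, or null if nothing was found.
    public static Rect? FindMouth(Mat frameBgr, CascadeClassifier faceCascade, CascadeClassifier mouthCascade)
    {
        using (var gray = new Mat())
        {
            Cv2.CvtColor(frameBgr, gray, ColorConversionCodes.BGR2GRAY);

            Rect[] faces = faceCascade.DetectMultiScale(gray, 1.1, 4);
            if (faces.Length == 0) return null;

            using (var faceRoi = new Mat(gray, faces[0]))
            {
                Rect[] mouths = mouthCascade.DetectMultiScale(faceRoi, 1.1, 6);
                if (mouths.Length == 0) return null;

                // Offset the hit back into full-frame coordinates.
                Rect mouth = mouths[0];
                mouth.X += faces[0].X;
                mouth.Y += faces[0].Y;
                return mouth;
            }
        }
    }
}
```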


Overall, this week was a step forward even with part of the team away. We’ve got some interesting and fun new features that we want to add as well. We will be sure to save that surprise for next week. Until then, please let us know if you have any questions and/or comments. May the best team win!


Parent Tested Parent Approved takes on THE GAME OF LIFE:zAPPed!

March 27th, 2013 by Rosie

Infrared5 is pleased to announce that one of our projects, THE GAME OF LIFE: zAPPed, has been featured on Parent Tested Parent Approved. We love hearing from users when our work goes out into the world, and nothing is more fulfilling than positive feedback. We’re flattered that the largest parent testing community recognized worldwide thinks that THE GAME OF LIFE: zAPPed is fun, both for parents who remember the original game and for kids who are playing it for the first time. Check out the review to hear some of the nice things that parent playtesters had to say.


Ultimate Coder Week #5: For Those About to Integrate We Salute You

March 21st, 2013 by Chris Allen

This post was featured on Intel Software’s blog, in conjunction with Intel’s Ultimate Coder Challenge. Keep checking back to read our latest updates!

We definitely lost one of our nine lives this week with integrating face tracking into our game, but we still have our cat’s eyes, and we still feel very confident that we will be able to show a stellar game at GDC. On the face tracking end of things we had some big wins. We are finally happy with the speed of the algorithms, and the way things are being tracked will work perfectly for putting into Kiwi Catapult Revenge. We completed some complex math to create very realistic perspective shifting in Unity. Read below for those details, as well as for some C# code to get it working yourself. As we just mentioned, getting a DLL that properly calls update() from Unity and passes in the tracking values isn’t quite there yet. We did get some initial integration with head tracking coming into Unity, but full integration with our game is going to have to wait for this week.

On the C++ side of things, we have successfully found the 3D position of a face in the tracking space. This is huge! By tracking space, we mean the actual (x, y, z) position of the face from the camera in meters. Why do we want the 3D position of the face in tracking space? The reason is so that we can determine the perspective projection of the 3D scene (in game) from the player’s location. Two things made this task interesting: 1) the aligned depth data for a given (x, y) from the RGB image is full of holes, and 2) the camera specs only include the diagonal field of view (FOV) and no sensor dimensions.

We got around the holes in the aligned depth data by first checking for a usable value at the exact (x, y) location, and if the depth value was not valid (0 or the upper positive limit), we would walk through the pixels in a rectangle of increasing size until we encountered a usable value. It’s not that difficult to implement, but annoying when you have the weight of other tasks on your back. Another way to put it: It’s a Long Way to the Top on this project.
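As a rough illustration of that expanding-rectangle search, here is a sketch written against a plain depth array rather than the actual PerC SDK types, so the array layout, names and valid-range check are assumptions.

```csharp
using UnityEngine;

public static class DepthUtils
{
    // Search outward from (x, y) in growing square rings until a usable depth value is found.
    // depthData is assumed to be a row-major array of millimeter values, with 0 meaning "hole".
    public static int FindUsableDepth(ushort[] depthData, int width, int height,
                                      int x, int y, int maxRadius = 10)
    {
        for (int radius = 0; radius <= maxRadius; radius++)
        {
            for (int dy = -radius; dy <= radius; dy++)
            {
                for (int dx = -radius; dx <= radius; dx++)
                {
                    // Only check the border ring of the current square; the inside
                    // was already covered on earlier iterations.
                    if (Mathf.Max(Mathf.Abs(dx), Mathf.Abs(dy)) != radius) continue;

                    int px = x + dx, py = y + dy;
                    if (px < 0 || py < 0 || px >= width || py >= height) continue;

                    ushort depth = depthData[py * width + px];
                    if (depth > 0 && depth < ushort.MaxValue)   // skip holes and saturated values
                        return depth;
                }
            }
        }
        return -1; // nothing usable near (x, y)
    }
}
```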

The z-depth of the face comes back in millimeters right from the depth data; the next trick was to convert the (x, y) position from pixels on the RGB frame to meters in the tracking space. There is a great illustration here of how to break the view pyramid up to derive formulas for x and y in the tracking space. The end result is:
TrackingSpaceX = TrackingSpaceZ * tan(horizontalFOV / 2) * 2 * ((RGBSpaceX - RGBWidth / 2) / RGBWidth)
TrackingSpaceY = TrackingSpaceZ * tan(verticalFOV / 2) * 2 * ((RGBSpaceY - RGBHeight / 2) / RGBHeight)

Where TrackingSpaceZ is the lookup from the depth data, and horizontalFOV and verticalFOV are derived from the diagonal FOV in the Creative Gesture Camera specs (here). Now we have the face position in tracking space! We verified the results using a nice metric tape measure (also difficult to find at the local hardware store – get with the metric program, USA!)
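In code, that conversion looks roughly like the sketch below (Unity C#). The FOV helper assumes square pixels, and none of these names come from our actual source; it is just a direct translation of the formulas above.

```csharp
using UnityEngine;

public static class TrackingSpace
{
    // Derive horizontal/vertical FOV from a diagonal FOV spec, assuming square pixels.
    public static void FovFromDiagonal(float diagonalFovDeg, float width, float height,
                                       out float hFovRad, out float vFovRad)
    {
        float diag = Mathf.Sqrt(width * width + height * height);
        float tanHalfDiag = Mathf.Tan(diagonalFovDeg * Mathf.Deg2Rad * 0.5f);
        hFovRad = 2f * Mathf.Atan(tanHalfDiag * width / diag);
        vFovRad = 2f * Mathf.Atan(tanHalfDiag * height / diag);
    }

    // Direct translation of the two formulas above; trackingZ comes from the depth lookup.
    public static Vector2 FromRgbPixel(float trackingZ, float hFovRad, float vFovRad,
                                       float rgbX, float rgbY, float rgbWidth, float rgbHeight)
    {
        float x = trackingZ * Mathf.Tan(hFovRad / 2f) * 2f * ((rgbX - rgbWidth / 2f) / rgbWidth);
        float y = trackingZ * Mathf.Tan(vFovRad / 2f) * 2f * ((rgbY - rgbHeight / 2f) / rgbHeight);
        return new Vector2(x, y);
    }
}
```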

From here, we can determine the perspective projection so the player will feel like they are looking through a window into our game. Our first pass at this effect involved just changing the rotation and position of the 3D camera in our Unity scene, but it didn’t look realistic. We were leaving out adjustment of the projection matrix to compensate for the off-center view of the display. For example: consider two equally sized (in screen pixels) objects at either side of the screen. When the viewer is positioned nearer to one side of the screen, the object at the closer edge appears larger to the viewer than the one at the far edge, and the display outline becomes trapezoidal. To compensate, the projection should be transformed with a shear to maintain the apparent size of the two objects; just like looking out a window! To change up our methods and achieve this effect, we went straight to the ultimate paper on the subject: Robert Kooima’s Generalized Perspective Projection. Our port of his algorithm into C#/Unity is below.

The code follows the mouse pointer to change perspective (not a tracked face) and does not change depth (the way a face would). We are currently in the midst of wrapping our C++ libs into a DLL for Unity to consume and enable us to grab the 3D position of the face and then compute the camera projection matrix using the face position and the position of the computer screen in relation to the camera.
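The original snippet isn’t reproduced here, but a stripped-down sketch of the same off-axis idea in Unity C# is below. It assumes the virtual screen plane is axis-aligned, which lets us skip the rotation step of Kooima’s full formulation, and the corner and eye values are illustrative defaults rather than anything from the project.

```csharp
using UnityEngine;

// Off-axis ("window") projection: position the camera at the tracked eye and shear the
// frustum so the screen rectangle stays the window into the scene.
[RequireComponent(typeof(Camera))]
public class OffAxisProjection : MonoBehaviour
{
    public Vector3 screenLowerLeft  = new Vector3(-0.3f, -0.2f, 0f);
    public Vector3 screenLowerRight = new Vector3( 0.3f, -0.2f, 0f);
    public Vector3 screenUpperLeft  = new Vector3(-0.3f,  0.2f, 0f);
    public Vector3 eye = new Vector3(0f, 0f, -0.6f);   // tracked head position

    void LateUpdate()
    {
        Camera cam = GetComponent<Camera>();
        float n = cam.nearClipPlane;
        float f = cam.farClipPlane;

        // Distance from the eye to the screen plane (screen faces the eye along z).
        float d = screenLowerLeft.z - eye.z;

        // Frustum extents on the near plane, scaled down from the screen rectangle.
        float left   = (screenLowerLeft.x  - eye.x) * n / d;
        float right  = (screenLowerRight.x - eye.x) * n / d;
        float bottom = (screenLowerLeft.y  - eye.y) * n / d;
        float top    = (screenUpperLeft.y  - eye.y) * n / d;

        cam.transform.position = eye;
        cam.projectionMatrix = PerspectiveOffCenter(left, right, bottom, top, n, f);
    }

    // Standard off-center (sheared) perspective matrix.
    static Matrix4x4 PerspectiveOffCenter(float l, float r, float b, float t, float n, float f)
    {
        var m = Matrix4x4.zero;
        m[0, 0] = 2f * n / (r - l);   m[0, 2] = (r + l) / (r - l);
        m[1, 1] = 2f * n / (t - b);   m[1, 2] = (t + b) / (t - b);
        m[2, 2] = -(f + n) / (f - n); m[2, 3] = -2f * f * n / (f - n);
        m[3, 2] = -1f;
        return m;
    }
}
```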

Last but not least, we leave you with this week’s demo of the game. Some final art for UI elements is in, levels of increasing difficulty have been implemented and some initial sound effects are in the game.

As always, please ask if you have any questions on what we are doing, or if you just have something to say we would love to hear from you. Leave us a comment! In the meantime we will be coding All Night Long!

