First Annual Intel Innovators Summit

November 5th, 2015 by Chris Allen

I just got back from the Intel Innovators Summit in Santa Clara, CA, which I attended as a current Intel Software Innovator. Intel brought together some of the top programmers, makers and creative technologists from around the world to show us some of the innovative things they are working on. While I can't share many of the details of what was presented, the most valuable aspect of the summit for me was the opportunity to connect with so many talented developers in one space.

During the event, Intel split the topics between RealSense and IoT. RealSense, for those who don't know, is a platform built around 3D depth cameras. As mentioned above, due to an NDA I can't share everything Intel is up to, but much of what they are doing has been released publicly, and there are public SDKs that anyone can download to start experimenting with the devices. As a quick aside, here's a link to a video that Infrared5 put together for the 2013 Intel Ultimate Coder Challenge, a contest that used perceptual computing (which is actually the basis of RealSense) as a catalyst for truly immersive game play. While the front-facing cameras on Windows PCs offer quite a few interesting options for Minority Report style input and games, I find the rear/world-facing R200 cameras on Android devices much more intriguing. The possibilities for combining these devices with IoT are enormous: imagine unlocking doors just by having a camera recognize your face, or creating wearables that interact with the world using 3D data.

In retrospect, IoT was really the topic that got me excited at this event. Though I don’t consider myself a Maker, Infrared5 has engaged in a significant number of projects with internet connected sensors. Our customer SICK, for example, utilizes Red5 for live streaming of data coming from their sensors. I’m really excited to get my hands on an Intel Edison board after seeing firsthand how the device can seamlessly enable live streaming across a variety of products.


The fact that it's so small, and relatively powerful, introduces all kinds of exciting use cases beyond mobile phones. Rest assured, we will be experimenting and sharing with you what we and the other Innovators do with Red5 Pro and these devices.

One of my favorite activities of the summit was a little contest Intel ran called the Spark Tank, undoubtedly inspired by the hit TV show Shark Tank. Intel kicked off the exercise by splitting the Innovators into groups and tasking each with inventing a project, which we then pitched to the judges. The judges included people from the Intel Innovator program as well as executives from the company. I must say, all the teams came up with compelling experiences, from doors that scanned your face in order to allow entry to stuffed animals with medical sensors embedded within.

Our project was a military application to improve communication within squads, leveraging the Red5 Pro platform with the Edison board. Each team had to do a skit or an impromptu commercial for their product and present the benefits of their concept. For ours, we cut out cardboard rifles and paper goggles, then taped on Myo gesture/motion-control armbands that let you wirelessly control connected devices. It was rather dramatic, as we had our team get shot and die on stage. While we won the "Most Likely to Get the Blind into the Military" award, we clearly should have won funniest skit for the most serious topic.


It’s fantastic to be a part of such a talented group of developers, and I’m honored to be able to represent Infrared5 in the Intel Innovators group. Kudos to Intel for having one of the best developer outreach programs out there! I’m excited about all the possible collaborations that will emerge from the Intel Innovator’s Summit, and of course, we can’t wait to see what these guys do with live streaming and Red5 Pro.


Vicarious Visions at GDC

May 22nd, 2013 by Aaron Artessa

User: "Something doesn't look right, it's just not working."
Artist: "What's bugging you?"
User: "I don't know, I can't put my finger on it."
Artist: Look calm like the pro you are, but secretly swear on the inside at the clueless situation.

This conversation is all too familiar to artists working with producers, testers and creative directors. Sometimes you simply can't put your finger on the problem, but like the Matrix, you know it's there. This tends to be where the bottleneck in the art pipeline happens, and proverbial dollar signs start ticking through the producer's eyes.

On my third visit to GDC, Vicarious Visions gave a talk about their process for developing environments for Skylanders, offering an interesting solution to this problem: custom tools working directly in their engine.

If you haven’t played Skylanders before, I’ll tell you that the environments are gorgeous, jaw-dropping, hand-painted scenes inviting you into a colorful story begging to be explored. Even the production environments feel polished and ready to ship.

Even the most accomplished of artists still find themselves scratching their heads. During testing, the team at Vicarious Visions discovered that the audience didn’t know what to do in some puzzle areas. The conversation above ensued. The answer: Visual Debugging.

Having your puzzles tied into your environment is a tricky balance: on one hand, you want to cue the user so that they can intuitively interact with the puzzle elements; on the other hand, you don't want the visual cue to be so overt that the puzzle feels out of place. To achieve this balance, Vicarious Visions has a series of tools integrated into their engine to help debug what people are seeing behind the scenes in real time.

At the basic level, they can use filters to display one of three things: chromadepth, edge lines and contrast. Using chromadepth, they were able to identify color variation and hot spots in the scene, helping them see where the eye was being drawn and whether the focal points were being lost in a mass of detail. Contrast works in a similar fashion, just without the chroma variance.

Another option Vicarious Visions employs is the edge line tool. If you are familiar with Photoshop, this tool gives an effect similar to the Glowing Edges filter, except in black and white. It helps the artist identify areas of visual clutter where various elements create hard edges, whether from specular detailing, contrasty diffuse textures, or harsh lighting.

These tools aren't exactly impossible to mimic. As I said before, the edges can easily be simulated, as can chromadepth. That said, having them built into your engine and working in real time makes the benefits of this type of workflow enormous.
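
As a rough illustration of how one might mimic the edge line view outside of a game engine, here is a small OpenCV sketch (not Vicarious Visions' tool, just an approximation of the effect): convert a screenshot of the scene to grayscale, blur it slightly, and run Canny edge detection to get a black-and-white "glowing edges" image that exposes visual clutter.

#include <opencv2/opencv.hpp>

int main(int argc, char **argv)
{
    if (argc < 2)
        return 1;
    // load a captured screenshot of the scene
    cv::Mat screenshot = cv::imread(argv[1]);
    cv::Mat gray, edges;
    // drop color so only luminance edges remain
    cv::cvtColor(screenshot, gray, CV_BGR2GRAY);
    // a light blur suppresses texture noise before edge detection
    cv::GaussianBlur(gray, gray, cv::Size(5, 5), 1.5);
    // hard edges (specular detail, contrasty textures, harsh lighting) show up as white lines
    cv::Canny(gray, edges, 50, 150);
    cv::imshow("edge debug view", edges);
    cv::waitKey(0);
    return 0;
}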


Wrapping Up: IR5's Team Looks Back On Kiwi Katapult Revenge

April 23rd, 2013 by Rosie

This week marks the wrap-up of Intel's Ultimate Coder Challenge, a seven-week showdown between some of the world's top developers and agencies. Intel's hand-picked teams were given seven weeks to work with Intel's new perceptual computing SDK to create a cutting-edge experience. The team at Infrared5 put together Kiwi Katapult Revenge, a one-of-a-kind game that leverages Brass Monkey's controller and head tracking technology to create an incredible interactive experience.

I spoke with each of our team members to hear about the ups and downs of creating this game under such a tight schedule.

What was the most fun and exciting thing about being a part of this competition?

Elena: The whole time that we were making this game I was amazed at the pace we were able to keep. Maybe it was the thrill of the contest or the fact that we had such a short deadline, but we were able to cram what I thought would take months into a few weeks. I think the most fun aspect of it was the creativity we were able to have with the game. The encouraging feedback from the judges and other team members was really nice to hear too. It made working on the art for the game exciting as we kept improving it.

Chris: One of the most exciting things about this challenge was how our team collaborated on the concepts that were first brought to the table. I had come into it thinking of something very specific. I had envisioned a post apocalyptic driving/shooting game.

The thing was, though, the art team had been thinking of something very different. They wanted to create a paper world, made up of items in the style of pop-up books and paper dolls. How could a Mad Max style game fit into this vision? I'm not quite sure how it came up, but I had seen several articles via Twitter that had gone viral about a New Zealand ecologist who wanted to eradicate all the domestic cats in his country. Apparently the cats were killing off native species of flightless birds like the kiwi. I brought up the concept of using this as our theme, and it resonated with everyone. Plus, it would work perfectly with the art style Rebecca and Aaron wanted to create. As an added bonus, the flat paper style had the advantage of really accentuating the 3D effect of peering around objects in the game. A little more brainstorming, a weekend writing up a crazy story, and Kiwi Katapult Revenge was born. Looking back, it's amazing how much of the core mechanics stayed exactly the same as what I had originally planned.

Steff: For me, the most exciting part was working with the Intel and OpenCV libraries and the new camera. I definitely got kind of a charge the first time I got a good image from the depth camera.

John: One of many fun features is seeing the Kiwi in the rearview mirror. The mirror itself is an organic shape, so we used a render texture on the face of the mesh, and this worked quite nicely. However, dealing with where fire and lasers actually shoot from in reality, as opposed to how they look coming out of the model of the kiwi in the mirror, was a bit of a challenge. We had to set up a separate camera for the render texture, of course, but we also had to assign separate features to show the fire coming out of the mouth and the lasers out of the eyes so that it would look consistent. On top of all of that, we had to deal with the parenting and rotation of the head once Steff's code came in to drive the main camera's view matrix. I think the result looks convincing and the game is certainly fun!

Aaron: Being a fulfillment house, it’s not very often that we get to just roll with our own idea. Even though it wasn’t exactly a blank check, it was fun to explore a style not often used in modern 3D games.

Rebecca: The most exciting aspect was getting to have complete creative freedom from the concept through to the final product. I really enjoyed being able to have the paper cut-out style implemented on a project after so many discussions with Aaron about that idea. I am just so happy with the end results. The creative team worked positively together to ensure that this style worked well and looked amazing. Really proud of it!

Like all truly exciting projects, Kiwi Katapult Revenge came with its share of questions and struggles. I asked the team to tell me what the most challenging part of their experience was.

Elena: The most challenging aspect for me was diving straight to work with only some idea of how things were going to turn out. I certainly learned a lot more about 3D work and Unity than I had known previously. The project required a lot of flexibility as things kept changing. In the end, I’m still happy with how it came out.

Chris: I think the biggest challenge was retaining maximum flexibility while keeping some semblance of a process in place. As any game designer knows, what you think will be fun is often not the case, and you have to go through a highly iterative process of trying things out. Doing that in a short seven week time frame posed many challenges, especially when we were dealing with cutting edge technology like the perceptual computing depth camera and computer vision algorithms. I do think our team managed quite well even though the experience was incredibly chaotic. I also admit that my style of always pushing and never quite being satisfied was probably a bit difficult to work with. In the end though, we made an incredible game. It’s something that’s fun to play, and has quite a bit of depth for being built in such a short time period. The game is far from complete however, and we look forward to building on it further in the near future.

Steff: When we had to dump all the code in Unity dealing with the camera because our C# port of OpenCV was not going to work and I had to essentially restart the project from scratch in Visual Studio. That was daunting!

John: The game has some unique control features, including flight with relative head rotation for firing and aiming. In and of itself, this is basic gaming, but combined with the perceptual camera code that drives the main camera's view matrix, we had our work cut out for us. On top of that, since the typical renderer skybox feature doesn't work when you change out the view matrix in Unity at runtime, we had to create a simple cube that follows the player around on the x and z axes (simple and effective). Thankfully, we didn't hit too many other roadblocks from the direct access to the main camera's view matrix.

Rebecca: The most challenging aspect was working on a complete game in such a short time frame. We only had seven weeks from beginning to end. In that time, we had to create a game concept, design a 3D world, create numerous characters, develop a full-featured game and integrate perceptual computing and Brass Monkey. It was a large task and the team had a few bumps along the road. However, we all persevered and managed to get it done. Lots of lessons learned. :)

Here at Infrared5, we would like to congratulate everyone that participated in the Ultimate Coder Challenge. It has been an amazing ride!

Infrared5 Participating in the Intel Ultimate Coder Challenge

April 18th, 2013 by admin

Perceptual Computing Challenge Winner to be Announced Next Week

Boston, MA – April 18, 2013 – Infrared5, an innovative Boston-based interactive studio, today announced that the company is one of only seven developer teams participating in the Intel® Ultimate Coder Challenge, and the only East Coast team.

The seven teams used the latest Intel Ultrabook™ devices, the Intel Perceptual Computing Software Developers Kit (SDK), and the Creative® Interactive Gesture Camera to build a special application prototype in just seven weeks. The Ultimate Coder teams, including the Infrared5 team, blogged about their experiences online, and a panel of experts will crown the Ultimate Coder on April 24.

The Intel Ultimate Coder Challenge is designed to promote the use of perceptual computing. Perceptual computing creates more immersive software applications by incorporating 2D/3D object tracking, user gesture control, facial analysis and recognition, eye control, and voice recognition. Perceptual computing is more than just the use of touch screens; it is a new area of pioneering development.

"The Intel Ultimate Coder Challenge provides a great opportunity for participants to innovate around perceptual computing, and learn from peers in a few short weeks," said Bob Duffy, community manager for the Intel Developer Zone at Intel.

"We're very comfortable with pioneering applications of technology," said Rebecca Smith Allen, Infrared5 CEO and part of the Challenge team. "Perceptual computing is a new frontier we're excited to explore."

"The combination of the immersive properties of Brass Monkey and the power of perceptual computing allowed our team to give the judges a unique experience that will hopefully earn us the Ultimate Coder title," said Chris Allen, the project leader. Allen is CEO of Brass Monkey, a next generation browser-based game system created by the Infrared5 team.

In this game, the player uses a phone to fly a character around a 3D world. When game players turn their head, the perceptual computing camera tracks what the player is looking at, causing the scene to shift. This effect allows the player to peer around obstacles, giving the game a real sense of depth and immersion.


Infrared5 Ultimate Coder Week Seven: The Final Countdown

April 10th, 2013 by admin

This is the final week of the contest, and we think we've got a pretty solid game out of the six weeks we've been at this. This week we will focus on adding that little bit of polish to the experience. We know that Nicole suggested not adding more features, but we simply couldn't resist. Our character Karl really needed to be able to breathe fire properly, and for the player to truly experience this we felt it necessary to control it with your voice. So, to breathe fire in Kiwi Catapult Revenge you can yell "aaaaaahhhhh" or "firrrrrrrre", and the bird will throw flames from his mouth. This also lets the player shoot lasers at the same time, doubling their firepower. Beware of timing though, as there's currently a slight delay. We plan on optimizing this feature as much as we can before our Friday deadline.
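
The yell detection itself boils down to watching the loudness of the microphone input. Here is a hypothetical sketch of that idea in C++ (in the game this is hooked up to the live audio input inside Unity; the snippet only illustrates the detection logic, and the threshold value is illustrative): compute the RMS level of a window of 16-bit samples and compare it against a threshold.

#include <cmath>
#include <cstddef>
#include <cstdint>

// returns true when a window of 16-bit PCM samples is loud enough to count as a yell
bool isYelling(const int16_t *samples, size_t count, double threshold = 0.25)
{
    if (count == 0)
        return false;
    double sumSquares = 0.0;
    for (size_t i = 0; i < count; ++i)
    {
        double s = samples[i] / 32768.0; // normalize to [-1, 1]
        sumSquares += s * s;
    }
    double rms = std::sqrt(sumSquares / count);
    return rms > threshold;
}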

We've been playing with "mouth open" detection from the perceptual computing camera as well, but the funny thing is that it might have been a lot of work for not much gain. We found that using audio detection was a bit more interesting, and mouth detection still needs more polish to work really well. That said, Steff gives a little overview of the technique we are using for mouth detection in this week's video.

We also have some words from our protagonist Karl Kiwi, shots from GDC, footage of fire breathing via voice and more insight from our team on how things have gone during the Ultimate Coder contest.

Optimizations Galore

The game we've produced is really taxing the little Lenovo Yoga Ultrabook we were given for the contest. The integrated graphics and the overall low power of this little guy don't allow us to do too much while running the perceptual computing algorithms. What runs great on our development PCs really kills the Ultrabook, so now we are optimizing as much as we can. Last week we showed some of the techniques Steff came up with to fine-tune the computer vision algorithms so that they are highly optimized, but we didn't talk much about the other things we are doing to make this game play well on such a low-powered device.


Unity is a great IDE for creating games like this. It's not without its struggles, of course, but considering what we're trying to accomplish, it has to be the best development environment out there. The world (New Zealand) is constructed using Unity's terrain editor, and the 3D assets were created largely in Blender (we love open source software!). We've been able to gain performance with tools like Beast lightmapping, which lets us have a rich-looking scene with nice baked shadows and shading. We've had some decent hurdles with multiple cameras for the main view, the uniSWF GUI and the rearview mirror (a render texture to accommodate the organic shape of the mirror object), but we've been able to handle this just fine. Most of the optimizations so far have concerned draw calls and lighting issues. We typically build apps for iOS/Android, so we tend to keep code and assets lean from the get-go. Still, we've got a bit more to go before we hand Kiwi Katapult Revenge to the judges.


This week’s Web Build

We are excited to share with you the latest build of the game. Since this is a web build it won't allow you to use the perceptual computing camera, but we are building an installer for that piece which will be delivered to the judges at the end of the week. With this build you can still fly around using your phone as the controller via the Brass Monkey app for Android or iOS, and you can breathe fire by yelling. There are new updates to the UI and environment; overall, the only things missing from this build are power-ups and the perceptual computing camera input. We will have three kinds of power-ups in the final build of the game that we deliver on Friday.


Check out the playable demo of Kiwi Catapult Revenge!

What’s Next for Karl Kiwi and the Intel Perceptual Computing SDK?

We really had a great time using the new hardware and SDK from Intel and we are definitely going to keep building on what we have started (remember, we did promise to release this code as open source). We have some optimization to do (see above). And looking back at our plan from the early weeks of the competition, we were reaching for robust feature tracking to detect if the mouth was open or closed, the orientation of the head, and the gaze direction right from the pupils. All three of the above share the same quality that makes them difficult: in order for specific feature tracking to work with the robustness of a controller in real time, you need to be confident that you are locked onto each feature as the user moves around in front of the camera. We have learned that finding trackable points and tracking them from frame to frame does not enable you to lock onto the targeted feature points that you would need to do something like gaze tracking. As the user moves around, the points slide around. How to overcome the lack of confidence in the location of the trackable points? Active Appearance Model (AAM).


So, the next steps are to see what kind of a boost we get on our face detection and head tracking using the GPU methods built into OpenCV. Haar cascades, feature detection and optical flow should all benefit from utilizing the GPU. Then we are going to implement AAM, with and without face training, to get a good lock on the mouth and the eyes. The idea behind implementing AAM without face training (or a calibration step) is to see how well it works without being trained to a specific face, in the hope that we can skip that step so people can just jump in and play the game. With a good lock on the face features using AAM, we will isolate the pupil locations in the eyes and calculate whether the gaze vector intersects with the screen, and voilà! Robust gaze tracking!
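
As a sketch of what that GPU path can look like (assuming OpenCV 2.4's gpu module; the cascade file and parameters are illustrative, not the ones from our project), the Haar face detection step can be offloaded roughly like this:

#include <opencv2/opencv.hpp>
#include <opencv2/gpu/gpu.hpp>
#include <vector>

// runs a Haar cascade on the GPU; the caller loads the cascade first, e.g.
// cascade.load("haarcascade_frontalface_alt.xml")
std::vector<cv::Rect> detectFacesGPU(const cv::Mat &frameGray,
                                     cv::gpu::CascadeClassifier_GPU &cascade)
{
    cv::gpu::GpuMat gpuFrame(frameGray);   // upload the grayscale frame
    cv::gpu::GpuMat objBuf;                // detections are returned in GPU memory
    int numDetected = cascade.detectMultiScale(gpuFrame, objBuf, 1.2, 4);
    std::vector<cv::Rect> faces;
    if (numDetected > 0)
    {
        cv::Mat objHost;
        objBuf.colRange(0, numDetected).download(objHost); // copy results back to the CPU
        const cv::Rect *rects = objHost.ptr<cv::Rect>();
        for (int i = 0; i < numDetected; ++i)
            faces.push_back(rects[i]);
    }
    return faces;
}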


Where can you play the game once it's done? Well, we've not yet decided how we want to distribute the game, and there are a ton more features we would like to add before we are ready for a production release. We are considering putting it on Steam Greenlight. So, with that, you will have to wait a little while before you can play Kiwi Katapult Revenge in all its glory. Let us know your thoughts: how would you distribute such a crazy game if it were up to you?


Parting Words

This has been a great experience for our whole team, and we want to thank Intel for having us be a part of the contest. We really like the game we came up with, and are excited to finish it and have it available for everyone to play. Thanks too to the judges for all the time and effort you’ve given us. Your feedback has only made our game better. Now best of luck choosing a winner! For those of you that have been following us during the contest, please do stay in touch and follow us on our own blog. From the team at Infrared5, over and out!

Ultimate Coder Week #5: For Those About to Integrate We Salute You

March 21st, 2013 by Chris Allen

This post was featured on Intel Software’s blog, in conjunction with Intel’s Ultimate Coder Challenge. Keep checking back to read our latest updates!

We definitely lost one of our nine lives this week integrating face tracking into our game, but we still have our cat's eyes, and we still feel very confident that we will be able to show a stellar game at GDC. On the face tracking end of things we had some big wins. We are finally happy with the speed of the algorithms, and the way things are being tracked will work perfectly for putting into Kiwi Catapult Revenge. We completed some complex math to create very realistic perspective shifting in Unity; read below for those details, as well as for some C# code to get it working yourself. As we just mentioned, getting a DLL that properly calls update() from Unity and passes in the tracking values isn't quite there yet. We did get some initial integration with head tracking coming into Unity, but full integration with our game is going to have to wait for this week.

On the C++ side of things, we have successfully found the 3D position of a face in the tracking space. This is huge! By tracking space, we mean the actual (x, y, z) position of the face relative to the camera, in meters. Why do we want the 3D position of the face in tracking space? So that we can determine the perspective projection of the 3D scene (in game) from the player's location. Two things made this task interesting: 1) the aligned depth data for a given (x, y) from the RGB image is full of holes, and 2) the camera specs only include the diagonal field of view (FOV) and no sensor dimensions.

We got around the holes in the aligned depth data by first checking for a usable value at the exact (x, y) location, and if the depth value was not valid (0 or the upper positive limit), we would walk through the pixels in a rectangle of increasing size until we encountered a usable value. It’s not that difficult to implement, but annoying when you have the weight of other tasks on your back. Another way to put it: It’s a Long Way to the Top on this project.
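
A simplified sketch of that hole-filling walk looks something like the function below (the invalid-value constants are placeholders for whatever the SDK actually reports for missing depth). For clarity it rescans the interior of each square rather than walking only its border, which is wasteful but easy to follow.

#include <cstdint>

const uint16_t INVALID_LOW = 0;        // the depth stream reports holes as 0...
const uint16_t INVALID_HIGH = 32001;   // ...or as a saturated upper-limit value (placeholder)

// returns the first usable depth value found in squares of increasing size around (x, y)
uint16_t depthNear(const uint16_t *depth, int width, int height,
                   int x, int y, int maxRadius = 10)
{
    for (int r = 0; r <= maxRadius; ++r)
    {
        for (int dy = -r; dy <= r; ++dy)
        {
            for (int dx = -r; dx <= r; ++dx)
            {
                int px = x + dx;
                int py = y + dy;
                if (px < 0 || py < 0 || px >= width || py >= height)
                    continue;
                uint16_t d = depth[py * width + px];
                if (d != INVALID_LOW && d < INVALID_HIGH)
                    return d;          // first usable value wins
            }
        }
    }
    return INVALID_LOW;                // nothing usable nearby
}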

The z-depth of the face comes back in millimeters right from the depth data; the next trick was to convert the (x, y) position from pixels on the RGB frame to meters in the tracking space. There is a great illustration here of how to break the view pyramid up to derive formulas for x and y in tracking space. The end result is:
TrackingSpaceX = TrackingSpaceZ * tan(horizontalFOV / 2) * 2 * (RGBSpaceX - RGBWidth / 2) / RGBWidth
TrackingSpaceY = TrackingSpaceZ * tan(verticalFOV / 2) * 2 * (RGBSpaceY - RGBHeight / 2) / RGBHeight

where TrackingSpaceZ is the lookup from the depth data, and horizontalFOV and verticalFOV are derived from the diagonal FOV in the Creative Gesture Camera specs (here). Now we have the face position in tracking space! We verified the results using a nice metric tape measure (also difficult to find at the local hardware store; get with the metric program, USA!).
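
Translated directly into code, those two formulas look like this (a sketch; the FOV angles are passed in as radians rather than pulled from the camera specs):

#include <cmath>

struct TrackingPoint { float x, y, z; };

// converts an (x, y) pixel on the RGB frame plus its depth (in meters) into tracking space
TrackingPoint rgbToTrackingSpace(float rgbX, float rgbY, float depthMeters,
                                 float horizontalFOV, float verticalFOV,
                                 float rgbWidth, float rgbHeight)
{
    TrackingPoint p;
    p.z = depthMeters;
    p.x = depthMeters * std::tan(horizontalFOV / 2.0f) * 2.0f * (rgbX - rgbWidth / 2.0f) / rgbWidth;
    p.y = depthMeters * std::tan(verticalFOV / 2.0f) * 2.0f * (rgbY - rgbHeight / 2.0f) / rgbHeight;
    return p;
}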

From here, we can determine the perspective projection so the player will feel like they are looking through a window into our game. Our first pass at this effect involved just changing the rotation and position of the 3D camera in our Unity scene, but it just didn't look realistic. We were leaving out adjustment of the projection matrix to compensate for the off-center view of the display. For example: consider two equally sized (in screen pixels) objects at either side of the screen. When the viewer is positioned nearer to one side of the screen, the object at the closer edge appears larger to the viewer than the one at the far edge, and the display outline becomes trapezoidal. To compensate, the projection should be transformed with a shear to maintain the apparent size of the two objects, just like looking out a window! To change up our methods and achieve this effect, we went straight to the ultimate paper on the subject: Robert Kooima's Generalized Perspective Projection. Our port of his algorithm into C#/Unity is below.

The code follows the mouse pointer to change perspective (not a tracked face) and does not change depth (the way a face would). We are currently in the midst of wrapping our C++ libs into a DLL for Unity to consume and enable us to grab the 3D position of the face and then compute the camera projection matrix using the face position and the position of the computer screen in relation to the camera.
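
For reference, here is a compact C++ sketch of the same math from Kooima's paper (our actual port is in C#/Unity; the tiny vector type and helpers below are assumptions for the sake of a self-contained example). pa, pb and pc are the screen's lower-left, lower-right and upper-left corners in tracking space, pe is the eye position, and n and f are the near and far planes.

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)   { Vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z }; return r; }
static float dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  cross(Vec3 a, Vec3 b) { Vec3 r = { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x }; return r; }
static Vec3  normalize(Vec3 a)     { float l = std::sqrt(dot(a, a)); Vec3 r = { a.x / l, a.y / l, a.z / l }; return r; }

// builds a column-major off-axis projection matrix for a viewer at pe looking at the
// screen rectangle defined by pa (lower left), pb (lower right) and pc (upper left)
void generalizedPerspective(Vec3 pa, Vec3 pb, Vec3 pc, Vec3 pe, float n, float f, float out[16])
{
    Vec3 vr = normalize(sub(pb, pa));    // screen right axis
    Vec3 vu = normalize(sub(pc, pa));    // screen up axis
    Vec3 vn = normalize(cross(vr, vu));  // screen normal, pointing toward the viewer

    Vec3 va = sub(pa, pe);
    Vec3 vb = sub(pb, pe);
    Vec3 vc = sub(pc, pe);
    float d = -dot(va, vn);              // distance from the eye to the screen plane

    // frustum extents on the near plane, sheared by the off-center eye position
    float l = dot(vr, va) * n / d;
    float r = dot(vr, vb) * n / d;
    float b = dot(vu, va) * n / d;
    float t = dot(vu, vc) * n / d;

    // standard glFrustum-style projection matrix
    for (int i = 0; i < 16; ++i) out[i] = 0.0f;
    out[0]  = 2.0f * n / (r - l);
    out[5]  = 2.0f * n / (t - b);
    out[8]  = (r + l) / (r - l);
    out[9]  = (t + b) / (t - b);
    out[10] = -(f + n) / (f - n);
    out[11] = -1.0f;
    out[14] = -2.0f * f * n / (f - n);
    // the full algorithm then applies the screen's rotation (rows vr, vu, vn) and a
    // translation by -pe; those steps are omitted here for brevity
}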

Last but not least, we leave you with this week's demo of the game. Some final art for the UI elements is in, levels of increasing difficulty have been implemented, and some initial sound effects are in the game.

As always, please ask if you have any questions on what we are doing, or if you just have something to say we would love to hear from you. Leave us a comment! In the meantime we will be coding All Night Long!


START 2013 – A Conference Not to Miss

March 19th, 2013 by Rebecca Allen

Last Thursday, Chris Allen (one of my business partners, and my husband) and I took the train to New York City for the inaugural conference called Start. We were one of 23 startups invited to show off our product, Brass Monkey, to a highly curated group of 500 attendees. Hands down, it has to be one of the best events I have ever attended. From the moment we arrived at Centre 548 in Chelsea at 7:30am Friday morning until we left at 6:30pm that evening, it was one great conversation after another. Paddy Cosgrave and his amazing team of organizers at f.ounders did an outstanding job. We were honored to be selected as an exhibitor and excited to be amongst such innovative products and applications. Here are a few of my favorites: LittleBits, 3Doodler, BrandYourself and Magisto. LittleBits is an open source library of electronic modules that snap together with magnets for prototyping, learning and fun. Such a cool product that totally hits on so much that we love: open source technology, education, fun and creativity!

Since Chris and I were managing our booth, we were unable to attend the round tables and talks that happened throughout the day. We are excited that the talks were recorded, and Chris and I will be spending some quality time going through all of that great content. We had a fabulous day and would recommend that anyone who's into startups attend Start 2014 when it comes around next year. I also look forward to making it to Web Summit, f.ounders' other event, in the fall. Dublin, here we come!


Infrared5 Ultimate Coder Update 4: Flamethrowers, Wingsuits, Rearview Mirrors and Face Tracking!

March 18th, 2013 by admin

This post was featured on Intel Software’s blog, in conjunction with Intel’s Ultimate Coder Challenge. Keep checking back to read our latest updates!
Week three seemed to go much better for the Infrared5 team. We are back on our feet with head tracking, and despite the judges' lack of confidence in our ability to track eyes, we still believe we've got a decent chance of pulling it off. Yes, it's true, as Nicole said in her post this week, that the Intel Perceptual Computing (IPC) SDK isn't yet up to the task. She had an interview with the Perceptual Computing team and they told her "that eye tracking was going to be implemented later". What's funny about the lack of eye tracking, and even decent gaze tracking, in the IPC SDK is that the contest is showing this:



Yes, we know it's just marketing, but it is a pretty misleading image. They have a 3D mesh over a guy's face, giving the impression that the SDK can do AAM and POSIT. That would be so cool! Look out, FaceAPI! Unfortunately, it totally doesn't do that. At least not yet.

This isn’t to say that Intel is taking a bad approach with the IPC SDK beta either. They are trying out a lot of things at once and not getting lost in the specifics of just a few features. This allows developers to tell them what they want to do with it without spending tremendous effort on features that wouldn’t even be used.

The lack of decent head, gaze and eye tracking is what's inspired us to eventually release our tracking code as open source. Our hope is that future developers can leverage our work on these features and not have to go through the pain we did in this contest. Maybe Intel will just merge our code into the IPC SDK and we can continue to make the product better together.

Another reason we are sticking with our plan on gaze and eye tracking is that we feel strongly, as do the judges, that these features are some of the most exciting aspects of the perceptual computing camera. A convertible ultrabook has people’s hands busy with typing, touch gestures, etc. and having an interface that works using your face is such a natural fit for this kind of setup.

Latest Demo of Kiwi Catapult Revenge

Check out the latest developments with the Unity Web Player version. We’ve added a new fireball/flamethrower style effect, updated skybox, sheep and more. Note that this is still far from final art and behavior for the game, but we want to continue showing the process we are going through by providing these snapshots of the game in progress. This build requires the free Brass Monkey app for iOS or Android.

A Polished Experience

In addition to being thoroughly entertained by the judges’ video blooper this week, one thing we heard consistently from them is that they were expecting more polished apps from the non-individual teams. We couldn’t agree more! One advantage that we have in the contest is that we have a fantastic art and game design team. That’s not to say our tech skills are lacking either. We are at our core a very technically focused company, but we tend not to compartmentalize the design process and the technology implementation in projects we take on. Design and technology have to work together in harmony to create an amazing user experience, and that’s exactly what we’re doing in this challenge.

Game design is a funny, flexible and agile process. What you set out to do in the beginning rarely ends up being what you make in the end. Our initial idea started as a sort of Mad Max road warrior style driving and shooting game (thus Sascha thinking ours was a racing game early on), but after having read some bizarre news articles on eradicating cats in New Zealand we decided the story of Cats vs. Kiwis should be the theme. Plus Rebecca and Aaron really wanted to try out this 2D paper, pop-up book style, and the Kiwi story really lends itself to that look.

Moving to this new theme kept most of the core game mechanics of the driving game: tracking with the head and eyes to shoot and using the phone as a virtual steering wheel are exactly the same as in the road warrior idea. Since our main character Karl Kiwi has magical powers and can fly, we made it so he would be off the ground (unlike a car that's fixed to the ground). Another part of the story is that Karl can breathe fire like a dragon, so we thought that was an excellent way to use the perceptual computing camera: the player opens their mouth to shoot fire. Shooting regular bullets didn't work with the new character either, so we took some inspiration from funny laser cats memes and SNL and decided that he should be able to shoot lasers from his eyes. Believe it or not, we have been wanting to build a game involving animals and lasers for a while now. "Invasion of the Killer Cuties" was a game we concepted over two years ago, where you fly a fighter plane in space against cute rodents that shoot lasers from their eyes (initial concept art shown below).



Since Chris wrote up the initial game design document (GDD) for Kiwi Catapult Revenge there have been plenty of other changes we’ve made throughout the contest. One example: our initial pass at fire breathing (a spherical projectile) wasn’t really getting the effect we wanted. In the GDD it was described as a fireball so this was a natural choice. What we found though is that it was hard to hit the cats, and the ball didn’t look that good either. We explored how dragon fire breathing is depicted in movies, and the effect is much more like how a flamethrower works. The new fire breathing effect that John implemented this week is awesome! And we believe it adds to the overall polish of our entry for the contest.

(image credit MT Falldog)


Another aspect of the game that wasn’t really working so far was that the main character was never shown. We chose a first person point of view so that the effect of moving your head and peering around items would feel incredibly immersive, giving the feeling that you are really right in this 3D world. However, this meant that you would never see Karl, our protagonist.

Enter the rearview mirror effect. We took a bit of inspiration from the super cool puppets that Sixense showed last week, and from this video of an insane wingsuit base jump, and came up with a way to show off our main character. Karl Kiwi will be fitted with a rearview mirror so that he can see what's behind him, and you as the player can see the character move the same way you do. When you tilt your head, Karl will tilt his; when you look right, so will Karl; and when you open your mouth, Karl's beak will open. This will all happen in real time, and the effect will really show the power of the perceptual computing platform that Intel has provided.

Head Tracking Progress Plus Code and Videos

It wouldn't be a proper Ultimate Coder post without some video and some code, so we have provided some snippets for your perusal. Steff did a great job of documenting his progress this week, and we want to show you step by step where we are heading by sharing a bit of code and some video for each of these face detection examples. Steff is working from this plan, knocking off each of the individual algorithms step by step. Note that this week's example requires the OpenCV library and a C++ compiler for Windows.

This last week of Steff's programming was all about two things: 1) switching from working entirely in Unity (with C#) to a C++ workflow in Visual Studio, and 2) refining our face tracking algorithm. As noted in last week's post, we hit a roadblock trying to write everything in C# in Unity with a DLL for the Intel SDK and OpenCV. There were just limits to the port of OpenCV that we needed to shed. So, we spent some quality time setting up in VS 2012 Express and enjoying the sharp sting of pointers, references, and those types of lovely things we have avoided by working in C#. However, there is good news: we got back the lower-level control needed to refine face detection!

Our main refinement this week was to break through the limitations in tracking faces that we encountered when implementing the Viola-Jones detection method using Haar cascades. This is a great way to find a face, but it's not the best for tracking a face from frame to frame. It has limitations in orientation; e.g. if the face is tilted to one side, the Haar cascade no longer detects a face. Another drawback is that while looking for a face, the algorithm churns through the image one block of pixels at a time, which can really slow things down. To break through this limitation, we took inspiration from the implementation by the team at ROS.org. They have done a nice job putting face tracking together using Python, OpenCV, and an RGB camera + Kinect. Following their example, we have implemented feature detection with GoodFeaturesToTrack and then tracked each feature from frame to frame using optical flow. The video below shows the difference between the two methods and also includes a first pass at creating a blue screen from the depth data.

This week, we will be adding depth data into this tracking algorithm. With depth, we will be able to refine our region of interest to include a good estimate of face size, and we will also be able to knock out the background to speed up face detection with the Haar cascades. Another critical step is integrating our face detection algorithms into the Unity game. We look forward to seeing how all this goes and filling you in in next week's post!
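
To give a sense of the depth knockout step (a sketch against OpenCV's C++ API; the margin value is illustrative), the idea is to keep only the pixels whose depth is close to the tracked face and black out everything else before the Haar cascade runs:

#include <opencv2/opencv.hpp>

// gray: 8-bit grayscale frame aligned with the depth image
// depthMM: 16-bit depth in millimeters, same size as gray
// faceDepthMM: depth of the currently tracked face
cv::Mat knockOutBackground(const cv::Mat &gray, const cv::Mat &depthMM,
                           unsigned short faceDepthMM, unsigned short marginMM = 400)
{
    cv::Mat valid = depthMM > 0;                              // ignore holes in the depth data
    cv::Mat nearFace = depthMM < (faceDepthMM + marginMM);    // keep pixels near the face depth
    cv::Mat foregroundMask = valid & nearFace;
    cv::Mat result = cv::Mat::zeros(gray.size(), gray.type());
    gray.copyTo(result, foregroundMask);                      // background pixels stay black
    return result;
}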

We are also really excited about all the other teams’ progress so far, and in particular we want to congratulate Lee on making a super cool video last week!  We had some plans to do a more intricate video based on Lee’s, but a huge snowstorm in Boston put a bit of a wrench in those plans. Stay tuned for next week’s post though, as we’ve got some exciting (and hopefully funny) stuff to show you!

For you code junkies out there, here is a code snippet showing how we implemented GoodFeaturesToTrack and Lucas-Kanade optical flow:


#include "stdafx.h"
#include "cv.h"
#include "highgui.h"
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <assert.h>
#include <math.h>
#include <float.h>
#include <limits.h>
#include <time.h>
#include <ctype.h>
#include <vector>
#include "CaptureFrame.h"
#include "FaceDetection.h"

using namespace cv;
using namespace std;

static void help()
{
    // print a welcome message, and the OpenCV version
    cout << "\nThis is a demo of Robust face tracking use Lucas-Kanade Optical Flow,\n"
        "Using OpenCV version %s" << CV_VERSION << "\n"
        << endl;
    cout << "\nHot keys: \n"
        "\tESC - quit the program\n"
        "\tr - restart face tracking\n" << endl;
}

// function declaration for drawing the region of interest around the face
void drawFaceROIFromRect(IplImage *src, CvRect *rect);
// function declaration for finding good features to track in a region
int findFeatures(IplImage *src, CvPoint2D32f *features, CvBox2D roi);
// function declaration for finding a trackbox around an array of points
CvBox2D findTrackBox(CvPoint2D32f *features, int numPoints);
// function declaration for finding the distance a point is from a given cluster of points
int findDistanceToCluster(CvPoint2D32f point, CvPoint2D32f *cluster, int numClusterPoints);

// Storage for the previous gray image
IplImage *prevGray = 0;
// Storage for the previous pyramid image
IplImage *prevPyramid = 0;
// for working with the current frame in grayscale
IplImage *gray = 0;
// for working with the current frame in grayscale (for L-K optical flow)
IplImage *pyramid = 0;
// max features to track in the face region
int const MAX_FEATURES_TO_TRACK = 300;
// max features to add when we search on top of an existing pool of tracked points
int const MAX_FEATURES_TO_ADD = 300;
// min features that we can track in a face region before we fail back to face detection
int const MIN_FEATURES_TO_RESET = 6;
// the threshold for the x,y mean squared error indicating that we need to scrap our current track and start over
float const MSE_XY_MAX = 10000;
// threshold for the standard error on x,y points we're tracking
float const STANDARD_ERROR_XY_MAX = 3;
// initial multiplier for expanding the region of interest when we need more features
double const EXPAND_ROI_INIT = 1.02;
// max distance from a cluster a new tracking point can be
int const ADD_FEATURE_MAX_DIST = 20;

int main(int argc, char **argv)
{
    // Init some vars and const
    // name the window
    const char *windowName = "Robust Face Detection v0.1a";
    // box for defining the region where a face was detected
    CvRect *faceDetectRect = NULL;
    // Object faceDetection of the class "FaceDetection"
    FaceDetection faceDetection;
    // Object captureFrame of the class "CaptureFrame"
    CaptureFrame captureFrame;
    // for working with the current frame
    IplImage *currentFrame;
    // for testing if the stream is finished
    bool finished = false;
    // for storing the features
    CvPoint2D32f features[MAX_FEATURES_TO_TRACK] = {0};
    // for storing the number of current features that we're tracking
    int numFeatures = 0;
    // box for defining the region where features are being tracked
    CvBox2D featureTrackBox;
    // multiplier for expanding the trackBox
    float expandROIMult = 1.02;
    // threshold number for adding more features to the region
    int minFeaturesToNewSearch = 50;

    // Start doing stuff ------------------>
    // Create a new window
    cvNamedWindow(windowName, 1);
    // Capture from the camera
    captureFrame.StartCapture();
    // initialize the face tracker
    faceDetection.InitFaceDetection();
    // capture a frame just to get the sizes so the scratch images can be initialized
    finished = captureFrame.CaptureNextFrame();
    if (finished)
    {
        captureFrame.DeallocateFrames();
        cvDestroyWindow(windowName);
        return 0;
    }
    currentFrame = captureFrame.getFrameCopy();
    // init the images
    prevGray = cvCreateImage(cvGetSize(currentFrame), IPL_DEPTH_8U, 1);
    prevPyramid = cvCreateImage(cvGetSize(currentFrame), IPL_DEPTH_8U, 1);
    gray = cvCreateImage(cvGetSize(currentFrame), IPL_DEPTH_8U, 1);
    pyramid = cvCreateImage(cvGetSize(currentFrame), IPL_DEPTH_8U, 1);

    // iterate through each frame
    while(1)
    {
        // check if the video is finished (kind of silly since we're only working on live streams)
        finished = captureFrame.CaptureNextFrame();
        if (finished)
        {
            captureFrame.DeallocateFrames();
            cvDestroyWindow(windowName);
            return 0;
        }
        // save a reference to the current frame
        currentFrame = captureFrame.getFrameCopy();
        // check if we have a face rect
        if (faceDetectRect)
        {
            // Create a grey version of the current frame
            cvCvtColor(currentFrame, gray, CV_RGB2GRAY);
            // Equalize the histogram to reduce lighting effects
            cvEqualizeHist(gray, gray);
            // check if we have features to track in our faceROI
            if (numFeatures > 0)
            {
                bool died = false;
                //cout << "\nnumFeatures: " << numFeatures;
                // track them using L-K Optical Flow
                char featureStatus[MAX_FEATURES_TO_TRACK];
                float featureErrors[MAX_FEATURES_TO_TRACK];
                CvSize pyramidSize = cvSize(gray->width + 8, gray->height / 3);
                CvPoint2D32f *featuresB = new CvPoint2D32f[MAX_FEATURES_TO_TRACK];
                CvPoint2D32f *tempFeatures = new CvPoint2D32f[MAX_FEATURES_TO_TRACK];
                cvCalcOpticalFlowPyrLK(prevGray, gray, prevPyramid, pyramid, features, featuresB, numFeatures, cvSize(10,10), 5, featureStatus, featureErrors, cvTermCriteria(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 20, -3), 0);
                numFeatures = 0;
                float sumX = 0;
                float sumY = 0;
                float meanX = 0;
                float meanY = 0;
                // copy back to features, but keep only high status points
                // and count the number using numFeatures
                for (int i = 0; i < MAX_FEATURES_TO_TRACK; i++)
                {
                    if (featureStatus[i])
                    {
                        // quick prune just by checking if the point is outside the image bounds
                        if (featuresB[i].x < 0 || featuresB[i].y < 0 || featuresB[i].x > gray->width || featuresB[i].y > gray->height)
                        {
                            // do nothing
                        }
                        else
                        {
                            // count the good values
                            tempFeatures[numFeatures] = featuresB[i];
                            numFeatures++;
                            // sum up to later calc the mean for x and y
                            sumX += featuresB[i].x;
                            sumY += featuresB[i].y;
                        }
                    }
                    //cout << "featureStatus[" << i << "] : " << featureStatus[i] << endl;
                }
                //cout << "numFeatures: " << numFeatures << endl;
                // calc the means
                meanX = sumX / numFeatures;
                meanY = sumY / numFeatures;
                // prune points using mean squared error
                // calculate the squaredError for x, y (square of the distance from the mean)
                float squaredErrorXY = 0;
                for (int i = 0; i < numFeatures; i++)
                {
                    squaredErrorXY += (tempFeatures[i].x - meanX) * (tempFeatures[i].x - meanX) + (tempFeatures[i].y - meanY) * (tempFeatures[i].y - meanY);
                }
                //cout << "squaredErrorXY: " << squaredErrorXY << endl;
                // calculate mean squared error for x,y
                float meanSquaredErrorXY = squaredErrorXY / numFeatures;
                //cout << "meanSquaredErrorXY: " << meanSquaredErrorXY << endl;
                // mean squared error must be greater than 0 but less than our threshold (big number that would indicate our points are insanely spread out)
                if (meanSquaredErrorXY == 0 || meanSquaredErrorXY > MSE_XY_MAX)
                {
                    numFeatures = 0;
                    died = true;
                }
                else
                {
                    // Throw away the outliers based on the x-y variance
                    // store the good values in the features array
                    int cnt = 0;
                    for (int i = 0; i < numFeatures; i++)
                    {
                        float standardErrorXY = ((tempFeatures[i].x - meanX) * (tempFeatures[i].x - meanX) + (tempFeatures[i].y - meanY) * (tempFeatures[i].y - meanY)) / meanSquaredErrorXY;
                        if (standardErrorXY < STANDARD_ERROR_XY_MAX)
                        {
                            // we want to keep this point
                            features[cnt] = tempFeatures[i];
                            cnt++;
                        }
                    }
                    numFeatures = cnt;
                    // only bother with fixing the tail of the features array if we still have points to track
                    if (numFeatures > 0)
                    {
                        // set everything past numFeatures to -10,-10 in our updated features array
                        for (int i = numFeatures; i < MAX_FEATURES_TO_TRACK; i++)
                        {
                            features[i] = cvPoint2D32f(-10,-10);
                        }
                    }
                }
                // check if we're below the threshold min points to track before adding new ones
                if (numFeatures < minFeaturesToNewSearch)
                {
                    // add new features
                    // up the multiplier for expanding the region
                    expandROIMult *= EXPAND_ROI_INIT;
                    // expand the trackBox
                    float newWidth = featureTrackBox.size.width * expandROIMult;
                    float newHeight = featureTrackBox.size.height * expandROIMult;
                    CvSize2D32f newSize = cvSize2D32f(newWidth, newHeight);
                    CvBox2D newRoiBox = {featureTrackBox.center, newSize, featureTrackBox.angle};
                    // find new points
                    CvPoint2D32f additionalFeatures[MAX_FEATURES_TO_ADD] = {0};
                    int numAdditionalFeatures = findFeatures(gray, additionalFeatures, newRoiBox);
                    int endLoop = MAX_FEATURES_TO_ADD;
                    if (MAX_FEATURES_TO_TRACK < endLoop + numFeatures)
                        endLoop -= numFeatures + endLoop - MAX_FEATURES_TO_TRACK;
                    // copy new stuff to features, but be mindful of the array max
                    for (int i = 0; i < endLoop; i++)
                    {
                        // TODO check if they are way outside our stuff????
                        int dist = findDistanceToCluster(additionalFeatures[i], features, numFeatures);
                        if (dist < ADD_FEATURE_MAX_DIST)
                        {
                            features[numFeatures] = additionalFeatures[i];
                            numFeatures++;
                        }
                    }
                    // TODO check for duplicates???
                    // check if we're below the reset min
                    if (numFeatures < MIN_FEATURES_TO_RESET)
                    {
                        // if so, set numFeatures to 0, null out the detect rect and do face detection on the next frame
                        numFeatures = 0;
                        faceDetectRect = NULL;
                        died = true;
                    }
                }
                else
                {
                    // reset the expand roi mult back to the init
                    // since this frame didn't need an expansion
                    expandROIMult = EXPAND_ROI_INIT;
                }
                // find the new track box
                if (!died)
                    featureTrackBox = findTrackBox(features, numFeatures);
            }
            else
            {
                // convert the faceDetectRect to a CvBox2D
                CvPoint2D32f center = cvPoint2D32f(faceDetectRect->x + faceDetectRect->width * 0.5, faceDetectRect->y + faceDetectRect->height * 0.5);
                CvSize2D32f size = cvSize2D32f(faceDetectRect->width, faceDetectRect->height);
                CvBox2D roiBox = {center, size, 0};
                // get features to track
                numFeatures = findFeatures(gray, features, roiBox);
                // verify that we found features to track on this frame
                if (numFeatures > 0)
                {
                    // find the corner subPix
                    cvFindCornerSubPix(gray, features, numFeatures, cvSize(10, 10), cvSize(-1,-1), cvTermCriteria(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 20, 0.03));
                    // define the featureTrackBox around our new points
                    featureTrackBox = findTrackBox(features, numFeatures);
                    // calculate the minFeaturesToNewSearch from our detected face values
                    minFeaturesToNewSearch = 0.9 * numFeatures;
                    // wait for the next frame to start tracking using optical flow
                }
                else
                {
                    // try for a new face detect rect for the next frame
                    faceDetectRect = faceDetection.detectFace(currentFrame);
                }
            }
        }
        else
        {
            // reset the current features
            numFeatures = 0;
            // try for a new face detect rect for the next frame
            faceDetectRect = faceDetection.detectFace(currentFrame);
        }
        // save gray and pyramid frames for next frame
        cvCopy(gray, prevGray, 0);
        cvCopy(pyramid, prevPyramid, 0);
        // draw some stuff into the frame to show results
        if (numFeatures > 0)
        {
            // show the features as little dots
            for(int i = 0; i < numFeatures; i++)
            {
                CvPoint myPoint = cvPointFrom32f(features[i]);
                cvCircle(currentFrame, cvPointFrom32f(features[i]), 2, CV_RGB(0, 255, 0), CV_FILLED);
            }
            // show the tracking box as an ellipse
            cvEllipseBox(currentFrame, featureTrackBox, CV_RGB(0, 0, 255), 3);
        }
        // show the current frame in the window
        cvShowImage(windowName, currentFrame);
        // wait for next frame or keypress
        char c = (char)waitKey(30);
        if(c == 27)
            break;
        switch(c)
        {
        case 'r':
            numFeatures = 0;
            // try for a new face detect rect for the next frame
            faceDetectRect = faceDetection.detectFace(currentFrame);
            break;
        }
    }
    // Release the image and tracker
    captureFrame.DeallocateFrames();
    // Destroy the window previously created
    cvDestroyWindow(windowName);
    return 0;
}

// draws a region of interest in the src frame based on the given rect
void drawFaceROIFromRect(IplImage *src, CvRect *rect)
{
    // Points to draw the face rectangle
    CvPoint pt1 = cvPoint(0, 0);
    CvPoint pt2 = cvPoint(0, 0);
    // setup the points for drawing the rectangle
    pt1.x = rect->x;
    pt1.y = rect->y;
    pt2.x = pt1.x + rect->width;
    pt2.y = pt1.y + rect->height;
    // Draw face rectangle
    cvRectangle(src, pt1, pt2, CV_RGB(255,0,0), 2, 8, 0 );
}

// finds features and stores them in the given array
// TODO move this method into a Class
int findFeatures(IplImage *src, CvPoint2D32f *features, CvBox2D roi)
{
    //cout << "findFeatures" << endl;
    int featureCount = 0;
    double minDistance = 5;
    double quality = 0.01;
    int blockSize = 3;
    int useHarris = 0;
    double k = 0.04;
    // Create a mask image to be used to select the tracked points
    IplImage *mask = cvCreateImage(cvGetSize(src), IPL_DEPTH_8U, 1);
    // Begin with all black pixels
    cvZero(mask);
    // Create a filled white ellipse within the box to define the ROI in the mask.
    cvEllipseBox(mask, roi, CV_RGB(255, 255, 255), CV_FILLED);
    // Create the temporary scratchpad images
    IplImage *eig = cvCreateImage(cvGetSize(src), IPL_DEPTH_8U, 1);
    IplImage *temp = cvCreateImage(cvGetSize(src), IPL_DEPTH_8U, 1);
    // init the corner count int
    int cornerCount = MAX_FEATURES_TO_TRACK;
    // Find keypoints to track using Good Features to Track
    cvGoodFeaturesToTrack(src, eig, temp, features, &cornerCount, quality, minDistance, mask, blockSize, useHarris, k);
    // iterate through the array
    for (int i = 0; i < cornerCount; i++)
    {
        if ((features[i].x == 0 && features[i].y == 0) || features[i].x > src->width || features[i].y > src->height)
        {
            // do nothing
        }
        else
        {
            featureCount++;
        }
    }
    //cout << "\nfeatureCount = " << featureCount << endl;
    // return the feature count
    return featureCount;
}

// finds the track box for a given array of 2d points
// TODO move this method into a Class
CvBox2D findTrackBox(CvPoint2D32f *points, int numPoints)
{
    //cout << "findTrackBox" << endl;
    //cout << "numPoints: " << numPoints << endl;
    CvBox2D box;
    // matrix for helping calculate the track box
    CvMat *featureMatrix = cvCreateMat(1, numPoints, CV_32SC2);
    // collect the feature points in the feature matrix
    for(int i = 0; i < numPoints; i++)
        cvSet2D(featureMatrix, 0, i, cvScalar(points[i].x, points[i].y));
    // create an ellipse off of the featureMatrix
    box = cvFitEllipse2(featureMatrix);
    // release the matrix (cause we're done with it)
    cvReleaseMat(&featureMatrix);
    // return the box
    return box;
}

int findDistanceToCluster(CvPoint2D32f point, CvPoint2D32f *cluster, int numClusterPoints)
{
    int minDistance = 10000;
    for (int i = 0; i < numClusterPoints; i++)
    {
        int distance = abs(point.x - cluster[i].x) + abs(point.y - cluster[i].y);
        if (distance < minDistance)
            minDistance = distance;
    }
    return minDistance;
}


The Balanced Approach: Hackathons and Marathons

March 2nd, 2013 by admin

The other day a blog post called "Hackathons are bad for you" struck a chord with developers and other members of the technology world. The post, from Chinmay Pendharkar, a developer in Singapore, thoughtfully called out the code-all-night, drink-too-much-coffee-and-alcohol, eat-junk-food mentality. It received hundreds of kudos from the obviously over-tired and over-caffeinated developer community. "Chinpen" makes a lot of good points, especially when he talks about culture and the glorification of the geek lifestyle.

We also give him a thumbs up for making concrete suggestions around healthy hackathons. (We've seen some of those guidelines in place locally. For example, the Battle for the Charles Startup Weekend organizers made great efforts to supply healthy eats and gave everyone reusable water bottles so they could hydrate without generating hundreds of empty disposable water bottles.)

Like everything in this world, there is room for a healthy balance.  Hackathons crackle with creative energy.  They can be a wonderful source of new ideas and inspiration.  Our own team is currently participating in the Intel Ultimate Coder Challenge, and we’re all excited and energized by the new technology and techniques.  We are already looking at ways we can employ these in our everyday work.

Over the last five years, we've grown Infrared5 significantly while holding the line on unrealistic release schedules and development timelines that deplete us mentally and physically. While we have crunch times like everyone else, we offer comp time to make up for overtime. We encourage restful nights and weekends for restoring our creative selves. Walking the dogs who are our office companions is a great time for partner meetings. Keith and Rebecca have taken up running, and Rebecca plans to compete in 10 races this year, including one half-marathon. She also wants to run a 5K at an under-8-minute-mile pace.

And yet it isn’t all bean sprouts and granola. As many of you know, we have the (infamous) “Infrared5 beer:30” get-together on Friday afternoons, where we connect socially as a team and do some craft beer sampling. This is an important part of our healthy balance.

Last week we spent part of this get-together brainstorming our “Wicked Ten” – how we define projects we would all like to work on.  Number 6 on the list was “Reasonable timeline/good budget.”  While this may seem obvious, it is rarer than we’d all like.  Yet, we know that some of the work we are proudest of comes when we work with people who also take time off to rest their creative muscles and exercise their physical bodies.

How are you achieving balance?

Face Tracking, Depth Sensors, Flying and Art = Progress!

February 28th, 2013 by admin

This post was featured on Intel Software’s blog, in conjunction with Intel’s Ultimate Coder Challenge. Keep checking back to read our latest updates!

This week our team made a ton of progress on our game Kiwi Catapult Revenge. In their posts, the judges suggested that the projects we are undertaking are ambitious and perhaps a bit more than can be accomplished in seven weeks. We have to agree that none of the teams are taking the easy way out, and we feel that everyone taking on such lofty goals will only spur more creativity from the entire group of competitors. Lucky for us, the Infrared5 guys/gals are accustomed to tight deadlines, insane schedules and hard-to-achieve deliverables, so the Ultimate Coder Challenge just feels like a demanding client. But it’s not like this is going to be easy. We’ve got quite a few challenges and areas of risk to keep an eye on, and it’s tempting to keep adding scope, especially when the “fun factor” is so essential to creating a good game.

Speaking of the competitors, what many people following the contest may not realize is that we are all actually in communication quite a bit. We’ve got our own informal mailing list going, and there’s a lot of back and forth between the teams and sharing of ideas across projects. There is more of a collaborative spirit than a cut-throat competitive nature amongst the teams. We’ve even got a Google Hangout session scheduled this week so that we can all talk face to face. Unfortunately, Lee’s super cool video chat program isn’t ready for the task. We at Infrared5 strongly believe that sharing ideas spurs innovation, and being this open with the other competing teams will only up our game. After all, great ideas don’t happen in a vacuum.

In addition to our post this week, we did a quick video where Chris talked to our team members about head tracking, art and more.

Face Tracking

Let’s start with the biggest challenge we’ve given ourselves: face tracking. Last week we were playing with OpenCV and the Intel Perceptual Computing SDK in different Unity proof-of-concept projects. Per the plan we created at the start of the competition, our focus was on implementing basic face tracking by detecting Haar-like features. This works well, but the face detection algorithm currently has limits. If the target face is rotated too far to either side, it will not be recognized and tracked as a “face.” Fortunately, we are aware of the limitation in the algorithm and have plans to implement a patch. We created Obama and Beyonce controllers so those of us with beards (Steff) can have more faces to test without bothering anyone in the office to “come and put your face in front of this screen.” Our current setup will cause issues if you have a beard and wear a hat – foreheads and mouths are important with this algorithm! Check out our “custom controllers” below.
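If you want to experiment with the same building block yourself, here is a minimal sketch of Haar-based face detection using OpenCV’s C API. This is not our project code: the cascade file name, the image source and the parameter values are just reasonable defaults picked for illustration.

#include <opencv/cv.h>
#include <opencv/highgui.h>
#include <cstdio>

int main()
{
    // load one of the pre-trained frontal-face cascades that ships with OpenCV
    CvHaarClassifierCascade *cascade =
        (CvHaarClassifierCascade *)cvLoad("haarcascade_frontalface_alt.xml", 0, 0, 0);
    CvMemStorage *storage = cvCreateMemStorage(0);

    // a single grayscale frame stands in for the live camera feed here
    IplImage *frame = cvLoadImage("frame.png", CV_LOAD_IMAGE_GRAYSCALE);
    if (!cascade || !storage || !frame)
        return 1;

    // equalizing the histogram helps the detector cope with lighting changes
    cvEqualizeHist(frame, frame);

    // scale factor 1.1, 3 neighbors, minimum face size 40x40 -- typical defaults
    CvSeq *faces = cvHaarDetectObjects(frame, cascade, storage,
                                       1.1, 3, CV_HAAR_DO_CANNY_PRUNING,
                                       cvSize(40, 40));

    for (int i = 0; i < (faces ? faces->total : 0); i++)
    {
        CvRect *r = (CvRect *)cvGetSeqElem(faces, i);
        printf("face %d at (%d, %d) %dx%d\n", i, r->x, r->y, r->width, r->height);
    }

    cvReleaseImage(&frame);
    cvReleaseMemStorage(&storage);
    cvReleaseHaarClassifierCascade(&cascade);
    return 0;
}

As the paragraph above notes, this kind of detector is trained on roughly frontal faces, which is exactly why a strongly rotated head stops registering as a “face.”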

Best news of the week: the depth sensing camera is awesome! It gives much better detail than we originally saw with the samples that came packaged with the SDK. The not-as-good news: since this SDK is still in beta, the documentation is not so awesome. Things do not always match up, especially with the prepackaged port to Unity3d. We are experiencing a good amount of crashing and might have to back out of this and write some of our own C code to port in the methods that we need for mapping the depth data to the RGB data. Stay tuned for what happens there!

Back to the good news. We originally were only going to use the data from the depth sensor to wipe out the background (one of the first steps in our planned pipeline). However, the depth data is so good, it will definitely also help us later on when we are calculating the pose of the player’s head. Pose calculations depend on estimating the position of non-coplanar points (read up on POSIT if you really want to geek-out now, but we will fill in more detail on POSIT once we implement it in our system), and finding these points is going to be much less of an iterative process with this depth data since we can actually look up the depth and associated confidence level for any point in the RGB image!
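For the curious, here is a rough sketch of what a cvPOSIT call looks like in OpenCV’s C API. This is not our implementation (we’ll write that up once it’s actually in the system); the 3D model points, the focal length and all of the variable names are placeholder assumptions purely for illustration.

#include <opencv/cv.h>
#include <cstdio>

int main()
{
    // a crude non-coplanar head model: nose tip, eyes and chin (arbitrary units);
    // POSIT requires the first model point to sit at the origin
    CvPoint3D32f modelPoints[4] = {
        cvPoint3D32f(0.0f, 0.0f, 0.0f),      // nose tip
        cvPoint3D32f(-30.0f, 40.0f, -30.0f), // left eye
        cvPoint3D32f(30.0f, 40.0f, -30.0f),  // right eye
        cvPoint3D32f(0.0f, -50.0f, -20.0f)   // chin
    };

    // the matching 2D feature locations, expressed relative to the image center
    // (POSIT expects camera-centered image coordinates)
    CvPoint2D32f imagePoints[4] = {
        cvPoint2D32f(0.0f, 0.0f),
        cvPoint2D32f(-25.0f, 35.0f),
        cvPoint2D32f(28.0f, 33.0f),
        cvPoint2D32f(2.0f, -45.0f)
    };

    CvPOSITObject *positObject = cvCreatePOSITObject(modelPoints, 4);

    float rotation[9];          // 3x3 rotation matrix, row-major
    float translation[3];       // translation vector
    double focalLength = 600.0; // assumed focal length, in pixels

    CvTermCriteria criteria = cvTermCriteria(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 100, 1e-5);
    cvPOSIT(positObject, imagePoints, focalLength, criteria, rotation, translation);

    printf("head translation: (%.1f, %.1f, %.1f)\n",
           translation[0], translation[1], translation[2]);

    cvReleasePOSITObject(&positObject);
    return 0;
}

The depth data should make finding those non-coplanar feature points far less of a guessing game, since we can look up an actual depth value (plus a confidence level) for any pixel in the RGB image.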

Introducing Gaze Tracking

Because of all the detail from the depth + RGB cameras, we are optimistic that we will be able to track the player’s pupils. This of course means that we will be able to get gaze tracking working in our game. In Kiwi Catapult Revenge, aiming at your targets won’t just lock onto the center of wherever you are tilting your head; you will be able to fire precisely where you are looking, at any point in time. This one feature, combined with head tilt, is where you start to really see how video games based on perceptual computing are going to have tremendous advantages over typical game controls like joypads, keyboard/mouse, etc. Now imagine adding another sensor device to the mix, like Google Glass. What would be possible then? Maybe we can convince Google to give us early access to find out.

Game Engine

John has made a ton of progress on the game mechanics this week. He’s got a really good flow for flying in the game. We took inspiration from the PS3 game Flower for the player character movement we wanted to create in Kiwi Catapult Revenge. There’s a nice bounce and easing to the movement, and the ability to subtly launch over hills and come back down smoothly is going to really bring the flying capabilities of Karl Kiwi to life. John managed to get this working in a demo along with head tracking (currently mapped to mouse movement). You can fly around (WASD keys), look about, and get a general feel for how the game is going to play. We’ve posted a quick Unity Web Player version (here) of the demo for you to try out. Keep in mind that the controls aren’t yet mapped to the phone, nor is the artwork even close to final in this version.

Art and Game Design

Speaking of artwork, Rebecca, Aaron and Elena have been producing what everyone agrees is shaping up to be a unique and inspiring visual style for our game. Chris did a quick interview with Rebecca and Aaron about the work they are doing and what inspired the idea. We’ve included that in our video this week as well.

This week the design team plans to experiment more with paper textures and lighting, as well as rigging up some of the characters for some initial looks at animation and movement in the game.

Oh, and in case you missed it, we posted an update on the game with the background story of our main character. There you can also find some great concept art from Aaron and an early version of the 3D environment to whet your appetite.

That’s it for this week. What do you think about the playable game demo? What about our approach to face tracking? Is there anything else that we should be considering? Please let us know what you think in the comments below.
