February 28th, 2013 by admin
This week our team made a ton of progress on our game Kiwi Catapult Revenge. In their posts, the judges suggested that the projects we are undertaking are ambitious, and perhaps a bit more than can be accomplished in seven weeks. We have to agree that none of the teams are taking the easy way out, and we feel that because everyone is taking on such lofty goals it will only spur on more creativity from the entire group of competitors. Lucky for us, the Infrared5 guys/gals are accustomed to tight deadlines, insane schedules and hard-to-achieve deliverables, so the Ultimate Coder Challenge just feels like a demanding client. But it’s not like this is going to be easy. We’ve got quite a few challenges and areas of risk that we need to keep an eye on. Plus, it’s tempting to keep adding scope, especially when you have the “fun factor” element that is so essential to creating a good game.
Speaking of the competitors, what many people following the contest may not realize is that we actually all are in communication quite a bit. We’ve got our own informal mailing list going, and there’s a lot of back and forth between the teams and sharing of ideas across projects. There is more of a collaborative spirit than a cut-throat competitive one amongst the teams. We’ve even got a Google Hangout session scheduled this week so that we can all talk face to face. Unfortunately, Lee’s super cool video chat program isn’t ready for the task. We at Infrared5 strongly believe that sharing ideas spurs innovation, and being this open with the other competing teams will up our game. After all, great ideas don’t happen in a vacuum.
In addition to our post this week, we did a quick video where Chris talked to our team members about head tracking, art and more.
Let’s start with the biggest challenge we’ve given ourselves: face tracking. Last week we played with OpenCV and the Intel Perceptual Computing SDK in different Unity proof-of-concept projects. Looking at the projected plan we created at the start of the competition, our focus was on implementing basic face tracking by detecting Haar-like features. This works well, but the face detection algorithm currently has limits. If the target face is rotated too far to either side, it will not be recognized and tracked as a “face.” Fortunately, we are aware of the limitation in the algorithm and have plans to implement a patch. We created Obama and Beyonce controllers so those of us with beards (Steff) can have more faces to test without bothering anyone in the office to “come and put your face in front of this screen.” Our current setup will cause issues if you have a beard and wear a hat – foreheads and mouths are important with this algorithm! Check out our “custom controllers” below.
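For the curious: the “Haar-like features” the detector thresholds are just rectangle-sum contrasts, computed cheaply from an integral image. Here’s a minimal numpy sketch of one two-rectangle feature. It illustrates the math behind the technique, not the SDK’s or OpenCV’s actual API, and all the numbers are made up for the toy example.

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[y, x] = sum of img[:y, :x]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the w-by-h rectangle with top-left corner (x, y),
    using four lookups into the integral image."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def two_rect_feature(ii, x, y, w, h):
    """A horizontal two-rectangle Haar-like feature: left half minus right
    half. Bright-vs-dark contrasts like this are what the cascade thresholds
    to decide 'face' or 'not face'."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)

# Toy example: a patch whose left side is bright and right side is dark.
img = np.zeros((4, 4), dtype=np.int64)
img[:, :2] = 10  # bright left half
ii = integral_image(img)
print(two_rect_feature(ii, 0, 0, 4, 4))  # prints 80: a strong left/right contrast
```

A real detector evaluates thousands of these features at many scales and positions, which is exactly why the integral image’s constant-time rectangle sums matter.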
Best news of the week: the depth sensing camera is awesome! It gives much better detail than we originally saw with the samples that came packaged with the SDK. The not-as-good news: since this SDK is still in beta, the documentation is not so awesome. Things do not always match up, especially with the prepackaged port to Unity3d. We are experiencing a good amount of crashing and might have to back out of this and write some of our own C code to port in the methods that we need for mapping the depth data to the RGB data. Stay tuned for what happens there!
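If we do end up writing our own mapping from depth to RGB, the math underneath is standard pinhole-camera projection: back-project the depth pixel into 3D, apply the rigid transform between the two cameras, and project into the RGB image. A hedged sketch follows, with made-up intrinsics and extrinsics standing in for the real calibration values the camera would provide.

```python
import numpy as np

# Illustrative pinhole parameters -- real values come from camera
# calibration, not from these made-up numbers.
FX_D, FY_D, CX_D, CY_D = 580.0, 580.0, 160.0, 120.0          # depth camera
FX_RGB, FY_RGB, CX_RGB, CY_RGB = 525.0, 525.0, 320.0, 240.0  # RGB camera
R = np.eye(3)                     # depth->RGB rotation (assumed aligned)
T = np.array([0.025, 0.0, 0.0])   # depth->RGB translation (assumed 2.5 cm baseline)

def depth_pixel_to_rgb_pixel(u, v, depth_m):
    """Map a pixel (u, v) with depth depth_m (meters) in the depth image
    to its corresponding pixel in the RGB image."""
    # 1. back-project (u, v, depth) into a 3D point in the depth camera frame
    x = (u - CX_D) * depth_m / FX_D
    y = (v - CY_D) * depth_m / FY_D
    p = np.array([x, y, depth_m])
    # 2. rigid transform into the RGB camera frame
    p_rgb = R @ p + T
    # 3. project with the RGB intrinsics
    u_rgb = FX_RGB * p_rgb[0] / p_rgb[2] + CX_RGB
    v_rgb = FY_RGB * p_rgb[1] / p_rgb[2] + CY_RGB
    return u_rgb, v_rgb

# The depth image's principal point at 1 m lands slightly right of the
# RGB principal point because of the baseline.
print(depth_pixel_to_rgb_pixel(160, 120, 1.0))  # (333.125, 240.0)
```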
Back to the good news. We originally were only going to use the data from the depth sensor to wipe out the background (one of the first steps in our planned pipeline). However, the depth data is so good, it will definitely also help us later on when we are calculating the pose of the player’s head. Pose calculations depend on estimating the position of non-coplanar points (read up on POSIT if you really want to geek-out now, but we will fill in more detail on POSIT once we implement it in our system), and finding these points is going to be much less of an iterative process with this depth data since we can actually look up the depth and associated confidence level for any point in the RGB image!
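To make that concrete: POSIT iterates because it only has the 2D projections of the model points, but once the depth camera hands you actual 3D coordinates for the tracked points, the pose drops out in closed form from 3D-3D correspondences. Here is a sketch using the Kabsch algorithm; the “head model” points are illustrative, not our real tracking targets.

```python
import numpy as np

def rigid_pose(model_pts, observed_pts):
    """Closed-form least-squares rotation R and translation t such that
    observed ~= R @ model + t (Kabsch algorithm via SVD)."""
    mc = model_pts.mean(axis=0)
    oc = observed_pts.mean(axis=0)
    H = (model_pts - mc).T @ (observed_pts - oc)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    Rot = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = oc - Rot @ mc
    return Rot, t

# Toy check: rotate a made-up non-coplanar "head" point set 30 degrees
# about Y, shift it, then recover exactly that pose.
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), 0, np.sin(theta)],
                   [0, 1, 0],
                   [-np.sin(theta), 0, np.cos(theta)]])
t_true = np.array([0.1, -0.05, 0.8])
model = np.array([[0.0, 0.0, 0.0],      # nose tip (illustrative)
                  [-0.03, 0.04, -0.02],  # left eye corner
                  [0.03, 0.04, -0.02],   # right eye corner
                  [0.0, -0.05, -0.03]])  # chin -- a non-coplanar set
observed = model @ R_true.T + t_true
R_est, t_est = rigid_pose(model, observed)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))  # True True
```

With noisy measurements the same code gives the least-squares best fit, which is why having depth plus a confidence value per point is such a nice position to be in.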
Introducing Gaze Tracking
Because of all the detail from the depth + RGB cameras, we are optimistic that we will be able to track the player’s pupils. This of course means that we will be able to get gaze tracking working with our game. In Kiwi Catapult Revenge, aiming at your targets won’t just lock into the center of where you are tilting your head; we will allow you to fire precisely where you are looking, at any point in time. This one feature, combined with the head tilt, is where you start to really see how video games based on perceptual computing are going to have tremendous performance advantages over typical game controls like joypads, keyboard/mouse, etc. Now imagine adding another sensor device to the mix like Google Glass. What would be possible then? Maybe we can convince Google to give us early access to find out.
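As a first step toward pupil tracking, a rough pupil-center estimate can be as simple as taking the centroid of the darkest pixels in a cropped eye region. This toy numpy sketch only illustrates the idea (the threshold fraction is a guess, and a real system would refine the estimate, e.g. with ellipse fitting); it is not the method we have committed to.

```python
import numpy as np

def pupil_center(eye_patch, dark_fraction=0.03):
    """Rough pupil-center estimate: centroid of the darkest dark_fraction
    of pixels in a cropped grayscale eye region."""
    flat = eye_patch.ravel()
    threshold = np.quantile(flat, dark_fraction)   # intensity cutoff for "dark"
    ys, xs = np.nonzero(eye_patch <= threshold)
    return xs.mean(), ys.mean()

# Toy eye patch: bright sclera with a dark disc standing in for the pupil.
patch = np.full((40, 60), 200, dtype=np.uint8)
yy, xx = np.mgrid[0:40, 0:60]
patch[(xx - 35) ** 2 + (yy - 18) ** 2 <= 25] = 20  # pupil at (35, 18), radius 5
cx, cy = pupil_center(patch)
print(round(cx), round(cy))  # 35 18
```

Once you have a pupil center per eye plus the head pose, the gaze ray is the head orientation offset by where the pupil sits relative to the eye corners.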
John has made a ton of progress on the game mechanics this week. He’s got a really good flow for flying in the game. We took inspiration from the PS3 game Flower for the player character movement we wanted to create in Kiwi Catapult Revenge. There’s a nice bounce and easing to the movement, and the ability to subtly launch over hills and come back down smoothly is going to really bring the flying capabilities of Karl Kiwi to life. John managed to get this working in a demo along with head tracking (currently mapped to the mouse movement). You can fly around (WASD keys), look about, and get a general feel for how the game is going to play. We’ve posted a quick Unity Web Player version (here) of the demo for you to try out. Keep in mind that the controls aren’t yet mapped to the phone, nor is the artwork even close to final in this version.
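That “bounce and easing” feel usually comes down to smoothing raw input toward a target every frame. The game itself is built in Unity, but the idea is language-agnostic, so here is a small frame-rate-independent easing sketch in Python; the half-life constant is just an illustrative guess, not a tuned game value.

```python
def ease_toward(current, target, dt, half_life=0.15):
    """Frame-rate-independent exponential easing: every half_life seconds,
    half of the remaining distance to the target is covered, regardless of
    how many frames that spans."""
    return target + (current - target) * 0.5 ** (dt / half_life)

# Simulate one second at 60 fps of the kiwi's altitude easing toward a hilltop.
altitude, target = 0.0, 10.0
for _ in range(60):
    altitude = ease_toward(altitude, target, dt=1 / 60)
print(round(altitude, 3))  # ~9.902 -- nearly there, with a smooth approach
```

Using a half-life (rather than a raw per-frame lerp factor) keeps the motion identical whether the game runs at 30 or 60 fps, which matters once the controls move from mouse to head tracking.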
Art and Game Design
Speaking of artwork, Rebecca, Aaron and Elena have been producing what everyone seems to agree is a unique and inspiring visual style for our game. Chris did a quick interview with Rebecca and Aaron on the work they are doing and what inspired them to come up with the idea. We’ve included that in our video this week as well.
This week the design team plans to experiment more with paper textures and lighting, as well as rigging up some of the characters for some initial looks at animation and movement in the game.
Oh, and in case you missed it, we posted an update on the game with the background story of our main character. There you can also find some great concept art from Aaron and an early version of the 3D environment to whet your appetite.
That’s it for this week. What do you think about the playable game demo? What about our approach to face tracking? Is there anything else that we should be considering? Please let us know what you think in the comments below.