Parent Tested Parent Approved takes on THE GAME OF LIFE:zAPPed!

March 27th, 2013 by Rosie

Infrared5 is pleased to announce that one of our projects, THE GAME OF LIFE: zAPPed, has been featured on Parent Tested Parent Approved. Nothing is more fulfilling than hearing positive feedback from users when our work goes out into the world. We're flattered that the largest parent testing community recognized worldwide thinks that THE GAME OF LIFE: zAPPed is fun, both for parents who remember the original game and for kids who are playing it for the first time. Check out the review to hear some of the nice things that parent playtesters had to say.


Ultimate Coder Week #5: For Those About to Integrate We Salute You

March 21st, 2013 by Chris Allen

This post was featured on Intel Software’s blog, in conjunction with Intel’s Ultimate Coder Challenge. Keep checking back to read our latest updates!

We definitely lost one of our nine lives this week integrating face tracking into our game, but we still have our cat's eyes and still feel very confident that we will be able to show a stellar game at GDC. On the face tracking end of things we had some big wins. We are finally happy with the speed of the algorithms, and the way things are being tracked will work perfectly for putting into Kiwi Catapult Revenge. We completed some complex math to create very realistic perspective shifting in Unity. Read below for those details, as well as some C# code to get it working yourself. As we just mentioned, getting a DLL that properly calls update() from Unity and passes in the tracking values isn't quite there yet. We did get some initial integration with head tracking coming into Unity, but full integration with our game is going to have to wait for this week.

On the C++ side of things, we have successfully found the 3D position of a face in the tracking space. This is huge! By tracking space, we mean the actual (x, y, z) position of the face from the camera in meters. Why do we want the 3D position of the face in tracking space? The reason is so that we can determine the perspective projection of the 3D scene (in game) from the player's location. Two things made this task interesting: 1) the aligned depth data for a given (x, y) from the RGB image is full of holes, and 2) the camera specs only include the diagonal field of view (FOV) and no sensor dimensions.

We got around the holes in the aligned depth data by first checking for a usable value at the exact (x, y) location, and if the depth value was not valid (0 or the upper positive limit), we would walk through the pixels in a rectangle of increasing size until we encountered a usable value. It’s not that difficult to implement, but annoying when you have the weight of other tasks on your back. Another way to put it: It’s a Long Way to the Top on this project.
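
The search itself is simple enough to sketch. Something along these lines captures the idea (C#, with a hypothetical row-major depthMap array in millimeters, where 0 or the maximum value marks a hole; the radius limit is ours for illustration):

using System;

static class DepthUtil
{
    // Walk outward from (x, y) in square rings of increasing radius until a
    // usable depth value is found. The center pixel is checked first.
    public static ushort FindUsableDepth(ushort[] depthMap, int width, int height,
                                         int x, int y, int maxRadius = 20)
    {
        for (int radius = 0; radius <= maxRadius; radius++)
        {
            for (int dy = -radius; dy <= radius; dy++)
            {
                for (int dx = -radius; dx <= radius; dx++)
                {
                    // only visit the ring itself; the interior was covered at smaller radii
                    if (Math.Abs(dx) != radius && Math.Abs(dy) != radius) continue;

                    int px = x + dx;
                    int py = y + dy;
                    if (px < 0 || py < 0 || px >= width || py >= height) continue;

                    ushort value = depthMap[py * width + px];
                    if (value != 0 && value != ushort.MaxValue)
                        return value; // first usable value wins
                }
            }
        }
        return 0; // nothing usable found within maxRadius
    }
}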

The z-depth of the face comes back in millimeters right from the depth data; the next trick was to convert the (x, y) position from pixels on the RGB frame to meters in the tracking space. There is a great illustration here of how to break the view pyramid up to derive formulas for x and y in the tracking space. The end result is:
TrackingSpaceX = TrackingSpaceZ * tan(horizontalFOV / 2) * 2 * (RGBSpaceX - RGBWidth / 2) / RGBWidth
TrackingSpaceY = TrackingSpaceZ * tan(verticalFOV / 2) * 2 * (RGBSpaceY - RGBHeight / 2) / RGBHeight

where TrackingSpaceZ is the lookup from the depth data, and horizontalFOV and verticalFOV are derived from the diagonal FOV in the Creative Gesture Camera specs (here). Now we have the face position in tracking space! We verified the results using a nice metric tape measure (also difficult to find at the local hardware store – get with the metric program, USA!)
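
In code, the conversion is only a few lines. Here is a minimal C# sketch (parameter names are ours for illustration; the depth arrives in millimeters from the aligned depth map and the FOV angles are in radians):

using System;

static class FaceSpace
{
    // Converts an RGB-pixel face position plus its depth (in millimeters)
    // into tracking-space coordinates in meters, using the camera's FOV.
    public static void PixelToTrackingSpace(
        float rgbX, float rgbY, float depthMm,
        int rgbWidth, int rgbHeight,
        float horizontalFovRad, float verticalFovRad,
        out float xMeters, out float yMeters, out float zMeters)
    {
        zMeters = depthMm / 1000f;

        // same formulas as above: scale the normalized offset from the image
        // center by the width/height of the view frustum at depth z
        xMeters = zMeters * (float)Math.Tan(horizontalFovRad / 2) * 2f
                  * (rgbX - rgbWidth / 2f) / rgbWidth;
        yMeters = zMeters * (float)Math.Tan(verticalFovRad / 2) * 2f
                  * (rgbY - rgbHeight / 2f) / rgbHeight;
    }
}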

From here, we can determine the perspective projection so the player will feel like they are looking through a window into our game. Our first pass at this effect involved just changing the rotation and position of the 3D camera in our Unity scene, but it just didn't look realistic. We were leaving out adjustment of the projection matrix to compensate for the off-center view of the display. For example, consider two equally sized (in screen pixels) objects on either side of the screen. When the viewer is positioned nearer to one side of the screen, the object at the closer edge appears larger to the viewer than the one at the far edge, and the display outline becomes trapezoidal. To compensate, the projection should be transformed with a shear to maintain the apparent size of the two objects, just like looking out a window! To change up our methods and achieve this effect, we went straight to the ultimate paper on the subject: Robert Kooima's Generalized Perspective Projection. Our port of his algorithm into C#/Unity is below.

The code follows the mouse pointer to change perspective (not a tracked face) and does not change depth (the way a face would). We are currently in the midst of wrapping our C++ libs into a DLL for Unity to consume and enable us to grab the 3D position of the face and then compute the camera projection matrix using the face position and the position of the computer screen in relation to the camera.
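
To give a feel for the approach, here is a simplified Unity C# sketch of Kooima's algorithm (not our full component; field names are illustrative, pa, pb and pc are the lower-left, lower-right and upper-left corners of the virtual "window", and eyePos is the viewer position, driven by the mouse in the demo and by the tracked face once integration is done):

using UnityEngine;

// Minimal sketch of Robert Kooima's generalized perspective projection in Unity.
// Attach to a Camera whose own transform stays at the origin; the window corners
// and the eye position are given in that camera-relative space.
[RequireComponent(typeof(Camera))]
public class OffAxisProjection : MonoBehaviour
{
    public Vector3 pa = new Vector3(-0.3f, -0.2f, 0f); // lower-left screen corner
    public Vector3 pb = new Vector3( 0.3f, -0.2f, 0f); // lower-right screen corner
    public Vector3 pc = new Vector3(-0.3f,  0.2f, 0f); // upper-left screen corner
    public Vector3 eyePos = new Vector3(0f, 0f, 0.6f); // viewer position (mouse or tracked face)

    void LateUpdate()
    {
        Camera cam = GetComponent<Camera>();
        float n = cam.nearClipPlane;
        float f = cam.farClipPlane;

        // orthonormal basis for the screen
        Vector3 vr = (pb - pa).normalized;              // screen right
        Vector3 vu = (pc - pa).normalized;              // screen up
        Vector3 vn = Vector3.Cross(vr, vu).normalized;  // screen normal, toward the eye

        // vectors from the eye to the screen corners
        Vector3 va = pa - eyePos;
        Vector3 vb = pb - eyePos;
        Vector3 vc = pc - eyePos;

        // distance from the eye to the screen plane
        float d = -Vector3.Dot(va, vn);

        // frustum extents on the near plane
        float l = Vector3.Dot(vr, va) * n / d;
        float r = Vector3.Dot(vr, vb) * n / d;
        float b = Vector3.Dot(vu, va) * n / d;
        float t = Vector3.Dot(vu, vc) * n / d;

        // off-center perspective projection
        Matrix4x4 P = Matrix4x4.zero;
        P[0, 0] = 2f * n / (r - l); P[0, 2] = (r + l) / (r - l);
        P[1, 1] = 2f * n / (t - b); P[1, 2] = (t + b) / (t - b);
        P[2, 2] = -(f + n) / (f - n); P[2, 3] = -2f * f * n / (f - n);
        P[3, 2] = -1f;

        // rotate into the screen's basis
        Matrix4x4 M = Matrix4x4.identity;
        M[0, 0] = vr.x; M[0, 1] = vr.y; M[0, 2] = vr.z;
        M[1, 0] = vu.x; M[1, 1] = vu.y; M[1, 2] = vu.z;
        M[2, 0] = vn.x; M[2, 1] = vn.y; M[2, 2] = vn.z;

        // translate the eye to the apex of the frustum
        Matrix4x4 T = Matrix4x4.identity;
        T[0, 3] = -eyePos.x; T[1, 3] = -eyePos.y; T[2, 3] = -eyePos.z;

        cam.projectionMatrix = P * M * T;
    }
}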

Last but not least, we leave you with this week's demo of the game. Some final art for UI elements is in, levels of increasing difficulty have been implemented, and some initial sound effects are in the game.

As always, please ask if you have any questions about what we are doing, and if you just have something to say, we would love to hear from you. Leave us a comment! In the meantime we will be coding All Night Long!


START 2013 – A Conference Not to Miss

March 19th, 2013 by Rebecca Allen

Last Thursday, Chris Allen (one of my business partners and husband) and I took a train to New York City for the inaugural conference called Start. We were one of 23 startups invited to show off our product, Brass Monkey, to the highly curated group of 500 attendees. Hands down, it has to be one of the best events I have ever attended. From the moment we arrived at Centre 548 in Chelsea at 7:30am Friday morning until we left at 6:30pm that evening, it was one great conversation after another. Paddy Cosgrave and his amazing team of organizers at f.ounders did an outstanding job. We were honored to be selected as an exhibitor and excited to be amongst such innovative products and applications. Here are a few of my favorites: LittleBits, 3Doodler, BrandYourself and Magisto. LittleBits is an open source library of electronic modules that snap together with magnets for prototyping, learning and fun. Such a cool product that totally hits on so much that we love: open source technology, education, fun and creativity!

Since Chris and I were managing our booth, we were unable to attend the round tables and talks that happened throughout the day. We are excited that the talks were recorded, and Chris and I will be spending some quality time going through all of that great content. We had a fabulous day and would recommend that anyone who's into startups attend Start 2014 when it comes around next year. I look forward to making it to WebSummit, f.ounders' other event, in the fall. Dublin, here we come!


Infrared5 Ultimate Coder Update 4: Flamethrowers, Wingsuits, Rearview Mirrors and Face Tracking!

March 18th, 2013 by admin

This post was featured on Intel Software’s blog, in conjunction with Intel’s Ultimate Coder Challenge. Keep checking back to read our latest updates!
Week three seemed to go much better for the Infrared5 team. We are back on our feet with head tracking, and despite the judges' lack of confidence in our ability to track eyes, we still believe that we've got a decent chance of pulling it off. Yes, it's true, as Nicole said in her post this week, that the Intel Perceptual Computing (IPC) SDK isn't yet up to the task. She had an interview with the Perceptual Computing team, and they told her “that eye tracking was going to be implemented later”. What's funny about the lack of eye tracking, and even decent gaze tracking, in the IPC SDK is that the contest is showing this:



Yes, we know it's just marketing, but it is a pretty misleading image. They have a 3D mesh over a guy's face, giving the impression that the SDK can do AAM and POSIT. That would be so cool! Look out, FaceAPI! Unfortunately, it totally doesn't do that. At least not yet.

This isn't to say that Intel is taking a bad approach with the IPC SDK beta, either. They are trying out a lot of things at once rather than getting lost in the specifics of just a few features. This lets developers tell them what they actually want to do with it, without Intel spending tremendous effort on features that wouldn't even be used.

The lack of decent head, gaze and eye tracking is what's inspired us to eventually release our tracking code as open source. Our hope is that future developers can leverage our work on these features and not have to go through the pain we did in this contest. Maybe Intel will just merge our code into the IPC SDK and we can continue to make the product better together.

Another reason we are sticking with our plan on gaze and eye tracking is that we feel strongly, as do the judges, that these features are some of the most exciting aspects of the perceptual computing camera. A convertible ultrabook has people’s hands busy with typing, touch gestures, etc. and having an interface that works using your face is such a natural fit for this kind of setup.

Latest Demo of Kiwi Catapult Revenge

Check out the latest developments with the Unity Web Player version. We’ve added a new fireball/flamethrower style effect, updated skybox, sheep and more. Note that this is still far from final art and behavior for the game, but we want to continue showing the process we are going through by providing these snapshots of the game in progress. This build requires the free Brass Monkey app for iOS or Android.

A Polished Experience

In addition to being thoroughly entertained by the judges’ video blooper this week, one thing we heard consistently from them is that they were expecting more polished apps from the non-individual teams. We couldn’t agree more! One advantage that we have in the contest is that we have a fantastic art and game design team. That’s not to say our tech skills are lacking either. We are at our core a very technically focused company, but we tend not to compartmentalize the design process and the technology implementation in projects we take on. Design and technology have to work together in harmony to create an amazing user experience, and that’s exactly what we’re doing in this challenge.

Game design is a funny, flexible and agile process. What you set out to do in the beginning rarely ends up being what you make in the end. Our initial idea started as a sort of Mad Max road-warrior style driving and shooting game (thus Sascha thinking ours was a racing game early on), but after reading some bizarre news articles on eradicating cats in New Zealand, we decided the story of Cats vs. Kiwis should be the theme. Plus, Rebecca and Aaron really wanted to try out this 2D paper, pop-up book style, and the Kiwi story really lends itself to that look.

Moving to this new theme kept most of the core game mechanics from the driving game. Tracking with the head and eyes to shoot and using the phone as a virtual steering wheel are exactly the same as in the road warrior idea. Since our main character Karl Kiwi has magical powers and can fly, we made it so he would be off the ground (unlike a car that's fixed to the ground). Another part of the story is that Karl can breathe fire like a dragon, so we thought having the player open their mouth to shoot fire was an excellent way to use the perceptual computing camera. Shooting regular bullets didn't work with the new character either, so we took some inspiration from funny laser cat memes and SNL's Laser Cats sketches, and decided that he should be able to shoot lasers from his eyes. Believe it or not, we have been wanting to build a game involving animals and lasers for a while now. “Invasion of the Killer Cuties” was a game we concepted over two years ago where you fly a fighter plane in space against cute rodents that shoot lasers from their eyes (initial concept art shown below).



Since Chris wrote up the initial game design document (GDD) for Kiwi Catapult Revenge, there have been plenty of other changes we've made throughout the contest. One example: our initial pass at fire breathing (a spherical projectile) wasn't really getting the effect we wanted. In the GDD it was described as a fireball, so this was a natural choice. What we found, though, is that it was hard to hit the cats, and the ball didn't look that good either. We explored how dragon fire breathing is depicted in movies, and the effect is much more like how a flamethrower works. The new fire breathing effect that John implemented this week is awesome! And we believe it adds to the overall polish of our entry for the contest.

(image credit MT Falldog)


Another aspect of the game that wasn’t really working so far was that the main character was never shown. We chose a first person point of view so that the effect of moving your head and peering around items would feel incredibly immersive, giving the feeling that you are really right in this 3D world. However, this meant that you would never see Karl, our protagonist.

Enter the rear view mirror effect. We took a bit of inspiration from the super cool puppets that Sixense showed last week, and from this video of an insane wingsuit base jump, and came up with a way to show off our main character. Karl Kiwi will be fitted with a rear view mirror so that he can see what's behind him, and you as the player can see the character move the same way you do. When you tilt your head, Karl will tilt his; when you look right, so will Karl; and when you open your mouth, Karl's beak will open. This will all happen in real time, and the effect will really show the power of the perceptual computing platform that Intel has provided.
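
To make the mapping concrete, here is a sketch of what the Unity side of this could look like (the component and field names are hypothetical; the values would be fed in from the native tracking plugin each frame):

using UnityEngine;

// Illustrative sketch: drive Karl's head and beak from tracked face values.
public class KarlHeadMirror : MonoBehaviour
{
    public Transform head;          // Karl's head bone
    public Transform beak;          // Karl's lower beak bone
    public float maxBeakAngle = 30f;

    // values updated by the face tracker each frame
    public float headYawDegrees;    // look left/right
    public float headRollDegrees;   // tilt
    public float mouthOpenAmount;   // 0 = closed, 1 = fully open

    void Update()
    {
        // mirror the player's head pose onto the character
        head.localRotation = Quaternion.Euler(0f, headYawDegrees, headRollDegrees);

        // open the beak in proportion to the player's mouth
        float beakAngle = Mathf.Clamp01(mouthOpenAmount) * maxBeakAngle;
        beak.localRotation = Quaternion.Euler(beakAngle, 0f, 0f);
    }
}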

Head Tracking Progress Plus Code and Videos

It wouldn't be a proper Ultimate Coder post without some video and some code, so we have provided some snippets for your perusal. Steff did a great job of documenting his progress this week, and we want to show you step by step where we are heading by sharing a bit of code and some video for each of these face detection examples. Steff is working from this plan, knocking off the individual algorithms one by one. Note that this week's example requires the OpenCV library and a C++ compiler for Windows.

This last week of Steff's programming was all about two things: 1) switching from working entirely in Unity (with C#) to a C++ workflow in Visual Studio, and 2) refining our face tracking algorithm. As noted in last week's post, we hit a roadblock trying to write everything in C# in Unity with a DLL for the Intel SDK and OpenCV. There were just limits to the port of OpenCV that we needed to shed. So, we spent some quality time setting up in VS 2012 Express and enjoying the sharp sting of pointers, references, and those types of lovely things that we have avoided by working in C#. However, there is good news: we did get back the amount of lower-level control needed to refine face detection!

Our main refinement this week was to break through the limitations we encountered when tracking faces with the Viola-Jones detection method using Haar cascades. This is a great way to find a face, but it's not the best for tracking a face from frame to frame. It has limitations in orientation; for example, if the face is tilted to one side, the Haar cascade no longer detects a face. Another drawback is that while looking for a face, the algorithm churns through the image block of pixels by block of pixels, which can really slow things down. To break through this limitation, we took inspiration from the implementation by the team at ROS.org. They have done a nice job putting face tracking together using Python, OpenCV, and an RGB camera + Kinect. Following their example, we have implemented feature detection with GoodFeaturesToTrack and then tracked each feature from frame to frame using optical flow. The video below shows the difference between the two methods and also includes a first pass at creating a blue screen from the depth data.

This week, we will be adding depth data into this tracking algorithm. With depth, we will be able to refine our region of interest to include a good estimate of face size, and we will also be able to knock out the background to speed up face detection with the Haar cascades. Another critical step is integrating our face detection algorithms into the Unity game. We look forward to seeing how all this goes and filling you in with next week's post!

We are also really excited about all the other teams’ progress so far, and in particular we want to congratulate Lee on making a super cool video last week!  We had some plans to do a more intricate video based on Lee’s, but a huge snowstorm in Boston put a bit of a wrench in those plans. Stay tuned for next week’s post though, as we’ve got some exciting (and hopefully funny) stuff to show you!

For you code junkies out there, here is a code snippet showing how we implemented GoodFeaturesToTrack and Lucas-Kanade Optical Flow:


#include "stdafx.h"

#include "cv.h"

#include "highgui.h"

#include <stdio.h>

#include <stdlib.h>

#include <string.h>

#include <assert.h>

#include <math.h>

#include <float.h>

#include <limits.h>

#include <time.h>

#include <ctype.h>

#include <vector>

#include "CaptureFrame.h"

#include "FaceDetection.h"

using namespace cv;

using namespace std;

static void help()

{

// print a welcome message, and the OpenCV version

cout << "\nThis is a demo of Robust face tracking use Lucas-Kanade Optical Flow,\n"

"Using OpenCV version %s" << CV_VERSION << "\n"

<< endl;

cout << "\nHot keys: \n"

"\tESC - quit the program\n"

"\tr - restart face tracking\n" << endl;

}

// function declaration for drawing the region of interest around the face

void drawFaceROIFromRect(IplImage *src, CvRect *rect);

// function declaration for finding good features to track in a region

int findFeatures(IplImage *src, CvPoint2D32f *features, CvBox2D roi);

// function declaration for finding a trackbox around an array of points

CvBox2D findTrackBox(CvPoint2D32f *features, int numPoints);

// function declaration for finding the distance a point is from a given cluster of points

int findDistanceToCluster(CvPoint2D32f point, CvPoint2D32f *cluster, int numClusterPoints);

// Storage for the previous gray image

IplImage *prevGray = 0;

// Storage for the previous pyramid image

IplImage *prevPyramid = 0;

// for working with the current frame in grayscale

IplImage *gray = 0;

// for working with the current frame in grayscale2 (for L-K OF)

IplImage *pyramid = 0;

// max features to track in the face region

int const MAX_FEATURES_TO_TRACK = 300;

// max features to add when we search on top of an existing pool of tracked points

int const MAX_FEATURES_TO_ADD = 300;

// min features that we can track in a face region before we fail back to face detection

int const MIN_FEATURES_TO_RESET = 6;

// the threshold for the x,y mean squared error indicating that we need to scrap our current track and start over

float const MSE_XY_MAX = 10000;

// threshold for the standard error on x,y points we're tracking

float const STANDARD_ERROR_XY_MAX = 3;

// threshold for the standard error on x,y points we're tracking

double const EXPAND_ROI_INIT = 1.02;

// max distance from a cluster a new tracking can be

int const ADD_FEATURE_MAX_DIST = 20;

int main(int argc, char **argv)

{

// Init some vars and const

// name the window

const char *windowName = "Robust Face Detection v0.1a";

// box for defining the region where a face was detected

CvRect *faceDetectRect = NULL;

// Object faceDetection of the class "FaceDetection"

FaceDetection faceDetection;

// Object captureFrame of the class "CaptureFrame"

CaptureFrame captureFrame;

// for working with the current frame

IplImage *currentFrame;

// for testing if the stream is finished

bool finished = false;

// for storing the features

CvPoint2D32f features[MAX_FEATURES_TO_TRACK] = {0};

// for storing the number of current features that we're tracking

int numFeatures = 0;

// box for defining the region where a features are being tracked

CvBox2D featureTrackBox;

// multiplier for expanding the trackBox

float expandROIMult = 1.02;

// threshold number for adding more features to the region

int minFeaturesToNewSearch = 50;

// Start doing stuff ------------------>

// Create a new window

cvNamedWindow(windowName, 1);

// Capture from the camera

captureFrame.StartCapture();

// initialize the face tracker

faceDetection.InitFaceDetection();

// capture a frame just to get the sizes so the scratch images can be initialized

finished = captureFrame.CaptureNextFrame();

if (finished)

{

captureFrame.DeallocateFrames();

cvDestroyWindow(windowName);

return 0;

}

currentFrame = captureFrame.getFrameCopy();

// init the images

prevGray = cvCreateImage(cvGetSize(currentFrame), IPL_DEPTH_8U, 1);

prevPyramid = cvCreateImage(cvGetSize(currentFrame), IPL_DEPTH_8U, 1);

gray = cvCreateImage(cvGetSize(currentFrame), IPL_DEPTH_8U, 1);

pyramid = cvCreateImage(cvGetSize(currentFrame), IPL_DEPTH_8U, 1);

// iterate through each frame

while(1)

{

// check if the video is finished (kind of silly since we're only working on live streams)

finished = captureFrame.CaptureNextFrame();

if (finished)

{

captureFrame.DeallocateFrames();

cvDestroyWindow(windowName);

return 0;

}

// save a reference to the current frame

currentFrame = captureFrame.getFrameCopy();

// check if we have a face rect

if (faceDetectRect)

{

// Create a grey version of the current frame

cvCvtColor(currentFrame, gray, CV_RGB2GRAY);

// Equalize the histogram to reduce lighting effects

cvEqualizeHist(gray, gray);

// check if we have features to track in our faceROI

if (numFeatures > 0)

{

bool died = false;

//cout << "\nnumFeatures: " << numFeatures;

// track them using L-K Optical Flow

char featureStatus[MAX_FEATURES_TO_TRACK];

float featureErrors[MAX_FEATURES_TO_TRACK];

CvSize pyramidSize = cvSize(gray->width + 8, gray->height / 3);

CvPoint2D32f *featuresB = new CvPoint2D32f[MAX_FEATURES_TO_TRACK];

CvPoint2D32f *tempFeatures = new CvPoint2D32f[MAX_FEATURES_TO_TRACK];

cvCalcOpticalFlowPyrLK(prevGray, gray, prevPyramid, pyramid, features, featuresB, numFeatures, cvSize(10,10), 5, featureStatus, featureErrors, cvTermCriteria(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 20, -3), 0);

numFeatures = 0;

float sumX = 0;

float sumY = 0;

float meanX = 0;

float meanY = 0;

// copy back to features, but keep only high status points

// and count the number using numFeatures

for (int i = 0; i < MAX_FEATURES_TO_TRACK; i++)

{

if (featureStatus[i])

{

// quick prune just by checking if the point is outside the image bounds

if (featuresB[i].x < 0 || featuresB[i].y < 0 || featuresB[i].x > gray->width || featuresB[i].y > gray->height)

{

// do nothing

}

else

{

// count the good values

tempFeatures[numFeatures] = featuresB[i];

numFeatures++;

// sum up to later calc the mean for x and y

sumX += featuresB[i].x;

sumY += featuresB[i].y;

}

}

//cout << "featureStatus[" << i << "] : " << featureStatus[i] << endl;

}

//cout << "numFeatures: " << numFeatures << endl;

// calc the means

meanX = sumX / numFeatures;

meanY = sumY / numFeatures;

// prune points using mean squared error

// caclulate the squaredError for x, y (square of the distance from the mean)

float squaredErrorXY = 0;

for (int i = 0; i < numFeatures; i++)

{

squaredErrorXY += (tempFeatures[i].x - meanX) * (tempFeatures[i].x - meanX) + (tempFeatures[i].y  - meanY) * (tempFeatures[i].y - meanY);

}

//cout << "squaredErrorXY: " << squaredErrorXY << endl;

// calculate mean squared error for x,y

float meanSquaredErrorXY = squaredErrorXY / numFeatures;

//cout << "meanSquaredErrorXY: " << meanSquaredErrorXY << endl;

// mean squared error must be greater than 0 but less than our threshold (big number that would indicate our points are insanely spread out)

if (meanSquaredErrorXY == 0 || meanSquaredErrorXY > MSE_XY_MAX)

{

numFeatures = 0;

died = true;

}

else

{

// Throw away the outliers based on the x-y variance

// store the good values in the features array

int cnt = 0;

for (int i = 0; i < numFeatures; i++)

{

float standardErrorXY = ((tempFeatures[i].x - meanX) * (tempFeatures[i].x - meanX) + (tempFeatures[i].y - meanY) * (tempFeatures[i].y - meanY)) / meanSquaredErrorXY;

if (standardErrorXY < STANDARD_ERROR_XY_MAX)

{

// we want to keep this point

features[cnt] = tempFeatures[i];

cnt++;

}

}

numFeatures = cnt;

// only bother with fixing the tail of the features array if we still have points to track

if (numFeatures > 0)

{

// set everything past numFeatures to -10,-10 in our updated features array

for (int i = numFeatures; i < MAX_FEATURES_TO_TRACK; i++)

{

features[i] = cvPoint2D32f(-10,-10);

}

}

}

// check if we're below the threshold min points to track before adding new ones

if (numFeatures < minFeaturesToNewSearch)

{

// add new features

// up the multiplier for expanding the region

expandROIMult *= EXPAND_ROI_INIT;

// expand the trackBox

float newWidth = featureTrackBox.size.width * expandROIMult;

float newHeight = featureTrackBox.size.height * expandROIMult;

CvSize2D32f newSize = cvSize2D32f(newWidth, newHeight);

CvBox2D newRoiBox = {featureTrackBox.center, newSize, featureTrackBox.angle};

// find new points

CvPoint2D32f additionalFeatures[MAX_FEATURES_TO_ADD] = {0};

int numAdditionalFeatures = findFeatures(gray, additionalFeatures, newRoiBox);

int endLoop = MAX_FEATURES_TO_ADD;

if (MAX_FEATURES_TO_TRACK < endLoop + numFeatures)

endLoop -= numFeatures + endLoop - MAX_FEATURES_TO_TRACK;

// copy new stuff to features, but be mindful of the array max

for (int i = 0; i < endLoop; i++)

{

// TODO check if they are way outside our stuff????

int dist = findDistanceToCluster(additionalFeatures[i], features, numFeatures);

if (dist < ADD_FEATURE_MAX_DIST)

{

features[numFeatures] = additionalFeatures[i];

numFeatures++;

}

}

// TODO check for duplicates???

// check if we're below the reset min

if (numFeatures < MIN_FEATURES_TO_RESET)

{

// if so, set to numFeatures 0, null out the detect rect and do face detection on the next frame

numFeatures = 0;

faceDetectRect = NULL;

died = true;

}

}

else

{

// reset the expand roi mult back to the init

// since this frame didn't need an expansion

expandROIMult = EXPAND_ROI_INIT;

}

// find the new track box

if (!died)

featureTrackBox = findTrackBox(features, numFeatures);

}

else

{

// convert the faceDetectRect to a CvBox2D

CvPoint2D32f center = cvPoint2D32f(faceDetectRect->x + faceDetectRect->width * 0.5, faceDetectRect->y + faceDetectRect->height * 0.5);

CvSize2D32f size = cvSize2D32f(faceDetectRect->width, faceDetectRect->height);

CvBox2D roiBox = {center, size, 0};

// get features to track

numFeatures = findFeatures(gray, features, roiBox);

// verify that we found features to track on this frame

if (numFeatures > 0)

{

// find the corner subPix

cvFindCornerSubPix(gray, features, numFeatures, cvSize(10, 10), cvSize(-1,-1), cvTermCriteria(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 20, 0.03));

// define the featureTrackBox around our new points

featureTrackBox = findTrackBox(features, numFeatures);

// calculate the minFeaturesToNewSearch from our detected face values

minFeaturesToNewSearch = 0.9 * numFeatures;

// wait for the next frame to start tracking using optical flow

}

else

{

// try for a new face detect rect for the next frame

faceDetectRect = faceDetection.detectFace(currentFrame);

}

}

}

else

{

// reset the current features

numFeatures = 0;

// try for a new face detect rect for the next frame

faceDetectRect = faceDetection.detectFace(currentFrame);

}

// save gray and pyramid frames for next frame

cvCopy(gray, prevGray, 0);

cvCopy(pyramid, prevPyramid, 0);

// draw some stuff into the frame to show results

if (numFeatures > 0)

{

// show the features as little dots

for(int i = 0; i < numFeatures; i++)

{

CvPoint myPoint = cvPointFrom32f(features[i]);

cvCircle(currentFrame, cvPointFrom32f(features[i]), 2, CV_RGB(0, 255, 0), CV_FILLED);

}

// show the tracking box as an ellipse

cvEllipseBox(currentFrame, featureTrackBox, CV_RGB(0, 0, 255), 3);

}

// show the current frame in the window

cvShowImage(windowName, currentFrame);

// wait for next frame or keypress

char c = (char)waitKey(30);

if(c == 27)

break;

switch(c)

{

case 'r':

numFeatures = 0;

// try for a new face detect rect for the next frame

faceDetectRect = faceDetection.detectFace(currentFrame);

break;

}

}

// Release the image and tracker

captureFrame.DeallocateFrames();

// Destroy the window previously created

cvDestroyWindow(windowName);

return 0;

}

// draws a region of interest in the src frame based on the given rect

void drawFaceROIFromRect(IplImage *src, CvRect *rect)

{

// Points to draw the face rectangle

CvPoint pt1 = cvPoint(0, 0);

CvPoint pt2 = cvPoint(0, 0);

// setup the points for drawing the rectangle

pt1.x = rect->x;

pt1.y = rect->y;

pt2.x = pt1.x + rect->width;

pt2.y = pt1.y + rect->height;

// Draw face rectangle

cvRectangle(src, pt1, pt2, CV_RGB(255,0,0), 2, 8, 0 );

}

// finds features and stores them in the given array

// TODO move this method into a Class

int findFeatures(IplImage *src, CvPoint2D32f *features, CvBox2D roi)

{

//cout << "findFeatures" << endl;

int featureCount = 0;

double minDistance = 5;

double quality = 0.01;

int blockSize = 3;

int useHarris = 0;

double k = 0.04;

// Create a mask image to be used to select the tracked points

IplImage *mask = cvCreateImage(cvGetSize(src), IPL_DEPTH_8U, 1);

// Begin with all black pixels

cvZero(mask);

// Create a filled white ellipse within the box to define the ROI in the mask.

cvEllipseBox(mask, roi, CV_RGB(255, 255, 255), CV_FILLED);

// Create the temporary scratchpad images

IplImage *eig = cvCreateImage(cvGetSize(src), IPL_DEPTH_8U, 1);

IplImage *temp = cvCreateImage(cvGetSize(src), IPL_DEPTH_8U, 1);

// init the corner count int

int cornerCount = MAX_FEATURES_TO_TRACK;

// Find keypoints to track using Good Features to Track

cvGoodFeaturesToTrack(src, eig, temp, features, &cornerCount, quality, minDistance, mask, blockSize, useHarris, k);

// iterate through the array

for (int i = 0; i < cornerCount; i++)

{

if ((features[i].x == 0 && features[i].y == 0) || features[i].x > src->width || features[i].y > src->height)

{

// do nothing

}

else

{

featureCount++;

}

}

//cout << "\nfeatureCount = " << featureCount << endl;

// return the feature count

return featureCount;

}

// finds the track box for a given array of 2d points

// TODO move this method into a Class

CvBox2D findTrackBox(CvPoint2D32f *points, int numPoints)

{

//cout << "findTrackBox" << endl;

//cout << "numPoints: " << numPoints << endl;

CvBox2D box;

// matrix for helping calculate the track box

CvMat *featureMatrix = cvCreateMat(1, numPoints, CV_32SC2);

// collect the feature points in the feature matrix

for(int i = 0; i < numPoints; i++)

cvSet2D(featureMatrix, 0, i, cvScalar(points[i].x, points[i].y));

// create an ellipse off of the featureMatrix

box = cvFitEllipse2(featureMatrix);

// release the matrix (cause we're done with it)

cvReleaseMat(&featureMatrix);

// return the box

return box;

}

int findDistanceToCluster(CvPoint2D32f point, CvPoint2D32f *cluster, int numClusterPoints)

{

int minDistance = 10000;

for (int i = 0; i < numClusterPoints; i++)

{

int distance = abs(point.x - cluster[i].x) + abs(point.y - cluster[i].y);

if (distance < minDistance)

minDistance = distance;

}

return minDistance;

}


The Project Discovery Phase, Dissected

March 14th, 2013 by Dominick Accattato

When clients first reach out to Infrared5, they are often extremely excited about turning their ideas into a reality. We share their enthusiasm and can't wait to dig into the project details. However, sometimes there are numerous great ideas and not a lot of concrete information. This can be true for any project, from games to enterprise applications. When this is the case, we sometimes suggest a Discovery and Planning Phase.

The Discovery and Planning Phase allows both the client and our team leaders to work together to elicit requirements and document the system architecture in a way that is meaningful for developers, designers and the client. It is typically very collaborative in nature. This phase also ties in disciplines such as business analysis, domain driven design, technical writing and design.

It's important to note that not every project requires a Discovery and Planning Phase, and not all discovery phases are set up the same way. Some clients have a very detailed understanding of what they are trying to accomplish. In these cases, the client may already have specifications but be unable to develop a particularly complex technical component, so we suggest a separate path: one in which a focused Proof of Concept is provided. (We will cover Proof of Concept in a future post.) For now, we'll assume the client has a larger system and is in need of defining the project. This is what the Discovery and Planning Phase attempts to achieve.

What is a Discovery and Planning phase?
A Discovery and Planning Phase gives the client direct access to our senior software developers and/or creative lead in order to define project requirements. With these requirements in hand, our development and design team can investigate and document the software and design components of the project. The goal is to clarify scope and verify the parties are on the same page prior to beginning production. Another benefit of the discovery phase is that certain technical challenges may surface from these discussions. (Pioneering applications are a specialty of the house here at Infrared5.) These high-risk items may lead to a phased approach whereby the highest-risk items are given their own Proof of Concept phases. (This is discussed with the client so that they have an understanding of our findings and why we have suggested a multi-project, phased approach.) In the end, clients have the opportunity to remove a high-risk item if it doesn't fit with their release date or budget.

Who is involved in the Discovery Phase?
During the Discovery Phase, the team consists of a project manager and a technical lead who are in charge of assessing the technical challenges that lie ahead for the development phase. The technical leads here at Infrared5 each have their own expertise. For instance, if the client approached us with an immersive 3D game, we would allocate one of our senior game developers to the Discovery and Planning Phase. The same would be true of a complex web application. One of the benefits of using a group like Infrared5 is that we maintain a diverse group of developers and designers, from gaming to streaming applications, who are truly experts in their own disciplines. Also during this phase, our creative team works closely with the client in order to flesh out the UI design, experience design and branding needs of the project. The creative team helps clients define their goals and the best strategy to meet them.

What can be accomplished during the Discovery phase?
One of the common questions we get from clients is, "What are the benefits of doing a Discovery and Planning Phase?" In most cases, a few documents are produced: the Technical Requirements Specification and the Software Requirements Specification. Note, however, that depending on the needs of the project, we may only need one of the two, or a hybrid of each. Another document that may be produced during the Discovery and Planning Phase is a High Level Technical Overview. Just as it sounds, this document is high level; it does not aim to get into too much detail at the software component level. Instead, it resolves the more general system architecture and specifies which components may be used for the development phase.
For gaming projects, there are different details to focus on, and these must be determined before the developers begin programming. A Game Design Document is necessary for describing the gameplay and the mechanics behind some of the simulations. Often this requires the technical lead and game designer to work together.

For both gaming and applications, the creative team delivers initial design concepts and wireframes to be augmented later in the design phase. The creative team also works closely with the client in regards to the user experience.

Ultimately, the Discovery Phase ensures both parties are aligned before any more extensive design or development begins in later phases.

What is delivered at the end of a Discovery Phase?
At the end of the Discovery Phase, the three important documents delivered to a client are:
• High Level Technical Overview
• Technical Requirements Specification
• Software Requirements Specification

In the case of a gaming project, the typical document would be:
• Game Design Document

In the case of both gaming and application projects, the following design related material is provided:
• Design Concepts
• Wireframes

Upon completion of the Discovery Phase, Infrared5 has enough information to provide more accurate estimates and timelines for completion. Each of these documents is important, and we suggest searching online to further your understanding of their purposes. This article illustrates what steps are taken and what is delivered at the end of our Discovery and Planning Phase.


IR5 Tech Talk: Modular Development in JavaScript

March 11th, 2013 by Todd Anderson

I recently had the pleasure of presenting on modular development in JavaScript at Infrared5's bi-weekly Tech Talks. The talk was mostly centered around Asynchronous Module Definition (AMD) and the RequireJS library, but covered some of the history and possible future of module implementations in the JavaScript language. Dependency management and modular programming have been a large focus of mine in application development for some time, with an interest in comparing implementations across many languages. As more web-based projects cropped up, I first started looking at application frameworks that would support such a development and build workflow. This initially led me to Dojo, which I would highly recommend looking into. If you are already familiar with Dojo, then it would be no surprise that it led me to RequireJS, as it was created by the same developer – James Burke – who worked on the Dojo Loader. Eventually, it was AMD – and specifically, utilizing RequireJS and r.js – that I incorporated into the development workflow and build processes for projects, letting the tie to any specific JS application framework be severed. That's not to say that application frameworks don't have their place – especially on bigger teams. But such a discussion is perhaps a whole other Tech Talk in itself! You can view the presentation here.

IR5 Interactive Piece

March 5th, 2013 by Keith Peters

Introduction by Rebecca Allen:

We are creating a new website that will be launching at the end of March. Working on our own site is always an exciting process and one that is challenging as well. Our goal was to do the following with the new site:

1. Make it memorable!
2. Create a unique and fun interactive experience that captures our brand.
3. Display quickly and beautifully no matter what size device.
4. Communicate our mission and what we do clearly.

Today, we are going to give a sneak peek into #2. Keith Peters will walk you through the steps he took to create this interactive experience. Keith, take it from here!

——–

As part of Infrared5’s new web site design, I was asked to create an interactive piece for the main page. The mock ups called for the design that has been our trademark since day one – an abstract space made of random isometric planes and lines. This is the design that is on our letterhead, business cards, and previous versions of our site.
I was given free rein to come up with something interactive based on that design. One idea floated was to have the planes connect with the lines and rotate around in the space. I decided to go with this concept. I realized that the idea was a bit odd once I got down to coding it. I mean, isometry itself is a form of 3D projection, and we also wanted to move the planes around in a 3D space with actual perspective. It could become quite a mess if done wrong. Specifically, in an isometric system the angles do not change and objects do not change size with distance. But I forged ahead anyway to see what could be done that might look decent.
The first thing I did was just get a 3D system of rotating particles going. This was something I'd done and written about before, so it was relatively straightforward. As you click and drag the mouse vertically, the particles rotate around the x-axis, changing their y and z coordinates. When you drag horizontally, they rotate around the y-axis, changing x and z.

Next step was to change the particles into isometric planes. I started out with true isometric projection, meaning the angles representing each axis are each 120 degrees apart. I soon switched over to dimetric projection, which has two angles at approximately 116 degrees and the third at about 127.

This has several advantages. First, it’s easier to calculate sizes of objects as the x-axis is simply twice the size of the y-axis. This also results in smoother lines with less antialiasing.

There are three different shapes I needed to draw: a panel facing left, one facing right, and a floor/ceiling panel.

As these would be animating, I didn’t want to have to redraw them on each frame using the canvas drawing API. So I made a panel object that creates a small canvas and draws a single panel to it with a random width and height. The program can then blit each one of these mini panel canvases to the main canvas on each frame. Each panel also got its own random gray shade and the result was something like this:

Now as I said earlier, when you move things around in a 3D space, they are supposed to grow larger or smaller depending on the distance from the camera. But in isometric/dimetric projection this is not the case, so we're really mixing two forms of perspective. Having the panels scale as they went into the distance didn't look right at all. Having them remain unchanged wasn't exactly correct either, but it gave an odd, trippy feel to the piece that I actually like a lot. So that's how I left it. Also, to mix things up a bit, I made some of the panels fixed in space and not rotating. This comes out to about one in ten of the panels being stationary.

Next were the lines. When creating the panels, I made it so that some – but not all – of the panels connect to one other panel with a line. About 40 percent of the time a connection is made. This seemed to give the right density of lines on screen. Here's what that looked like initially:

Pretty ugly because the lines go directly from one corner of a panel to one corner of another, breaking the isometric/dimetric space. They just look random and chaotic. To solve that I forced the lines to follow the same dimetric angles as the planes. This looked a million times better.

In order to add a bit more interaction, I added a few functions to allow users to add and remove planes and to assign various color schemes to the planes (or return to grayscale). For the colors, rather than just use a random color for each plane, which would be a bit chaotic, I found an HSV to RGB algorithm. Taking an initial hue, I generate a different color for each panel by randomly varying its hue and saturation. This gives a more cohesive look no matter what hue is chosen.
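
The piece itself is written in JavaScript, but the color logic is language-agnostic. A quick C# sketch of the idea (the jitter ranges and value choices here are illustrative, not the exact numbers used on the site):

using System;

static class PanelColors
{
    static readonly Random rng = new Random();

    // Pick a color for one panel: start from a base hue and jitter the hue and
    // saturation a little so the panels feel related but not identical.
    public static (byte r, byte g, byte b) PanelColor(double baseHue)
    {
        double hue = (baseHue + (rng.NextDouble() - 0.5) * 30.0 + 360.0) % 360.0;
        double saturation = 0.5 + (rng.NextDouble() - 0.5) * 0.4; // roughly 0.3 to 0.7
        return HsvToRgb(hue, saturation, 0.9);
    }

    // Standard HSV -> RGB conversion; h in degrees [0, 360), s and v in [0, 1].
    public static (byte r, byte g, byte b) HsvToRgb(double h, double s, double v)
    {
        double c = v * s;
        double x = c * (1 - Math.Abs((h / 60.0) % 2 - 1));
        double m = v - c;
        double r, g, b;
        if (h < 60)       { r = c; g = x; b = 0; }
        else if (h < 120) { r = x; g = c; b = 0; }
        else if (h < 180) { r = 0; g = c; b = x; }
        else if (h < 240) { r = 0; g = x; b = c; }
        else if (h < 300) { r = x; g = 0; b = c; }
        else              { r = c; g = 0; b = x; }
        return ((byte)((r + m) * 255), (byte)((g + m) * 255), (byte)((b + m) * 255));
    }
}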

The way the colors work is by redrawing each of the individual panel canvases with the same parameters, but the newly chosen color. Again, this makes it so it only has to happen a single time and the panels can then be blitted to the main canvas on each frame.

All in all, this was a fun project that I’m glad I had the chance to work on.


The Balanced Approach: Hackathons and Marathons

March 2nd, 2013 by admin

The other day a blog post called “Hackathons are bad for you” struck a chord with developers and other members of the technology world. The post, from Chinmay Pendharkar, a developer in Singapore, thoughtfully called out the code-all-night-drink-too-much-coffee-and-alcohol-and-eat-junk-food mentality. It received hundreds of kudos from the obviously over-tired and over-caffeinated developer community. “Chinpen” makes a lot of good points, especially when he talks about culture and the glorification of the geek lifestyle.

We also give him a thumbs up for making concrete suggestions around healthy hackathons. (We've seen some of those guidelines in place locally. For example, the Battle for the Charles Startup Weekend organizers made great efforts to supply healthy eats and gave everyone reusable water bottles so they could hydrate without generating hundreds of empty disposable water bottles.)

Like everything in this world, there is room for a healthy balance.  Hackathons crackle with creative energy.  They can be a wonderful source of new ideas and inspiration.  Our own team is currently participating in the Intel Ultimate Coder Challenge, and we’re all excited and energized by the new technology and techniques.  We are already looking at ways we can employ these in our everyday work.

Over the last five years, we've grown Infrared5 significantly while holding the line on unrealistic release schedules and development timelines that deplete us mentally and physically. While we have crunch times like everyone else, we offer comp time to make up for overtime. We encourage restful nights and weekends for restoring our creative selves. Walking the dogs who are our office companions makes for great partner meetings. Keith and Rebecca have taken up running, and Rebecca plans to compete in 10 races this year, including one half-marathon. She also wants to run a 5K at under 8-minute miles.

And yet it isn't all bean sprouts and granola — as many of you know, we have the (infamous) Infrared5 “beer:30” get-together on Friday afternoons, where we connect socially as a team and do some craft beer sampling. This is an important part of our healthy balance.

Last week we spent part of this get-together brainstorming our “Wicked Ten” – how we define projects we would all like to work on.  Number 6 on the list was “Reasonable timeline/good budget.”  While this may seem obvious, it is rarer than we’d all like.  Yet, we know that some of the work we are proudest of comes when we work with people who also take time off to rest their creative muscles and exercise their physical bodies.

How are you achieving balance?