Infrared5 Ultimate Coder Update 4: Flamethrowers, Wingsuits, Rearview Mirrors and Face Tracking!

March 18th, 2013 by admin

This post was featured on Intel Software’s blog, in conjunction with Intel’s Ultimate Coder Challenge. Keep checking back to read our latest updates!
Week three seemed to go much better for the Infrared5 team. We are back on our feet with head tracking, and despite the judges' lack of confidence in our ability to track eyes, we still believe we've got a decent chance of pulling it off. Yes, it's true, as Nicole said in her post this week, that the Intel Perceptual Computing (IPC) SDK isn't yet up to the task. She had an interview with the Perceptual Computing team, and they told her "that eye tracking was going to be implemented later". What's funny about the lack of eye tracking, and even decent gaze tracking, in the IPC SDK is that the contest marketing is showing this:



Yes we know it’s just marketing, but it is a pretty misleading image. They have a 3D mesh over a guy’s face giving the impression that the SDK can do AAM and POSIT. That would be so cool!  Look out FaceAPI! Unfortunately it totally doesn’t do that. At least not yet.

This isn't to say that Intel is taking a bad approach with the IPC SDK beta either. They are trying out a lot of things at once rather than getting lost in the specifics of just a few features. This lets developers tell them what they actually want to do with the SDK, without Intel spending tremendous effort on features that would never be used.

The lack of decent head, gaze and eye tracking is what's inspired us to eventually release our tracking code as open source. Our hope is that future developers can leverage our work on these features and not have to go through the pain we did in this contest. Maybe Intel will even merge our code into the IPC SDK and we can continue to make the product better together.

Another reason we are sticking with our plan on gaze and eye tracking is that we feel strongly, as do the judges, that these features are some of the most exciting aspects of the perceptual computing camera. On a convertible Ultrabook, people's hands are busy with typing, touch gestures, etc., so an interface that works using your face is a natural fit for this kind of setup.

Latest Demo of Kiwi Catapult Revenge

Check out the latest developments with the Unity Web Player version. We’ve added a new fireball/flamethrower style effect, updated skybox, sheep and more. Note that this is still far from final art and behavior for the game, but we want to continue showing the process we are going through by providing these snapshots of the game in progress. This build requires the free Brass Monkey app for iOS or Android.

A Polished Experience

In addition to being thoroughly entertained by the judges’ video blooper this week, one thing we heard consistently from them is that they were expecting more polished apps from the non-individual teams. We couldn’t agree more! One advantage that we have in the contest is that we have a fantastic art and game design team. That’s not to say our tech skills are lacking either. We are at our core a very technically focused company, but we tend not to compartmentalize the design process and the technology implementation in projects we take on. Design and technology have to work together in harmony to create an amazing user experience, and that’s exactly what we’re doing in this challenge.

Game design is a funny, flexible and agile process. What you set out to do in the beginning rarely ends up being what you make in the end. Our initial idea started as a sort of Mad Max road warrior style driving and shooting game (thus Sascha thinking ours was a racing game early on), but after reading some bizarre news articles on eradicating cats in New Zealand, we decided the story of Cats vs. Kiwis should be the theme. Plus, Rebecca and Aaron really wanted to try out this 2D paper, pop-up book style, and the Kiwi story really lends itself to that look.

Moving to this new theme kept most of the core game mechanics from the driving game. Tracking with the head and eyes to shoot and using the phone as a virtual steering wheel are exactly the same as in the road warrior idea. Since our main character Karl Kiwi has magical powers and can fly, we made it so he would be off the ground (unlike a car that's fixed to the ground). Another part of the story is that Karl can breathe fire like a dragon, so we thought an excellent way to use the perceptual computing camera would be to have the player open their mouth to shoot fire. Shooting regular bullets didn't work with the new character either, so we took some inspiration from SNL's Laser Cats and the many laser cat memes, and decided that he should be able to shoot lasers from his eyes. Believe it or not, we have been wanting to build a game involving animals and lasers for a while now. "Invasion of the Killer Cuties" was a game we concepted over two years ago where you fly a fighter plane in space against cute rodents that shoot lasers from their eyes (initial concept art shown below).



Since Chris wrote up the initial game design document (GDD) for Kiwi Catapult Revenge there have been plenty of other changes we’ve made throughout the contest. One example: our initial pass at fire breathing (a spherical projectile) wasn’t really getting the effect we wanted. In the GDD it was described as a fireball so this was a natural choice. What we found though is that it was hard to hit the cats, and the ball didn’t look that good either. We explored how dragon fire breathing is depicted in movies, and the effect is much more like how a flamethrower works. The new fire breathing effect that John implemented this week is awesome! And we believe it adds to the overall polish of our entry for the contest.

(image credit MT Falldog)


Another aspect of the game that wasn’t really working so far was that the main character was never shown. We chose a first person point of view so that the effect of moving your head and peering around items would feel incredibly immersive, giving the feeling that you are really right in this 3D world. However, this meant that you would never see Karl, our protagonist.

Enter the rear view mirror effect. We took a bit of inspiration from the super cool puppets that Sixense showed last week, and from this video of an insane wingsuit base jump, and came up with a way to show off our main character. Karl Kiwi will be fitted with a rear view mirror so that he can see what's behind him, and you as the player can watch the character move the same way you do. When you tilt your head, Karl will tilt his; when you look right, so will Karl; and when you open your mouth, Karl's beak will open. This will all happen in real time, and the effect will really show the power of the perceptual computing platform that Intel has provided.

Head Tracking Progress Plus Code and Videos

It wouldn't be a proper Ultimate Coder post without some video and some code, so we've provided some snippets for your perusal. Steff did a great job of documenting his progress this week, and we want to show you step by step where we are heading by sharing a bit of code and some video for each of these face detection examples. Steff is working from this plan, knocking off each of the individual algorithms step by step. Note that this week's example requires the OpenCV library and a C++ compiler for Windows.

This last week of Steff's programming was all about two things: 1) switching from working entirely in Unity (with C#) to a C++ workflow in Visual Studio, and 2) refining our face tracking algorithm. As noted in last week's post, we hit a roadblock trying to write everything in C# in Unity with DLLs for the Intel SDK and OpenCV. There were just limits to the port of OpenCV that we needed to shed. So, we spent some quality time setting up in VS 2012 Express and enjoying the sharp sting of pointers, references, and those types of lovely things that we have avoided by working in C#. There is good news, however: we got back the amount of lower level control needed to refine face detection!

Our main refinement this week was to break through the limitations we encountered when implementing the Viola-Jones detection method using Haar Cascades. It's a great way to find a face, but it's not the best for tracking a face from frame to frame. It has limitations in orientation; e.g. if the face is tilted to one side, the Haar Cascade no longer detects a face. Another drawback is that while looking for a face, the algorithm churns through the image one block of pixels at a time, which can really slow things down. To break through this limitation, we took inspiration from the implementation by the team at ROS.org. They have done a nice job putting face tracking together using Python, OpenCV, and an RGB camera + Kinect. Following their example, we have implemented feature detection with GoodFeaturesToTrack and then tracked each feature from frame to frame using Optical Flow. The video below shows the difference between the two methods and also includes a first pass at creating a blue screen from the depth data.
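
For reference, the Viola-Jones detection step we keep mentioning boils down to a single cvHaarDetectObjects call. Here is a minimal sketch in OpenCV's C API; the cascade file is one of the stock frontal-face cascades that ship with OpenCV, and the function name is illustrative only (our actual detection lives in the FaceDetection class used in the full listing further down):

#include "cv.h"
#include "highgui.h"

// Minimal Viola-Jones face detection sketch. Expects a single-channel,
// histogram-equalized grayscale frame.
CvRect detectLargestFace(IplImage *gray)
{
    static CvHaarClassifierCascade *cascade =
        (CvHaarClassifierCascade *)cvLoad("haarcascade_frontalface_alt.xml", 0, 0, 0);
    static CvMemStorage *storage = cvCreateMemStorage(0);

    if (!cascade)
        return cvRect(0, 0, 0, 0); // cascade failed to load

    cvClearMemStorage(storage);

    // Scan the whole image at multiple scales; keep only the biggest hit.
    CvSeq *faces = cvHaarDetectObjects(gray, cascade, storage, 1.2, 3,
                                       CV_HAAR_FIND_BIGGEST_OBJECT, cvSize(40, 40));
    if (faces && faces->total > 0)
        return *(CvRect *)cvGetSeqElem(faces, 0);

    // A zero-sized rect signals "no face this frame".
    return cvRect(0, 0, 0, 0);
}

Every frame that goes through this costs a full multi-scale scan, which is exactly why we only want to fall back to it when the optical-flow tracker loses the face.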

This week, we will be adding depth data into this tracking algorithm. With depth, we will be able to refine our region of interest to include a good estimate of face size, and we will also be able to knock out the background to speed up face detection with the Haar Cascades. Another critical step is integrating our face detection algorithms into the Unity game. We look forward to seeing how all this goes and filling you in with next week's post!
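
The background knockout is conceptually simple: threshold the depth map into a binary mask and only let the detector look at the foreground. A rough sketch of the idea follows; the 16-bit depth image and the near-limit threshold are stand-ins for whatever the Perceptual Computing SDK actually delivers, so treat this as a sketch rather than our final code:

#include "cv.h"

// Build a binary foreground mask from a 16-bit depth image (values in mm).
// Pixels closer than nearLimitMM are kept; everything else is zeroed out.
void buildForegroundMask(IplImage *depth16, IplImage *mask8, int nearLimitMM)
{
    cvZero(mask8);
    for (int y = 0; y < depth16->height; y++)
    {
        unsigned short *depthRow = (unsigned short *)(depth16->imageData + y * depth16->widthStep);
        unsigned char *maskRow = (unsigned char *)(mask8->imageData + y * mask8->widthStep);
        for (int x = 0; x < depth16->width; x++)
            maskRow[x] = (depthRow[x] > 0 && depthRow[x] < nearLimitMM) ? 255 : 0;
    }

    // Erode then dilate to clean up speckle so the mask doesn't punch holes in the face region.
    cvErode(mask8, mask8, NULL, 1);
    cvDilate(mask8, mask8, NULL, 2);
}

The same mask can then be handed to cvGoodFeaturesToTrack as its mask argument, or used to black out the frame before running the Haar cascade, so neither step wastes time on the background.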

We are also really excited about all the other teams’ progress so far, and in particular we want to congratulate Lee on making a super cool video last week!  We had some plans to do a more intricate video based on Lee’s, but a huge snowstorm in Boston put a bit of a wrench in those plans. Stay tuned for next week’s post though, as we’ve got some exciting (and hopefully funny) stuff to show you!

For you code junkies out there, here is a code snippet showing how we implemented GoodFeaturesToTrack and Lucas-Kanade Optical Flow:


#include "stdafx.h"

#include "cv.h"

#include "highgui.h"

#include <stdio.h>

#include <stdlib.h>

#include <string.h>

#include <assert.h>

#include <math.h>

#include <float.h>

#include <limits.h>

#include <time.h>

#include <ctype.h>

#include <vector>

#include "CaptureFrame.h"

#include "FaceDetection.h"

using namespace cv;

using namespace std;

static void help()

{

// print a welcome message, and the OpenCV version

cout << "\nThis is a demo of Robust face tracking use Lucas-Kanade Optical Flow,\n"

"Using OpenCV version %s" << CV_VERSION << "\n"

<< endl;

cout << "\nHot keys: \n"

"\tESC - quit the program\n"

"\tr - restart face tracking\n" << endl;

}

// function declaration for drawing the region of interest around the face

void drawFaceROIFromRect(IplImage *src, CvRect *rect);

// function declaration for finding good features to track in a region

int findFeatures(IplImage *src, CvPoint2D32f *features, CvBox2D roi);

// function declaration for finding a trackbox around an array of points

CvBox2D findTrackBox(CvPoint2D32f *features, int numPoints);

// function declaration for finding the distance a point is from a given cluster of points

int findDistanceToCluster(CvPoint2D32f point, CvPoint2D32f *cluster, int numClusterPoints);

// Storage for the previous gray image

IplImage *prevGray = 0;

// Storage for the previous pyramid image

IplImage *prevPyramid = 0;

// for working with the current frame in grayscale

IplImage *gray = 0;

// for working with the current frame in grayscale2 (for L-K OF)

IplImage *pyramid = 0;

// max features to track in the face region

int const MAX_FEATURES_TO_TRACK = 300;

// max features to add when we search on top of an existing pool of tracked points

int const MAX_FEATURES_TO_ADD = 300;

// min features that we can track in a face region before we fail back to face detection

int const MIN_FEATURES_TO_RESET = 6;

// the threshold for the x,y mean squared error indicating that we need to scrap our current track and start over

float const MSE_XY_MAX = 10000;

// threshold for the standard error on x,y points we're tracking

float const STANDARD_ERROR_XY_MAX = 3;

// initial multiplier for expanding the region of interest when we need to search for more features

double const EXPAND_ROI_INIT = 1.02;

// max distance a newly found feature point can be from the existing cluster before we discard it

int const ADD_FEATURE_MAX_DIST = 20;

int main(int argc, char **argv)

{

// Init some vars and const

// name the window

const char *windowName = "Robust Face Detection v0.1a";

// box for defining the region where a face was detected

CvRect *faceDetectRect = NULL;

// Object faceDetection of the class "FaceDetection"

FaceDetection faceDetection;

// Object captureFrame of the class "CaptureFrame"

CaptureFrame captureFrame;

// for working with the current frame

IplImage *currentFrame;

// for testing if the stream is finished

bool finished = false;

// for storing the features

CvPoint2D32f features[MAX_FEATURES_TO_TRACK] = {0};

// for storing the number of current features that we're tracking

int numFeatures = 0;

// box for defining the region where a features are being tracked

CvBox2D featureTrackBox;

// multiplier for expanding the trackBox

float expandROIMult = 1.02;

// threshold number for adding more features to the region

int minFeaturesToNewSearch = 50;

// Start doing stuff ------------------>

// Create a new window

cvNamedWindow(windowName, 1);

// Capture from the camera

captureFrame.StartCapture();

// initialize the face tracker

faceDetection.InitFaceDetection();

// capture a frame just to get the sizes so the scratch images can be initialized

finished = captureFrame.CaptureNextFrame();

if (finished)

{

captureFrame.DeallocateFrames();

cvDestroyWindow(windowName);

return 0;

}

currentFrame = captureFrame.getFrameCopy();

// init the images

prevGray = cvCreateImage(cvGetSize(currentFrame), IPL_DEPTH_8U, 1);

prevPyramid = cvCreateImage(cvGetSize(currentFrame), IPL_DEPTH_8U, 1);

gray = cvCreateImage(cvGetSize(currentFrame), IPL_DEPTH_8U, 1);

pyramid = cvCreateImage(cvGetSize(currentFrame), IPL_DEPTH_8U, 1);

// iterate through each frame

while(1)

{

// check if the video is finished (kind of silly since we're only working on live streams)

finished = captureFrame.CaptureNextFrame();

if (finished)

{

captureFrame.DeallocateFrames();

cvDestroyWindow(windowName);

return 0;

}

// save a reference to the current frame

currentFrame = captureFrame.getFrameCopy();

// check if we have a face rect

if (faceDetectRect)

{

// Create a grey version of the current frame

cvCvtColor(currentFrame, gray, CV_RGB2GRAY);

// Equalize the histogram to reduce lighting effects

cvEqualizeHist(gray, gray);

// check if we have features to track in our faceROI

if (numFeatures > 0)

{

bool died = false;

//cout << "\nnumFeatures: " << numFeatures;

// track them using L-K Optical Flow

char featureStatus[MAX_FEATURES_TO_TRACK];

float featureErrors[MAX_FEATURES_TO_TRACK];

CvSize pyramidSize = cvSize(gray->width + 8, gray->height / 3);

CvPoint2D32f *featuresB = new CvPoint2D32f[MAX_FEATURES_TO_TRACK];

CvPoint2D32f *tempFeatures = new CvPoint2D32f[MAX_FEATURES_TO_TRACK];

cvCalcOpticalFlowPyrLK(prevGray, gray, prevPyramid, pyramid, features, featuresB, numFeatures, cvSize(10,10), 5, featureStatus, featureErrors, cvTermCriteria(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 20, 0.03), 0);

numFeatures = 0;

float sumX = 0;

float sumY = 0;

float meanX = 0;

float meanY = 0;

// copy back to features, but keep only high status points

// and count the number using numFeatures

for (int i = 0; i < MAX_FEATURES_TO_TRACK; i++)

{

if (featureStatus[i])

{

// quick prune just by checking if the point is outside the image bounds

if (featuresB[i].x < 0 || featuresB[i].y < 0 || featuresB[i].x > gray->width || featuresB[i].y > gray->height)

{

// do nothing

}

else

{

// count the good values

tempFeatures[numFeatures] = featuresB[i];

numFeatures++;

// sum up to later calc the mean for x and y

sumX += featuresB[i].x;

sumY += featuresB[i].y;

}

}

//cout << "featureStatus[" << i << "] : " << featureStatus[i] << endl;

}

//cout << "numFeatures: " << numFeatures << endl;

// calc the means

meanX = sumX / numFeatures;

meanY = sumY / numFeatures;

// prune points using mean squared error

// caclulate the squaredError for x, y (square of the distance from the mean)

float squaredErrorXY = 0;

for (int i = 0; i < numFeatures; i++)

{

squaredErrorXY += (tempFeatures[i].x - meanX) * (tempFeatures[i].x - meanX) + (tempFeatures[i].y  - meanY) * (tempFeatures[i].y - meanY);

}

//cout << "squaredErrorXY: " << squaredErrorXY << endl;

// calculate mean squared error for x,y

float meanSquaredErrorXY = squaredErrorXY / numFeatures;

//cout << "meanSquaredErrorXY: " << meanSquaredErrorXY << endl;

// mean squared error must be greater than 0 but less than our threshold (big number that would indicate our points are insanely spread out)

if (meanSquaredErrorXY == 0 || meanSquaredErrorXY > MSE_XY_MAX)

{

numFeatures = 0;

died = true;

}

else

{

// Throw away the outliers based on the x-y variance

// store the good values in the features array

int cnt = 0;

for (int i = 0; i < numFeatures; i++)

{

float standardErrorXY = ((tempFeatures[i].x - meanX) * (tempFeatures[i].x - meanX) + (tempFeatures[i].y - meanY) * (tempFeatures[i].y - meanY)) / meanSquaredErrorXY;

if (standardErrorXY < STANDARD_ERROR_XY_MAX)

{

// we want to keep this point

features[cnt] = tempFeatures[i];

cnt++;

}

}

numFeatures = cnt;

// only bother with fixing the tail of the features array if we still have points to track

if (numFeatures > 0)

{

// set everything past numFeatures to -10,-10 in our updated features array

for (int i = numFeatures; i < MAX_FEATURES_TO_TRACK; i++)

{

features[i] = cvPoint2D32f(-10,-10);

}

}

}

// check if we're below the threshold min points to track before adding new ones

if (numFeatures < minFeaturesToNewSearch)

{

// add new features

// up the multiplier for expanding the region

expandROIMult *= EXPAND_ROI_INIT;

// expand the trackBox

float newWidth = featureTrackBox.size.width * expandROIMult;

float newHeight = featureTrackBox.size.height * expandROIMult;

CvSize2D32f newSize = cvSize2D32f(newWidth, newHeight);

CvBox2D newRoiBox = {featureTrackBox.center, newSize, featureTrackBox.angle};

// find new points

CvPoint2D32f additionalFeatures[MAX_FEATURES_TO_ADD] = {0};

int numAdditionalFeatures = findFeatures(gray, additionalFeatures, newRoiBox);

int endLoop = MAX_FEATURES_TO_ADD;

if (MAX_FEATURES_TO_TRACK < endLoop + numFeatures)

endLoop -= numFeatures + endLoop - MAX_FEATURES_TO_TRACK;

// copy new stuff to features, but be mindful of the array max

for (int i = 0; i < endLoop; i++)

{

// TODO check if they are way outside our stuff????

int dist = findDistanceToCluster(additionalFeatures[i], features, numFeatures);

if (dist < ADD_FEATURE_MAX_DIST)

{

features[numFeatures] = additionalFeatures[i];

numFeatures++;

}

}

// TODO check for duplicates???

// check if we're below the reset min

if (numFeatures < MIN_FEATURES_TO_RESET)

{

// if so, set to numFeatures 0, null out the detect rect and do face detection on the next frame

numFeatures = 0;

faceDetectRect = NULL;

died = true;

}

}

else

{

// reset the expand roi mult back to the init

// since this frame didn't need an expansion

expandROIMult = EXPAND_ROI_INIT;

}

// find the new track box

if (!died)

featureTrackBox = findTrackBox(features, numFeatures);

// free the per-frame scratch arrays allocated above so we don't leak memory on every frame

delete [] featuresB;

delete [] tempFeatures;

}

else

{

// convert the faceDetectRect to a CvBox2D

CvPoint2D32f center = cvPoint2D32f(faceDetectRect->x + faceDetectRect->width * 0.5, faceDetectRect->y + faceDetectRect->height * 0.5);

CvSize2D32f size = cvSize2D32f(faceDetectRect->width, faceDetectRect->height);

CvBox2D roiBox = {center, size, 0};

// get features to track

numFeatures = findFeatures(gray, features, roiBox);

// verify that we found features to track on this frame

if (numFeatures > 0)

{

// find the corner subPix

cvFindCornerSubPix(gray, features, numFeatures, cvSize(10, 10), cvSize(-1,-1), cvTermCriteria(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 20, 0.03));

// define the featureTrackBox around our new points

featureTrackBox = findTrackBox(features, numFeatures);

// calculate the minFeaturesToNewSearch from our detected face values

minFeaturesToNewSearch = 0.9 * numFeatures;

// wait for the next frame to start tracking using optical flow

}

else

{

// try for a new face detect rect for the next frame

faceDetectRect = faceDetection.detectFace(currentFrame);

}

}

}

else

{

// reset the current features

numFeatures = 0;

// try for a new face detect rect for the next frame

faceDetectRect = faceDetection.detectFace(currentFrame);

}

// save gray and pyramid frames for next frame

cvCopy(gray, prevGray, 0);

cvCopy(pyramid, prevPyramid, 0);

// draw some stuff into the frame to show results

if (numFeatures > 0)

{

// show the features as little dots

for(int i = 0; i < numFeatures; i++)

{

CvPoint myPoint = cvPointFrom32f(features[i]);

cvCircle(currentFrame, myPoint, 2, CV_RGB(0, 255, 0), CV_FILLED);

}

// show the tracking box as an ellipse

cvEllipseBox(currentFrame, featureTrackBox, CV_RGB(0, 0, 255), 3);

}

// show the current frame in the window

cvShowImage(windowName, currentFrame);

// wait for next frame or keypress

char c = (char)cvWaitKey(30);

if(c == 27)

break;

switch(c)

{

case 'r':

numFeatures = 0;

// try for a new face detect rect for the next frame

faceDetectRect = faceDetection.detectFace(currentFrame);

break;

}

}

// Release the image and tracker

captureFrame.DeallocateFrames();

// Destroy the window previously created

cvDestroyWindow(windowName);

return 0;

}

// draws a region of interest in the src frame based on the given rect

void drawFaceROIFromRect(IplImage *src, CvRect *rect)

{

// Points to draw the face rectangle

CvPoint pt1 = cvPoint(0, 0);

CvPoint pt2 = cvPoint(0, 0);

// setup the points for drawing the rectangle

pt1.x = rect->x;

pt1.y = rect->y;

pt2.x = pt1.x + rect->width;

pt2.y = pt1.y + rect->height;

// Draw face rectangle

cvRectangle(src, pt1, pt2, CV_RGB(255,0,0), 2, 8, 0 );

}

// finds features and stores them in the given array

// TODO move this method into a Class

int findFeatures(IplImage *src, CvPoint2D32f *features, CvBox2D roi)

{

//cout << "findFeatures" << endl;

int featureCount = 0;

double minDistance = 5;

double quality = 0.01;

int blockSize = 3;

int useHarris = 0;

double k = 0.04;

// Create a mask image to be used to select the tracked points

IplImage *mask = cvCreateImage(cvGetSize(src), IPL_DEPTH_8U, 1);

// Begin with all black pixels

cvZero(mask);

// Create a filled white ellipse within the box to define the ROI in the mask.

cvEllipseBox(mask, roi, CV_RGB(255, 255, 255), CV_FILLED);

// Create the temporary scratchpad images (cvGoodFeaturesToTrack requires 32-bit float, single-channel images here)

IplImage *eig = cvCreateImage(cvGetSize(src), IPL_DEPTH_32F, 1);

IplImage *temp = cvCreateImage(cvGetSize(src), IPL_DEPTH_32F, 1);

// init the corner count int

int cornerCount = MAX_FEATURES_TO_TRACK;

// Find keypoints to track using Good Features to Track

cvGoodFeaturesToTrack(src, eig, temp, features, &cornerCount, quality, minDistance, mask, blockSize, useHarris, k);

// iterate through the array, compacting valid points to the front so the

// caller can treat the first featureCount entries as the usable features

for (int i = 0; i < cornerCount; i++)

{

if ((features[i].x == 0 && features[i].y == 0) || features[i].x > src->width || features[i].y > src->height)

{

// skip points that landed outside the image or were never filled in

}

else

{

features[featureCount] = features[i];

featureCount++;

}

}

// release the mask and scratchpad images allocated above

cvReleaseImage(&mask);

cvReleaseImage(&eig);

cvReleaseImage(&temp);

//cout << "\nfeatureCount = " << featureCount << endl;

// return the feature count

return featureCount;

}

// finds the track box for a given array of 2d points

// TODO move this method into a Class

CvBox2D findTrackBox(CvPoint2D32f *points, int numPoints)

{

//cout << "findTrackBox" << endl;

//cout << "numPoints: " << numPoints << endl;

CvBox2D box;

// matrix for helping calculate the track box (2-channel float so we keep sub-pixel accuracy)

CvMat *featureMatrix = cvCreateMat(1, numPoints, CV_32FC2);

// collect the feature points in the feature matrix

for(int i = 0; i < numPoints; i++)

cvSet2D(featureMatrix, 0, i, cvScalar(points[i].x, points[i].y));

// create an ellipse off of the featureMatrix

box = cvFitEllipse2(featureMatrix);

// release the matrix (cause we're done with it)

cvReleaseMat(&featureMatrix);

// return the box

return box;

}

int findDistanceToCluster(CvPoint2D32f point, CvPoint2D32f *cluster, int numClusterPoints)

{

int minDistance = 10000;

for (int i = 0; i < numClusterPoints; i++)

{

int distance = abs(point.x - cluster[i].x) + abs(point.y - cluster[i].y);

if (distance < minDistance)

minDistance = distance;

}

return minDistance;

}


The Project Discovery Phase, Dissected

March 14th, 2013 by Dominick Accattato

When clients first reach out to Infrared5, they are often extremely excited about turning their ideas into a reality. We share their enthusiasm and can't wait to dig into the project details. Sometimes, however, there are numerous great ideas and not a lot of concrete information. This can be true for any project, from games to enterprise applications. When this is the case, we sometimes suggest a Discovery and Planning Phase.

The Discovery and Planning Phase allows both the client and our team leaders to work together to elicit requirements and document the system architecture in a way that is meaningful for developers, designers and the client. It is typically very collaborative in nature. This phase also ties in disciplines such as business analysis, domain driven design, technical writing and design.

It’s important to note that not every project requires a Discovery and Planning Phase. Not all discovery phases are set up the same way. Some clients have a very detailed understanding of what they are trying to accomplish. In these cases, the client may already have specifications, but they are unable to develop the very complex technical component. In these cases, we suggest a separate path; one in which a focused Proof of Concept is provided. (We will cover Proof of Concept in a future post.) For now, we’ll assume the client has a larger system and is in need of defining the project. This is what the Discovery and Planning Phase attempts to achieve.

What is a Discovery and Planning phase?
A discovery and planning phase allows for the client to have direct access to our senior software developers and/or creative lead in order to define project requirements. With these requirements in hand, our development and design team can investigate and document the software/design components of the project. The goal is to clarify scope and verify the parties are on the same page prior to beginning production. Another benefit of the discovery phase is that certain technical challenges may surface from these discussions. (Pioneering applications are a specialty of the house here at Infrared5.) These high risk items may lead to a phased approach whereby the highest risk items are given their own Proof of Concept phases. (This is discussed with the client so that they have an understanding of our findings and why we have suggested a multi project, phased approach.) In the end, clients have the opportunity to remove the high risk item if it doesn’t fit with their release date or budget.

Who is involved in the Discovery Phase?
During the Discovery Phase, the team consists of a project manager and technical lead who are in charge of assessing the technical challenges that lie ahead for the development phase. The technical leads here at Infrared5 each have their own expertise. For instance, if the client approached us with an immersive 3D game, we would allocate one of our senior game developers to the Discovery and Planning Phase. The same would be true of a complex web application. One of the potential benefits of using a group like Infrared5 is that we also maintain a diverse group of developers who are experts in their own field of discipline. From gaming to streaming applications, we employ a renowned team of developers and designers who are truly experts in their field. Also during this phase, our creative team works closely with the client in order to flesh out UI designs, experience design and branding needs of the project. The creative team helps clients define their goals and the best strategy to meet them.

What can be accomplished during the Discovery phase?
One of the common questions we get from clients is, "What are the benefits of doing a Discovery and Planning Phase?" In most cases, there are a few documents produced: the Technical Requirements Specification and the Software Requirements Specification. Depending on the needs of the project, however, it may require only one of the two, or a hybrid of both. Another document which may be produced during the Discovery and Planning Phase is a High Level Technical Overview. Just as it sounds, this document is high level. It does not aim to get into too much detail at the software component level. Instead, it resolves the more general system architecture and specifies which components may be used for the development phase.
For gaming projects, there are different details to focus on and these must be determined before the developers begin programming. A Game Design Document is necessary for describing the game play and the mechanics behind some of the simulations. Often this requires the technical lead and game designer to work together.

For both gaming and applications, the creative team delivers initial design concepts and wireframes to be augmented later in the design phase. The creative team also works closely with the client in regards to the user experience.

Ultimately, the Discovery Phase ensures both parties are aligned before any, more extensive, design or development begins in later phases.

What is delivered at the end of a Discovery Phase?
At the end of the Discovery Phase, the three important documents delivered to a client are:
• High Level Technical Overview
• Technical Requirements Specification
• Software Requirements Specification

In the case of a gaming project, the typical document would be:
• Game Design Document

In the case of both gaming and application projects, the following design related material is provided:
• Design Concepts
• Wireframes

Upon completion of the Discovery Phase, Infrared5 has enough information to provide more accurate estimates and timelines for completion. Each of these documents is important, and we suggest searching online to further your understanding of their purposes. This article illustrates what steps are taken and what is delivered at the end of our Discovery and Planning Phase.


The Balanced Approach: Hackathons and Marathons

March 2nd, 2013 by admin

The other day a blog post called "Hackathons are bad for you" struck a chord with developers and other members of the technology world.  The post, from Chinmay Pendharkar, a developer in Singapore, thoughtfully called out the code-all-night, drink-too-much-coffee-and-alcohol, eat-junk-food mentality.  It received hundreds of kudos from the obviously over-tired and over-caffeinated developer community. "Chinpen" makes a lot of good points, especially when he talks about culture and the glorification of the geek lifestyle.

We also give him a thumbs up for making concrete suggestions around healthy hackathons.  (We've seen some of those guidelines in place locally. For example, the Battle for the Charles Startup Weekend organizers made great efforts to supply healthy eats and gave everyone reusable water bottles so they could hydrate without generating hundreds of empty disposable bottles.)

Like everything in this world, there is room for a healthy balance.  Hackathons crackle with creative energy.  They can be a wonderful source of new ideas and inspiration.  Our own team is currently participating in the Intel Ultimate Coder Challenge, and we’re all excited and energized by the new technology and techniques.  We are already looking at ways we can employ these in our everyday work.

Over the last five years, we've grown Infrared5 significantly while holding the line on unrealistic release schedules and development timelines that deplete us mentally and physically.  While we have crunch times like everyone else, we offer comp time to make up for overtime. We encourage restful nights and weekends for restoring our creative selves.  Walking the dogs who are our office companions makes for great partner meetings.  Keith and Rebecca have taken up running, and Rebecca plans to compete in 10 races this year, including one half-marathon.  She also wants to run a 5K at an under-8-minute-mile pace.

And yet it isn't all bean sprouts and granola — as many of you know, we have our (infamous) "Infrared5 beer:30" get-together on Friday afternoons, where we connect socially as a team and do some craft beer sampling.  This is an important part of our healthy balance.

Last week we spent part of this get-together brainstorming our “Wicked Ten” – how we define projects we would all like to work on.  Number 6 on the list was “Reasonable timeline/good budget.”  While this may seem obvious, it is rarer than we’d all like.  Yet, we know that some of the work we are proudest of comes when we work with people who also take time off to rest their creative muscles and exercise their physical bodies.

How are you achieving balance?

Face Tracking, Depth Sensors, Flying and Art = Progress!

February 28th, 2013 by admin

This post was featured on Intel Software’s blog, in conjunction with Intel’s Ultimate Coder Challenge. Keep checking back to read our latest updates!

This week our team made a ton of progress on our game Kiwi Catapult Revenge. In their posts, the judges made some suggestions that the projects we are undertaking are ambitious and are perhaps a bit more than what can be accomplished in seven weeks. We have to agree that none of the teams are taking the easy way out, and we feel that because everyone is taking on such lofty goals it will only spur on more creativity from the entire group of competitors. Lucky for us, the Infrared5 guys/gals are accustomed to tight deadlines, insane schedules and hard to achieve deliverables, so the Ultimate Coder Challenge just feels like a demanding client. But it’s not like this is going to be easy. We’ve got quite a few challenges and areas of risk that we need to keep an eye on, plus it’s so tempting to continue to add scope, especially when you have the “fun factor” element which is so essential to creating a good game.

Speaking of the competitors, what many people following the contest may not realize is that we actually all are in communication quite a bit. We’ve got our own informal mailing list going, and there’s a lot of back and forth between the teams and sharing of ideas across projects. There is more of a collaborative spirit rather than a cut-throat competitive nature amongst the teams. We’ve even got a Google Hangout session scheduled this week so that we can all talk face to face. Unfortunately, Lee’s super cool video chat program isn’t ready for the task. We, at Infrared5, strongly believe that sharing ideas spurs on innovation and it will up our game to be so open with the other teams competing. After all, great ideas don’t happen in a vacuum.

In addition to our post this week, we did a quick video where Chris talked to our team members about head tracking, art and more.

Face Tracking

Let's start with the biggest challenge we've given ourselves: face tracking. We have been playing with OpenCV and the Intel Perceptual Computing SDK in different Unity proof of concept projects over the last week. Looking at the plan we created at the start of the competition, our focus was on implementing basic face tracking by detecting Haar-like features. This works well, but the face detection algorithm currently has limits. If the target face is rotated too far to either side, it will not be recognized and tracked as a "face." Fortunately, we are aware of the limitation in the algorithm and have plans to implement a patch. We created Obama and Beyonce controllers so those of us with beards (Steff) can have more faces to test without bothering anyone in the office to "come and put your face in front of this screen." Our current setup will cause issues if you have a beard and wear a hat – foreheads and mouths are important with this algorithm! Check out our "custom controllers" below.

Best news of the week: the depth sensing camera is awesome! It gives much better detail than we originally saw with the samples that came packaged with the SDK. The not-as-good news: since this SDK is still in beta, the documentation is not so awesome. Things do not always match up, especially with the prepackaged port to Unity3d. We are experiencing a good amount of crashing and might have to back out of this and write some of our own C code to port in the methods that we need for mapping the depth data to the RGB data. Stay tuned for what happens there!

Back to the good news. We originally were only going to use the data from the depth sensor to wipe out the background (one of the first steps in our planned pipeline). However, the depth data is so good, it will definitely also help us later on when we are calculating the pose of the player’s head. Pose calculations depend on estimating the position of non-coplanar points (read up on POSIT if you really want to geek-out now, but we will fill in more detail on POSIT once we implement it in our system), and finding these points is going to be much less of an iterative process with this depth data since we can actually look up the depth and associated confidence level for any point in the RGB image!
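
For the curious, POSIT itself boils down to a single call once you have a handful of corresponding 3D model points and 2D image points. Here is a rough sketch of what that step could look like with OpenCV's cvPOSIT; the 3D model coordinates below are made-up placeholders rather than measured values, and the image points are assumed to be expressed relative to the image center:

#include <vector>
#include "cv.h"

// Estimate head rotation and translation from four tracked face landmarks.
// The 3D model points are rough placeholders (millimeters, nose tip at the origin);
// imagePoints must be the matching 2D locations, measured from the image center.
void estimateHeadPose(std::vector<CvPoint2D32f> &imagePoints, double focalLength)
{
    std::vector<CvPoint3D32f> modelPoints;
    modelPoints.push_back(cvPoint3D32f(0, 0, 0));       // nose tip
    modelPoints.push_back(cvPoint3D32f(-30, 35, -30));  // right eye corner
    modelPoints.push_back(cvPoint3D32f(30, 35, -30));   // left eye corner
    modelPoints.push_back(cvPoint3D32f(0, -45, -25));   // chin

    CvPOSITObject *positObject = cvCreatePOSITObject(&modelPoints[0], (int)modelPoints.size());

    float rotation[9];     // 3x3 row-major rotation matrix
    float translation[3];  // translation vector in model units
    CvTermCriteria criteria = cvTermCriteria(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 100, 1e-4);
    cvPOSIT(positObject, &imagePoints[0], focalLength, criteria, rotation, translation);

    // rotation and translation now describe the head pose relative to the camera.
    cvReleasePOSITObject(&positObject);
}

The hard part, of course, is reliably finding those non-coplanar landmarks in the first place, which is exactly where the depth data should save us a lot of iteration.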

Introducing Gaze Tracking

Because of all the detail from the depth + RGB cameras, we are optimistic that we will be able to track the player's pupils. This of course means that we will be able to get gaze tracking working with our game. In Kiwi Catapult Revenge, aiming at your targets won't just lock into the center of where you are tilting your head; we will allow you to fire precisely where you are looking, at any point in time. This one feature combined with the head tilt is where you start to really see how video games based on perceptual computing are going to have tremendous advantages over typical game controls like joypads, keyboard/mouse, etc. Now imagine adding another sensor device to the mix like Google Glass. What would be possible then? Maybe we can convince Google to give us early access to find out.

Game Engine

John has made a ton of progress on the game mechanics this week. He’s got a really good flow for flying in the game. We took inspiration from the PS3 game Flower for the player character movement we wanted to create in Kiwi Catapult Revenge. There’s a nice bounce and easing to the movement, and the ability to subtly launch over hills and come back down smoothly is going to really bring the flying capabilities of Karl Kiwi to life. John managed to get this working in a demo along with head tracking (currently mapped to the mouse movement). You can fly around (WASD keys), and look about, and get a general feel for how the game is going to play. We’ve posted a quick Unity Web Player version (here) of the demo for you to try out. Keep in mind that the controls aren’t yet mapped to the phone, nor is the artwork even close to final in this version.

Art and Game Design

Speaking of artwork, Rebecca, Aaron and Elena have been producing what everyone seems to agree is a very unique and inspiring visual style for our game. Chris did a quick interview with Rebecca and Aaron on the work they are doing and what inspired them to come up with the idea. We've included that in our video this week as well.

This week the design team plans to experiment more with paper textures and lighting, as well as rigging up some of the characters for some initial looks at animation and movement in the game.

Oh, and in case you missed it, we posted an update on the game with the background story of our main character. There you can also find some great concept art from Aaron and an early version of the 3D environment to whet your appetite.

That’s it for this week. What do you think about the playable game demo? What about our approach to face tracking? Is there anything else that we should be considering? Please let us know what you think in the comments below.

Plan of Attack and Face Tracking Challenges Using the Intel Perceptual Computing SDK

February 19th, 2013 by Chris Allen

This post was featured on Intel Software’s blog, in conjunction with Intel’s Ultimate Coder Challenge. Keep checking back to read our latest updates!

We are very excited to be working with the Intel Perceptual Computing (IPC) SDK and to be a part of the Ultimate Coder Challenge! The new hardware and software that Intel and its partners have created allows for some very exciting possibilities. It’s our goal to really push the boundaries of what’s possible using the technology. We believe that perceptual computing plays a huge role in the future of human-to-computer interaction, and isn’t just a gimmick shown in movies like Minority Report. We hope to prove out some of the ways that it can actually improve the user experience with the game that we are producing for the competition.

Before we begin with the bulk of this post, we should cover a little bit about the makeup of our team and the roles that each of us plays on the project. Unlike many of the teams in the competition, we aren't a one man show, so each of our members plays a vital role in creating the ultimate application. Here's a quick rundown of our team:

Kelly Wallick – Project Manager

TECH
Chris Allen – Producer, Architect and Game Designer
Steff Kelsey – Tech Lead, Engineer focusing on the Intel Perceptual Computing SDK inputs
John Grden – Game Developer focusing on the Game-play

ART
Rebecca Allen – Creative Director
Aaron Artessa – Art Director, Lead Artist, Characters, effects, etc.
Elena Ainley – Environment Artist and Production Art

When we first heard about the idea of the competition we started thinking about ways that we could incorporate our technology (Brass Monkey) with the new 3D image tracking inputs that Intel is making accessible to developers. Most of the examples being shown with the Perceptual Computing SDK focus on hand gestures, and we wanted to take on something a bit different. After much deliberation we arrived at the idea of using the phone as a tilt-based game controller input, and using head and eye tracking to create a truly immersive experience. We strongly believe that this combination will make for some very fun game play.

Our art team was also determined not to make the standard 3D FPS shoot-em-up game that we've seen so many times before, so we arrived at a very creative use of the tech with a wonderful background story of a young New Zealand Kiwi bird taking revenge on the evil cats that killed his family. To really show off the concept of head tracking and peering around items in the world, we decided on a paper cutout art style. Note that this blog post and the other posts we will be doing on the Ultimate Coder site are really focused on the technical challenges and the process we are taking, and much less on the art and game design aspects of the project. After all, the competition is called the Ultimate Coder, not the Ultimate Designer. If you are interested in the art and design of our project, and we hope that you are, then you should follow the posts on our company's blog that will be covering much more of those details. We will be sure to reference these posts in every blog post here as well so that you can find out more about the entire process we are undertaking.

The name of the game that we’ve come up with for the competition is called Kiwi Catapult Revenge.

So with that, let’s get right to the technical nitty gritty.

Overview of the Technology We are Using

Unity

As we wanted to make a 3D game for the competition we decided to use Unity as our platform of choice. This tool allows for fast prototyping, ease of art integration and much more. We are also well versed in using Unity for a variety of projects at Infrared5, and our Brass Monkey SDK support for it is very polished.

Brass Monkey

We figured that one of our unique advantages in the competition would be to make use of the technology that we created. Brass Monkey SDK for Unity allows us to turn the player’s smartphone into a game controller for Unity games. We can leverage the accelerometers, gyroscopes and touch screens of the device as another form of input to the game. In this case, we want to allow for steering your Kiwi bird through the environment using tilt, and allow firing and control of the speed via the touch screen on the player’s phone.

Intel Perceptual Computing SDK

We decided to leverage the IPC SDK for head tracking, face recognition and possibly more. In the case of Kiwi Catapult Revenge, the player will use his eyes for aiming (the player character can shoot lasers from his eyes). The environment will also shift according to the angle from which the user is viewing it, causing the scene to feel like real 3D. Take a look at this example using a Wiimote for a similar effect to the one we are going for. In addition, our player can breathe fire by opening his or her mouth in the shape of an "o" and pressing the fire button on the phone.

There are certainly other aspects of the SDK we hope to leverage, but we will leave those for later posts.

OpenCV

We are going to use this C-based library for more refined face tracking algorithms. Read on to find out why we chose OpenCV (opencv.org) to work in conjunction with the IPC SDK. Luckily, OpenCV was also originally developed by Intel, so hopefully that gets us additional points for using two of Intel's libraries.

Head Tracking

The biggest risk item in our project is getting head tracking that performs well enough to be a smooth experience in game play, so we’ve decided to tackle this early on.

When we first started looking at the examples that shipped with the IPC SDK there were very few dealing with head tracking. In fact it was really only in the latest release where we found anything that was even close to what we proposed to build. That, and it was in this release that they exposed these features to the Unity version of the SDK. What we found are examples that simply don’t perform very well. They are slow, not all that accurate, and unfortunately just won’t cut it for the experience we are shooting for.

To make matters worse, the plugin for Unity is very limited. It didn't allow us to manipulate much, if anything, with regards to the head tracking or face recognition algorithms. As a Unity developer you either have to accept the poorly performing pre-canned versions of the algorithms the SDK exposes, or get the raw data from the camera and do all the calculations yourself. What we found is that face tracking with what they provide gave us sub-3-frames-per-second performance that wasn't very accurate. Now to be clear, the hand gesture features are really very polished, and work well in Unity.  It seems that Intel's priority has been on those features, and head/face detection is lagging very much behind. This presents a real problem for our vision of the game, and we quickly realized that we were going to have to go about it differently if we were going to continue with our idea.

OpenCV

When we realized the current version of the IPC SDK wasn't going to cut it by itself, we started looking into alternatives. Chris had done some study of OpenCV (CV stands for computer vision) a while back, and he had a book on the subject. He suggested that we take a look at that library to see if anyone else had written more effective head and eye tracking algorithms using that tool-set. We also discovered what looked like a very polished and effective head tracking library called OpenTL. We got very excited with what we saw, but when we went to download the library, we discovered that OpenTL isn't so open after all. It's not actually open source software, and we didn't want to get involved with licensing a 3rd party tool for the competition. Likewise, the FaceAPI from SeeingMachines looked very promising, but it also carried a proprietary license.  Luckily, what we found using OpenCV appeared to be more than capable of doing the job.

Since OpenCV is a C library, we needed to figure out how to get it to work within Unity. We knew that we would need to compile a DLL that would expose the functions to the Mono-based Unity environment, or find a version out on the Internet that had already done this. Luckily we found this example, and incorporated it into our plans.
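
The basic shape of that bridge is a native DLL that exports flat C functions which Unity scripts can call through P/Invoke. Here is a hedged sketch of what such an export could look like; the function name, struct layout and plugin name are illustrative only, not our actual interface:

// Sketch of the bridge: OpenCV work lives in native C++, and a flat C entry
// point is exported from the DLL so Unity can P/Invoke it from C#.
extern "C"
{
    struct FacePose
    {
        float x;      // head center x in image coordinates
        float y;      // head center y in image coordinates
        float roll;   // in-plane tilt in degrees
        int isValid;  // nonzero when a face is currently tracked
    };

    __declspec(dllexport) int __stdcall TrackFace(unsigned char *rgbPixels,
                                                  int width, int height,
                                                  FacePose *outPose)
    {
        // Wrap the raw pixels Unity hands us in an IplImage header (no copy),
        // run the tracking pipeline, and fill outPose. Omitted in this sketch.
        (void)rgbPixels; (void)width; (void)height; (void)outPose;
        return 0;
    }
}

On the Unity side, the matching C# declaration would use [DllImport] against the plugin's DLL name and get called once per frame with the latest camera bytes.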

Use of the Depth Camera

The other concern we had was that all the examples we saw of face tracking in real time didn't make use of any special camera. They all used a simple webcam, and we really wanted to leverage the unique hardware that Intel provided us for the challenge. One subtle thing that we noticed with most of the examples was that they performed way better with the person in front of a solid background; the less noise the image had, the better they performed. So, we thought, why not use the depth sensor to block out anything behind the user's head, essentially guaranteeing less noise in the images being processed regardless of what's behind our player? This would be a huge performance boost over traditional webcams!

Application Flow and Architecture

After carefully considering our tools, we finally settled on an architecture that spelled out how all the pieces would work together. We would use the IPC SDK in Unity to get the camera frames as raw images, and the depth sensor data to block out everything but the portions of the image containing the person's head. We would then leverage OpenCV for the face tracking algorithms via a plugin to Unity.

We will be experimenting with a few different combinations of algorithms until we find something that gives us the performance we need to implement as a game controller and (hopefully) also satisfies the desired feature set of tracking the head position and rotation, identifying if the mouth is open or closed, and tracking the gaze direction of the eyes.  Each step in the process is done to set up the steps that follow.

In order to detect the general location of the face, we propose to use the Viola-Jones detection method.  The result of this method will be a smaller region of interest (ROI) for mouth and eye detection algorithms to sort through.

There are a few proposed methods to track the facial features and solve for the rotation of the head.  The first method is to use the results from the first pass to define three new ROIs and to search specifically for the mouth and the eyes using sets of comparative images designed specifically for the task.  The second method is to use an Active Appearance Model (AAM) to fit a shape model of facial features to the region.  We will go into more detail about these methods in future posts after we attempt them.
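
To make the first method concrete, here is a minimal sketch of searching for eyes only within the upper half of an already-detected face rect, using one of the stock OpenCV eye cascades; the function name and tuning values are illustrative, and the mouth search would work the same way against the lower half of the face:

#include "cv.h"

// Search for eyes only inside the upper half of a detected face rectangle.
// gray is the equalized grayscale frame; eyeCascade is a loaded Haar cascade
// such as haarcascade_eye.xml.
CvSeq *detectEyesInFace(IplImage *gray, CvRect face,
                        CvHaarClassifierCascade *eyeCascade, CvMemStorage *storage)
{
    // Eyes sit in roughly the upper half of the face rect, so restrict the search there.
    CvRect upperFace = cvRect(face.x, face.y, face.width, face.height / 2);
    cvSetImageROI(gray, upperFace);

    cvClearMemStorage(storage);
    CvSeq *eyes = cvHaarDetectObjects(gray, eyeCascade, storage, 1.1, 3, 0, cvSize(15, 15));

    cvResetImageROI(gray);

    // The returned rects are relative to upperFace, so offset them by
    // upperFace.x and upperFace.y before using them in full-frame coordinates.
    return eyes;
}

Limiting each cascade to its own small ROI keeps the per-frame cost down, which matters a lot once three of these searches have to run alongside the rest of the game.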

Tracking the gaze direction will be done by examining the ROI for each eye and determining the location of the iris and pupil by the Adaptive EigenEye method.

Trackable points will be constrained with Lucas-Kanade optical flow.  The optical flow compares the previous frame with the current one and finds the most likely locations of tracked points using a least squares estimation.

Summing it Up

We believe that we've come up with an approach that leverages the unique capabilities of the Perceptual Computing camera and actually adds to the user experience of our game concept. As we start in on the development, it's going to be interesting to see how much this changes over the next seven weeks. We already have several questions about how it's going to go: How much of what we think will work actually will? What performance tuning will we need to do? How many of the other features of the IPC SDK can we leverage to make our game more engaging? Will we have enough time to pull off such an ambitious project in such a short time frame?

Wow! That turned out to be a long post! Thanks for taking the time to read what we are up to.

We are also curious to hear from you, other developers out there. What would you do differently given our goals for the project? If you’ve got experience with computer vision algorithms, or even just want to chime in with your support, we would love to hear from you!


Seven weeks. Seven teams. ONE ULTIMATE APP!

February 6th, 2013 by Rosie

Infrared5 and Brass Monkey are excited to announce their participation in Intel Software’s Ultimate Coder Challenge, ‘Going Perceptual’! The IR5/Brass Monkey team, along with six other teams from across the globe, will be competing in this seven week challenge to build the ultimate app. The teams will be using the latest Ultrabook convertible hardware, along with the Intel Perceptual Computing SDK and camera to build the prototype. The competitors range from large teams to individual developers, and each will take a unique approach to the challenge. The question will be which team or individual can execute their vision with the most success under such strict time constraints?

Here at Infrared5/Brass Monkey headquarters, we have our heads in the clouds and our noses to the grindstone. We are dreaming big, hoping to create a game that will take user experience to the next level. We are combining game play experiences like those available on Nintendo’s Wii U and Microsoft Kinect. The team will use the Intel Perceptual Computing SDK for head tracking, which will allow the player to essentially peer into the tablet/laptop screen like a window. The 3D world will change as the player moves his head. We’ve seen other experiments that do this with other technology and think it is really remarkable. This one using Wii-motes by Johnny Lee is one of the most famous. Our team will be exploring this effect and other uses of the Intel Perceptual Computing SDK combined with the Brass Monkey’s SDK (using a smartphone as a controller) to create a cutting edge, immersive experience. Not only that, but our creative team is coming up with all original IP to showcase the work.

Intel will feature documentation of the ups and downs of this process for each team, beginning February 15th. We will be posting weekly on our progress, sharing details about the code we are writing, and pointing out the challenges we face along the way. Be sure to check back here as the contest gets under way.

What would you build if you were in the competition? Let us know if you have creative ideas on how to use this technology; we would love to hear them.

We would like to thank Intel for this wonderful opportunity and wish our competitors the best of luck! Game on!


Top 10 Boston Area Game Conferences, Festivals and Symposia You Should Know About for 2013

January 10th, 2013 by Elliott Mitchell

PAX East Indie Megabooth Developers - Photo Courtesy Ichiro Lambe

1 ) Pax East
Spawned from the Washington State-based Penny Arcade Expo, Boston's three-day PAX East conference is debatably the largest game conference in the United States. Boasting over 70K attendees in 2011 and even more in 2012, PAX East has much to offer game enthusiasts, developers, students and the press. Some highlights of PAX East are the Indie Megabooth, panel talks, the expo floor and the multitudes of game enthusiasts.  http://east.paxsite.com/

2 ) Boston Festival of Indie Games
A vibrant offspring of the Boston Indies group, the Boston Festival of Indie Games (Boston FIG) held its first event in 2012 at MIT in Cambridge, MA. Several thousand attendees from across the region, including game enthusiasts, all manner of game developers, tech startups, students and supportive parents, enjoyed the day-long festival. Highlights include prominent industry speakers, playing local indie games with many opportunities to talk to the developers, cutting edge tech demos, screenings of films like Indie Game: The Movie, participating in a game jam, a local industry art show and amazing networking opportunities. http://bostonfig.com/

3 ) Games for Health
Established nine years ago in Boston, MA, Games for Health is a unique conference focused on non-traditional uses of game technology and motivational game mechanics to facilitate healing, healthy practices and gathering data. The conference is attended by a wide range of professional game designers, tech startups, researchers, educators, healthcare providers and more. http://www.gamesforhealth.org/

4 ) MIT Business in Gaming
Originating in 2009, the MIT BIG conference was conceived as an event to bring together the best and brightest business leaders around Massachusetts to talk about succeeding in the business of making games. High-profile panels and industry-focused topics make this conference unique. Attendees range from entrepreneurs, publishers and investors to AAA studios, students and indie developers. http://www.mitbig.com/

5 ) MassDiGI Game Challenge
Initiated in 2012, the MassDiGI Game Challenge is an annual games industry event and competition focused on mentoring aspiring game development teams. The goal of the conference is to boost the odds that new Massachusetts startups can pitch, fund, create and publish successful games. Winners receive prizes, valuable mentorship, new industry connections and lots of publicity. http://www.massdigi.org/gamechallenge/

6 ) Boston GameLoop
Established back in 2008, Boston GameLoop is an amazing unconference where game industry people from all walks of life unite for an afternoon of self-organizing talks, debates, presentations and networking. Indies, students and AAA industry folks come together for a day of inspiration. http://www.bostongameloop.com/

7 ) MIT Game Lab Symposium
First held in 2012, the MIT Game Lab Symposium is a fascinating event focused on game research, education and non-traditional use cases of game mechanics and technologies. The all-day event is highlighted by expert panel discussions and amazing networking opportunities. Attendees include industry leaders, top researchers, indies, students and other interested people from various disciplines and industries.
http://gamelab.mit.edu/symposium/

8 ) 3D Stimulus Day
Conceived in 2009, 3D Stimulus Day is unique because it is the only all-day, 3D-focused game event around Boston. Attendees network, watch professionals speak, learn about new technologies, demo games, receive vital information on how to get jobs and show off their portfolios. 3D artists ranging from industry veterans to students unite for a day dedicated to 3D for games. http://greateasterntech.com/events-a-news/22-3d-stimulus-day

9 ) No Show Conference
Initiated in 2012, the two-day No Show Conference describes its goal as being “to give game industry professionals a space to explore our skillsets, our motivations, and our limits as developers”. The conference comprises highly pertinent industry presentations, networking opportunities, a demo hall showcasing local indie game developers and a game jam.
http://noshowconf.com/

10 ) MassTLC Innovation Unconference
This annual event, held by the Mass Technology Leadership Council for C-level executives, young entrepreneurs, investors and students, is not solely focused on the games industry, but a high percentage of attendees have connections to it. The Innovation Unconference is a great forum to network, exchange ideas, learn from the pros and gather fresh ideas from new innovators. http://www.masstlc.org/?page=unConference

Elliott Mitchell
Technical Director @ Infrared5.com
Indie Game Developer
Twitter: @mrt3d


Top 10 Prominent Boston Area Game Developer Groups and Organizations That You Should Pay Attention To

December 14th, 2012 by Elliott Mitchell


Scott Macmillan (co-founder Boston Indies), Darius Kazemi (co-organizer Boston Post Mortem) and Alex Schwartz (co-founder Boston Unity Group) preparing for a Boston Post Mortem presentation July 2011. (Photo- Elliott Mitchell co-founder Boston Unity Group)

The Boston area game developer scene has a generous and open community that nurtures indies, startups, students and AAA game studios alike. The evidence of this is abundant: on almost any given day one can find a game industry event, ranging from casual meet-ups and demo nights to intense panel discussions. As an indie game developer and technical director, I will focus more closely on groups related to indie game development. One thing is assured: all of these groups are prominent and worthwhile, and you should check them out if you haven’t already done so!

1 ) International Game Developers Association (IGDA) – Boston Post Mortem (BPM)

The Boston-based chapter of the IGDA was founded in 1997 by Kent Quirk, Steve Meretzky & Rick Goodman at John Harvard’s Brewhouse. Boston Post Mortem is internationally renowned as an example of how to grow and nurture a game developer community, and it is the seminal game developer organization in the Boston area. Currently held at The Skellig in Waltham, MA, BPM is a monthly IGDA chapter meeting focused on industry-related topics. BPM hosts expert speakers, industry panels, great networking opportunities and grog.

Frequency: Monthly
Membership Required: No, but IGDA membership is encouraged
Admission to Meetings: Usually free
Web: http://www.bostonpostmortem.org/
Twitter: @BosPostMortem

2 ) Boston Indies (BI)

Boston Indies is, as the name would indicate, a Boston-based group for indie game developers. BI was founded in 2009 by Scott Macmillan and Jim Buck as an indie game developer alternative to the large Boston Post Mortem group. Early meet-ups, hosted at the Betahouse co-working space at MIT in Cambridge, MA, featured indie developer presentations, BYOB and chipping in for pizza. BI quickly grew and moved to The Asgard before settling most recently at the Bocoup Loft in South Boston. At BI meetups, indie developers present on relevant topics, hold game demo nights and network. Boston Indies is notable because it spawned the very successful Boston Festival of Indie Games in the fall of 2012.

Frequency: Monthly
Membership Required: No
Admission to Meetings: Free
Web: www.bostonindies.com
Twitter: @BostonIndies

3 ) The Boston Unity Group (BUG)

Founded in 2012 by Alex Schwartz and Elliott Mitchell, the Boston Unity Group (BUG) is a bi-monthly gathering of Unity developers in the Boston area. Born from the inspiration and traditions of Boston Post Mortem and Boston Indies, BUG events are Unity-focused meetups where members ranging from professionals to hobbyists unite to learn from presentations, demo their projects, network and continue to build bridges in the Boston area game development community and beyond. BUG is recognized by local and international developers, as well as by Unity Technologies, as one of the first and largest Unity user groups in the world. Meetings have frequently been held at the Microsoft New England Research Center, Meadhall and The Asgard in Cambridge, MA.

Frequency:  Bi-Monthly
Membership Required:  Meetup.com registration required
Admission to Meetings:  Free
Web:  http://www.meetup.com/B-U-G-Boston-Unity-Group/
Twitter:  @BosUnityGroup

4 ) Women In Games (WIG)

Founded by Courtney Stanton in 2010, Women in Games Boston is the official Boston chapter of the International Game Developers Association’s Women in Games Special Interest Group. Renowned industry speakers present on relevant game development topics, but what differentiates WIG is its predominantly female perspective and unique industry support. WIG meets monthly at The Asgard in Cambridge. Developers from AAA and indie studios, as well as students, regularly attend. WIG events are open to women and their allies.

Frequency: Monthly
Membership Required: No
Admission to Meetings: Free
Web: http://wigboston.wordpress.com/
Twitter: @WIGboston

5 ) Boston HTML5 Game Development Group

The Boston HTML5 Game Development Group was founded in 2010 by Pascal Rettig. On the group’s meetup webpage, the description reads: “A gathering of the minds on tips, tricks and best practices for using HTML5 as a platform for developing highly-interactive in-browser applications (with a focus on Game Development)”. The group boasts an impressive roster of members and speakers. Attended and led by prominent industry leaders and innovators, the Boston HTML5 Game Development Group is a monthly meetup held at the Bocoup Loft in Boston, MA.

Membership Required: Meetup membership encouraged
Admission to Meetings: Free
Web: http://www.meetup.com/Boston-HTML5-Game-Development/
Twitter: #Boston #HTML5

6 ) MIT Enterprise Forum of Cambridge  - New England Games Community Circle (NEGamesSIG)

Originally founded in 2007 by Michael Cavaretta as The New England Game SIG, the newly renamed New England Games Community Circle is a group rooted in the greater MIT Enterprise Forum of Cambridge. NEGCC focuses on being a hub for the dynamic games and interactive entertainment industries throughout New England. NEGCC events are consistently strong and well attended, with professional panel discussions featuring a mix of innovative leaders from across the business of games. Events are regularly held in various locations around Cambridge, MA, including the MIT Stata Center and the Microsoft New England Research Center.

Frequency: Regularly dispersed throughout the year
Membership Required: Not Always / Membership encouraged with worthwhile benefits.
Admission to Meetings: Depends on event and if you’re a member
Web: http://gamescircle.org/
Twitter: #NEGCC #NEGamesSIG

7 ) The Massachusetts Digital Games Institute (MassDiGI)

The Massachusetts Digital Games Institute was founded in 2010 by Timothy Loew and Robert E. Johnson, Ph.D. This is a unique group focused on building pathways between academia and industry, while nurturing entrepreneurship and economic development within the game industry across Massachusetts. MassDiGI holds game industry events not only in the Boston area but across the entire Commonwealth. MassDiGI also runs larger events and programs like the MassDiGI Game Challenge, where prominent industry experts mentor competing game development teams, and a Summer of Innovation Program, where students are mentored by industry experts while they form teams and develop marketable games over the summer. MassDiGI is headquartered at Becker College in Worcester, MA.

Frequency: Slightly Random
Membership Required: No Membership
Admission to Meetings: Mostly free / Some events and programs cost money
Web: http://www.massdigi.org/
Twitter: @mass_digi

8 ) Mass Technology Leadership Council – Digital Games Cluster (MassTLC)

MassTLC is a large organization that encompasses much more than games. The MassTLC Digital Games Cluster is led by the likes of Tom Hopcroft and Christine Nolan, among others, who work diligently to raise awareness about the region’s game industry and build support for a breadth of Massachusetts game developers. MassTLC holds regular events that benefit startups, midsized companies and large corporations across Massachusetts. With a focus on economic development, MassTLC helps those looking to network, find mentors, funding and other resources vital to a game studio of any scale. One of my favorite MassTLC events is the MassTLC PAX East Made in MA Party, which highlights hundreds of Massachusetts game developers to the media and to out-of-state industry folks on the evening before the massive PAX East conference begins. MassTLC events are frequently held at the Microsoft New England Research Center.

Frequency: Regularly / Slightly Random
Membership Required: Not Always / Membership encouraged with worthwhile benefits.
Admission to Meetings: Depends on event and if you’re a member
Web: http://www.masstlc.org/?page=DigitalGames
Twitter: @MassTLC

9 ) Boston Game Jams

Founded in 2011 by Darren Torpey, Boston Game Jams is a unique group. Modeled after the Nordic Game Jam, the IGDA Global Game Jam and other lesser-known game jams, Boston Game Jams is an ongoing series of ad-hoc game jams held in the Boston area. As Darren states on the Boston Game Jams website, “It is not a formal organization of any kind, but rather it’s more of a grassroots community that is growing out of a shared desire to learn and create games together in an open, fun, and highly collaborative environment.” Boston Game Jams is a great venue for people of all skill levels to come together and collaboratively create games around given themes within a very short period of time. Participants range from professionals to novices. Boston Game Jams have historically been held at the innovative Singapore-MIT GAMBIT Game Lab, which has recently morphed into the new MIT Game Lab.

Frequency: Random
Membership Required: No
Admission to Meetings: Free / Food Donations Welcome
Web: http://bostongamejams.com/
Twitter: @bostongamejams

10 ) Boston Autodesk Animation User Group Association (BostonAAUGA)

BostonAAUGA is an official Autodesk user group. Founded in 2008, BostonAAUGA joined forces in June 2012 with the Boston Maya User Group (bMug), which was founded in 2010 by Tereza Flaxman. United into one 3D powerhouse, BostonAAUGA and bMug serve as a forum for 3D artists and animators seeking professional training, community engagement and networking opportunities. BostonAAUGA hosts outstanding industry speakers and panelists. It should be noted that not all of their events are game industry specific, hence the number 10 slot. BostonAAUGA is regularly hosted at Neoscape in Boston, MA.

Membership Required: No Membership
Admission to Meetings: Free

Web: http://www.aaugaboston.com/

Twitter: @BostonAAUGA

Get out there!

—-

Elliott Mitchell
Technical Director @ Infrared5.com
Indie Game Developer
Twitter: @mrt3d


GameDraw: 3D Power in Unity

October 5th, 2012 by Elliott Mitchell


At Infrared5, we are continuously seeking ways to improve the quality of our craft while increasing our efficiency in developing games for our clients. Our Unity engineers and creatives are ninjas, masters of their trade, and yet there are situations when leveraging the Unity Asset Store is extremely advantageous. Why reinvent the wheel by creating extra custom tools when there are relatively inexpensive, pre-existing tools in the Asset Store? GameDraw, by Mixed Dimensions, is one of those indispensable tools available on the Unity Asset Store.

I had the pleasure of evaluating a few pre-release builds of GameDraw after meeting the Mixed Dimensions team at GDC 2012 and more recently at Unite 2012, and I was super impressed on both occasions. As stated by Mixed Dimensions, “The purpose of GameDraw is to make the life of designers easier by giving them possibilities inside Unity itself and cutting down time and cost.” GameDraw is not exactly a single tool; it is perhaps better described as an expansive suite of 3D tools for the Unity Editor. Within GameDraw, one can manipulate pre-existing models, create new 3D assets, optimize them and a whole lot more.


Key Features Are:

Polygonal Modeling, Sculpting, Generation and Optimization Tools
UV Editor
City Generator
Runtime API
Character Customizer

Each of these features individually is worth the cost of GameDraw on the Asset Store. Drilling down deeper, GameDraw offers much more; it’s pretty amazing to see the degree of power GameDraw unleashes in the Unity Editor, offering features such as the following (see the small mesh-editing sketch after the list for a sense of what this kind of work looks like in plain Unity):

Mesh Editing (Vertex, Edge, Triangle, Element)
Mesh manipulation functions (Extrude, Weld, Subdivide, Delete, Smooth, etc.)
Assigning new Materials
Mesh Optimization
UV editing
Primitives (25 basic models)
Sculpting
Boolean operations
Node-based mesh generation
2D tools (Geometry painting, 2D-to-3D image tracing)
Character customizer (NEXT UPDATE V 0.87)
City Generator (NEXT UPDATE V 0.87)
Warehouse “hundreds of free assets” (NEXT UPDATE V 0.87)
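
To make that list a bit more concrete, here is what bare-bones, vertex-level mesh manipulation looks like with nothing but Unity’s built-in Mesh class. To be clear, this is not GameDraw’s actual API; it is just a minimal sketch of the kind of per-vertex work that a tool like GameDraw wraps in an editor UI and a Runtime API, and the component name and inflate amount are inventions for the example.

    using UnityEngine;

    // Not GameDraw's API -- just a minimal sketch using Unity's built-in Mesh
    // class to show the kind of vertex-level editing that GameDraw exposes
    // through its tools. Attach to any object with a MeshFilter and it will
    // "inflate" the mesh by pushing every vertex out along its normal.
    [RequireComponent(typeof(MeshFilter))]
    public class InflateMesh : MonoBehaviour
    {
        public float amount = 0.05f; // how far to push each vertex, in local units

        void Start()
        {
            // .mesh (not .sharedMesh) gives us an instance we can safely modify at runtime.
            Mesh mesh = GetComponent<MeshFilter>().mesh;
            Vector3[] vertices = mesh.vertices;
            Vector3[] normals = mesh.normals;

            for (int i = 0; i < vertices.Length; i++)
            {
                vertices[i] += normals[i] * amount;
            }

            mesh.vertices = vertices;
            mesh.RecalculateBounds();
            mesh.RecalculateNormals();
        }
    }

Anything fancier than this (welds, subdivision, Booleans, UV work) means writing a lot more of it by hand, which is exactly the gap a tool like GameDraw fills.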

As a beta product, GameDraw is currently slightly more functional on PC than on Mac. Even though I primarily use a Mac in my daily routine, I was still very impressed with GameDraw’s functionality on the Mac.


Being a hardcore Maya artist, I can’t see GameDraw eliminating my need for Maya anytime soon; I use Maya for more than creating Unity assets. However, I happily purchased GameDraw from the Asset Store and use it on projects. I see a significant number of instances when I want the ability to make changes to models, create new models, generate cities or animate morph targets, all within Unity. For any of these tasks alone, GameDraw is a must-have and well worth the cost.

-Elliott Mitchell

TD Infrared5
Co-Founder Boston Unity Group

Follow us on Twitter! @infrared5

