The Project Discovery Phase, Dissected

March 14th, 2013 by Dominick Accattato

When clients first reach out to Infrared5, they are often extremely excited about turning their ideas into reality. We share their enthusiasm and can’t wait to dig into the project details. However, sometimes there are numerous great ideas and not a lot of concrete information. This can be true for any project, from games to enterprise applications. When this is the case, we sometimes suggest a Discovery and Planning Phase.

The Discovery and Planning Phase allows both the client and our team leaders to work together to elicit requirements and document the system architecture in a way that is meaningful for developers, designers and the client. It is typically very collaborative in nature. This phase also ties in disciplines such as business analysis, domain driven design, technical writing and design.

It’s important to note that not every project requires a Discovery and Planning Phase, and not all discovery phases are set up the same way. Some clients have a very detailed understanding of what they are trying to accomplish. They may already have specifications but be unable to develop a very complex technical component. In that situation, we suggest a separate path: a focused Proof of Concept. (We will cover Proof of Concept in a future post.) For now, we’ll assume the client has a larger system and needs help defining the project. This is what the Discovery and Planning Phase attempts to achieve.

What is a Discovery and Planning phase?
A discovery and planning phase gives the client direct access to our senior software developers and/or creative lead in order to define project requirements. With these requirements in hand, our development and design team can investigate and document the software and design components of the project. The goal is to clarify scope and verify that all parties are on the same page prior to beginning production. Another benefit of the discovery phase is that certain technical challenges may surface from these discussions. (Pioneering applications are a specialty of the house here at Infrared5.) These high-risk items may lead to a phased approach whereby the highest-risk items are given their own Proof of Concept phases. (This is discussed with the client so that they understand our findings and why we have suggested a multi-project, phased approach.) In the end, clients have the opportunity to remove a high-risk item if it doesn’t fit their release date or budget.

Who is involved in the Discovery Phase?
During the Discovery Phase, the team consists of a project manager and a technical lead who are in charge of assessing the technical challenges that lie ahead for the development phase. The technical leads here at Infrared5 each have their own expertise. For instance, if a client approached us with an immersive 3D game, we would allocate one of our senior game developers to the Discovery and Planning Phase; the same would be true of a complex web application. One of the benefits of using a group like Infrared5 is that we maintain a diverse team of developers and designers, from gaming to streaming applications, who are truly experts in their fields. Also during this phase, our creative team works closely with the client to flesh out the UI design, experience design and branding needs of the project. The creative team helps clients define their goals and the best strategy to meet them.

What can be accomplished during the Discovery phase?
One of the common questions we get from clients is, “What are the benefits of doing a Discovery and Planning Phase?” In most cases, a few documents are produced: the Technical Requirements Specification and the Software Requirements Specification. Depending on the needs of the project, however, we may produce only one of the two, or a hybrid of both. Another document which may be produced during the Discovery and Planning Phase is a High Level Technical Overview. Just as it sounds, this document is high level; it does not aim to get into too much detail at the software component level. Instead, it resolves the more general system architecture and specifies which components may be used for the development phase.
For gaming projects, there are different details to focus on and these must be determined before the developers begin programming. A Game Design Document is necessary for describing the game play and the mechanics behind some of the simulations. Often this requires the technical lead and game designer to work together.

For both gaming and applications, the creative team delivers initial design concepts and wireframes to be augmented later in the design phase. The creative team also works closely with the client in regards to the user experience.

Ultimately, the Discovery Phase ensures both parties are aligned before any more extensive design or development begins in later phases.

What is delivered at the end of a Discovery Phase?
At the end of the Discovery Phase, the three important documents delivered to a client are:
• High Level Technical Overview
• Technical Requirements Specification
• Software Requirements Specification

In the case of a gaming project, the typical document would be:
• Game Design Document

In the case of both gaming and application projects, the following design related material is provided:
• Design Concepts

Upon completion of the Discovery phase, Infrared5 has enough information to provide more accurate estimates and timelines for completion. Each of these documents is important, and we suggest searching online to further your understanding of their purposes. This article illustrates what steps are taken and what is delivered at the end of our Discovery and Planning Phase.


IR5 Interactive Piece

March 5th, 2013 by Keith Peters

Introduction by Rebecca Allen:

We are creating a new website that will be launching at the end of March. Working on our own site is always an exciting process and one that is challenging as well. Our goal was to do the following with the new site:

1. Make it memorable!
2. Create a unique and fun interactive experience that captures our brand.
3. Display quickly and beautifully no matter what size device.
4. Communicate our mission and what we do clearly.

Today, we are going to give a sneak peek into #2. Keith Peters will walk you through the steps he took to create this interactive experience. Keith, take it from here!


As part of Infrared5’s new web site design, I was asked to create an interactive piece for the main page. The mock ups called for the design that has been our trademark since day one – an abstract space made of random isometric planes and lines. This is the design that is on our letterhead, business cards, and previous versions of our site.
I was given free rein to come up with something interactive based on that design. One idea floated was to have the planes connect with the lines and rotate around in the space. I decided to go with this concept, but realized the idea was a bit odd once I got down to coding it. Isometry itself is a form of 3D projection, and we also wanted to move the planes around in a 3D space with actual perspective; it could become quite a mess if done wrong. Specifically, in an isometric system the angles do not change and objects do not change size with distance. But I forged ahead anyway to see what could be done that might look decent.
The first thing I did was just get a 3D system of rotating particles working. This was something I’d done and written about before, so it was relatively straightforward. As you click and drag the mouse vertically, the particles rotate around the x-axis, changing their y and z coordinates. When you drag horizontally, they rotate around the y-axis, changing on x and z.
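The drag-to-rotate behavior boils down to two standard axis rotations. Here is a minimal sketch of that math (the function names and point structure are my own, not the original code):

```javascript
// Rotate a point around the y-axis (horizontal drag): x and z change, y is fixed.
function rotateY(p, angle) {
  var cos = Math.cos(angle), sin = Math.sin(angle);
  return { x: p.x * cos - p.z * sin, y: p.y, z: p.x * sin + p.z * cos };
}

// Rotate a point around the x-axis (vertical drag): y and z change, x is fixed.
function rotateX(p, angle) {
  var cos = Math.cos(angle), sin = Math.sin(angle);
  return { x: p.x, y: p.y * cos - p.z * sin, z: p.y * sin + p.z * cos };
}

// A quarter turn around the y-axis carries a point on the x-axis onto the z-axis.
var p = rotateY({ x: 1, y: 0, z: 0 }, Math.PI / 2);
```

Each frame, the mouse delta is turned into a pair of angles and every particle is run through both rotations before being projected to the screen.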

Next step was to change the particles into isometric planes. I started out with true isometric projection, meaning the angles representing each axis are each 120 degrees apart. I soon switched over to dimetric projection, which has two angles at approximately 116 degrees and the third at about 127.

This has a couple of advantages. First, it’s easier to calculate sizes of objects, as a step along the x-axis is simply twice the screen size of a step along the y-axis. That 2:1 ratio also results in smoother lines with less antialiasing.
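In code, the 2:1 dimetric projection can be sketched like this (a reconstruction of the idea, not the original source):

```javascript
// Project a 3D point into 2:1 dimetric screen space: each unit step along
// the x- or z-axis moves two pixels horizontally for every one pixel
// vertically, which is where the ~116/~127 degree axis angles come from.
function project(p) {
  return {
    x: (p.x - p.z) * 2, // x and z run along the two diagonal axes
    y: (p.x + p.z) - p.y // y is straight up on screen
  };
}
```

A unit x step lands at screen offset (2, 1), a slope of 1:2, which is exactly the staircase pattern that pixel renderers draw cleanly.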

There are three different shapes I needed to draw: a panel facing left, one facing right, and a floor/ceiling panel.

As these would be animating, I didn’t want to have to redraw them on each frame using the canvas drawing API. So I made a panel object that creates a small canvas and draws a single panel to it with a random width and height. The program can then blit each one of these mini panel canvases to the main canvas on each frame. Each panel also got its own random gray shade and the result was something like this:
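The pre-render-then-blit idea looks roughly like this (a sketch of the approach, not the original code; in the browser, createCanvas would just be document.createElement('canvas')):

```javascript
// Draw one panel to its own small canvas a single time, with a random
// size and gray shade. The expensive path drawing happens once here.
function makePanel(createCanvas) {
  var canvas = createCanvas();
  canvas.width = 20 + Math.random() * 60;   // random width
  canvas.height = 20 + Math.random() * 60;  // random height
  var shade = 30 + Math.floor(Math.random() * 200); // random gray
  var ctx = canvas.getContext("2d");
  ctx.fillStyle = "rgb(" + shade + "," + shade + "," + shade + ")";
  ctx.fillRect(0, 0, canvas.width, canvas.height);
  return canvas;
}

// Per frame, each pre-rendered mini canvas is simply blitted to the
// main canvas at its projected position with drawImage.
function render(mainCtx, panels) {
  panels.forEach(function (p) {
    mainCtx.drawImage(p.canvas, p.screenX, p.screenY);
  });
}
```

drawImage with a canvas source is a fast copy, so the per-frame cost is independent of how complex each panel was to draw.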

Now, as I said earlier, when you move things around in a 3D space, they are supposed to grow larger or smaller depending on their distance from the camera. But in isometric/dimetric projection this is not the case, so we’re really mixing two forms of perspective. Having the panels scale as they went into the distance didn’t look right at all. Having them remain unchanged wasn’t exactly correct either, but it gave an odd, trippy feel to the piece that I actually like a lot. So that’s how I left it. Also, to mix things up a bit, I made some of the panels fixed in space and not rotating; about one in ten of the panels is stationary.

Next came the lines. When creating the panels, I made it so that some – but not all – of the panels connect to one other panel with a line. About 40 percent of the time a connection is made, which seemed to give the right density of lines on screen. Here’s what that looked like initially:

Pretty ugly because the lines go directly from one corner of a panel to one corner of another, breaking the isometric/dimetric space. They just look random and chaotic. To solve that I forced the lines to follow the same dimetric angles as the planes. This looked a million times better.

In order to add a bit more interaction, I added a few functions to allow users to add and remove planes and to assign various color schemes to the planes (or return to grayscale). For the colors, rather than just use a random color for each plane, which would be a bit chaotic, I found an HSV to RGB algorithm. Taking an initial hue, I generate a different color for each panel by randomly varying its hue and saturation. This gives a more cohesive look no matter what hue is chosen.
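A standard HSV-to-RGB conversion plus a small random jitter is all the color scheme needs. This is a sketch of that approach (the jitter ranges are my own guesses, not the original values):

```javascript
// Convert HSV (h in [0, 360), s and v in [0, 1]) to RGB byte values.
function hsvToRgb(h, s, v) {
  var c = v * s;                                    // chroma
  var x = c * (1 - Math.abs(((h / 60) % 2) - 1));   // second-largest component
  var m = v - c;                                    // match value
  var rgb = h < 60  ? [c, x, 0] :
            h < 120 ? [x, c, 0] :
            h < 180 ? [0, c, x] :
            h < 240 ? [0, x, c] :
            h < 300 ? [x, 0, c] : [c, 0, x];
  return rgb.map(function (n) { return Math.round((n + m) * 255); });
}

// Derive a cohesive per-panel color by varying hue and saturation
// around a chosen base hue, keeping brightness constant.
function panelColor(baseHue) {
  var h = (baseHue + (Math.random() - 0.5) * 40 + 360) % 360;
  var s = 0.5 + Math.random() * 0.4;
  return hsvToRgb(h, s, 0.9);
}
```

Because every panel stays within a narrow band of hues, the palette reads as one scheme rather than confetti.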

The way the colors work is by redrawing each of the individual panel canvases with the same parameters, but the newly chosen color. Again, this makes it so it only has to happen a single time and the panels can then be blitted to the main canvas on each frame.

All in all, this was a fun project that I’m glad I had the chance to work on.


The Balanced Approach: Hackathons and Marathons

March 2nd, 2013 by admin

The other day a blog post called “Hackathons are bad for you” struck a chord with developers and other members of the technology world. The post, from Chinmay Pendharkar, a developer in Singapore, thoughtfully called out the code-all-night-drink-too-much-coffee-and-alcohol-and-eat-junk-food mentality. It received hundreds of kudos from the obviously over-tired and over-caffeinated developer community. “Chinpen” makes a lot of good points, especially when he talks about culture and the glorification of the geek lifestyle.

We also give him a thumbs up for making concrete suggestions around healthy hackathons. (We’ve seen some of those guidelines in place locally. For example, the Battle for the Charles Startup Weekend organizers made great efforts to supply healthy eats and gave everyone reusable water bottles so they could hydrate without generating hundreds of empty disposable bottles.)

Like everything in this world, there is room for a healthy balance.  Hackathons crackle with creative energy.  They can be a wonderful source of new ideas and inspiration.  Our own team is currently participating in the Intel Ultimate Coder Challenge, and we’re all excited and energized by the new technology and techniques.  We are already looking at ways we can employ these in our everyday work.

Over the last five years, we’ve grown Infrared5 significantly while holding the line against unrealistic release schedules and development timelines that deplete us mentally and physically. While we have crunch times like everyone else, we offer comp time to make up for overtime, and we encourage restful nights and weekends for restoring our creative selves. Walking the dogs who are our office companions is a great time for partner meetings. Keith and Rebecca have taken up running, and Rebecca plans to compete in 10 races this year, including one half-marathon. She also wants to run a 5K at under 8-minute miles.

And yet it isn’t all bean sprouts and granola. As many of you know, we have our (infamous) “Infrared5 beer:30” get-together on Friday afternoons, where we connect socially as a team and do some craft beer sampling. This is an important part of our healthy balance.

Last week we spent part of this get-together brainstorming our “Wicked Ten” – how we define projects we would all like to work on.  Number 6 on the list was “Reasonable timeline/good budget.”  While this may seem obvious, it is rarer than we’d all like.  Yet, we know that some of the work we are proudest of comes when we work with people who also take time off to rest their creative muscles and exercise their physical bodies.

How are you achieving balance?

Tech Talk: Creating More Responsive and Quicker Websites and Web Apps

February 4th, 2013 by Kyle Kellogg

This past Friday, Infrared5 debuted the first of its bi-weekly in-house ‘Tech Talks’. I was lucky enough to give the first talk, and I chose a subject I care very deeply about: how to make websites and web apps better and with less trouble. In order to fit the hour format, I focused on a few key points.

Given the ever-changing nature of the web, technology as a whole, and the way the two interact, the primary focus of my talk was how to make websites and web apps more responsive. The primary recommendation was to plan ahead for ‘N’ devices, because you don’t know how many screen variations will view the website or web app. In the same vein, and to make websites and web apps faster, I paraphrased Jon Rohan on how to optimize CSS for faster rendering. To lighten and ease the load time, I recommended using Picturefill.js, which prevents heavy assets from loading when they aren’t required by the device. Finally, I discussed how we can use the WebKit Dev Tools to improve testing.
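The idea behind tools like Picturefill can be reduced to a small sketch (this is not Picturefill’s actual API; the breakpoints and file names below are made up): pick the smallest image source that covers the current viewport, so small screens never download large assets.

```javascript
// sources is sorted by ascending minWidth; return the last entry whose
// minWidth the viewport still satisfies.
function pickSource(sources, viewportWidth) {
  var chosen = sources[0].src;
  sources.forEach(function (s) {
    if (viewportWidth >= s.minWidth) chosen = s.src;
  });
  return chosen;
}

var sources = [
  { minWidth: 0,    src: "hero-small.jpg"  },
  { minWidth: 600,  src: "hero-medium.jpg" },
  { minWidth: 1200, src: "hero-large.jpg"  }
];
```

In the browser, the chosen src would then be assigned to an img element, ideally re-evaluated on resize or orientation change.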

While the talk wasn’t anything groundbreaking, it was helpful to talk about best practices and discuss these points. Here is a link to the slides presented during the talk:

We look forward to sharing our next ‘Tech Talk’ with you!


Introducing madmin: An admin console for generating mock services with RESTful URIs.

January 29th, 2013 by Todd Anderson


madmin is a node application that provides a means to construct RESTful URIs that are immediately accessible on the server. While URIs can be defined from the command line – using CLI tools such as cURL – madmin also provides an admin console as a GUI to aid in defining the URI and JSON response data structure.

The github repo for madmin can be found at


madmin was born out of the intent to minimize time spent by front-end development teams building applications against a living spec for service requirements.

> The Problem

We found that our front-end developers were curating two service layers implementing a common interface during development of an application:

  • [Fake] One that does not communicate with a remote resource and provides _fake_ data.

    Used during development without worry of remote endpoint being available (either from being undefined or no network) and can be modified to provide different responses in testing application response.

  • [Live] One that does communicate with a remote resource, sometimes utilizing libraries that abstract the communication.

    Used for integration tests and QA on staging before pushing application to production.

This would allow the service layer dependency to easily be switched out during development and deployment while providing a common API for other components of the application to interact with.

Though these service layers are developed against the same interface, providing a common API, curating both in tandem during development can consume significant time as specifications and requirements change. When we multiplied that curation time across the numerous applications being developed, it became clear that the fake service layer needed to be eliminated from development – especially since it is not part of the release or unit tests at all.
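To make the two-implementation pattern concrete, here is a minimal sketch (all names here are illustrative, not from our codebase): both services expose the same getUser(id, callback) interface, so the rest of the application never knows which one is wired in.

```javascript
// [Fake] No network: returns canned data immediately, usable offline
// and easy to tweak for testing different application responses.
var fakeUserService = {
  getUser: function (id, callback) {
    callback(null, { id: id, name: "Test User" });
  }
};

// [Live] Talks to the remote resource; the HTTP call is omitted in
// this sketch (it would use http.get or a client library).
var liveUserService = {
  getUser: function (id, callback) {
    callback(new Error("not implemented in this sketch"));
  }
};

// Swapping the dependency is a one-line change at wiring time.
var userService = fakeUserService;
```

It is exactly this duplicated curation, keeping fakeUserService in sync with every interface change, that madmin set out to remove.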

> The Solution

Instead of treating the service layer as a choice between these two implementations, if we could define the endpoints that the service layer communicates with, then we could eliminate the need for a fake service layer altogether.

Just as the service references are changed from staging to production, why couldn’t we provide a living service endpoint with URIs that are being discussed and hashed out between teams? And why couldn’t we deploy that service locally and eliminate the need for a network resource to boot – we could continue our front-end development while relaxing on some remote, un-connected island!

That is what madmin sets out to do.

> The By-Product

Though the initial intent was to eliminate the curation of an unnecessary service layer from front-end development, by defining RESTful URIs using madmin we were actually producing useful documentation of the service layer, and we opened up communication between the back-end and front-end teams with regards to requirements and data structure.

Opening channels for communication is always a plus, and the fact that it provided self-documentation just seemed like a winner!

> What It Is Not

madmin is not meant to replace writing proper unit tests for client-side applications that communicate with a remote service, nor is it meant to stand in for integration testing.


The madmin application works by updating a target JSON source file that describes the RESTful URIs. This file is modified using a RESTful API of the madmin server application itself. You can check out the schema for the source JSON file that defines the API at

While it is possible to interact with the madmin server-side application using the command line – with CLI tools such as cURL – a client-side application is available that provides ease of use and self-documentation.

> Usage

Full instructions on how to clone and install dependencies for madmin can be found at the repository for the madmin project:

Once installed, you can start the madmin server from the command line with the following options:

$> node index.js [port] [json]

Both arguments are optional; if omitted, the defaults are used (the default port is 8124).

The json source file provided will be read from and modified as you add URIs in madmin. The most common and easiest way to add URIs is to use the client-side console application available at http://localhost:<port> after starting the madmin node server.

> Client-Side Console

Once the server is started, you can access the GUI console for madmin at either http://localhost:<port>/ or http://localhost:<port>/admin, with the <port> value either being the default (8124) or the one specified using the --port command line option.

With an empty JSON API resource file, you will be presented with a console that provides an “add new” button only:
Empty madmin console

Upon adding a new route, you are presented with an empty editable console with various parameters:
Empty new route in console.

The following is a breakdown of each section from this route console UI:

– Method –

The Method dropdown allows you to select the desired REST method to associate with the URI defined in the Path field:
Route Method panel

– Path –

The Path field defines the URI to add to the REST service:
Path panel

– Summary –

The Summary field allows for entering a description for the URI. When input focus is lost on the Path field, the listing of Parameters is updated and allows for providing descriptions for each variable:
Route with multiple parameters

– Response –

The Response field allows for defining the JSON returned from the URI. As well, you can choose which response to provide:
Route Response panel

In reality, it will return a 200 status with the selected JSON from either Success or Error. We often supply errors on a 200 and parse the response; this was an easy way for the team to coordinate the success and error responses that come back as JSON from the request.
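The selection logic amounts to something like the following sketch (this is an illustration of the behavior, not madmin’s actual code; the field names are made up):

```javascript
// The mock endpoint always answers 200; the body is whichever canned
// JSON (success or error) is currently selected for the route.
function respond(route) {
  return {
    status: 200,
    body: route.selected === "error" ? route.errorBody : route.successBody
  };
}

var route = {
  selected: "success",
  successBody: { users: [] },
  errorBody: { message: "Something went wrong" }
};
```

Flipping the route’s selection in the console is enough to exercise the front end’s error-handling path without touching any server code.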

Viewing Route URIs and Responses

When saved, the new route will be added to the supplied source JSON file and the client-side madmin console will change to the listing of URIs defined:
Saved Route to madmin

As well, the path and its proper response will be immediately available to develop against.

With Error selected from the Response field:
Defined Error Response

With Success selected from the Response field:
Defined Success Response

As mentioned previously, the source JSON file provided when launching the madmin server is updated while working on the URIs. If left to the default – or otherwise accessible from the server directory – you can point your web browser to that JSON resource file and check for validity:
Updated JSON route URIs

— note —

The default admin console can be found at the http://localhost:<port>/admin location. As such, /admin is a reserved route and cannot be defined as a valid URI in madmin.

It is on the TODO list to define a custom URI for the admin portal of madmin in order to allow the currently reserved /admin.


> Server-Side

The madmin server-side application has been tested against Node.js version >=0.8.4.

> Client-Side

The madmin client-side application utilizes some ES5 objects and properties – i.e., Object.create and Array.prototype.indexOf – and does not load any additional polyfills to provide support for non-modern browsers.

The madmin client-side application should work properly in the following:

  • Chrome 12+
  • Safari 4+
  • IE 9+
  • Opera 12+

> Grunt Integration

The madmin repository has build files for grunt with support for <=0.3.x (grunt.js) and ~0.4.0 (Gruntfile.js) and tasks for linting and testing both the server-side and client-side code utilizing Jasmine.

To run the grunt build tasks, simply run grunt from the command line in the directory where you cloned the madmin repository.

Depending on your installed version of grunt, the proper build file will be used. To learn more about grunt and how to install it, please visit the grunt project page.


We saw a need here at Infrared5 to cut down on the front-end development time spent curating multiple service layer implementations to support development efforts when resources – including server-side and network – were unavailable. The madmin application is our effort to reduce that time, and the extra effort and code that never saw the light of staging or production.

While doing so, we hope madmin can open up the communication channels between server-side and client-side teams in discussing service requirements and JSON data structure, all while providing a living document of the endpoint URIs.

Hopefully you will find it as useful as we have!


A Practical, Applicable Approach to Responsive Web Design

January 4th, 2013 by Kyle Kellogg

When choosing a design for your web and mobile needs, it is important to choose a design approach that suits the needs of the project. There are two approaches that I want to discuss in this post. The first involves creating two separate experiences, one for the web and the other for mobile. This approach allows for separate designs and gives the designer more freedom in regards to the user experience for each device. The other approach revolves around responsive design, which uses CSS3 properties to decide which media to show and where to place it based on the constraints of the browser. This is not to say that responsive design cannot cover both web and mobile with the same freedom, but it may require additional effort to achieve the same results. Responsive design is quickly becoming an effective way to design and develop maintainable web sites that reach a majority of endpoints and browsers.

To begin, let’s address what responsive web design is and how it differs from an adaptive approach. For our purposes here, and most others that I can think of, let’s limit responsive web design to mean the combination of a design and layout that respond to the size, orientation, and visible capabilities of whatever they’re being viewed in. What do I mean by visible capabilities? I’m talking about the specific capabilities or quirks of how the browser/renderer shows your user what you’ve made, i.e. drop shadows – but only if they’re available. Let me be clear: we’re defining a responsive approach as being limited to changing styling and not functionality or content (although, in a little bit, I’ll describe a cheap and effective way to blur that definition in order to accommodate a semi-adaptive approach).

The benefits of a responsive approach may be self-evident, but for specificity’s sake we’ll review them. You end up with a single product and a single codebase, which is much easier to maintain than making multiple, synchronized edits in a split (mobile-specific and non-mobile) approach. It’s usually quicker and more cost-effective to create than a split approach, though that can vary depending on how forward-thinking you are with the website or web app, as retrofitting one to support a responsive design can be a bit more involved than creating one from scratch. It can cater to your users’ or customers’ devices’ capabilities, which allows for a unified look and feel while still being forgiving of older or less capable browsers. It lends itself to reusing graphics, which can later be made adaptive by loading graphics only as big as they need to be (see more about responsive images here). It helps us meet the demands of a world in which resolutions and devices are changing at a lightning-quick pace. And there already exist many excellent grids, foundations, frameworks, and tools to build upon or with – just search for whatever you’re looking for and you’ll probably be presented with many options to choose from.

Like any approach, a responsive approach also has some considerations that must be taken into account before proceeding. Developers and designers must both be forward-thinking so that all variables are considered. For instance, if you’re developing and know that a specific section must be right-aligned for desktop targets but center- or left-aligned for mobile targets, plan ahead so you’ll be able to accommodate that when the time comes. If you’re designing, consider how much a specific grouping or section will change with each target so that nothing crazy needs to happen on the developers’ side. This planning will be extremely helpful in cutting down the overall time for creating a finished product. This approach also necessitates that certain requirements be met, so shims for media queries and HTML5 tags will be necessary, as well as feature detection (I prefer Modernizr). As you develop, you’ll need to curate and optimize your codebase so that you’re not adding too much and creating a long load time. Most importantly, however, responsive web design is not a magic bullet – remember its limitation of affecting only styling.

Now that I’ve gone over some of the benefits and considerations, let’s get to the important bit – how can you apply this to current and future projects? Let’s break it down into steps:

  1. Planning
    1. Plan your targets. Will you have a different design or layout for desktops, tablets, and mobile devices? A different design or layout for iOS and Android? Nail down exactly what you’ll be designing and then developing for.
    2. Plan your designs and layouts. How will your designs and layouts differ across your targets? Imagine how they’ll flow from one to the next, and describe everything in as much detail as you can, using still graphics, animations, or words.
    3. Plan your development. How can you best architect the markup to allow for a fluid layout for each target?
  2. Designing
    1. Create designs for each target based on your planning.
    2. Using your designs, save/export each asset you’ll need.
    3. If your graphics are changing size, save/export each size as its own asset.
    4. Compile assets into a spritesheet for use by the developer(s). If you want, you can split it up into one spritesheet for each target.
  3. Developing
    1. Create your markup. Allow for a loose architecture that can be adapted to all targets.
    2. Input your content and implement responsive images (if you so choose).
      1. If you need to swap some content or shake the layout up a bit for a specific target, or for each target, consider adding target classes so that you can swap by changing the visibility and/or display styling. While not lightweight, this is a quick and cheap way to adopt a semi-adaptive approach.
    3. Styling
      1. Create basic styling first. Start with your primary target and work from there.
      2. Move from your layout up to implementing the design specifics (font sizing, et cetera).
      3. Add on to your basic styling for each target, making only the necessary changes to each.
      4. Remember to make full use of Modernizr, or whatever feature detection you’re using, so that as much of your implemented design will remain as possible no matter what browser or renderer it’s being viewed in.
      5. Try to keep the tips from this video about GitHub’s CSS performance in mind when styling.
  4. Testing
    1. Test on as many devices, browsers, resolutions, and operating systems as possible.
    2. Though I’ve never had much luck with them, you can try services that capture what your site looks like on operating systems or browsers you don’t have access to.
    3. You can also use browser-resizing tools to test at sizes your browser may not get to, or at multiple sizes at once.
    4. Once you’ve tested, test it again – trust me.
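The target-class trick from the development step above can be sketched as follows (the breakpoints and class names here are arbitrary examples, not a recommendation):

```javascript
// Map the viewport width to a target name; CSS rules scoped to
// .mobile, .tablet, or .desktop can then show/hide per-target content.
function targetFor(width) {
  if (width < 600) return "mobile";
  if (width < 1024) return "tablet";
  return "desktop";
}

// In the browser (not run here), wire it up on load and resize:
// document.body.className = targetFor(window.innerWidth);
```

Because only a class name changes, this stays within the "styling only" limits of the responsive definition while still allowing semi-adaptive content swaps.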

Those steps, while in a logical order, are also in order of importance (from greatest to least) – except for testing, which is of utmost importance throughout every project. I cannot stress enough how important the planning stage is to a responsive approach (or any approach, for that matter). With enough forethought and constant, consistent consideration for that planning, testing is merely a matter of double-checking your work to make sure you haven’t forgotten to cross a ‘t’ or dot an ‘i’ somewhere. You’ll find small things, but it shouldn’t be anything big enough to warrant serious time and effort in order to rectify. It’s unlikely you’ll be able to nail this the first time around, so test as frequently as you want or are able to. The more you test the more likely it is that you’ll get relevant feedback from recent changes and gain an understanding of how to improve your own processes. If you’ve planned and designed everything really well, with considerations to how the layouts and designs can flow into one another, you’ll find development becomes much easier. Each step stands upon the shoulders of the step before it, so have a strong base and you’ll be able to build bigger, better projects.

Hopefully you feel more confident in applying a responsive approach to your next website or web app.

Top 10 Prominent Boston Area Game Developer Groups and Organizations That You Should Pay Attention To

December 14th, 2012 by Elliott Mitchell


Scott Macmillan (co-founder, Boston Indies), Darius Kazemi (co-organizer, Boston Post Mortem) and Alex Schwartz (co-founder, Boston Unity Group) preparing for a Boston Post Mortem presentation, July 2011. (Photo: Elliott Mitchell, co-founder, Boston Unity Group)

The Boston area game developer scene has a generous and open community that nurtures indies, startups, students and AAA game studios alike. The evidence of this is more than abundant: on almost any given day one can find a game industry event, ranging from casual meet-ups and demo nights to intense panel discussions. As I am an indie game developer and technical director, I will focus most closely on groups related to indie game development. One thing is assured: all of these groups are prominent and worthwhile, and you should check them out if you haven’t already done so!

1 ) International Game Developers Association (IGDA) – Boston Post Mortem (BPM)

The Boston-based chapter of the IGDA was founded in 1997 by Kent Quirk, Steve Meretzky and Rick Goodman at John Harvard’s Brewhouse. Boston Post Mortem is internationally renowned as an example of how to grow and nurture a game developer community, and it is the seminal game developer organization in the Boston area. Currently held at The Skellig in Waltham, MA, BPM is a monthly IGDA chapter meeting focused on industry-related topics. BPM hosts expert speakers, industry panels, great networking opportunities and grog.

Frequency: Monthly
Membership Required: No, but IGDA membership is encouraged
Admission to Meetings: Usually free
Twitter: @BosPostMortem

2 ) Boston Indies (BI)

Boston Indies is, as the name would indicate, a Boston-based group for indie game developers. BI was founded in 2009 by Scott Macmillan and Jim Buck as an indie game developer alternative to the large Boston Post Mortem group. Early meetups featured indie developer presentations, BYOB and chipping in for pizza, and were hosted at the Betahouse co-working space at MIT in Cambridge, MA. BI quickly grew, moving to The Asgard and settling most recently at the Bocoup Loft in South Boston. At BI meetups, indie developers present on relevant topics, hold game demo nights and network. Boston Indies is notable for spawning the very successful Boston Festival of Indie Games in the fall of 2012.

Frequency: Monthly
Membership Required: No
Admission to Meetings: Free
Twitter: @BostonIndies

3 ) The Boston Unity Group (BUG)

Founded in 2012 by Alex Schwartz and Elliott Mitchell, The Boston Unity Group (BUG) is a bi-monthly gathering of Unity developers in the Boston area. Born from the inspiration and traditions of Boston Post Mortem and Boston Indies, BUG events are Unity-related meetups where members ranging from professionals to hobbyists unite to learn from presentations, demo their projects, network and continue to build bridges in the Boston area game development community and beyond. BUG is renowned by local and international developers, as well as by Unity Technologies, as one of the first and largest Unity user groups in the world. Meetings have frequently been held at the Microsoft New England Research Center, Meadhall and The Asgard in Cambridge, MA.

Frequency:  Bi-Monthly
Membership Required: No, but event registration is required
Admission to Meetings:  Free
Twitter:  @BosUnityGroup

4 ) Women In Games (WIG)

Founded by Courtney Stanton in 2010, Women in Games Boston is the official Boston chapter of the International Game Developers Association’s Women in Games Special Interest Group. Renowned industry speakers present on relevant game development topics, but what differentiates WIG is its predominantly female perspective and unique industry support. WIG meets monthly at The Asgard in Cambridge. Developers from AAA and indie studios, as well as students, regularly attend. WIG is open to women and their allies.

Frequency: Monthly
Membership Required: No
Admission to Meetings: Free
Twitter: @WIGboston

5 ) Boston HTML5 Game Development Group

The Boston HTML5 Game Development Group was founded in 2010 by Pascal Rettig. The description on the group’s meetup page reads, “A gathering of the minds on tips, tricks and best practices for using HTML5 as a platform for developing highly-interactive in-browser applications (with a focus on Game Development)”. The group boasts an impressive roster of members and speakers. Attended and led by prominent industry leaders and innovators, it is a monthly meetup held at the Bocoup Loft in Boston, MA.

Frequency: Monthly
Membership Required: Meetup membership encouraged
Admission to Meetings: Free
Twitter: #Boston #HTML5

6 ) MIT Enterprise Forum of Cambridge – New England Games Community Circle (NEGamesSIG)

Originally founded in 2007 by Michael Cavaretta as The New England Game SIG, the newly renamed New England Games Community Circle (NEGCC) is a group rooted in the greater MIT Enterprise Forum of Cambridge. NEGCC focuses on being a hub for the dynamic games and interactive entertainment industries throughout New England. NEGCC events are consistently strong and well attended, with professional panel discussions featuring a mix of innovative leaders from across the business of games. Events are regularly held at various locations around Cambridge, MA, including the MIT Stata Center and the Microsoft New England Research Center.

Frequency: Regularly dispersed throughout the year
Membership Required: Not Always / Membership encouraged with worthwhile benefits.
Admission to Meetings: Depends on event and if you’re a member
Twitter: #NEGCC #NEGamesSIG

7 ) The Massachusetts Digital Games Institute (MassDiGI)

The Massachusetts Digital Games Institute was founded in 2010 by Timothy Loew and Robert E. Johnson, Ph.D. This is a unique group focused on building pathways between academia and industry while nurturing entrepreneurship and economic development within the game industry across Massachusetts. MassDiGI holds game industry events not only in the Boston area but across the entire Commonwealth. MassDiGI also runs larger events and programs like the MassDiGI Game Challenge, where prominent industry experts mentor competing game development teams, and a Summer of Innovation Program, where students are mentored by industry experts as they form teams and develop marketable games over the summer. MassDiGI is headquartered at Becker College in Worcester, MA.

Frequency: Slightly Random
Membership Required: No Membership
Admission to Meetings: Mostly free / Some events and programs cost money
Twitter: @mass_digi

8 ) Mass Technology Leadership Council – Digital Games Cluster (MassTLC)

MassTLC is a large organization that encompasses much more than games. The MassTLC Digital Games Cluster is led by the likes of Tom Hopcroft and Christine Nolan, among others, who work diligently to raise awareness about the region’s game industry and build support for a breadth of Massachusetts game developers. MassTLC holds regular events that benefit startups, midsized companies and large corporations across Massachusetts. With a focus on economic development, MassTLC helps those looking to network, find mentors, and secure funding and other resources vital to a game studio of any scale. One of my favorite MassTLC events is the MassTLC PAX East – Made in MA Party, which highlights hundreds of Massachusetts game developers to the media, as well as to out-of-state industry folks, on the evening before the massive PAX East game developer conference begins. MassTLC events are frequently held at the Microsoft New England Research Center.

Frequency: Regularly / Slightly Random
Membership Required: Not Always / Membership encouraged with worthwhile benefits.
Admission to Meetings: Depends on event and if you’re a member
Twitter: @MassTLC

9 ) Boston Game Jams

Founded in 2011 by Darren Torpey, Boston Game Jams is a unique group. Modeled after the Nordic Game Jam, the IGDA Global Game Jam and other lesser-known game jams, Boston Game Jams is an ongoing series of ad-hoc game jams held in the Boston area. As Darren states on the Boston Game Jams website, “It is not a formal organization of any kind, but rather it’s more of a grassroots community that is growing out of a shared desire to learn and create games together in an open, fun, and highly collaborative environment.” Boston Game Jams is a great venue for people of all skill levels to come together and collaboratively create games around given themes within a very short period of time. Participants range from professionals to novices. Boston Game Jams have historically been held at the innovative Singapore-MIT GAMBIT Game Lab, which has recently morphed into the new MIT Game Lab.

Frequency: Random
Membership Required: No
Admission to Meetings: Free / Food Donations Welcome
Twitter: @bostongamejams

10 ) Boston Autodesk Animation User Group Association (BostonAAUGA)

BostonAAUGA is an official Autodesk User Group. Founded in 2008, BostonAAUGA joined forces in June 2012 with The Boston Maya User Group (bMug), which was founded in 2010 by Tereza Flaxman. United into one 3D powerhouse, BostonAAUGA and bMug serve as a forum for 3D artists and animators seeking professional training, community engagement and networking opportunities. BostonAAUGA hosts outstanding industry speakers and panelists. It should be noted that not all of their events are game industry specific, hence their number 10 ranking. BostonAAUGA is regularly hosted at Neoscape in Boston, MA.

Membership Required: No
Admission to Meetings: Free
Twitter: @BostonAAUGA

Get out there!


Elliott Mitchell
Technical Director @
Indie Game Developer
Twitter: @mrt3d


TXJS: A Look at Javascript as a Language and Community

August 30th, 2012 by Keith Peters

Recently, Todd Anderson and I attended TXJS, a JavaScript conference in Austin, TX. I wanted to give some general feedback on the conference itself, and then discuss a bit about JavaScript as a language and the JavaScript community.

The Conference

The conference was a one-day, two-track setup with nine slots and a day of training beforehand. Each slot was 40 minutes, which in my mind is short for a technical presentation. With a full hour, you can start to teach a few concrete techniques. But 40 minutes just leaves you time to get across a general idea, suggestion or viewpoint. In other words, in a shorter session, you might be able to say WHY you should do something, but in a longer session you could show HOW. Then again, even an hour is barely enough time to teach anything concrete and many people do not do well at it. All too often I’ve found myself getting bored and looking at my watch towards the end of a longer session. Perhaps conferences need to offer a mix of longer and shorter sessions.
It was very strange to be at a conference where I was not a speaker and knew nobody except the other person from my company I came with. I believe the last conference I attended without speaking at was Flash Forward NYC in 2004. In that sense, it was kind of a relief to just sit back and go to all the sessions and take it all in without having to worry about my own talk. Much stranger was just not knowing anybody there. Todd and I mostly hung out together and talked to a few others here and there. I’m far more used to meeting up with the usual crowd that has been present at every Flash event for the last 10 years. Also, being a speaker affords you a certain amount of mini-celebrity at an event, with people you don’t know coming up and talking to you, mentioning your talk, your site, your books, whatever. It was odd to just be another nameless face in the crowd. Odd, but probably good to get that perspective now and then.

The Training

The training on the first day was an overview of the JavaScript language from Ben Alman of Bocoup, a company located right here in Boston. Ben undoubtedly knows his subject matter and it was a solid day of training, but I would say his talk was targeted a bit more towards newcomers to the subject. I picked up on several small concepts I didn’t know and clarified several others, but it was largely a review of material for me. This is not to say that a review is bad. I went through a lot of the examples and tried out various iterations and came out with more confidence than I went in with, so that’s good.

Given the nature of the training and the time constraints of the sessions themselves, I can’t say that I walked away from the conference with a huge wealth of new knowledge of specific things about JavaScript. But I did walk away with several areas sparked with interest for future study, which I am actively pursuing. One of these subjects is Node.js.

New Interests

Node.js, for the uninformed, is a JavaScript engine outside of the browser, wrapped, optimized, and enhanced for use as a server, but also useful on a local machine. Node allows you to write server side applications in JavaScript, and also allows you to write command line JavaScript programs that run on your local machine. This can be useful for testing, build processes, automation tasks, etc. This is similar to how Ruby is largely used on the server, but commonly used for local build processes and other tasks as well. I picked up a book on Node.js and was amazed that within an hour of starting the book I was writing HTTP and socket servers. Not only did they work, but I understood every line of code that went into them. Amazing. Very simple yet powerful stuff that I’m glad to add to my knowledge base.

The next area of interest that was sparked at TXJS was WebGL. This is an implementation of OpenGL, the 3D graphics library used on many computing platforms, including all iOS and Android devices, implemented in the browser with a JavaScript API. Although I wound up missing the one WebGL session at TXJS (a tough choice between it and another session going on at the same time), I vowed to do some investigation of it on my own. I’m still working through Node right now, but WebGL is next on the list.

Communities and Focus

I’ve been thinking a lot about tech communities lately. I got a foot in the door of the Flash community back in the early 2000s and while I’ve always delved into other various technologies, I never got really involved in any other tech communities. Now, I’m not saying anything as inflammatory as “Flash is Dead”, but I do think the Golden Age of Flash is in the past and it’s probably not going to see the level of excitement and support it saw in its heyday. Personally, I do not have a ton of interest in where Flash’s current road map is leading it, which is what led me to become more interested in HTML5/JavaScript. But I haven’t joined the anti-Flash crowd either. We’re still pulling in lots of Flash work here at Infrared5 so I imagine I’ll have a hand in AS3 for a good long while to come.
Anyway, when I started getting more interested in JavaScript development, I naturally started looking at the JS community. Unfortunately, I think it has quite a different dynamic than what I have become accustomed to with Flash. Firstly, the JS community is probably a lot larger than the Flash community ever was. But for a community of that size, it seems like there are disproportionately fewer well known names – JavaScript celebrities or “rock stars”, if you will. I think a lot of this has to do with a lack of focus in the language and development practices. Love it or hate it, Adobe (and Macromedia before it) has always been a central point around which the Flash community revolved. They’ve held the reins on the product. You’ve always known that you’d be getting a new version of Flash every 18 months or so, and that there would be a finite number of cool new features in that update. If you were lucky, you might get onto the beta. Adobe/Macromedia have historically sponsored most if not all of the major Flash conferences, and the same group of evangelists and developer relations people show up to talk and listen. Moreover, Adobe has given the ActionScript language a focus, with official components, tutorials and sample code that established a set of best practices and a way to code.

Contrast that with JavaScript as it stands today. There is a standards committee that can’t seem to agree on anything. If they do agree on something today, it could be literally years before it is official. Meanwhile, browser vendors are moving forward implementing features years before they are official and sometimes before they are even agreed upon. Some are even implementing their own features that are not part of any standards process.

And that’s only the core language. Ask a group of JavaScript developers about the best architecture for an application and you will start a holy war. Classes or prototypal inheritance? The best MV* framework? AMD or CJS? Forget about it. World War III almost broke out a few months ago over the question of whether or not you should use semicolons! There’s even a growing question about whether it makes sense to code JavaScript IN JavaScript. With a steadily increasing list of languages that “compile to JavaScript” but offer different – possibly better – constructs for managing large applications, it is a valid question. But with so many of these new languages around, and no standard, we are again back to the question of which is “the best”. And the arguments ensue…


It’s an exciting time to be into JavaScript, but not an easy one. There is massive innovation happening on so many fronts. There are dozens of new frameworks and libraries coming out daily – more than any person could possibly keep up with. Standards are evolving and there is something new to learn every day. The pace of this innovation makes it hard to keep up, and the lack of focus means that no matter what you do, how you code, or which framework or library you use, there will be countless people waiting in the wings to tell you you’re doing it all wrong. My advice is to learn what you can, don’t worry about the rest, and have fun.

Keith Peters


Boid Flocking and Pathfinding in Unity, Part 3

July 23rd, 2012 by Anthony Capobianchi

In this final installment, I will explore how to set up a ray caster to determine a destination object for the Boids, and how to organize a number of different destination points for your Boids so that they do not pile on top of each other.

Organizing the Destinations -

The idea is to create a marker for every Boid that will be placed near the destination, defined by the ray caster. This keeps Boids from running past each other or pushing each other off track.

For each Boid in the scene, a new Destination object will be created and managed. My Destination.cs script looks like this:
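The original listing isn’t reproduced here, but a minimal sketch of what such a destination script might look like follows. The field and method names are illustrative assumptions, not the original code; the key idea, described below, is that each marker is pushed into place by coherency and separation forces on a rigidbody:

```csharp
using UnityEngine;

// Sketch of a Destination marker: one per Boid, nudged into place near the
// target point by the same coherency/separation behaviors the Boids use.
public class Destination : MonoBehaviour
{
    public float coherencyWeight = 1.0f;   // pull toward the shared target
    public float separationWeight = 1.5f;  // push away from nearby markers
    public float separationRadius = 2.0f;
    public float restThreshold = 0.1f;     // below this speed, the marker has settled

    private Rigidbody body;

    void Awake()
    {
        body = GetComponent<Rigidbody>();
    }

    // Called each physics step by the manager with all markers and the target.
    public void Flock(Destination[] neighbors, Vector3 target)
    {
        // Coherency: steer toward the shared target point.
        Vector3 coherency = (target - transform.position) * coherencyWeight;

        // Separation: push away from any marker closer than separationRadius.
        Vector3 separation = Vector3.zero;
        foreach (Destination other in neighbors)
        {
            if (other == this) continue;
            Vector3 away = transform.position - other.transform.position;
            float distance = away.magnitude;
            if (distance < separationRadius)
                separation += away.normalized / Mathf.Max(distance, 0.01f);
        }
        separation *= separationWeight;

        // Both vectors are applied to the rigidbody rather than moving the
        // transform directly.
        body.AddForce(coherency + separation);
    }

    // The rigidbody's velocity tells us when the marker has settled.
    public bool InPosition()
    {
        return body.velocity.magnitude < restThreshold;
    }
}
```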

This is very similar to the Boid behaviors we set up in Boid.cs. We create coherency and separation vectors just as before, except this time the two vectors are applied to a rigidbody. I am using the rigidbody’s velocity property to determine when the destination objects have finished moving into position.

Positioning and Managing the Destinations -

Now we create a script that handles instantiating all the destination objects we need for our Boids, placing each one in relation to a Boid, and using each destination’s Boid behaviors to organize them. I created a script called DestinationManager.cs where this will be housed.

First off we need to set up our script:
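As a rough sketch of that setup (the names here are assumptions, including the `destination` field on Boid), the manager holds the Boids and creates one Destination marker per Boid:

```csharp
using UnityEngine;

// Sketch of the setup portion of DestinationManager.cs: it owns one
// Destination marker for every Boid in the scene.
public class DestinationManager : MonoBehaviour
{
    public GameObject destinationPrefab; // prefab carrying the Destination script
    public Boid[] boids;                 // the Boids placed in the scene

    private Destination[] destinations;
    private Vector3 currentTarget;

    void Start()
    {
        destinations = new Destination[boids.Length];
        for (int i = 0; i < boids.Length; i++)
        {
            GameObject marker = (GameObject)Instantiate(
                destinationPrefab, boids[i].transform.position, Quaternion.identity);
            destinations[i] = marker.GetComponent<Destination>();

            // Assumes Boid exposes a destination Transform it will path toward.
            boids[i].destination = destinations[i].transform;
        }
    }
}
```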

We need to create our ray caster that will tell the scene where to place the origin of our placement nodes. Mine looks like this:
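Here is a hedged sketch of such a ray caster, assuming it lives in DestinationManager.cs, the ground has a collider, and placement happens on a mouse click (`PlaceDestinations` is an assumed helper name):

```csharp
// Sketch of the ray caster: on a mouse click, cast from the camera
// through the cursor to the ground and use the hit point as the origin
// for the destination markers.
void Update()
{
    if (Input.GetMouseButtonDown(0))
    {
        Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
        RaycastHit hit;
        // Assumes the ground is the first collider the ray hits.
        if (Physics.Raycast(ray, out hit, 1000f))
        {
            PlaceDestinations(hit.point);
        }
    }
}
```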

The ray caster shoots a ray from the camera’s position to the ground, setting the Boid’s destination where it hits.

Next, we take the destinations that were created and move them together using the Boid behaviors we gave them.
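A sketch of that step might look like the following, continuing the DestinationManager sketch above. It assumes each Destination exposes a `Flock` method that applies its coherency and separation forces toward a shared target, and that the manager keeps the clicked point in a `currentTarget` field (both assumed names):

```csharp
// Scatter the markers around the clicked point, then let their
// Boid-style behaviors settle them into distinct, non-overlapping spots.
void PlaceDestinations(Vector3 origin)
{
    currentTarget = origin;
    foreach (Destination d in destinations)
    {
        // Offset each marker slightly so separation has something to work with.
        Vector3 offset = Random.insideUnitSphere * 0.5f;
        offset.y = 0f;
        d.transform.position = origin + offset;
    }
}

void FixedUpdate()
{
    if (destinations == null) return;
    // Each physics step, the markers pull toward the target and push
    // apart from each other until they spread out around it.
    foreach (Destination d in destinations)
        d.Flock(destinations, currentTarget);
}
```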

The Boid system is primarily used for positioning the Destination objects. This method ensures that the Boid system will not push your objects off their paths and confuse any pathfinding you may be using.


Boid Flocking and Pathfinding in Unity, Part 2

July 5th, 2012 by Anthony Capobianchi

In my last post, we worked through the steps to create a Boid system that will keep objects together in a coherent way, and a radar class that will allow the Boids to detect each other. Our next step is to figure out how to get these objects from point “A” to point “B”, by setting a destination point.

Pathfinding -

For this example, I used Aron Granberg’s A* Pathfinding Project to handle my pathfinding. It works great and uses Unity’s CharacterController to handle movement, which helps with this example. The library, along with a guide to help you set it up in your scene, can be found on the A* Pathfinding Project website.

In Boid.cs, my pathfinding is set up as follows:
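As a sketch of how that pathfinding portion might look: `Seeker` and `Path` come from the A* Pathfinding Project itself, while the surrounding field and method names are illustrative assumptions rather than the original code.

```csharp
using UnityEngine;
using Pathfinding; // namespace provided by the A* Pathfinding Project

// Sketch of the pathfinding portion of Boid.cs.
public class Boid : MonoBehaviour
{
    public Transform destination;
    public float nextWaypointDistance = 1f;

    private Seeker seeker;
    private Path path;
    private int currentWaypoint;

    void Start()
    {
        seeker = GetComponent<Seeker>();
        // Ask the library for a path; OnPathComplete fires when it's ready.
        seeker.StartPath(transform.position, destination.position, OnPathComplete);
    }

    void OnPathComplete(Path p)
    {
        if (!p.error)
        {
            path = p;
            currentWaypoint = 0;
        }
    }

    // Direction toward the next waypoint, used as the pathfinding force.
    Vector3 PathForce()
    {
        if (path == null || currentWaypoint >= path.vectorPath.Count)
            return Vector3.zero;

        Vector3 toWaypoint = path.vectorPath[currentWaypoint] - transform.position;
        if (toWaypoint.magnitude < nextWaypointDistance)
            currentWaypoint++; // close enough; advance to the next waypoint

        return toWaypoint.normalized;
    }
}
```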

Applying the Forces -

Once we have the calculated force of the Boid behaviors and the path finder, we have to combine them and apply the result to the CharacterController. We use Unity’s Update function in Boid.cs to apply the forces every frame.
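A sketch of that Update method follows. `CalculateFlockingForce` and `PathForce` are placeholder names standing in for the behaviors built in the earlier parts of this series:

```csharp
// Sketch of combining the forces in Boid.cs's Update.
public float speed = 5f;

void Update()
{
    // Flocking forces from part one plus the pathfinding direction.
    Vector3 combined = CalculateFlockingForce() + PathForce();

    // SimpleMove takes a velocity in world units per second and applies
    // gravity for us, which suits a ground-based flock.
    CharacterController controller = GetComponent<CharacterController>();
    controller.SimpleMove(combined.normalized * speed);
}
```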

In my next post, we will look at using a ray caster to set a destination point in the scene for the Boids, as well as how to organize a number of different destination points for the Boids to keep them from piling on top of each other.

Read part one here.
