My Thoughts on Glass

August 27th, 2013 by Chris Allen

A few weeks ago I took a trip down to NYC to pick up my new Google Glass. Now that we’ve had a bit of time to play with it here at Infrared5 and Brass Monkey, I figured it’s time to write a post on our findings and what we think.


Kiwi Katapult Revenge Case Study by Intel

July 29th, 2013 by Adam Doucette

Earlier this year, Intel invited Infrared5 to compete in its Ultimate Coder Challenge: Going Perceptual, a groundbreaking contest that provided participants with an Ultrabook device, Creative Interactive Gesture Camera development kits, a still-evolving perceptual computing SDK, and all of the support possible for letting their imaginations run rampant. With its focus on sensor-based input, Brass Monkey technology seemed a natural complement to perceptual computing, but the question was how to mesh the two. William Van Winkle recounts our team’s experience with this challenging but fulfilling opportunity. As always, we welcome all questions and feedback. Enjoy!

http://software.intel.com/en-us/articles/infrared5-case-study


Infrared5 Participating in the Intel Ultimate Coder Challenge

April 18th, 2013 by admin

Perceptual Computing Challenge Winner to be Announced Next Week

Boston, MA – April 18, 2013 – Infrared5, an innovative Boston-based interactive studio, today announced that the company is one of only seven developer teams participating in the Intel® Ultimate Coder Challenge, and the only East Coast team.

The seven teams used the latest Intel Ultrabook™ devices, the Intel Perceptual Computing Software Development Kit (SDK), and the Creative® Interactive Gesture Camera to build a special application prototype in just seven weeks. The Ultimate Coder teams, including the Infrared5 team, blogged about their experiences online, and a panel of experts will crown the Ultimate Coder on April 24.

The Intel Ultimate Coder Challenge is designed to promote the use of perceptual computing. Perceptual computing creates more immersive software applications by incorporating 2D/3D object tracking, user gesture control, facial analysis and recognition, eye control, and voice recognition. Perceptual computing is more than just the use of touch screens; it is a new area of pioneering development.

“The Intel Ultimate Coder Challenge provides a great opportunity for participants to innovate around perceptual computing and learn from peers in a few short weeks,” said Bob Duffy, community manager for the Intel Developer Zone at Intel.

“We’re very comfortable with pioneering applications of technology,” said Rebecca Smith Allen, Infrared5 CEO and part of the Challenge team. “Perceptual computing is a new frontier we’re excited to explore.”

“The combination of the immersive properties of Brass Monkey and the power of perceptual computing allowed our team to give the judges a unique experience that will hopefully earn us the Ultimate Coder title,” said Chris Allen, the project leader. Allen is CEO of Brass Monkey, a next-generation browser-based game system created by the Infrared5 team.

In this game, the player uses a phone to fly a character around a 3D world. When players turn their heads, the perceptual computing camera tracks what the player is looking at, causing the scene to shift. This effect allows the player to peer around obstacles, giving the game a real sense of depth and immersion.


Infrared5 Ultimate Coder Week Six: GDC, Mouth Detection Setbacks, Foot Tracking and Optimizations Galore

April 3rd, 2013 by Chris Allen

For week six, Chris and Aaron made the trek out to San Francisco for the annual Game Developers Conference (GDC), where they showed the latest version of our game Kiwi Catapult Revenge. The feedback we got was amazing! People were blown away by the head tracking performance that we’ve achieved, and everyone absolutely loved our unique art style. While the controls were a little difficult for some, that allowed us to gain some much-needed insight into how best to fine-tune the face tracking and the smartphone accelerometer inputs to make a truly killer experience. There’s nothing like live playtesting on your product!



Not only did we get a chance for the GDC audience to experience our game, we also got to meet some of the judges and the other Ultimate Coder competitors. There was an incredible amount of mutual respect and collaboration among the teams. The ideas were flowing on how to help improve each and every project in the competition. Chris gave some tips on video streaming protocols to Lee so that he can stream over the internet with decent quality (using compressed JPEGs would have only wasted valuable time). The guys from Sixense looked into Brass Monkey and how they could leverage it in their future games, and we gave some feedback to the Code Monkeys on how to knock out the background using the depth camera to prevent extra noise that messes with the controls they are implementing. Yes, this is a competition, but the overall feeling was one of wanting to see every team produce its very best.


The judges also had their fair share of positive feedback and enthusiasm. The quality of the projects had obviously impressed them, to the point that Nicole was quoted as saying, “I don’t know how we are going to decide.” We certainly don’t envy their difficult choice, but we don’t plan on making it any easier for them either. All the teams are taking it further and want to add even more amazing features to their applications before the April 12th deadline.


The staff in the Intel booth were super accommodating, and the exposure we got by being there was invaluable to our business. This is a perfect example of a win-win situation. Intel is getting some incredible demos of their new technology, and the teams are getting exposure and credibility by being in a top technology company’s booth. Not only that, but developers now get to see this technology in action and can more easily discover ways to leverage the code and techniques we’ve pioneered. Thank you, Intel, for being innovative and taking a chance on these unique and experimental contests!


While Aaron and Chris were having a great time at GDC, the rest of the team was cranking away. Steff ran into some walls with mouth detection for the fire-breathing controls, but John, Rebecca and Elena were able to add more polish to the characters, environment and gameplay.



John added a really compelling new feature: playing the game with your feet! We switched the detection algorithm so that it tracks your feet instead of your face. We call it Foot Tracking. It works surprisingly well, and the controls are way easier this way.



Steff worked on optimizing the face tracking algorithms and came up with some interesting techniques to get the job done.


This week’s tech tip and code snippet came to us during integration. We were working hard to combine the head tracking with the Unity game on the Ultrabook, and ZANG we had it working! But, there was a problem. It was slow. It was so slow it was almost unplayable. It was so slow that it definitely wasn’t “fun.” We had about 5 hours until Chris was supposed to go to the airport and we knew that the head tracking algorithms and the camera stream were slowing us down. Did we panic? (Don’t Panic!) No. And you shouldn’t either when faced with any input that is crushing the performance of your application. We simply found a clever way to lower the sampling rate but still have smooth output between frames.


The first step was to reduce the number of times we do a head tracking calculation per second. Our initial (optimistic) attempt was to update in real time on every frame in Unity. Some computers could handle it, but most could not. Our Lenovo Yoga really bogged down with this. So, we introduced a framesToSkip constant and started sampling on every other frame. Then we hit a smoothing wall. Since the head controls affect every single pixel in the game window (by changing the camera projection matrix based on the head position), we needed to be smoothing the head position on every frame regardless of how often we updated the position from the camera data. Our solution was to sample the data at whatever frame rate we needed to preserve performance, save the head position at that instant as a target, and ease the current position to the new position on every single frame. That way, your sampling rate is down, but you’re still smoothing on every frame and the user feels like the game is reacting to their every movement in a non-jarring way. (For those wondering what smoothing algorithm we selected: exponential smoothing handles any bumps in the data between frames.) Code is below.
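Here is a minimal sketch of that approach in Unity/C#; aside from framesToSkip, the names and values are illustrative assumptions rather than the project’s actual code.

```csharp
using UnityEngine;

// Sketch only: sample the expensive head tracking every few frames, but ease
// toward the latest sample on every frame so the camera never jumps.
public class SmoothedHeadTracking : MonoBehaviour
{
    public int framesToSkip = 1;    // mentioned in the post; tune per machine
    public float smoothing = 0.2f;  // exponential smoothing factor (assumed value)

    private Vector3 targetHeadPosition;  // latest raw sample from the camera
    private Vector3 currentHeadPosition; // smoothed value the game actually uses
    private int frameCount;

    void Update()
    {
        // Only run the costly tracking sample on every (framesToSkip + 1)th frame.
        if (frameCount % (framesToSkip + 1) == 0)
        {
            targetHeadPosition = SampleHeadPosition();
        }
        frameCount++;

        // Exponential smoothing on every frame, regardless of the sampling rate.
        currentHeadPosition = Vector3.Lerp(currentHeadPosition, targetHeadPosition, smoothing);

        // currentHeadPosition would then drive the camera projection update.
    }

    // Stub: in the real project this would call into the tracking plugin.
    private Vector3 SampleHeadPosition()
    {
        return targetHeadPosition;
    }
}
```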

Feeling good about the result, we went after mouth open/closed detection with a vengeance! We thought we could deviate from our original plan of using AAM and POSIT, and lock onto the mouth using a mouth-specific Haar cascade on the region of interest containing the face. The mouth Haar cascade does a great job finding and locking onto the mouth if the user is smiling, which is not so good for our purposes. We are still battling with getting a good lock on the mouth using a method that combines depth data with RGB, but we have seen why AAM exists for feature tracking. It’s not just something you can cobble together and have confidence that it will work well enough to act as an input for game controls.


Overall, this week was a step forward even with part of the team away. We’ve got some interesting and fun new features that we want to add as well. We will be sure to save that surprise for next week. Until then, please let us know if you have any questions and/or comments. May the best team win!


Ultimate Coder Week #5: For Those About to Integrate We Salute You

March 21st, 2013 by Chris Allen

This post was featured on Intel Software’s blog, in conjunction with Intel’s Ultimate Coder Challenge. Keep checking back to read our latest updates!

We definitely lost one of our nine lives this week integrating face tracking into our game, but we still have our cat’s eyes and still feel very confident that we will be able to show a stellar game at GDC. On the face tracking end of things we had some big wins. We are finally happy with the speed of the algorithms, and the way things are being tracked will work perfectly for putting into Kiwi Catapult Revenge. We completed some complex math to create very realistic perspective shifting in Unity. Read below for those details, as well as for some C# code to get it working yourself. As we just mentioned, getting a DLL that properly calls update() from Unity and passes in the tracking values isn’t quite there yet. We did get some initial integration with head tracking coming into Unity, but full integration with our game is going to have to wait for this week.

On the C++ side of things, we have successfully found the 3D position of the face in the tracking space. This is huge! By tracking space, we mean the actual (x, y, z) position of the face from the camera in meters. Why do we want the 3D position of the face in tracking space? The reason is so that we can determine the perspective projection of the 3D scene (in game) from the player’s location. Two things made this task interesting: 1) the aligned depth data for a given (x, y) from the RGB image is full of holes, and 2) the camera specs only include the diagonal field of view (FOV) and no sensor dimensions.

We got around the holes in the aligned depth data by first checking for a usable value at the exact (x, y) location, and if the depth value was not valid (0 or the upper positive limit), we would walk through the pixels in a rectangle of increasing size until we encountered a usable value. It’s not that difficult to implement, but annoying when you have the weight of other tasks on your back. Another way to put it: It’s a Long Way to the Top on this project.
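As a sketch of that lookup (shown here in C# for readability; the project’s version lives in the C++ tracking code, so the details are assumptions):

```csharp
using System;

// Hole-filling depth lookup: if the aligned depth at (x, y) is invalid
// (0 or at the upper limit), search rectangles of increasing size around
// the pixel until a usable value turns up.
static class DepthLookup
{
    public static ushort GetUsableDepth(ushort[] depth, int width, int height,
                                        int x, int y, ushort invalidLimit)
    {
        ushort d = depth[y * width + x];
        if (d != 0 && d < invalidLimit) return d; // the exact pixel is usable

        int maxRadius = Math.Max(width, height);
        for (int radius = 1; radius < maxRadius; radius++)
        {
            for (int dy = -radius; dy <= radius; dy++)
            {
                for (int dx = -radius; dx <= radius; dx++)
                {
                    // Only inspect the outer ring of the current rectangle.
                    if (Math.Abs(dx) != radius && Math.Abs(dy) != radius) continue;

                    int px = x + dx, py = y + dy;
                    if (px < 0 || py < 0 || px >= width || py >= height) continue;

                    d = depth[py * width + px];
                    if (d != 0 && d < invalidLimit) return d;
                }
            }
        }
        return 0; // nothing usable found
    }
}
```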

The z-depth of the face comes back in millimeters right from the depth data; the next trick was to convert the (x, y) position from pixels on the RGB frame to meters in the tracking space. There is a great illustration here of how to break the view pyramid up to derive formulas for x and y in the tracking space. The end result is:
TrackingSpaceX = TrackingSpaceZ * tan(horizontalFOV / 2) * 2 * (RGBSpaceX - RGBWidth / 2) / RGBWidth
TrackingSpaceY = TrackingSpaceZ * tan(verticalFOV / 2) * 2 * (RGBSpaceY - RGBHeight / 2) / RGBHeight

Where TrackingSpaceZ is the lookup from the depth data, and horizontalFOV and verticalFOV are derived from the diagonal FOV in the Creative Gesture Camera specs (here). Now we have the face position in tracking space! We verified the results using a nice metric tape measure (also difficult to find at the local hardware store – get with the metric program, USA!).
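Translated directly into a small C# helper (a sketch; the FOV values are assumed to be in radians and derived ahead of time from the diagonal FOV):

```csharp
using UnityEngine;

// Converts an RGB pixel position plus its depth (in meters) into the
// tracking-space position, using the formulas above.
static class TrackingSpace
{
    public static Vector3 FromRgbPixel(float rgbX, float rgbY, float zMeters,
                                       float horizontalFov, float verticalFov,
                                       float rgbWidth, float rgbHeight)
    {
        float x = zMeters * Mathf.Tan(horizontalFov / 2f) * 2f * (rgbX - rgbWidth / 2f) / rgbWidth;
        float y = zMeters * Mathf.Tan(verticalFov / 2f) * 2f * (rgbY - rgbHeight / 2f) / rgbHeight;
        return new Vector3(x, y, zMeters);
    }
}
```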

From here, we can determine the perspective projection so the player will feel like they are looking through a window into our game. Our first pass at this effect involved just changing the rotation and position of the 3D camera in our Unity scene, but it didn’t look realistic. We were leaving out adjustment of the projection matrix to compensate for the off-center view of the display. For example: consider two equally sized (in screen pixels) objects at either side of the screen. When the viewer is positioned nearer to one side of the screen, the object at the closer edge appears larger to the viewer than the one at the far edge, and the display outline becomes trapezoidal. To compensate, the projection should be transformed with a shear to maintain the apparent size of the two objects; just like looking out a window! To change up our methods and achieve this effect, we went straight to the ultimate paper on the subject: Robert Kooima’s Generalized Perspective Projection. Our port of his algorithm into C#/Unity is below.

The code follows the mouse pointer to change perspective (not a tracked face) and does not change depth (the way a face would). We are currently in the midst of wrapping our C++ libs into a DLL for Unity to consume and enable us to grab the 3D position of the face and then compute the camera projection matrix using the face position and the position of the computer screen in relation to the camera.
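Below is a minimal Unity/C# sketch of an off-axis projection along the lines of Kooima’s method, with the eye position driven by the mouse as described above. The screen-corner values and the mouse mapping are illustrative assumptions, not the project’s measured numbers.

```csharp
using UnityEngine;

// Off-axis ("generalized") perspective projection: pa, pb, pc are the
// lower-left, lower-right and upper-left corners of the physical screen in
// world space; the eye position pe is faked from the mouse for this demo.
[RequireComponent(typeof(Camera))]
public class OffAxisProjection : MonoBehaviour
{
    public Vector3 pa = new Vector3(-0.26f, -0.16f, 0f); // lower-left corner (meters)
    public Vector3 pb = new Vector3( 0.26f, -0.16f, 0f); // lower-right corner
    public Vector3 pc = new Vector3(-0.26f,  0.16f, 0f); // upper-left corner
    public float eyeDistance = 0.6f;                     // fixed depth for the mouse demo

    void LateUpdate()
    {
        // Map the mouse position onto the screen rectangle to stand in for a head.
        Vector3 mouse = Input.mousePosition;
        Vector3 pe = new Vector3(
            Mathf.Lerp(pa.x, pb.x, mouse.x / Screen.width),
            Mathf.Lerp(pa.y, pc.y, mouse.y / Screen.height),
            -eyeDistance);

        Camera cam = GetComponent<Camera>();
        float n = cam.nearClipPlane;
        float f = cam.farClipPlane;

        // Orthonormal basis of the screen plane.
        Vector3 vr = (pb - pa).normalized;             // right
        Vector3 vu = (pc - pa).normalized;             // up
        Vector3 vn = Vector3.Cross(vr, vu).normalized; // screen normal
        if (Vector3.Dot(vn, pe - pa) < 0f) vn = -vn;   // make the normal face the eye

        // Vectors from the eye to the screen corners and eye-to-plane distance.
        Vector3 va = pa - pe;
        Vector3 vb = pb - pe;
        Vector3 vc = pc - pe;
        float d = -Vector3.Dot(va, vn);

        // Frustum extents on the near plane.
        float l = Vector3.Dot(vr, va) * n / d;
        float r = Vector3.Dot(vr, vb) * n / d;
        float b = Vector3.Dot(vu, va) * n / d;
        float t = Vector3.Dot(vu, vc) * n / d;

        cam.projectionMatrix = PerspectiveOffCenter(l, r, b, t, n, f);

        // Place the camera at the eye, looking at the screen plane.
        transform.position = pe;
        transform.rotation = Quaternion.LookRotation(-vn, vu);
    }

    // Standard off-center frustum matrix (OpenGL convention, which is what
    // Camera.projectionMatrix expects).
    static Matrix4x4 PerspectiveOffCenter(float l, float r, float b, float t, float n, float f)
    {
        var m = new Matrix4x4();
        m[0, 0] = 2f * n / (r - l); m[0, 2] = (r + l) / (r - l);
        m[1, 1] = 2f * n / (t - b); m[1, 2] = (t + b) / (t - b);
        m[2, 2] = -(f + n) / (f - n); m[2, 3] = -2f * f * n / (f - n);
        m[3, 2] = -1f;
        return m;
    }
}
```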

Last but not least, we leave you with this week’s demo of the game. Some final art for UI elements is in, levels of increasing difficulty have been implemented, and some initial sound effects are in the game.

As always, please ask if you have any questions on what we are doing, or if you just have something to say we would love to hear from you. Leave us a comment! In the meantime we will be coding All Night Long!


START 2013 – A Conference Not to Miss

March 19th, 2013 by Rebecca Allen

Last Thursday, Chris Allen (one of my business partners and my husband) and I headed on a train to New York City for the inaugural conference called Start. We were one of 23 startups invited to show off our product, Brass Monkey, to the highly curated group of 500 attendees. Hands down, it has to be one of the best events I have ever attended. From the moment we arrived at Centre 548 in Chelsea at 7:30am Friday morning until we left at 6:30pm that evening, it was one great conversation after another. Paddy Cosgrave and his amazing team of organizers at f.ounders did an outstanding job. We were honored to be selected as an exhibitor and excited to be amongst such innovative products and applications. Here are a few of my favorites: LittleBits, 3Doodler, BrandYourself and Magisto. LittleBits is an open source library of electronic modules that snap together with magnets for prototyping, learning and fun. Such a cool product that totally hits on so much that we love: open source technology, education, fun and creativity!

Since Chris and I were managing our booth, we were unable to attend the round tables and talks that happened throughout the day. We are excited that the talks were recorded, and Chris and I will be spending some quality time going through all of this great content. We had a fabulous day and would recommend that anyone who’s into startups attend Start 2014 when it comes around next year. I look forward to making it to WebSummit, f.ounders’ other event, in the fall. Dublin, here we come!


Plan of Attack and Face Tracking Challenges Using the Intel Perceptual Computing SDK

February 19th, 2013 by Chris Allen

This post was featured on Intel Software’s blog, in conjunction with Intel’s Ultimate Coder Challenge. Keep checking back to read our latest updates!

We are very excited to be working with the Intel Perceptual Computing (IPC) SDK and to be a part of the Ultimate Coder Challenge! The new hardware and software that Intel and its partners have created allows for some very exciting possibilities. It’s our goal to really push the boundaries of what’s possible using the technology. We believe that perceptual computing plays a huge role in the future of human-to-computer interaction, and isn’t just a gimmick shown in movies like Minority Report. We hope to prove out some of the ways that it can actually improve the user experience with the game that we are producing for the competition.

Before we begin with the bulk of this post, we should cover a little bit on the makeup of our team and the role that each of us plays on the project. Unlike many of the teams in the competition, we aren’t a one-man show, so each of our members plays a vital role in creating the ultimate application. Here’s a quick rundown of our team:

Kelly Wallick – Project Manager

TECH
Chris Allen – Producer, Architect and Game Designer
Steff Kelsey – Tech Lead, Engineer focusing on the Intel Perceptual Computing SDK inputs
John Grden – Game Developer focusing on the Game-play

ART
Rebecca Allen – Creative Director
Aaron Artessa – Art Director, Lead Artist, Characters, effects, etc.
Elena Ainley – Environment Artist and Production Art

When we first heard about the idea of the competition we started thinking about ways that we could incorporate our technology (Brass Monkey) with the new 3D image tracking inputs that Intel is making accessible to developers. Most of the examples being shown with the Perceptual Computing SDK focus on hand gestures, and we wanted to take on something a bit different. After much deliberation we arrived at the idea of using the phone as a tilt-based game controller input, and using head and eye tracking to create a truly immersive experience. We strongly believe that this combination will make for some very fun game play.

Our art team was also determined not to make the standard 3D FPS shoot-em-up game that we’ve seen so many times before, so we arrived at a very creative use of the tech with a wonderful background story of a young New Zealand kiwi bird taking revenge on the evil cats that killed his family. To really show off the concept of head-tracking and peering around items in the world, we decided on a paper cutout art style. Note that this blog post and the other posts we will be doing on the Ultimate Coder site are really focused on the technical challenges and the approach we are taking, and much less on the art and game design aspects of the project. After all, the competition is called the Ultimate Coder, not the Ultimate Designer. If you are interested in the art and design of our project, and we hope that you are, then you should follow our posts on our company’s blogs that will be covering much more of those details. We will be sure to reference those posts in every blog post here as well so that you can find out more about the entire process we are undertaking.

The game we’ve come up with for the competition is called Kiwi Catapult Revenge.

So with that, let’s get right to the technical nitty gritty.

Overview of the Technology We are Using

Unity

As we wanted to make a 3D game for the competition we decided to use Unity as our platform of choice. This tool allows for fast prototyping, ease of art integration and much more. We are also well versed in using Unity for a variety of projects at Infrared5, and our Brass Monkey SDK support for it is very polished.

Brass Monkey

We figured that one of our unique advantages in the competition would be to make use of the technology that we created. The Brass Monkey SDK for Unity allows us to turn the player’s smartphone into a game controller for Unity games. We can leverage the accelerometers, gyroscopes and touch screens of the device as another form of input to the game. In this case, we want the player to steer the kiwi bird through the environment using tilt, and to fire and control speed via the touch screen on the phone.
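As a rough illustration of that mapping (the Brass Monkey SDK’s actual API isn’t shown; the tilt and throttle fields below are stand-ins that the SDK’s input callbacks would populate):

```csharp
using UnityEngine;

// Illustrative only: maps a tilt value received from the phone controller to
// steering, and a touch-screen value to speed. How these values arrive from
// the Brass Monkey SDK is not shown here.
public class KiwiSteering : MonoBehaviour
{
    [Range(-1f, 1f)] public float tilt;     // -1 = full left, +1 = full right
    [Range(0f, 1f)]  public float throttle; // from the phone's touch screen

    public float maxTurnDegreesPerSecond = 90f;
    public float maxSpeed = 10f;

    void Update()
    {
        // Steer by yawing the bird, and move it forward at the chosen speed.
        transform.Rotate(0f, tilt * maxTurnDegreesPerSecond * Time.deltaTime, 0f);
        transform.Translate(Vector3.forward * throttle * maxSpeed * Time.deltaTime);
    }
}
```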

Intel Perceptual Computing SDK

We decided to leverage the IPC SDK for head tracking, face recognition and possibly more. In the case of Kiwi Catapult Revenge, the player will use his or her eyes for aiming (the player character can shoot lasers from his eyes). The environment will also shift according to the angle from which the user is viewing it, causing the scene to feel like real 3D. Take a look at this example using a Wiimote for a similar effect to the one we are going for. In addition, our player can breathe fire by opening his or her mouth in the shape of an “o” and pressing the fire button on the phone.

There are certainly other aspects of the SDK we hope to leverage, but we will leave those for later posts.

OpenCV

We are going to use this C-based library for more refined face tracking algorithms. Read more to find out why we chose OpenCV (opencv.org) to work in conjunction with the IPC SDK. Luckily, OpenCV was originally developed by Intel, so hopefully that gets us additional points for using two of Intel’s libraries.

Head Tracking

The biggest risk item in our project is getting head tracking that performs well enough to be a smooth experience in game play, so we’ve decided to tackle this early on.

When we first started looking at the examples that shipped with the IPC SDK, there were very few dealing with head tracking. In fact, it was really only in the latest release that we found anything even close to what we proposed to build. That, and it was in this release that they exposed these features to the Unity version of the SDK. What we found are examples that simply don’t perform very well. They are slow, not all that accurate, and unfortunately just won’t cut it for the experience we are shooting for.

To make matters worse, the plugin for Unity is very limited. It didn’t allow us to manipulate much, if anything, with regard to the head tracking or face recognition algorithms. As a Unity developer you either have to accept the poorly performing pre-canned versions of the algorithms the SDK exposes, or get the raw data from the camera and do all the calculations yourself. What we found is that face tracking with what they provide gave us sub-3-frames-per-second performance and wasn’t very accurate. Now to be clear, the hand gesture features are really very polished and work well in Unity. It seems that Intel’s priority has been on those features, and head/face detection is lagging far behind. This presents a real problem for our vision of the game, and we quickly realized that we were going to have to go about it differently if we were going to continue with our idea.

OpenCV

When we realized the current version of the IPC SDK wasn’t going to cut it by itself, we started looking into alternatives. Chris had done some study of OpenCV (CV stands for computer vision) a while back, and he had a book on the subject. He suggested that we take a look at that library to see if anyone else had written more effective head and eye tracking algorithms using that tool-set. We also discovered what looked like a very polished and effective head tracking library called OpenTL. We got very excited about what we saw, but when we went to download the library, we discovered that OpenTL isn’t so open after all. It’s not actually open source software, and we didn’t want to get involved with licensing a 3rd party tool for the competition. Likewise, the FaceAPI from SeeingMachines looked very promising, but it too carries a proprietary license. Luckily, what we found using OpenCV appeared to be more than capable of doing the job.

Since OpenCV is a native C/C++ library, we needed to figure out how to get it to work within Unity. We knew that we would need to compile a DLL that would expose the functions to the Mono-based Unity environment, or find a version out on the Internet that had already done this. Luckily we found this example and incorporated it into our plans.
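A minimal sketch of that bridging pattern is below; the library name and exported functions are hypothetical stand-ins, not the project’s actual exports.

```csharp
using System.Runtime.InteropServices;
using UnityEngine;

// Calls into a native (OpenCV-based) tracking DLL from Unity via P/Invoke.
// "FaceTrackingPlugin", InitTracker and GetHeadPosition are hypothetical names.
public class NativeFaceTracker : MonoBehaviour
{
    [DllImport("FaceTrackingPlugin")]
    private static extern void InitTracker(int width, int height);

    [DllImport("FaceTrackingPlugin")]
    private static extern bool GetHeadPosition(out float x, out float y, out float z);

    void Start()
    {
        InitTracker(640, 480); // assumed camera resolution
    }

    void Update()
    {
        float x, y, z;
        if (GetHeadPosition(out x, out y, out z))
        {
            // Hand the tracked position (in meters) off to the rest of the game.
            Debug.Log("Head at (" + x + ", " + y + ", " + z + ")");
        }
    }
}
```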

Use of the Depth Camera

The other concern we had was that all the examples we saw of face tracking in real time didn’t make use of any special camera. They all used a simple webcam, and we really wanted to leverage the unique hardware that Intel provided us for the challenge. One subtle thing that we noticed with most of the examples we saw was that they performed way better with the person in front of a solid background. The less noise the image had, the better they would perform. So, we thought, why not use the depth sensor to block out anything behind the user’s head, essentially guaranteeing less noise in the images being processed regardless of what’s behind our player? This would be a huge performance boost over traditional webcams!
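A sketch of that masking step (the buffer layout and depth threshold are assumptions):

```csharp
using UnityEngine;

// Blanks out any RGB pixel whose aligned depth is missing or farther away
// than the threshold, so only the player in the foreground reaches the
// face-tracking code.
static class DepthMask
{
    public static void MaskBackground(Color32[] rgb, ushort[] alignedDepthMm,
                                      int width, int height, ushort maxDepthMm)
    {
        for (int i = 0; i < width * height; i++)
        {
            ushort d = alignedDepthMm[i];
            if (d == 0 || d > maxDepthMm)           // invalid or too far away
            {
                rgb[i] = new Color32(0, 0, 0, 255); // knock the pixel out
            }
        }
    }
}
```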

Application Flow and Architecture

After carefully considering our tools, we finally settled on an architecture that spelled out how all the pieces would work together. We would use the Unity IPC SDK to get the camera frames as raw images, and to get the depth sensor data so we could keep only the portion of the image containing the person’s head. We would then leverage OpenCV for the face tracking algorithms via a plugin to Unity.

We will be experimenting with a few different combinations of algorithms until we find something that gives us the performance we need to implement as a game controller and (hopefully) also satisfies the desired feature set of tracking the head position and rotation, identifying if the mouth is open or closed, and tracking the gaze direction of the eyes. Each step in the process is done to set up the steps that follow.

In order to detect the general location of the face, we propose to use the Viola-Jones detection method.  The result of this method will be a smaller region of interest (ROI) for mouth and eye detection algorithms to sort through.

There are a few proposed methods to track the facial features and solve for the rotation of the head. The first method is to use the results from the first pass to define three new ROIs and to search specifically for the mouth and the eyes using sets of comparative images designed specifically for the task. The second method is to use an Active Appearance Model (AAM) to match a shape model of facial features in the region. We will go into more detail about these methods in future posts after we attempt them.

Tracking the gaze direction will be done by examining the ROI for each eye and determining the location of the iris and pupil by the Adaptive EigenEye method.

Trackable points will be constrained with Lucas-Kanade optical flow.  The optical flow compares the previous frame with the current one and finds the most likely locations of tracked points using a least squares estimation.

Summing it Up

We believe that we’ve come up with an approach that leverages the unique capabilities of the Perceptual Computing Camera and actually adds to the user experience of our game concept. As we start in on the development, it’s going to be interesting to see how much this changes over the next seven weeks. We already have several questions about how it’s going to go: How much of what we think will work actually will? What performance tuning will we need to do? How many of the other features of the IPC SDK can we leverage to make our game more engaging? Will we have enough time to pull off such an ambitious project in such a short time frame?

Wow! That turned out to be a long post! Thanks for taking the time to read what we are up to.

We are also curious to hear from you, other developers out there. What would you do differently given our goals for the project? If you’ve got experience with computer vision algorithms, or even just want to chime in with your support, we would love to hear from you!


Seven weeks. Seven teams. ONE ULTIMATE APP!

February 6th, 2013 by Rosie

Infrared5 and Brass Monkey are excited to announce their participation in Intel Software’s Ultimate Coder Challenge, ‘Going Perceptual’! The IR5/Brass Monkey team, along with six other teams from across the globe, will be competing in this seven-week challenge to build the ultimate app. The teams will be using the latest Ultrabook convertible hardware, along with the Intel Perceptual Computing SDK and camera, to build their prototypes. The competitors range from large teams to individual developers, and each will take a unique approach to the challenge. The question is which team or individual can execute their vision with the most success under such strict time constraints.

Here at Infrared5/Brass Monkey headquarters, we have our heads in the clouds and our noses to the grindstone. We are dreaming big, hoping to create a game that will take user experience to the next level. We are combining gameplay experiences like those available on Nintendo’s Wii U and Microsoft’s Kinect. The team will use the Intel Perceptual Computing SDK for head tracking, which will allow the player to essentially peer into the tablet/laptop screen like a window. The 3D world will change as the player moves his head. We’ve seen other experiments that do this with other technology and think it is really remarkable. This one using Wiimotes by Johnny Lee is one of the most famous. Our team will be exploring this effect and other uses of the Intel Perceptual Computing SDK combined with the Brass Monkey SDK (using a smartphone as a controller) to create a cutting-edge, immersive experience. Not only that, but our creative team is coming up with all-original IP to showcase the work.

Intel will feature documentation of the ups and downs of this process for each team, beginning February 15th. We will be posting weekly on our progress, sharing details about the code we are writing, and pointing out the challenges we face along the way. Be sure to check back here as the contest gets under way.

What would you build if you were in the competition? Let us know if you have creative ideas on how to use this technology; we would love to hear them.

We would like to thank Intel for this wonderful opportunity and wish our competitors the best of luck! Game on!


Unity 4 – Looking Forward

September 28th, 2012 by Anthony Capobianchi

Here at Infrared5, a good portion of our projects are based in Unity 3D. Needless to say, with the introduction of Unity 4, I was very interested in what had changed in the upcoming engine, and why people should make the upgrade. This post will look at a few of the features of Unity 4 that I am most excited about.

The New GUI

The first time I sat down with Unity almost a year ago to work on Brass Monkey’s Monkey Dodgeball game, I knew practically nothing about the engine. That didn’t stop me from being almost immediately annoyed with Unity’s built-in GUI system. Positioning elements from OnGUI was a task of trial and error, grouping objects together was a pain, and all the draw calls that it produced made it inefficient to boot. At that time, I was unaware of the better solutions to Unity’s GUI that had been developed by third-party developers, but once I was made aware of them, I was confused as to why such a robust development tool as Unity didn’t have these already built in.

Though the new GUI system is not a launch feature for Unity 4, Unity is building an impressive system for user interface that will allow for some really interesting aesthetics for our games. From the looks of it, the new system seems to derive from Unity’s current vein of GUIText and GUITexture objects. The difference is in the animation capabilities of each element that is created. You are now allowed to efficiently have multiple elements make up your GUI objects such as buttons, health bars, etc. Unity then allows you to animate those elements individually. Not to mention that editing text styles in the GUI is now as easy as marking it up with HTML.

One of the coolest additions is the ability to position and resize any UI element with transform grabbers that anyone who has used an Adobe product would be familiar with. This also allows for the creation of rotating elements in 3D space, which allows for creating a GUI with a sense of space and depth to it. This can lead to some really interesting effects.

The new GUI system will come packaged with pre-built controls, though there is no word as to whether or not those controls will be customizable. Unity lists one new control as a “finely tuned thumbstick [controls] for mobile games”. A couple of months ago, I developed my own thumbstick-like controls to maneuver in 3D space, and it was a pain. Hopefully these new controls will make it a lot easier. You can also easily create your own controls by extending from the GUIBehavior script. Developers should have no problem creating controls that handle the specifics of their own games.

Every image that you use to create your elements gets atlased automatically. This is a huge bonus over the old GUI system. The biggest problem Unity’s GUI system has right now is the number of draw calls it makes to render all those elements. Third-party tools like EZGUI and iGUI rely on creating UI objects that atlas images to reduce draw calls. It will be nice to have that kind of functionality in a built-in system. I’ve spent a lot of time developing user interfaces in Unity over the past few months, so it makes me really excited to see that Unity is trying to correct some of their flaws for creating a component that is so important to games.

Mecanim
Unity’s current animation system is pretty basic: add animations to an object and trigger those animations based on whatever input or conditions are needed. The animation blending was useful but could have been better. With Unity 4, it is better. Introducing Mecanim: an animation blending system that uses blend trees for models with rigged bones to move fluidly from one animation to another. One of the biggest hurdles that we as developers need to overcome in projects that deal with a lot of animations is transitioning between those animations as seamlessly as possible. Not always easy!

Along with blending the animations, Mecanim allows you to edit your animations similar to how you would edit a film clip to create animation loops. Mecanim also supports IK, so for example it can change the position of a character’s feet on uneven surfaces, bend hands around corners, etc. A couple of years ago I was fascinated by NaturalMotion’s Endorphin engine for animation blending. Mecanim may not be as sophisticated as Endorphin, and only supports biped skeletons, but it seems like an incredible system that comes built into Unity.

The best part about this is that once you create a blend tree for your animations, you can drag and drop it onto another rigged model, and it will work even if the new model is a different size or proportion.


The Mobile Platform

The mobile scene is really where Unity shines. Most of the Unity projects I have worked on for Infrared5 have had some sort of mobile component to them. The mobile platform is going to get even better with Unity 4. The most interesting thing from a developer’s standpoint is the profiling system, which allows you to view your game’s GPU performance to determine where it runs smoothly, and where it needs more optimization. The addition of real-time shadows for mobile is a nice added bonus. It will definitely add a lot of aesthetic value to the products we make.

Unity 4 is going to hit the industry with amazing force. I, for one, cannot wait to get my hands on this engine and am already filled with ideas on how I want to utilize these new tools. My favorite part is going to be the mobile optimization. Mobile development is huge, and it’s not going anywhere anytime soon. With all the new capabilities of Unity’s mobile content, I should be kept interested for quite a while.


To plug in, or not to plug in: that is the question! 

May 17th, 2012 by Elliott Mitchell

In recent years, we have seen a tremendous amount of attention paid to what can only be described as a debate between browser-based plugins and their more standards-based equivalents, HTML & JavaScript. Granted, even plugin providers can argue that they have open standards, but HTML definitely has its roots in standards processes like the W3C’s, which are widely accepted by the web community. While we don’t want to go down the route of arguing for either side, it’s quite interesting to consider some of the information freely circulating on the web.

Let’s start off by examining some of the requirements of a plugin-based deployment. If a webpage requires a plugin, the end user will often be prompted to install or update it before they can proceed. This prompt is often met with resistance by users who either don’t know what the plugin is, have a slow Internet connection, or receive security warnings about installing the plugin. While the steps to install browser-based plugins may present difficulties for some, most online statistics show that this hasn’t really affected adoption rates.

To address this, I thought it would be helpful to take a peek at the current trajectory of plugin usage, plugin alternatives like HTML5, and browser usage, to better inform developers deciding whether or not to create plugin-dependent content for the web browser. Let’s first take a look at desktop web browser plugin usage between September 2008 and January 2012 as measured by statowl.com:

Flash – 95.74%
Java – 75.63%
Silverlight – 67.37%
QuickTime – 53.99%
Windows Media Player – 53.12%

Unity – ?% (numbers not available; estimated at 120 million installs as of May 2012)

Flash has been holding strong and is installed on more than 95% of all desktop computers. Flash is fortunate that two years after its launch, deals were made with all the major browsers to ship with Flash pre-installed. Pre-installs, YouTube, Facebook and 15 years on the market have made Flash the giant it is. Flash updates require user permission and a browser reboot.

Java support in browsers has been holding steady for the past four years at between 75% and 80%. Some of these updates can be hundreds of megabytes to download as system updates. At least on Windows systems, Java updates sometimes require a system reboot. Apple has deprecated Java as of the release of OS X 10.6 Update 3 and is hinting at not supporting it in the future, at which point Java would rely on manual installation.

Interestingly enough, Microsoft Silverlight’s plugin install base has been steadily rising over the past four years from under 20% to almost 70% of browsers. Silverlight requires a browser reboot as well.

Both Windows Media support and Apple’s QuickTime support have seen installs drop steadily over the past four years, down from between 70% and 75% to a little more than 50%. It is worth pointing out that both of these plugins are limited in their functionality when compared to the previously discussed plugins and Unity, mentioned below. QuickTime updates for OS X are handled through system updates. Windows Media Player updates are handled by Windows system updates. Both Windows and OS X require rebooting after updates.

The Unity Web Player plugin has been on the rise over the past four years, although numbers are difficult to come by. The unofficial word from Unity is that it has approximately 120 million installs. This is impressive given that Unity emerged from relative obscurity four years ago. Unity provides advanced capabilities and rich experiences. Unity MMOs, like Battlestar Galactica, have over 10 million users. Social game portals like Facebook, Brass Monkey and Kongregate are seeing a rise in Unity content. Unity now targets the Flash Player to leverage Flash’s install base. The Unity plugin doesn’t require rebooting anything.

So what about rich content in the desktop browser without a plugin? There are currently two options for that. The first option is HTML5 on supported browsers. HTML5 is very promising and an open standard, but not every browser fully supports it. HTML5 runs best on Maxthon and Chrome at the moment. Take a peek at html5test.com to see how desktop browsers score on supporting HTML5 features.

The second option for a plugin-free rich media experience in the browser is Unity running natively in Chrome. That’s a great move for Chrome and Unity. How pervasive is Chrome? Check out these desktop browser statistics from around the world, ranging from May 2011 to April 2012, according to StatCounter:

IE 34.07% – Steadily decreasing
Chrome 31.23% – Steadily increasing
Firefox 24.8% – Slightly decreasing
Safari 7.3% – Very slightly increasing
Opera 1.7% – Holding steady

Chrome installs are on the rise and IE’s are falling. At this time, Chrome’s rapid adoption is great for both Unity and HTML5. A big question is when Unity will run natively in IE, Firefox and/or Safari.

We’ve now covered the adoption statistics of many popular browser-based plugins and the support for HTML5 provided by the top browsers. There may not really be a debate at all. It appears that there are plenty of uses for each technology at this point. It is my opinion that if the web content is spectacularly engaging and innovative and has inherent viral social marketing hooks integrated, you can proceed on either side of the divide.

