Boid Flocking and Pathfinding in Unity

June 20th, 2012 by Anthony Capobianchi

The quest for creating believable, seemingly intelligent movement among groups of characters is a challenge many game developers encounter. A while back, I had an idea for an app that required units of moveable objects to be able to coordinate and get from point ‘A’ to point ‘B’. I had never built anything like an RTS before, so this concept was something new to me. I searched forums and articles looking for answers on how people achieve this sort of behavior. The majority of the help I could find pointed to a system referred to as “Boids” or “Flocking”. A Boid system can be set to simulate flocks or herds, allowing moving units to animate independently and giving the illusion of artificial intelligence. Over the next three blog posts, I will outline solutions to the following problems:

  1. Boid System – Creating a system to keep objects together in a coherent way.
  2. Radar Class – Creating a class that will detect if any Boids are within a certain distance from another Boid.
  3. Path and Obstacle Avoidance – Getting an object to follow a path as well as avoid obstacles that keep them from getting to their destination.
  4. Ray Caster – Setting up a ray caster that will be used to place a destination object in the scene for the Boids.
  5. Destination Points – Organizing a number of different destination points for the Boids that will prevent them from piling on top of each other.

BOID SYSTEM

Normally, I would split the functions into different scripts depending on their purpose: for instance, a script for calculating the Boid behavior force, a script for the radar, and a script for path calculation. However, to keep the script count down and to avoid the confusion of not always knowing which script a function belongs in, I consolidated everything into only a few scripts:

I.    Boid.cs
II.    Destination.cs
III.    DestinationManager.cs

NOTE: All properties and variables should go at the top of your classes. In my code examples I am putting the properties above the methods to show you which properties you need in your script and why.

•    Boids

The Boid system is accomplished by creating a script (which I named Boid.cs) that controls the behaviors of the Boids’ basic movement. This includes coherency, which is the will to stick together, and separation, which is the will to keep apart. If you are unfamiliar with the concept of Boids or flocking, a great article about it can be found at http://www.vergenet.net/~conrad/boids/pseudocode.html and a C# Unity example can be found here: http://virtualmore.org/wiki/index.php?title=SimpleBoids

To set up my Boid.cs script, I set up these properties:
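
A minimal sketch of what those properties might look like (the names and default values here are my own illustration of the approach, not a verbatim listing):

    using UnityEngine;
    using System.Collections.Generic;

    public class Boid : MonoBehaviour
    {
        // Tuning values for the flocking forces (illustrative defaults).
        public float cohesionStrength = 1.0f;    // pull toward neighbors' center
        public float separationStrength = 1.5f;  // push away from crowded neighbors
        public float separationDistance = 2.0f;  // how close is "too close"
        public float scanRadius = 10.0f;         // radar range
        public float scanInterval = 0.25f;       // seconds between radar scans

        // Filled in by the radar scan; read by the behavior calculations.
        private List<Boid> neighbors = new List<Boid>();
    }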

My Boid behaviors are set up like this:
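
And a sketch of the two behaviors described above: coherency, steering a Boid toward the average position of its neighbors, and separation, pushing it away from any neighbor that gets too close. Both methods would live inside the Boid class sketched above:

    // Coherency: steer toward the average position of the detected neighbors.
    Vector3 Cohesion()
    {
        if (neighbors.Count == 0) return Vector3.zero;
        Vector3 center = Vector3.zero;
        foreach (Boid b in neighbors)
            center += b.transform.position;
        center /= neighbors.Count;
        return (center - transform.position).normalized * cohesionStrength;
    }

    // Separation: push away from neighbors inside separationDistance,
    // weighting closer neighbors more heavily.
    Vector3 Separation()
    {
        Vector3 push = Vector3.zero;
        foreach (Boid b in neighbors)
        {
            Vector3 away = transform.position - b.transform.position;
            float dist = away.magnitude;
            if (dist > 0f && dist < separationDistance)
                push += away.normalized / dist;
        }
        return push * separationStrength;
    }

    // The combined steering force applied to the Boid each frame.
    Vector3 FlockingForce()
    {
        return Cohesion() + Separation();
    }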

• RADAR CLASS

We need a way for every Boid to know if there are other Boid objects surrounding it within a certain radius. To create this effect, we will write functions that handle radar scans and what the scanner is looking for. The radar will be called to scan a few times every second, rather than from the Update function. If it were called in Update, every frame would make a collision check using Physics.OverlapSphere, which could cause frame rates to drop, especially if you have a lot of Boids in the scene.

In my Boid.cs script my Radar functions are set up like this:
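
A sketch of that timed scan, assuming an InvokeRepeating timer and a plain Physics.OverlapSphere check (again illustrative rather than a verbatim listing):

    void Start()
    {
        // Scan on a timer instead of in Update; the random start offset
        // keeps all Boids from scanning on the same frame.
        InvokeRepeating("RadarScan", Random.value, scanInterval);
    }

    void RadarScan()
    {
        neighbors.Clear();
        Collider[] hits = Physics.OverlapSphere(transform.position, scanRadius);
        foreach (Collider hit in hits)
        {
            Boid other = hit.GetComponent<Boid>();
            if (other != null && other != this)
                neighbors.Add(other);
        }
    }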

In my next post, I will explain how I solved the problem of getting an object to follow a path while avoiding obstacles. In addition, I will explain what is needed to apply the forces that are calculated by the Boid and pathfinding systems to get our characters moving.
Anthony Capobianchi


To plug in, or not to plug in: that is the question! 

May 17th, 2012 by Elliott Mitchell

In recent years, we have seen a tremendous amount of attention paid to what can only be described as a debate between browser-based plugins and their more standards-based equivalent technologies, HTML & JavaScript. Granted, even plugin providers can argue that they have open standards, but HTML definitely has its roots in standards processes like the W3C’s, which are widely accepted by the web community. While we don’t want to go down the route of arguing for either side, it’s quite interesting to consider some of the information freely circulating on the web.

Let’s start off by examining some of the requirements of a plugin-based deployment. If a webpage requires a plugin, the end user will often be prompted to install or update it before they can proceed. This prompt is often met with resistance by users who don’t know what the plugin is, have a slow Internet connection or receive security warnings about installing it. While these installation steps may present difficulties for some, most online statistics show that they haven’t really affected adoption rates.

To address this, I thought it would be helpful to take a peek at the current trajectory of plugin usage, plugin alternatives like HTML5, and browser usage, to better inform developers deciding whether or not to create plugin-dependent content for the web browser. Let’s first take a look at desktop web browser plugin usage between September 2008 and January 2012 as measured by statowl.com:

Flash – 95.74%
Java Support – 75.63%
Silverlight Support – 67.37%
QuickTime Support – 53.99%
Windows Media Player Support – 53.12%

Unity – ?% (numbers not available, estimated at 120 million installs as of May 2012)

Flash has been holding strong and is installed on more than 95% of all desktop computers. Flash is fortunate that two years after its launch, deals were made with all the major browsers to ship with Flash pre-installed. Pre-installs, YouTube, Facebook and 15 years on the market have made Flash the giant it is. Flash updates require user permission and a browser reboot.

Java Support updates for browsers have been holding steady for the past four years between 75% and 80%. Some of these updates can be hundreds of megabytes to download as system updates. At least on Windows systems, Java updates sometimes require a system reboot. Apple has deprecated Java as of the release of OS X 10.6 Update 3 and is hinting at not supporting it in the future, at which point Java would rely on manual installation.

Interestingly enough, Microsoft Silverlight’s plugin install base has been steadily rising over the past four years from under 20% to almost 70% of browsers. Silverlight requires a browser reboot as well.

Both Windows Media Player support and Apple’s QuickTime support have seen installs drop steadily over the past four years, down from between 70% and 75% to a little more than 50%. It is worth pointing out that both of these plugins are limited in their functionality when compared to the previously discussed plugins and Unity, mentioned below. QuickTime updates for OS X are handled through system updates. Windows Media Player updates are handled by Windows system updates. Both Windows and OS X require rebooting after updates.

The Unity Web Player plugin has been on the rise over the past four years, although numbers are difficult to come by. The unofficial word from Unity is that it has approximately 120 million installs. This is impressive given that Unity emerged from relative obscurity four years ago. Unity provides advanced capabilities and rich experiences. Unity MMOs, like Battlestar Galactica, have over 10 million users. Social game portals like Facebook, Brass Monkey and Kongregate are seeing a rise in Unity content. Unity now targets the Flash player to leverage Flash’s install base. *The Unity plugin doesn’t require rebooting anything (see below).

So what about rich content in the desktop browser without a plugin? There are currently two options for that. The first option is HTML5 on supported browsers. HTML5 is very promising and open source, but not every browser fully supports it. HTML5 runs best on Marathon & Chrome at the moment. Take a peek at html5test.com to see how desktop browsers score on supporting HTML5 features.

The second option for a plugin free rich media content experience in the browser is Unity running natively in Chrome. That’s a great move for Chrome and Unity. How pervasive is Chrome? Check out these desktop browser statistics from around the world ranging between May 2011 to April 2012 according to StatCounter:

IE 34.07% – Steadily decreasing
Chrome 31.23% – Steadily increasing
Firefox 24.8% – Slightly decreasing
Safari 7.3% – Very slightly increasing
Opera 1.7% – Holding steady

Chrome installs are on the rise and IE is falling. At this time, Chrome’s rapid adoption rates are great for both Unity and HTML5. A big question is when will Unity run natively in IE, Firefox and/or Safari?

We’ve now covered the adoption statistics of many popular browser based plugins and the support for HTML5 provided by the top browsers. There may not really be a debate at all. It appears that there are plenty of uses for each technology at this point. It is my opinion that if the web content is spectacularly engaging, innovative and has inherent viral social marketing hooks integrated, you can proceed on either side of the divide.


GDC12 – Game Developer Conference 2012: a Post-Mortem

March 30th, 2012 by Elliott Mitchell

GDC12- AaaaaAAaaaAAAaaAAAAaAAAAA!!! (Force = Mass x Acceleration) by Dejoban Games and Owlchemy Labs, played by Oleg Pridiuk (Unity Technologies) as Ichiro Lambe (Dejobaan Games) and Deniz Opal (Cerebral Fix) watch - Photo Elliott Mitchell (Infrared5)

This year’s Game Developer Conference (GDC) 2012 was networking, networking and more networking.

Within a one-mile proximity of the San Francisco Moscone Center, hordes of game developers and artists could be seen in the streets, cafes, bars, mall food courts and hotel lobbies, talking shop, showing off their games, catching up with friends, debating the ethics of cloning social games from indies, and shopping for publishers, contractors and jobs. It was an intense meeting of the minds of the people who make games, right in the streets of San Francisco.

Google Huddle chats, Google Groups email, shared Google Calendars and Twitter were all utilized very effectively to make the most of GDC. Multitudes of varied networking opportunities streamed in real-time through my iPhone 24/7. The level of my success at GDC was determined by how much networking I could possibly handle. With the help of my friends and the social/mobile networks, success was at my fingertips.

In addition to the obsessive networking, there were many other valuable aspects of GDC. I’ll briefly highlight a few:

Jeff Ward’s Pre-GDC Board Game Night

GDC12- Elliott Mitchell (Infrared5), John Romero (Loot Drop), Brenda Garno Brathwaite (Loot Drop) & Elizabeth Sampat (Loot Drop) playing games at Jeff Ward's (Fire Hose Games) 3rd Annual Pre-GDC Board Game Night - Photo Drew Sikora

Jeff Ward (Fire Hose Games) knows how to get an amazing collection of game designers and developers together for a night playing board games. This was one of my favorite events of GDC. When else would I ever be able to play board games with John Romero (Loot Drop) and Brenda Garno Brathwaite (Loot Drop) while enjoying hors d’oeuvres and spirits? The crowd was a rich blend of artists, game developers, game designers, indies, students and superstars. There were so many new and classic games to play. I personally played Family Business and a really fun indie game prototype about operating a successful co-operative restaurant. Walking around after playing my games, I observed a host of other cool games being played and pitched. I’ll definitely be back for this event next year.

Independent Games Summit and Main Conference Sessions

GDC12 Ryan Creighton (Untold Entertainment) presenting Ponycorns: Catching Lightning in a Jar- Photo Elliott Mitchell (Infrared5)

Many session topics were super interesting but it wasn’t possible to attend all of them. Luckily, those with a GDC All-Access pass have access to the GDC Vault filled with recorded sessions. Here are a few sessions I saw which I found useful and interesting:

*Perhaps a Time of Miracles Was at Hand: The Business & Development of #Sworcery (Nathan Vella – Capy Games)

*The Pursuit of Indie Happiness: Making Great Games without Going Crazy (Aaron Isaksen – Indie Fund LLC)

*Ponycorns: Catching Lightning in a Jar (Ryan Creighton – Untold Entertainment)

*Light Probe Interpolation Using Tetrahedral Tessellations (Robert Cupisz – Unity Technologies)

Independent Game Festival Contestants on the Expo Floor

I played a bunch of the Independent Games Festival contestants’ games on the Expo floor before the festival winners were announced.

GDC12 - Alex Schwartz (Owlchemy Labs) playing Johann Sebastian Joust (Die Gute Fabrik) - Photo Elliott Mitchell (Infrared5)

There was a whole lot of innovation on display from this group. I particularly loved Johann Sebastian Joust (Die Gute Fabrik), a game without graphics, and Dear Esther (thechineseroom), which is stunning eye candy. Check out all the games here.

12th Annual Game Developer Choice Awards

I was super stoked to see two indies win big!

Superbrothers: Sword & Sorcery EP (Capy Games/Superbrothers) took the Best Handheld/Mobile Game award.

Johann Sebastian Joust (Die Gute Fabrik) won the Innovation Award. Johann Sebastian Joust is worthy of its own blog post in the future.

EXPO FLOOR

* Unity booth – Cool tech from Unity and development partners showing off their wares
* Google Booth – Go Home Dinosaurs (Fire Hose Games) on Google Chrome
* Autodesk Booth (Maya and Mudbox)
* Indie Game Festival area (all of it)

GDC12 - Chris Allen (Brass Monkey) and Andrew Kostuik (Brass Monkey) at the Unity Booth - Photo by Elliott Mitchell (Infrared5)

GDC PLAY

Lots of cool tech at the 1st Annual GDC Play. Our sister company, Brass Monkey, impressed onlookers with their Brass Monkey Controller for mobile devices and the Play Brass Monkey web portal for both 2D and 3D games.

UNITY FTW!

Last but not least, the most useful and pleasurable highlight of GDC was face time with the Unity Technology engineers and management. Sure, I’m on email, Skype, Twitter and Facebook with these guys but nothing is like face to face time with this crew. Time and access to Unity’s founders, engineers, evangelists and management is worth the price of GDC admission. Can’t wait until Unite 2012 in Amsterdam and GDC13 next March!


Top 10 GDC Lists

March 1st, 2012 by Elliott Mitchell

GDC is approaching next week and I’ll be traveling to San Francisco to participate in the epic game developer event. I’m psyched and here’s why:

TOP 10 GDC RELATED THINGS I’M EXCITED ABOUT

10  The Expo Floor
9    The History Of 3D Games exhibit
8    Experimental Gameplay Sessions
7    The Unity Party
6    Indie Game: The Movie screening & Panel
5    GDC Play
4    14th Annual Independent Games Festival Awards
3    Networking, Networking & Networking
2    Independent Games Summit
1    Unity Technology Engineers

TOP 10 GDC SESSIONS I’M LOOKING FORWARD TO

10  The Pursuit of Indie Happiness: Making Great Games without Going Crazy
9    Rapid, Iterative Prototyping Best Practices
8    Experimental Gameplay Sessions
7    Create New Genres (and Stop Wasting Your Life in the Clone Factories) [SOGS Design]
6    BURN THIS MOTHERFATHER! Game Dev Parents Rant
5    Bringing Large Scale Console Games to iOS Devices: A Technical Overview of The Bard’s Tale Adaptation
4    Light Probe Interpolation Using Tetrahedral Tessellations
3    Big Games in Small Packages: Lessons Learned In Bringing a Long-running PC MMO to Mobile
2    Art History for Game Devs: In Praise of Abstraction
1    Android Gaming on Tegra: The Future of Gaming is Now, and it’s on the Move! (Presented by NVIDIA)

If you’re going to be at GDC and want to talk shop with Infrared5 then please ping us! info (at) Infrared5 (dot) com


Gaming Ouroboros at the Global Game Jam 2012

February 6th, 2012 by Elliott Mitchell

Now and then, as a professional 3D technical artist and game designer, I find it’s helpful to step out of my usual routine and make a game over a weekend. Why? Because it keeps life fresh and exciting while providing a rare sense of instant gratification in the crazy world of video game development. Making a video game over a weekend isn’t easy for one person alone. For this, Global Game Jam was created.

This year’s Global Game Jam was held January 27 – 29, 2012. I registered with the Singapore-MIT GAMBIT Game Lab in Cambridge, Massachusetts. Here is the lowdown on my experience.

Global Game Jam 2012 - Photo Courtesy Michael Carriere

The Global Game Jam (GGJ) is an annual International Game Developer Association (IGDA) game creation event. The event unites people from across the globe to make games in under 48 hours. Anyone is welcome to participate in the game jam. Jammers range from industry professionals to hobbyists and students. The primary framework is that under common constraints, each team completes a game, without preconceived ideas or preformed teams, in under 48 hours. This is intended to encourage creativity, experimentation and collaboration resulting in small but innovative games. To support this endeavor, schools, businesses and organizations volunteer to serve as official host sites. Several prominent sponsors such as Loot Drop, Autodesk, Microsoft and Brass Monkey also helped foot the bill.

HOW IT WENT DOWN

Keynote -

Brenda Brathwaite and John Romero addressing the Global Game Jammers 2012 - Photo courtesy Michael Carriere

GGJ site facilitators kicked off the Jam with a pre-recorded video from the IGDA website titled How to Build A Game in Less Than 48 Hours. The speakers in the video were Gordon Bellamy, the Executive Director of the IGDA; John Romero (Quake) and Brenda Brathwaite (Wizardry), both co-founders of Loot Drop; Gonzalo Frasca (Ludology.org), the co-founder of Powerful Robot Games; and Will Wright (The Sims), co-founder of Maxis. The speakers all gave excellent advice on creativity, leadership, scope and collaboration within a game jam.

Global Constraint -

Ouroboros

Our primary constraint was revealed after the keynote video. It was an image of a snake eating its own tail. The snake represented the Ouroboros, a Greek symbol of immortality. Variations of the symbol span time and space from the modern day back to antiquity. The snake, or dragon in some instances, eating its own tail has made appearances in ancient Egypt, Greece, India, Mexico, West Africa, Europe, South America and elsewhere under a host of names. Its meaning can be interpreted as opposites merging in a unifying act of cyclical creation and destruction, immortal for eternity. To alchemists, the Ouroboros symbolized the Philosopher’s Stone.

Group Brainstorming –

Brainstorming Global Game Jam 2012

After the keynote, game jammers arbitrarily split into 5 or 6 groups of 11 or so and went into different labs to brainstorm Ouroboros game pitches. After an amusing ricochet of thoughts, references, revisions, personalities and passions, each room crafted 6 pitches, mostly within the scope of the 48-hour Game Jam.

Pitch and Choose -

When the groups reassembled into the main room, it was time to pitch.

The Rules-

  • Pitches needed to be under a minute
  • Title is 3 words or less
  • Theme related to the Ouroboros
  • The person pitching a game did not necessarily need to be on that potential team

There were about 30 or so pitches, after which each jammer had to choose a role on a game/team that appealed to them. Each jammer had a single piece of color-coded paper with their name, skill level and intended role.

The Roles-

Choose Your Team - Global Game Jam 2012- Photo courtesy Michael Carriere

  • Programmer
  • Artist
  • Game Design
  • Audio
  • Producer

Games with too many team members were pruned, and others lacking members for roles such as programmer were either augmented or eliminated. Eventually, semi-balanced teams of 4-6 members were formed around the 11 most popular pitches.

My team decided to develop our game for the Commodore 64 computer using Ethan Fenn’s Comma8 framework. We thought the game narrative and technology married well.

Time to Jam - Photo Courtesy Michael Carriere

Time to Jam -

Post team formation, clusters of lab space were claimed. Even though most of us also brought our personal laptops, the labs were stocked with sweet dual-boot Windows 7 & OS X systems with cinema displays. The lab computers were pre-installed with industry-standard software such as Unity3D, Maya and Photoshop. We were also provided peripherals such as stylus tablets and keyboards. Ironically, I was most excited by the real-world prototyping materials like blocks and graph paper, which were also provided by our host.

First Things First –

Our space at Global Game Jam 2012 at Singapore - MIT GAMBIT Game Lab

After claiming a lab with another awesome team, we immediately:

  • Set up version control (SVN)
  • Installed custom tools for Comma8 (Python, Java, Sprite Pad, Tiles and more)
  • Confirmed the initial scope of the game
  • Set up a collaborative project management system with a team Google Group and Google Doc

Cut That Out –

We needed to refine the scope once we were all aware of the technical limitations, such as:

  • The Commodore 64, from 1982, is old
  • 64 KB of system RAM is not much
  • 8-bit
  • Programmed in assembly language
  • 320 x 200 pixels
  • 16 pre-determined crappy colors
  • 3 oscillators
  • Rectangular pixels
  • Screen space
  • Developing in emulation on a network
  • Loading and testing a playable on legacy Commodore 64 hardware
  • Less than 48 hours to get it all working
  • Our scope was too big, with too many levels

Other factors causing us to consider limiting the scope further included:

  • None of us had made games for the C64 before
  • Comma8 is an experimental engine, untested in a game jam situation and currently in development by Ethan
  • Tools such as Sprite Pad and Tiles are very archaic and limiting apps for art creation
  • The build process would do strange things to art at build time, which required constant iteration

Rapid Iterative Prototyping -

Walking Backwards Prototype Global Game Jam 2012 - Photo Courtesy Michael Carriere

Physical prototyping was employed to reduce the scope before we went too far down any rabbit holes. We used the following materials to prototype:

  • Glass white board
  • Markers
  • Masking tape on the walls
  • Paper notes tacked to the walls
  • Graph paper
  • Wooden blocks
  • Pens

Results of Physical Prototyping-

  • Cut down scope from 9 levels to 5 levels as the minimum to carry the Ouroboros circular theme of our narrative far enough
  • Nailed the key mechanics
  • Refined the narrative
  • Determined scale and placement of graphical elements
  • Limited overall scope

Naturally, we ran into design roadblocks and needed to revise and adapt a few times. Physical prototyping once again sped up that process and moved us along to completion.

QA-

Global Game Jam 2012 - Photo Courtesy Michael Carriere

We enlisted a few play testers on the second night and final hours of the game jam to help us gauge the following:

  • Playability
  • Comprehension of the narrative
  • Recognition of the lo-res art assets
  • Overall player experiences
  • Feelings about the game
  • Suggestions
  • Bugs

We did wind up having to revise the art, level design and narrative slightly after play testing to reach a better balance and a better game.

Deadline -

Walking Backwards - C64 - Global Game Jam 2012

1.5 hours before the game jam was to end, it was pencils down. Time to upload to the IGDA Global Game Jam website, any other host servers and the site presentation computer. Out of the total 48 hours allotted to the game jam, we only had about 25 working lab hours. Much time was spent on logistics like the keynote video, brainstorming, pitching, uploading and presenting. Our site was also only open from 9 am to midnight, so there was no 24-hour access. Even so, with 25 hours of lab time all 11 games at my site were uploaded and ready for presentation.

Presentations -

Global Game Jam - Singapore-MIT GAMBIT Game Lab Games

The best part ever! The presentations were so exciting. Many of the jammers were so focused on their work they were not aware of what other teams were up to. One by one, teams went up and presented their games in whatever state they were in at the deadline.

Most were pretty innovative, experimental and funny. Titles such as The Ouroboros Hangover and Hoop Snake had the jammers in stitches. Fire-farting dragons, hoop snakes, a drunk Ouroboros and so on were big hits. Unity, HTML5, Flash, Flex, XNA, Comma8 and Flixel were used to create the great games in under 48 hours.

Takeaways -

My teammates and I consider the game we made, Walking Backwards, to be a success. We accomplished our goals:

Walking Backwards Team - Global Game Jam 2012- Photo courtesy Michael Carriere

  • An experimental game
  • A compelling narrative
  • Awesome audio composition
  • Most of the functionality we wanted
  • Runs on an original Commodore 64 with joysticks
  • Can be played with a Java emulator
  • Got to work together under pressure and have a blast

Would have liked-

  • Avatar to animate properly (we had bi-directional sprites made but not implemented)
  • More audio for sound effects

The final takeaway I had, besides feeling simultaneously exhilarated and exhausted, is how essential networking at the game jam is for greater success. Beyond just meeting new people, networking at the jam made or broke some games. Some teams didn’t take time to walk around and talk to other teams. In one instance, a team didn’t figure out an essential ghost mechanic by the end of the jam; they realized at presentation time that another team had implemented, in the same engine, the very mechanic they had failed to nail down. Networking also provided mutual feedback, play testing, critique, advice, friendships and rounds of beer after the event ended. Many of the jammers now have a better sense of each other’s strengths and weaknesses, their performance under stress, and their abilities to collaborate, lead and follow.

I, for one, will be a lifelong game jammer, ready to collaborate while pushing into both familiar and new territories of game development with various teams, themes and dreams.

Follow this link to see all the games created at my site, hosted by the Singapore-MIT GAMBIT Game Lab.

——

Elliott Mitchell

Technical Director- Infrared5

Twitter: @Mrt3d


Creating 2nd UV sets in Maya for Consistent and Reliable Lightmapping in Unity 3d

January 11th, 2012 by Elliott Mitchell

Lightmaps in the Unity Editor - Courtesy of Brass Monkey - Monkey Golf

Have you ever worked on a game that was beautifully lit in the Unity editor but ran like ants on molasses on your target device? Chances are you might benefit from using lightmaps. Ever worked on a game that was beautifully lit with lightmaps but looked different between your Mac and PC in the Unity editor? Chances are you might want to create your own 2nd UV sets in Maya.

Example of a lightmap

If you didn’t know, lightmaps are 2D textures pre-generated by baking (rendering) lights onto the surfaces of 3D objects in a scene. These textures are additively blended with the 3D model’s original textures to simulate illumination and fine shadows without the use of realtime lights at runtime. The number of realtime lights rendering at any given time can make or break a 3D game when it comes to optimal performance. By reducing the number of realtime lights and shadows, your games will play through more smoothly. Using fewer realtime lights also allows for more resources to be dedicated to other aspects of the game, like higher poly counts and more textures. This holds true especially when developing for most 3D platforms including iOS, Android, Mac, PC, Web, Xbox, PS3 and more.

Since the release of Unity 3 back in September 2010, many Unity developers have been taking advantage of Beast lightmapping as a one-stop lightmapping solution within the Unity editor. At first glance, Beast is a phenomenal time-saving and performance-enhancing tool. Rather quickly, Beast can automate several tedious tasks that would otherwise need to be performed by a trained 3D technical artist in an application like Maya. Those tasks, mostly UV related, are:

UVs in positive UV co-ordinate space

  • Generating 2nd UV sets for lightmapping 3D objects
  • Unwrapping 3D geometry into flattened 2D shells which don’t overlap in the 0 to 1 UV co-ordinate quadrant
  • Packing UV shells (arranging the unwrapped 2D shells to optimally fit within a square quadrant with room for mipmap bleeding)
  • Atlasing lightmap textures (combining many individual baked textures into larger texture sheets for efficiency)
  • Indexing lightmaps (linking multiple 3D models’ 2nd UV set co-ordinate data with multiple baked texture atlases in a scene)
  • Additively applying the lightmaps to your existing models’ shaders to give 3D objects the illusion of being illuminated by realtime lights in a scene

Other UV properties may be tweaked in the Advanced FBX import settings, influencing how the 2nd UVs are unwrapped and packed; these may drastically alter your final results and do not always transfer through version control.

Why is this significant? Well, your 3D object’s original UV set is typically used to align and apply textures like diffuse, specular, normal and alpha maps onto the 3D object’s surfaces. There are no real restrictions on laying out your UVs for texturing: UVs may be stretched to tile a texture, they can overlap, be mirrored… Lightmap texturing requirements in Unity, on the other hand, are different and require:

  • A 2nd UV set
  • No overlapping UVs
  • UVs contained in the 0 to 1, 0 to 1 UV co-ordinate space

Unwrapping and packing UVs so they don’t overlap and are optimally contained in 0 to 1 UV co-ordinate space is tedious and time consuming for a tech artist. Many developers without a tech artist purchase 3D models online to “save time and money”. Typically those models won’t have 2nd UV sets included. Beast can unwrap lightmapping UVs for the developer without much effort in the Unity Inspector by:

Unity FBX import settings for Lightmapping UVs

Advanced Unity FBX import settings for Lightmapping UVs

  • Selecting the FBX to lightmap in the Unity Project Editor window
  • Setting the FBX to Static in the Inspector
  • Checking Generate Lightmap UVs in the FBXImporter Inspector settings
  • Changing options in the Advanced Settings if needed

Atlasing multiple 3D models’ UVs and textures is extremely time consuming and not always practical, especially when textures and models may change at a moment’s notice during the development process. Frequent changes to atlased assets tend to create overwhelming amounts of tedious work. Again, Beast’s automation is truly a great time saver, allowing flexibility in atlasing for iterative level design plus scene, object and texture changes in the Unity editor.

Sample atlases in Unity

Beast’s automation is truly great, except when your team is using both Mac and PC computers on the same project with version control. Sometimes lightmaps will appear to be totally fine on a Mac and look completely messed up on a PC, and vice versa. It’s daunting to remedy this, and it may require, among several tasks, re-baking all the lightmaps for the scene.

Why are there differences between the Mac and PC when generating 2nd UV sets in Unity? The answer is that Mac and PC computers have different floating point precisions, used to calculate and generate 2nd UV sets for lightmapping upon import into the Unity editor. The differences between Mac and PC generated UVs are minute but can lead to drastic visual problems. One might assume that with version control like Unity Asset Server or Git the assets would be synced and exactly the same, but they are not. Metadata and version control issues are for another blog post down the road.

What can one do to avoid issues with 2nd UV sets across Mac and PC computers in Unity? Well, here are four of my tips to avoid lightmap issues in Unity:

Inconsistent lightmaps on Mac and PC in the Unity Editor - Courtesy of Brass Monkey - Monkey Golf

  1. Create your own 2nd UV sets and let Beast atlas, index and apply your lightmaps in your Unity scene
  2. Avoid re-importing or re-generating 2nd UV assets if the project is being developed in Unity across Mac and PC computers and you’re not creating your own 2nd UV sets externally
  3. Use external version control like Git with Unity Pro, with metadata set to be exposed in the Explorer or Finder, to better sync changes to your assets and metadata
  4. Use 3rd party editor scripts like Lightmap Manager 2 to help speed up the lightmap baking process by empowering you to re-bake single objects without having to re-bake the entire scene

Getting Down To Business – The How To Section

If your 3D model already has a good 2nd UV set and you want to enable Unity to use it:

  • Select the FBX in the Unity Project Editor window
  • Simply uncheck Generate Lightmap UVs in the FBXImporter Inspector settings
  • Re-bake lightmaps

How do you add or create a 2nd UV set in Maya to export to Unity if you don’t have one already available?

Workflow 1 -> When you already have UVs that are not overlapping and contained within the 0 to 1 co-ordinate space:

  1. Import and select your model in Maya (be sure not to include import path info in your namespaces)
  2. Go to the Polygon Menu Set
  3. Open the Window Menu -> UV Texture Editor to see your current UVs
  4. Go to Create UVs Menu -> UV Set Editor
  5. With your model selected, click Copy in the UV Set Editor to create a 2nd UV set
  6. Rename your 2nd UV set to whatever you want
  7. Export your FBX with its new 2nd UV set
  8. Import the Asset back into Unity
  9. Select the FBX in the Unity Project Editor window
  10. Uncheck Generate Lightmap UVs in the FBXImporter Inspector settings.
  11. Re-bake Lightmaps

Workflow 2 -> When you have UVs that are overlapping and/or not contained within the 0 to 1 co-ordinate space:

  1. Import and select your model in Maya (be sure not to include import path info in your namespaces)
  2. Go to the Polygon Menu Set
  3. Open the Window menu -> UV Texture Editor to see your current UVs
  4. Go to Create UVs menu -> UV Set Editor
  5. With your model selected, click either Copy or New in the UV Set Editor to create a 2nd UV set, depending on whether you want to start from scratch or work from what you already have in your original UV set
  6. Rename your 2nd UV set to whatever you want
  7. Use the UV layout tools in Maya’s UV Texture Editor to lay out and edit your new 2nd UV set, being certain to have no overlapping UVs and to keep everything contained in the 0 to 1 UV co-ordinate space (another tutorial on this step will be in a future blog post)
  8. Export your FBX with its new 2nd UV set
  9. Import the Asset back into Unity
  10. Select the FBX in the Unity Project Editor window
  11. Uncheck Generate Lightmap UVs in the FBXImporter Inspector settings.
  12. Re-bake Lightmaps

Workflow 3 -> Add a second UV set from models unwrapped in a 3rd party UV tool like Headus UV or Zbrush to your 3D model in Maya

  1. Import your original 3D model into a 3rd party application like Headus UV and lay out your 2nd UV set, being certain to have no overlapping UVs and to keep everything contained in the 0 to 1 UV co-ordinate space (tutorials to come)
  2. Export your model with a new UV set for lightmapping as a new version of your model, named something different from the original model.
  3. Import and select your original Model in Maya (be sure not to include import path info in your namespaces)
  4. Go to the Polygon Menu set
  5. Open the Window Menu -> UV Texture Editor to see your current UVs
  6. Go to Create UVs Menu -> UV Set Editor
  7. With your model selected click New in the UV Set Editor to create a 2nd UV set
  8. Select and rename your 2nd UV set to whatever you want in the UV Set Editor
  9. Import the new model with the new UV set, being certain to have no overlapping UVs, all contained in the 0 to 1 UV co-ordinate space
  10. Make sure your two models are occupying the exact same space, with all transform values like translation, rotation and scale being exactly the same
  11. Select the new model in Maya and be sure its new UV set is selected in the UV Set Editor
  12. Shift-select the old model in Maya (you may need to do this in the Outliner) and be sure its 2nd UV set is selected in the UV Set Editor
  13. In the Polygon Menu Set, go to the Mesh Menu -> Transfer Attributes Options
  14. Reset the Transfer Attributes Options settings to default via File -> Reset Settings within the Transfer Attributes menus
  15. Set Attributes to Transfer all to -> Off except for UV Sets to -> Current
  16. Set Attribute Settings to -> Sample Space Topology with the rest of the options at default
  17. Click Transfer at the bottom of the Transfer Attributes Options
  18. Delete non-deformer history on the models (Edit menu -> Delete by Type -> Non-Deformer History) or the UVs will break
  19. Select the original 3D model’s 2nd UV set in the UV Set Editor window and look at the UV Texture Editor window to see if the UVs are correct
  20. Export your FBX with its new 2nd UV set
  21. Import the Asset back into Unity
  22. Select the FBX in the Unity Project Editor window
  23. Uncheck Generate Lightmap UVs in the FBXImporter Inspector settings.
  24. Re-bake Lightmaps

Once you have added your own 2nd UV sets for Unity lightmapping, there will be no lightmap differences between projects in the Mac and PC Unity editors! You will have ultimate control over how 2nd UV space is packed, which is great for keeping down vertex counts from your 2nd UV sets, minimizing mipmap bleeding and maintaining consistent lightmap results!

Keep an eye out for more tutorials on UV and lightmap troubleshooting in Unity coming in the near future on the Infrared5 blog! You can also play Brass Monkey’s Monkey Golf to see our bare examples in action.

-Elliott Mitchell

@mrt3d on Twitter

@infrared5 on Twitter


Aerial Combat in Video Games: A.K.A Dog Fighting

September 27th, 2011 by John Grden

A while back, we produced a Star Wars title for Lucasfilm Ltd. called “The Trench Run” which did very well on iPhone/iPod sales and was later converted to a web game hosted on StarWars.com. Thanks to Unity3D’s ability to let developers create with one IDE while supporting multiple platforms, we were able to produce these 2 versions seamlessly! Not only that, but this was one of our first releases that included the now famous Brass Monkey™ technology, which allows you to control the game experience of The Trench Run on StarWars.com with your iPhone/Android device as a remote control. [Click here to see the video on youtube]

Now, the reason for this article is to make good on a promise I made while speaking at Unite 2009 in San Francisco. I’d said I would go over *how* we did the dog fighting scene in The Trench Run, and I have yet to do so. So, without further delay…

Problem

The problem really breaks down into a few parts:

  1. How do you get the enemy to swarm “around you”?
  2. How do you get the enemy to attack?
  3. What factors into convincing AI?

When faced with a dog fight challenge for the first time, the first question you might have is how to control the enemy so they keep flying around you (engaging you), so one of the most important problems is how to achieve the dog fight AI and playability. The issue was more than just the simple mechanics of dog fighting; it also included having an “endless” scene, performance on an iDevice, and seamlessly introducing waves of enemies without interrupting game flow and performance.

Solution

In debug mode - way point locations shown as spheres

The solution I came up with was to use what I call “way points”. Way points are just another term for GameObjects in 3D space. I create around 10-15 of them, randomly place them within a spherical area around the player’s ship and anchor them to the player’s ship so that they’re always placed relative to the player (position only – they don’t rotate with the player). I parent the way point GameObjects to a GameObject that follows the player, which solves my issue of having vectors always positioned relative to the player. The enemies each get a group of 5 way points and continually fly between them. This keeps the enemies engaged with the player no matter where they fly and gives the enemy the opportunity to get a “lock” on the player. Since the way points move with the player’s position, this also creates interesting flight patterns and behavior for attacking craft, and now we’ve officially started solving our AI problem.


Check out the demo – the camera changes to the next enemy that gets a target lock on the player ship (orange exhaust).  Green light means it has a firing lock, red means it has a lock to follow the player.

Download the project files and follow along.

Setting up the Enemy Manager

The EnemyManager takes care of creating the original way points and providing an API that allows any object to request a range of way points. Its functionality is basic and to the point in this area. It also takes care of creating the waves of enemies and keeping track of how many are in the scene at a time (this demo does not cover that topic; I leave that to you).

First, we’ll create random way points and scatter them around.  Within a loop,  you simply use Random.insideUnitSphere to place your objects at random locations and distances from you within a sphere.  Just multiply the radius of your sphere (fieldWidth) by the value returned by insideUnitSphere, and there you go – all done.
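
A sketch of that loop, as it might sit in the EnemyManager (fieldWidth is the name used in the text; the other names are illustrative, and List<Transform> requires using System.Collections.Generic):

    public float fieldWidth = 50f;   // radius of the sphere the way points live in
    public int wayPointCount = 12;   // "around 10-15 or so"
    private List<Transform> wayPoints = new List<Transform>();

    void CreateWayPoints()
    {
        for (int i = 0; i < wayPointCount; i++)
        {
            GameObject wp = new GameObject("WayPoint" + i);
            // Random point inside a sphere of radius fieldWidth.
            wp.transform.position = transform.position +
                                    Random.insideUnitSphere * fieldWidth;
            // Parent to the object that follows the player's position.
            wp.transform.parent = transform;
            wayPoints.Add(wp.transform);
        }
    }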

Now, the method for handing out way points is pretty straightforward. What we do here is give each enemy craft a random set of way points. By doing this, we avoid giving every enemy an identical set and order of way points.
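
Something along these lines, with each enemy asking the manager for a handful of way points drawn at random (a sketch, not the original listing):

    // Hand back 'count' way points chosen at random; duplicates are possible,
    // but each enemy still ends up with its own ordering.
    public Transform[] GetWayPoints(int count)
    {
        Transform[] set = new Transform[count];
        for (int i = 0; i < count; i++)
            set[i] = wayPoints[Random.Range(0, wayPoints.Count)];
        return set;
    }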

NOTE: You can change the scale of the GameObject that the way points are parented to and create different looking flight patterns. In the demo files, I’ve scaled the GameController GameObject on the Y axis by setting it to 2 in the IDE. Now the flight patterns are more vertical and interesting, rather than flat/horizontal and somewhat boring. Also, changing the fieldWidth to something larger will create longer flight paths and make it easier to shoot enemies. A smaller fieldWidth means that they’ll be more evasive and drastic with their moves. Coupled with raising the actualSensitivity, you’ll see that it becomes more difficult to stay behind an enemy and get a shot on it.

Setting up the enemy aircraft

The enemy needs to be able to fly on its own from way point to way point. Once it’s in range of its target, it randomly selects the next way point. To make this look as natural as possible, we continually rotate the enemy until it’s within range of “facing” the next target, which usually looks like a nice arc/turn, as if a person were flying the craft. Thankfully, this is very simple to do.

First, after selecting your new target, update the rotationVector (Quaternion) property for use with the updates to rotate the ship:
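
For example (rotationVector and currentTarget are the property names from the text; the body is my assumption):

    void SelectNextWayPoint()
    {
        // Pick a random way point from this enemy's assigned set.
        currentTarget = wayPoints[Random.Range(0, wayPoints.Length)];
        // Cache the rotation that would face it; updateRotation eases toward it.
        rotationVector = Quaternion.LookRotation(currentTarget.position -
                                                 transform.position);
    }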

Now, in the updateRotation method, we rotate the ship elegantly toward the new way point, and all you have to do is adjust “actualSensitivity” to achieve whatever aggressiveness you’re after:
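
A sketch of that method, assuming a simple Quaternion.Slerp eased by actualSensitivity:

    void updateRotation()
    {
        // Higher actualSensitivity = tighter, more aggressive turns.
        transform.rotation = Quaternion.Slerp(transform.rotation, rotationVector,
                                              actualSensitivity * Time.deltaTime);
    }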

As you’re flying, you’ll need to know when to change targets.  If you wait until the enemy hits the way point, it’ll likely never happen since the way point is tied to the player’s location.  So you need to set it up to see if it’s “close enough” to make the change, and you need to do this *after* you update the enemy’s position:
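
Roughly like this, with bufferDistance as the “close enough” radius (bufferDistance and speed are named in the text; the rest is assumed):

    void Update()
    {
        updateRotation();
        transform.position += transform.forward * speed * Time.deltaTime;

        // Check *after* moving: if we're close enough, pick the next way point.
        if (Vector3.Distance(transform.position, currentTarget.position) < bufferDistance)
            SelectNextWayPoint();
    }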

Enemy flying to his next way point



You can also simply change the target for an enemy on a random timer – either way would look natural.

NOTE: Keep the speed of the player and the enemy the same unless you’re providing acceleration controls to match speeds. Also, keep in mind that if your actualSensitivity is low (slow turns) and your speed is fast, you will have to make the bufferDistance larger, since there is an excellent chance that the enemy craft will not be able to make a tight enough turn to get to a way point and will continue to do donuts around it. This issue is fixed if the player is flying around, and is also remedied by using a timer to switch way point targets. You can also add code to make the AI more convincing that would switch way points very often if the enemy is being targeted by the player (as well as increasing the actualSensitivity to simulate someone who is panicking).

Targeting the Player

Red means target acquired : Green means target lock to fire

The next thing we need to talk about is targeting, and that’s the 2nd part of the AI. The first part was the enemy’s flight patterns, which we solved with way points. The other end of it is targeting the player and engaging them. We do this by checking the angle from the enemy’s heading to the player. If that angle falls within the predefined amount, then the currentTarget of the enemy is set to the player.
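
In code, the lock test might look like the following; targetLockAngle is my placeholder name for the wider threshold (50 degrees, as mentioned below):

    void CheckForLock()
    {
        Vector3 toPlayer = player.position - transform.position;
        float enemyAngle = Vector3.Angle(transform.forward, toPlayer);

        // Inside the wide lock cone? Stop chasing way points and engage the player.
        if (enemyAngle < targetLockAngle)
            currentTarget = player;
    }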

The nice part about this is that if the player decides to fly in a straight path, then eventually (very soon, actually) all of the baddies will be after him and shooting at him because of the rules above. So the nice caveat to all of this is that it encourages the player to fly evasively. If you become lazy, you get shot 0.o

You can also change the “actualSensitivity” property at game time to reflect an easy/medium/hard/jedi selection by the player. If they choose easy, you set the sensitivity so that the enemy reacts more slowly in turns. If it’s jedi, the enemy is a lot more aggressive, and “actualSensitivity” would be set so they react very quickly to target changes.

Firing

And finally, the 3rd part of the AI problem is solved by having yet another angle variable called “firingAngle”. “firingAngle” is the angle that has to be achieved in order to fire. While the angle for changing targets is much wider (50), the angle for firing and actually hitting something is much tighter (15). So we take the “enemyAngle” and check it against “firingAngle”, and if it’s less, the enemy fires its cannons at the player. You could also adjust the “firingAngle” to be bigger for harder levels so that the player’s ship falls into a radar lock more frequently.
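
So the firing test is just a tighter version of the lock test. A sketch, reusing the names above:

    void CheckFire()
    {
        // Only shoot while the player is the current target.
        if (currentTarget != player) return;

        float enemyAngle = Vector3.Angle(transform.forward,
                                         player.position - transform.position);

        // firingAngle (15 here) is much tighter than the 50-degree lock cone.
        if (enemyAngle <= firingAngle)
            Fire();
    }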

In the sample, I added an ellipsoid particle emitter/particle animator/particle renderer to the enemy ship object, set the references to the “leftGun / rightGun” properties and unchecked “Emit” in the inspector. Then, via the Enemy class, I simply set emit to true on both when it’s time to fire:
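
With the emitters wired up in the inspector, that can be as simple as the following sketch (leftGun and rightGun are the properties named in the text; ParticleEmitter is the legacy particle type of that Unity era):

    public ParticleEmitter leftGun;   // assigned in the inspector
    public ParticleEmitter rightGun;

    void Fire()
    {
        // The emitters were created with Emit unchecked; flipping the flag
        // starts the cannon fire.
        leftGun.emit = true;
        rightGun.emit = true;
    }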

Conclusion

So, we’ve answered all 3 questions:

  1. How do you get the enemy to swarm “around you”?
  2. How do you get the enemy to attack?
  3. What factors into convincing AI?

With the way point system, you keep the enemy engaged around you and the game play will be very even and feel like a good simulation of a dog fight.  The way points keep the enemy from getting unfair angles and provide plenty of opportunity for the player to get around on the enemy and take their own shots, as well as provide flying paths that look like someone is piloting the ship.  And adjusting values like “actualSensitivity”, “fieldWidth” and “firingAngle” can give you a great variety of game play from easy to hard.  When you start to put it all together and see it in action, you’ll see plenty of room for adjustments for difficulty as well as getting the reaction and look you want out of your enemy’s AI.

Have a Bandit Day!

Beast Lightmapping in Unity3D

March 22nd, 2011 by John Grden

One of the coolest features of Unity3D is the addition of the Beast lightmapping engine! In short, you can do global illumination (bake shadows/light) right there in the Unity IDE. And if you haven’t heard about this feature yet, then you’re about to have a “moment” (get some tissues for the slobbering/weeping, etc).

Here’s the basic video for Beast in Unity3D:

http://www.youtube.com/watch?v=suxujCszLnk

And check out their in-depth explanation of the Lightmapping interface here

This is very very cool indeed! With some basic settings, you can really increase the appeal of your game’s scenes using Beast within Unity3D. Not only that, but in terms of performance, especially on an iPad/iPhone, it’s invaluable. Ok, great – so now that I have your attention, what’s this post about? It’s about lightmapping, haven’t you been paying attention?!? Ok, more directly, this post focuses on how to get quick bakes and what has the most impact on a bake time.

The *why*

When you first jump into lightmapping, you really just want to “see” something immediately to get a sense of what you can adjust to get what you want out of it.

Without Lightmapping

With Lightmapping

I’m going to do a simple scene with a helicopter from my new game called “Stunt-Copter” to give you an idea of what impacts your wait time, and what gives you the quality you might be after.

What takes so long?

There are two things that affect the bake time most: 1) Resolution and 2) Final Gather Rays. I’ve personally found that Resolution affects the bake time more than Final Gather Rays does. Obviously, the higher the resolution, the better quality you’ll get with the shading – but you’ll also wait longer. ;) Waiting longer is fine for final game touches, but during development it’s necessary to get a scene with some lighting going so that you can make your best decisions as you go along. Or maybe you’re just tired of looking at your unlit scene, like the one above. ;)

Getting a Quick Bake

Let’s start off with how this scene is set up, then we can take a look at some basic settings. First, the models you’ve imported have to have “Generate Lightmap UVs” checked and be reimported. What this does is add a second UV channel to your model’s existing set of UVs; for lightmapping to work properly, the faces of the model can’t have any overlapping or shared areas in the UVs. Second, the building, ground, landingPad and helicopter are all marked as “static”. This is how the lightmap engine identifies what will be baked and what won’t. Now, in this scene, I’ve marked the helicopter as static so that we can see the nice shadow on the ground and the ambient occlusion on the heli itself. In one of the other screenshots, you’ll also see how the color of the body and the green from the grass is baked into the underside of the blades on top, which is extremely cool – but that’s another discussion.

Now, the other thing we need to do is put a directional light into the scene and mark it as “BakedOnly” in the Lightmapping selection at the bottom of the light’s property inspector panel. Then, you’ll need to select the “Shadow Type” and set it to “soft shadows”. The only other thing I changed here was the quality – instead of using the settings in the quality settings, I changed it to “Low Resolution”. This actually saved me 8 seconds in the bake time and I couldn’t tell a difference in the shadows. See below:

High Quality - 2:12

Low Quality - 2:04

Bake Settings

Now we need to set the lightmapping settings. Here’s a screenshot (which I always appreciate) of the settings I used in this example. I’ve set the Final Gather Rays to 200 and the Resolution to 10. I’ve also set my skylight intensity to .25 and changed the color from the blue tint to a gray tint. I know *why* they made it blue, I just don’t think it looks good, so I set it to a shade of light gray. That’s probably just me though :) Ambient Occlusion is set all the way to 1, as is Bounces. Other than that, I didn’t touch anything else. At this point, if you’ve set everything up correctly, you should be able to get a quick and dirty bake in a very reasonable amount of time.

Saving time like this is a big deal when you’re trying to make “best guess” decisions on your project in the earliest stages.

Now, in the final output, you’ll notice the ambient occlusion on the pillars of the building where they meet the floor and ceiling. Even at 10 texels this looks fairly decent and certainly gives us a good enough hint about how the final render will look. While the building looks good, the helicopter unfortunately doesn’t look nearly as good.

Resolution : 10 texels

Let’s take a look at the texels first, then take a close look at the helicopter. In this next shot, we see the building in the background and the helicopter up close with the resolution squares showing. The building looks to have many more than the helicopter, and the helicopter’s texels are much larger across its faces as well. So when this scene is baked, the building’s shadows actually look fairly decent, but the helicopter’s are really pretty terrible. If you look closely, there’s no sign of ambient occlusion, and the shadows are not distinct at all. Which is what we asked for with all of our low quality settings for the sake of speed, right?

This really is ok for now since all we really needed to get was a fair indication of how the scene was going to look lightmapped. One other example I’ll show you is “Copteropolis” from Stunt-Copter. This was a perfect example of needing to get a quick bake on a very large city scene. In all, this bake took 1 hour. It was well worth the wait so that I could continue working on other aspects of the game, especially considering that one “high quality” bake took well over 6hrs! I may have gone off the deep end with some of the settings, but you get the point ;)

Copteropolis, the city of Stunt-Copter (iPad Game)

Higher Quality

200 Final Gather Ray count

Ok, so now that we know how to get a quick one-off, let’s look at this scene with a bit more focus on the helicopter and its details. In this next shot, we’re much closer to the helicopter so we can see how big the texels are as we go along. Note also the light/color emission of the yellow body onto not only the white blades above, but also its own body where the tail meets the main part of the body.

But one thing we’re missing as I said before is the ambient occlusion on the joints where there are hard angles, as well as fairly clear shading on the body and shadows from things like the blades and foils in the rear.

Now, just to prove my point about “Final Gather Rays” not being as big a culprit in bake time as Resolution, I went ahead and bumped the ray count to 1000 from 200 and did another bake.

The time was only 30 seconds longer than 2:04, and it looks identical if you ask me:

2mins, 34secs

In this final lightmap attempt, the Final Gather Rays is set to 2000 and the Resolution is set to 250 texels. The total time was 28:54, but as you can see, the effect it has on the helicopter is very nice indeed. Notice the ambient occlusion on the hard angles, as well as the yellow/green emission from the helicopter body and grass onto the body where the tail and body come together, and on the top rotors. The rotors from a top view look incredible as well, although you’ll never see them during the game ;)

Final Lightmapping - 28:54

Top view of rotors

250 texels

Smartphones Consolidate to Three Platforms

January 31st, 2011 by Chris Allen

Just last week Sony announced that they would be supporting Android applications on their new NGP (Next Generation Portable), the highly anticipated successor to the PSP. They also announced that content created for the NGP would be available on other Android devices, making the PSP games that developers have built available on a wide range of non-Sony devices. Sony is calling this feature the PlayStation Suite. Essentially it’s a store run by Sony for Android, where users can purchase PlayStation games for their tablets and smartphones. This is a bold new move for a company that in the past has stuck to its own monolithic platform, over which it kept complete and total control.

Nokia also looks like they may be going with a similar plan. Rumors are everywhere declaring that Nokia will either be choosing Android or Windows Phone 7 to run on their devices. Nokia CEO Stephen Elop was quoted as saying “In addition to great device experiences we must build, capitalise and/or join a competitive ecosystem”, implying that they are looking to make a move. While it’s clear that Nokia hasn’t settled on Android yet, the very fact that they are looking for a switch indicates the industry is moving towards consolidating into three smartphone operating systems.

In other news, there seem to be reliable sources stating that RIM may be doing something similar with future BlackBerry devices. If BlackBerry and Nokia run Android apps, and Sony devices do as well, this is very good news for mobile game developers. Why is that? Quite simply because there will be fewer platforms to port to.

Already a huge number of game developers are moving to Unity 3D, a game development platform that allows for easy deployment to iOS, Android, Unity’s own web player, Nintendo Wii and Xbox 360. Using Unity, the developer writes one code base that will work across multiple platforms with relatively minor tweaks. The fact that Unity already supports two of the main smartphone platforms (iOS and Android) is a huge win for mobile game developers!

With Sony support for Android apps on PSP, and RIM and Nokia possibly doing the same, this just means more devices we as game developers can target. Of course with our sister company’s platform, Brass Monkey, we also are going to have more consumers that will be able to turn their devices into controllers, and that’s definitely a good thing for us. Will the consolidation of operating systems in the market help your business? Are you a mobile game developer and think this is good news too? I would love to hear your feedback in the comments.


7 Key Ingredients for Designing Addictive Games

August 27th, 2010 by Chris Allen

As we are a small company, we often wear many hats here at Infrared5. While I’m the CEO, I often play the role of Software Architect, Salesman, IT Support Person and even Dishwasher from time to time. Another role that I end up doing, or at least assisting in, is that of a Game Designer. Game Design is an art form unto itself, and involves the ability to know intuitively what’s going to be fun and, perhaps more important, figure out what’s addictive. I’ve been giving a lot of thought to the addictiveness of games lately, as I find it a very interesting subject, and it’s at the core of making the best games possible for our customers.

