Build Periscope in 10 Minutes

November 30th, 2015 by Chris Allen


With live streaming becoming increasingly prevalent in 2015, developers are focused on creating applications to address the public’s fascination with streaming media. Periscope is the prime example of such an application and the sheer size of Periscope’s user base and class-leading engagement metrics validate its dominance in the space.

But what does it take to build a live streaming and communication platform such as Periscope, with the capability to broadcast to one hundred thousand or even one million subscribers? What if I told you that you could build a live streaming application with Periscope-like functionality and scalability in just 10 minutes?

Before we created Red5 Pro, it took serious effort to build this kind of server-side infrastructure, and even more to tackle the complexity of building native Android and iOS video encoders and decoders that work with the server. We saw this trend of a new kind of mobile app that connects people in real time, and we watched early adopters cobble together inefficient software, wasting tons of time and energy. We couldn’t let that continue, so we decided to make it easy for developers. With Red5 Pro, you truly have the ability to build the guts of the next live streaming phenomenon in a matter of minutes, and here’s how:

Let’s first start with all the pieces, and what you would need to build if you were to do this from scratch.

The Fundamentals

1. Publish from the mobile client:

  • Access the camera

  • Encode the video

  • Encode microphone data

  • Negotiate a connection with a media server

  • Implement a low-latency streaming protocol for transport

  • Stream the data to the server

2. Intercept with a media server

  • Intercept the stream

  • Relay to other clients

      and/or

  • Re-stream to a CDN (adds latency)

  • Record the stream (optional)

3. Implement client side subscribing:

  • HLS in WebView (even more latency)

and/or

  • Setup connection with media server

  • Implement streaming protocol

  • Mix the audio and video

  • Decode video/audio

  • Render video and play the audio

 

*Note: this is actually a simplified list of the tasks involved. Try doing all of this on multiple threads and getting it to perform well; it is complicated! It’s truly a rabbit hole that most developers don’t want to venture down. Given the awesome tools and libraries that exist for us developers, we thought it was ridiculous that an easy-to-use and extensible live streaming platform just didn’t exist. That’s why we built Red5 Pro.

 

Red5 Pro to the Rescue

Let’s uncomplicate this. The Red5 Pro Streaming SDKs provide what we think is an intuitive and flexible API that removes the complexity while retaining tremendous control if you need it. Let’s take a look at the classes our SDKs provide. (Note that they are the same on Android and iOS.)


Let’s step through an example using these classes, piece by piece.

The Publisher

R5Configuration:


The first step is to create an R5Configuration. This class holds the various data used by your streaming app. It contains things like the address of your server, the ports used, protocols, etc. In this example we are connecting to a server running at 192.168.0.1 on port 8554 via the RTSP protocol. This Red5 Pro server has an app called “live” running on it, and that is what we want to connect to based on the context name. And finally we are setting the buffer time to half a second.

iOS

 

Android
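Since the original embedded snippets are not reproduced here, the following is a rough Android (Java) sketch of this configuration step. The constructor signature and the protocol enum name are assumptions based on the description above, not verified SDK signatures, so check the Red5 Pro SDK documentation for the real API.

    // Hypothetical sketch of building the configuration described above.
    R5Configuration config = new R5Configuration(
            R5StreamProtocol.RTSP, // transport protocol (assumed enum)
            "192.168.0.1",         // server address
            8554,                  // RTSP port
            "live",                // context name of the server-side app
            0.5f);                 // buffer time in seconds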

 

R5Connection:


Next, you create an R5Connection object, passing in your configuration. This establishes a connection to the Red5 Pro media server.

 

iOS

 

Android
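On Android, this step might look roughly like the sketch below; again, the constructor shown is an assumption rather than a verified signature.

    // Create the connection from the configuration built in the previous step.
    R5Connection connection = new R5Connection(config);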

 

 

R5Stream:


Now you create a stream object passing in the connection object you just created. Note that the R5Stream is also used for incoming streams, which we will get to in a bit.

 

iOS

 

 

Android
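A hedged Android sketch of this step (assumed constructor):

    // The same R5Stream class is used for both publishing and subscribing.
    R5Stream stream = new R5Stream(connection);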

 

R5Camera:


Next we create a camera object and attach it to the R5Stream as a video source.

 

iOS

 

 

Android
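A rough Android sketch of attaching the camera follows; the R5Camera constructor arguments and the attach method name are assumptions based on the class names in this post, and the resolution values are purely illustrative.

    // Open the device camera and wrap it as a Red5 Pro video source (assumed constructor).
    android.hardware.Camera deviceCamera = android.hardware.Camera.open();
    R5Camera camera = new R5Camera(deviceCamera, 640, 360); // width/height are illustrative
    stream.attachCamera(camera); // assumed attach method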

 

 

R5Microphone:


Then we create a microphone object and attach it to the stream as an audio source.

 

iOS

 

 

Android
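And a comparable hedged sketch for the microphone (the attach method name is an assumption):

    // Wrap the device microphone as the stream's audio source.
    R5Microphone microphone = new R5Microphone();
    stream.attachMic(microphone); // assumed attach method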

 

 

R5VideoView:

 


While it’s not a requirement to publish a live stream, we find it useful to provide a preview for the user of their video being streamed from the camera. This is how you set that up.

 

iOS

 

 

Android
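On Android the preview might be wired up something like this; the layout id and the attach call are assumptions (the R5VideoView would normally be declared in the activity's layout).

    // Show a local preview of the outgoing camera feed.
    R5VideoView preview = (R5VideoView) findViewById(R.id.videoPreview); // hypothetical layout id
    preview.attachStream(stream); // assumed attach method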

 

R5Stream.publish():


Finally the last step for the publisher is to publish the stream using a unique name that the subscriber can subscribe to.

 

iOS

 

 

Android

 

The record type parameter tells the server which recording mode to use. In this example we are setting it to live, meaning the server won’t record the stream.

 

Here are your other choices.

R5RecordTypeLive – Stream but do not record

R5RecordTypeRecord – Stream and record to the given file name, replacing any existing save.

R5RecordTypeAppend – Stream and append the recording to any existing save.
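Putting it together, publishing on Android might look like the sketch below. The stream name is arbitrary, and the record-type enum spelling is an assumption (the constants above suggest an Android equivalent of Live, Record and Append exists).

    // Publish under a unique name that subscribers will use, without recording on the server.
    stream.publish("mystream", R5Stream.RecordType.Live); // assumed enum value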

If you compiled and ran this app with it configured to point to a running Red5 Pro server, you would be able to see the stream in your browser. Open a browser window and navigate to http://your_red5_pro_server_ip:5080/live/streams.jsp to see a list of active streams. Click on the Flash version to subscribe to your stream.

 


The Subscriber

Now that we’ve built the publisher, we have a live stream being published to the server. Yes, we did see the stream in Flash, but in order to consume that stream on mobile we need to build the subscriber client. Let’s dig into that now.

 

R5Configuration:


Just as before, we setup a configuration object holding the details of our connection and protocols.

 

iOS

 

 

Android

 

 

R5Stream:


Then, like in the publisher, we set up a stream by passing in the configuration into the constructor.

 

iOS

 

 

Android

 

R5VideoView:


This is the step where things deviate just a little from the publisher. We still set up an R5VideoView, but this time we use it to display the incoming stream.

 

iOS

 

 

Android

 

 

R5Stream.play():


Finally, we tell the stream to play by using the play method and passing in the unique stream name that the publisher is using.

 

iOS

 

Android
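For completeness, here is a hedged Android sketch of the whole subscriber flow described above. As with the publisher snippets, the constructors and method names are assumptions based on the class names in this post rather than verified SDK signatures.

    // Same configuration and stream setup as the publisher...
    R5Configuration config = new R5Configuration(
            R5StreamProtocol.RTSP, "192.168.0.1", 8554, "live", 0.5f);
    R5Connection connection = new R5Connection(config);
    R5Stream stream = new R5Stream(connection);

    // ...but this time the view displays the incoming stream, and we call play()
    // with the same name the publisher used.
    R5VideoView view = (R5VideoView) findViewById(R.id.videoView); // hypothetical layout id
    view.attachStream(stream);
    stream.play("mystream");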

 

Voila, you can now build your own one-to-many live streaming experience, all within minutes with the help of Red5 Pro. What do you think, are there ways we could make this even easier? We love hearing feedback, so let us know in the comments or email us directly. Happy Coding!


UXFest and My Talk on Second Screen Experiences

October 11th, 2013 by Chris Allen

Chris UX Fest

Last week I presented at a really well-attended and inspiring conference here in Boston called UXFest, run by design firm Fresh Tilled Soil. I spoke to a packed room of UX enthusiasts interested in what I had to say about the direction of user experiences in video games and how these designs can play out in other industries. I got quite a few requests to post my slides from the talk, but given that I tend to take the approach of one image per slide with little to no text, that simply wasn’t going to work. For example, a slide that shows nothing but an iPhone followed by a photo of a remote controlled helicopter wouldn’t make much sense without some context. So with that, here’s a summary of my talk in blog form.

The Brains Behind the Booth

August 29th, 2012 by Rosie

This month, IR5’s very own Kelly Wallick is featured in the ‘Sunday Sidebar’ on VideoGameWriters.com. Kelly talks with Christopher Floyd about her work as organizer for the Indie Megabooth, the life of an extreme multi-tasker, and her most prized geek possession. Check it out here!

Be sure to say ‘Hi!’ to Kelly this weekend at PAX!

Infrared5's Kelly Wallick


Boid Flocking and Pathfinding in Unity, Part 3

July 23rd, 2012 by Anthony Capobianchi

In this final installment, I will explore how to set up a ray caster to determine a destination object for the Boids, and how to organize a number of different destination points for your Boids so that they do not pile on top of each other.

Organizing the Destinations -

The idea is to create a marker for every Boid that will be placed near the destination, defined by the ray caster. This keeps Boids from running past each other or pushing each other off track.

For each Boid in the scene, a new Destination object will be created and managed. My Destination.cs script looks like this:

This is very similar to the Boid behaviors we set up in Boid.cs. We create coherency and separation vectors just as before, except this time the two vectors are applied to a rigidbody. I am using the rigidbody’s velocity property to determine when the destination objects have finished moving into position.
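Since the original Destination.cs listing isn’t shown here, the core idea can be sketched in plain Java, independent of Unity’s API: cohesion pulls each destination marker toward the group’s average position, while separation pushes it away from markers that are too close. The class and method names below are purely illustrative, not the project’s actual code.

    // Illustrative only: the cohesion/separation idea in plain Java, outside Unity.
    import java.util.List;

    class DestinationMarker {
        static class Vec2 {
            float x, y;
            Vec2(float x, float y) { this.x = x; this.y = y; }
        }

        // Cohesion: pull this marker toward the average position of all markers.
        static Vec2 cohesion(Vec2 self, List<Vec2> others) {
            float cx = 0, cy = 0;
            for (Vec2 o : others) { cx += o.x; cy += o.y; }
            cx /= others.size();
            cy /= others.size();
            return new Vec2(cx - self.x, cy - self.y);
        }

        // Separation: push this marker away from any marker closer than minDistance.
        static Vec2 separation(Vec2 self, List<Vec2> others, float minDistance) {
            float sx = 0, sy = 0;
            for (Vec2 o : others) {
                float dx = self.x - o.x, dy = self.y - o.y;
                float dist = (float) Math.sqrt(dx * dx + dy * dy);
                if (dist > 0 && dist < minDistance) {
                    sx += dx / dist;
                    sy += dy / dist;
                }
            }
            return new Vec2(sx, sy);
        }
    }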

Positioning and Managing the Destinations -

Now we create a script that handles instantiating all the destination objects we need for our Boids, placing each one in relation to a Boid, and using each destination’s Boid behaviors to organize them. I created a script called DestinationManager.cs where this will be housed.

First off we need to set up our script:


We need to create our ray caster that will tell the scene where to place the origin of our placement nodes. Mine looks like this:


The ray caster shoots a ray from the camera’s position to the ground, setting the Boid’s destination where it hits.

Next, we take the destinations that were created and move them together using the Boid behaviors we gave them.


The Boid system is primarily used for the positioning of the Destination objects. This method ensures that the Boid system will not push your objects off of their paths, confusing any pathfinding you may be using.


Boid Flocking and Pathfinding in Unity, Part 2

July 5th, 2012 by Anthony Capobianchi

In my last post, we worked through the steps to create a Boid system that will keep objects together in a coherent way, and a radar class that will allow the Boids to detect each other. Our next step is to figure out how to get these objects from point “A” to point “B”, by setting a destination point.

Pathfinding -

For this example, I used Aron Granberg’s A* Pathfinding Project to handle my pathfinding. It works great and uses Unity’s CharacterController to handle movement, which helps with this example. A link to download this library can be found at http://www.arongranberg.com/unity/a-pathfinding/download/ and a guide to help you set it up in your scene can be found here: http://www.arongranberg.com/astar/docs/getstarted.php

In Boid.cs, I have my pathfinding code set up like this:

Applying the Forces -

Once we have the calculated force of the Boid behaviors and the pathfinder, we have to put them together and apply the result to the character controller. We use Unity’s Update function in Boid.cs to constantly apply the forces.
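As a rough, engine-agnostic illustration of that combination (sketched in plain Java rather than the project’s C#): each frame, sum the pathfinding force and the Boid force, fold the result into the velocity, and move by the velocity. Field and parameter names are illustrative assumptions.

    // Illustrative pseudo-update; in the real project this runs in Unity's Update()
    // and the movement is applied through the CharacterController.
    class BoidMover {
        float velX, velY; // current velocity
        float posX, posY; // current position

        void update(float boidForceX, float boidForceY,
                    float pathForceX, float pathForceY, float deltaTime) {
            velX += (boidForceX + pathForceX) * deltaTime;
            velY += (boidForceY + pathForceY) * deltaTime;
            posX += velX * deltaTime;
            posY += velY * deltaTime;
        }
    }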

In my next post, we will look at using a ray caster to set a destination point in the scene for the Boids, as well as how to organize a number of different destination points for the Boids to keep them from piling on top of each other.

Read part one here.

Android – Live Wallpaper part 2

March 22nd, 2012 by Paul Gregoire

Let’s make the star that we created in part 1 rotate, in just a few simple steps. Before we start, I’d like to note that we are technically rotating the entire canvas and not just the star itself; adding more than one star to the display will clearly expose this minor detail. I’ve done some research on rotating multiple individual items at once, but I have not yet found a solution that fits; feel free to comment if you know how to accomplish it.

The code modifications below are made in the LiveWallpaper.java source file; we are not using the only other source file, GoldstarActivity.java, at this time.

1. Add a variable where we keep track of the first frame drawn. This will prevent some calculations from being executed more than once per instance.

2. To rotate the star, we will use degrees. In steps 2 and 3, two methods of generation are shown. In this section we use a float counter for degrees of rotation.

3. In this section we use a fixed array for degrees of rotation. To do so, we also must create an index counter and an array of 360 floats; again, it’s up to you which option to use.

4. If using the array method, the onCreate method is modified to pre-fill our array when the application is first initialized.

Note: I’ve tried both methods for degree cycling and they seem equally fast on my device.

5. Modify our drawing code from part 1. The code block is here for reference.

Before

After (using float counter)

After (using float array)
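Since the original listings aren’t reproduced here, below is a rough sketch of what the modified draw call might look like. The field and resource names are assumptions carried over from part 1’s walkthrough, and only the float-counter variant is shown; the array variant would simply index into the pre-filled degrees array instead.

    // Inside the engine's drawFrame()/draw method: rotate the whole canvas, draw the star,
    // then restore the canvas and advance the rotation for the next frame.
    canvas.save();
    canvas.rotate(degrees, centerX, centerY);   // rotate around the star's center (assumed fields)
    canvas.drawPicture(svg.getPicture());       // svg-android picture loaded in part 1
    canvas.restore();

    degrees += 1f;              // float-counter variant
    if (degrees >= 360f) {
        degrees = 0f;
    }
    // Array variant: degrees = degreeArray[index]; index = (index + 1) % degreeArray.length;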

6. As an aside to what we have done thus far, you can easily add another star with a couple of extra lines like so:

The additional star will be down and to the right of the primary star. This star will not rotate per se, but instead will “orbit” the primary star.

7. Last step: build and run in the emulator. Right-click on the goldstar project in Package Explorer and select Run As -> Android Application. If you haven’t created any AVDs (virtual devices), you’ll be prompted to create one. Creation of an AVD is covered here.

When everything works, you’ll see this in the Eclipse console:

I recorded the emulator running the apk in this screencast

Running on my Galaxy Nexus

In the sample code, I’ve refactored the original class a little to make things more clear.
Project Source

End of part two.


Android – Live Wallpaper

March 17th, 2012 by Paul Gregoire

Herein I shall walk you through the steps for creating a live wallpaper in Android. Before we begin any Android development, the SDK and ADT plugin for Eclipse will need to be installed. The best installation guide is here; disregard this if you already have the SDK and plugin installed. For this first part, we will simply display a graphic, and in the follow-ups we will do some animation. Without further ado, let’s get started.

1. The first step is to create the new project
File -> New -> Android Project

We will call it “goldstar” and target Android 2.1 (API 7); this version of Android was the first to support Live Wallpapers.

2. Open up the AndroidManifest.xml file and add the nodes that we will need to support our application. Here is the manifest before our additions were made:

This is the “after” version, where we added our feature, permission, and service nodes:

3. Create a metadata file for our service. This is accomplished by making an xml directory within the res folder of the project. Create a new file named “metadata.xml” in this folder with these contents:

4. Add a description for our application. Open the strings.xml file and add a string with a name of “wallpaper_description” and a value of “Goldstar Live”. You may use whatever value suits you; this one is just for the example.

5. Get the svg library and place it in the “libs” folder; this folder must be created manually, if it does not already exist in the project.
We are using svg-android library from http://code.google.com/p/svg-android/ for this example. This library was also used in the popular “Androidify” application.

6. Locate an SVG image file to use in our application, preferably one that is not copyrighted. Remember that Google is your friend.

https://www.google.com/search?q=star%20svg&orq=star+filetype:+svg

Here’s a gold star on wikimedia that you can use: https://upload.wikimedia.org/wikipedia/commons/9/9c/Golden_star.svg

Once you have a suitable file, save it into the “raw” directory within the “res” directory of the project. Note that your resource may only contain this range of characters in its name: a-z0-9_.

7. Now for some code; create a new class in the wallpaper package and name it LiveWallpaper. Set the super class to android.service.wallpaper.WallpaperService and click Finish. Your new class should appear like this:

8. Create an internal class named StarEngine which extends Engine. The result of this should appear like so:

9. Right-click on StarEngine and select “Source -> Override/Implement Methods”. Now select the following methods:

onCreate
onDestroy
onVisibilityChanged
onSurfaceChanged

then click ok. This will create the method stubs that we are interested in.

10. Modify the onCreateEngine method to create a new instance of our engine.

We have also added two static variables for the frame rate and scene width.
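The original listing isn’t shown here, but the skeleton of a WallpaperService with an inner engine generally looks like the sketch below; the frame rate and scene width values are placeholders, not the article’s actual numbers.

    import android.service.wallpaper.WallpaperService;

    public class LiveWallpaper extends WallpaperService {

        // Placeholder values; the article's actual constants may differ.
        protected static final int FRAME_RATE = 25;   // frames per second
        protected static final int SCENE_WIDTH = 480; // scene width in pixels

        @Override
        public Engine onCreateEngine() {
            // Hand Android a fresh instance of our drawing engine.
            return new StarEngine();
        }

        class StarEngine extends Engine {
            // onCreate, onDestroy, onVisibilityChanged and onSurfaceChanged
            // overrides go here, as generated in step 9.
        }
    }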

11. Load our svg asset. Create a local engine variable and modify the onCreate method like so:

This will read the file resource and parse it to create an SVG image object.

12. A thread and handler must now be set up to take care of drawing on the canvas. We modify the engine like so:

13. Drawing on the canvas. In our drawFrame method we will use our svg asset and draw it into view.
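A hedged sketch of steps 11 through 13 follows: load the SVG in onCreate, then use a Handler-posted Runnable to repeatedly lock the surface’s canvas, draw the parsed SVG picture, and schedule the next frame. The resource name, timing, and field names are assumptions.

    // Inside StarEngine (a rough sketch, not the original listing):
    private final android.os.Handler handler = new android.os.Handler();
    private com.larvalabs.svgandroid.SVG svg;
    private boolean visible = true; // would be toggled in onVisibilityChanged

    private final Runnable drawRunner = new Runnable() {
        @Override
        public void run() {
            drawFrame();
        }
    };

    @Override
    public void onCreate(android.view.SurfaceHolder surfaceHolder) {
        super.onCreate(surfaceHolder);
        // Read and parse the SVG saved into res/raw (resource name is an assumption).
        svg = com.larvalabs.svgandroid.SVGParser.getSVGFromResource(
                getResources(), R.raw.golden_star);
        drawFrame();
    }

    private void drawFrame() {
        android.view.SurfaceHolder holder = getSurfaceHolder();
        android.graphics.Canvas canvas = null;
        try {
            canvas = holder.lockCanvas();
            if (canvas != null) {
                canvas.drawColor(android.graphics.Color.BLACK); // clear the frame
                canvas.drawPicture(svg.getPicture());           // draw the star
            }
        } finally {
            if (canvas != null) {
                holder.unlockCanvasAndPost(canvas);
            }
        }
        // Schedule the next frame while the wallpaper is visible.
        handler.removeCallbacks(drawRunner);
        if (visible) {
            handler.postDelayed(drawRunner, 1000 / FRAME_RATE);
        }
    }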

14. Build and run in the emulator; you should see something like this:

15. Lastly, if you want to have nicer launcher images for your application there are free services to utilize such as this one:

http://android-ui-utils.googlecode.com/hg/asset-studio/dist/icons-launcher.html

Just upload your image and do a little configuration and you get a zip containing all the launcher images you need.

Project Source

End of part one; for part two we will cover animation.


Gaming Ouroboros at the Global Game Jam 2012

February 6th, 2012 by Elliott Mitchell

Now and then, as a professional 3D technical artist and game designer, I find it’s helpful to step out of my usual routine and make a game over a weekend. Why? Because it keeps life fresh and exciting while providing a rare sense of instant gratification in the crazy world of video game development. Making a video game over a weekend isn’t easy for one person alone. For this, Global Game Jam was created.

This year’s Global Game Jam was held January 27-29, 2012. The site I registered with was the Singapore-MIT GAMBIT Game Lab in Cambridge, Massachusetts. Here is the lowdown on my experience.

Global Game Jam 2012 - Photo Courtesy Michael Carriere

The Global Game Jam (GGJ) is an annual International Game Developer Association (IGDA) game creation event. The event unites people from across the globe to make games in under 48 hours. Anyone is welcome to participate in the game jam. Jammers range from industry professionals to hobbyists and students. The primary framework is that under common constraints, each team completes a game, without preconceived ideas or preformed teams, in under 48 hours. This is intended to encourage creativity, experimentation and collaboration resulting in small but innovative games. To support this endeavor, schools, businesses and organizations volunteer to serve as official host sites. Several prominent sponsors such as Loot Drop, Autodesk, Microsoft and Brass Monkey also helped foot the bill.

HOW IT WENT DOWN

Keynote -

Brenda Brathwaite and John Romero addressing the Global Game Jammers 2012 - Photo courtesy Michael Carriere

GGJ site facilitators kicked off the Jam with a pre-recorded video from the IGDA website titled How to Build A Game in Less Than 48 Hours. The speakers in the video were Gordon Bellamy, the Executive Director of the IGDA; John Romero (Quake) and Brenda Brathwaite (Wizardry), both co-founders of Loot Drop; Gonzalo Frasca (Ludology.org), the co-founder of Powerful Robot Games; and Will Wright (The Sims), co-founder of Maxis. The speakers all gave excellent advice on creativity, leadership, scope and collaboration within a game jam.

Global Constraint -

Ouroboros

Our primary constraint was revealed after the keynote video. It was an image of a snake eating its own tail. The snake represented Ouroboros, a Greek mythological immortal. Variations of the symbol span time and space from the modern day back to antiquity. The snake, or dragon in some instances, eating its own tail has made appearances in ancient Egypt, Greece, India, Mexico, West Africa, Europe, South America and elsewhere under a host of names. Its meaning can be interpreted as opposites merging in a unifying act of cyclical creation and destruction, immortal for eternity. To alchemists, the Ouroboros symbolized the Philosopher’s Stone.

Group Brainstorming –

Brainstorming Global Game Jam 2012

After the keynote, game jammers arbitrarily split into 5 or 6 groups of 11 or so and went into different labs to brainstorm Ouroboros game pitches. After an amusing ricochet of thoughts, references, revisions, personalities and passions, each room crafted 6 pitches, most of which were within the scope of the 48-hour game jam.

Pitch and Choose -

When the groups reassembled into the main room it was time to pitch.

The Rules-

  • Pitches needed to be under a minute
  • Title is 3 words or less
  • Theme related to the Ouroboros
  • The person pitching a game did not necessarily need to be on that potential team

There were about 30 or so pitches, after which each jammer had to choose a role on a game / team that appealed to them. Each jammer had a single piece of color-coded paper with their name, skill level and intended role.

The Roles-

Choose Your Team - Global Game Jam 2012- Photo courtesy Michael Carriere

  • Programmer
  • Artist
  • Game Design
  • Audio
  • Producer

Games with too many team members were pruned and others lacking members for roles such as programmer were either augmented or eliminated. Eventually semi-balanced teams of 4-6 members were formed around the 11 most popular pitches.

My team decided to develop our game for the Commodore 64 computer using Ethan Fenn’s Comma8 framework. We thought the game narrative and technology married well.

Time to Jam - Photo Courtesy Michael Carriere

Time to Jam -

Post team formation, clusters of lab space were claimed. Even though most of us also brought our personal laptops, the labs were stocked with sweet dual-boot Windows 7 & OS X systems with cinema displays. The lab computers were pre-installed with industry-standard software such as Unity3d, Maya, Photoshop… We were also provided peripherals such as stylus tablets and keyboards. Ironically, I was most excited by the real-world prototyping materials like blocks and graph paper, which were also provided by our host.

First Things First –

Our space at Global Game Jam 2012 at Singapore - MIT GAMBIT Game Lab

After claiming a lab with another awesome team, we immediately set up:

  • Version control (SVN)
  • Installed custom tools for Comma8 (Python, Java, Sprite Pad, Tiles and more)
  • Confirmed the initial scope of the game
  • Set up collaborative project management system with a team Google Group and Google Doc

Cut That Out –

We needed to refine the scope once we were all aware of the technical limitations, such as:

  • The Commodore 64, from 1982, is old
  • 64 KB of system RAM is not much
  • 8-bit
  • Programmed in assembly language
  • 320 x 200 pixels
  • 16 pre-determined crappy colors
  • 3 oscillators
  • Rectangular pixels
  • Screen space
  • Developing in emulation on a network
  • Loading and testing a playable on legacy Commodore 64 hardware
  • Less than 48 hours to get it all working

Our scope was too big: too many levels. Other factors causing us to consider limiting the scope further included:

  • None of us had made games for the C64 before
  • Comma8 is an experimental engine, untested in a game jam situation and currently in development by Ethan
  • Tools such as Sprite Pad and Tiles are very archaic and limiting apps for art creation
  • The build process would do strange things to the art, which required constant iteration

Rapid Iterative Prototyping -

Walking Backwards Prototype Global Game Jam 2012 - Photo Courtesy Michael Carriere

Physical prototyping was employed to reduce the scope before we went too far down any rabbit holes. We used the following materials to prototype:

  • Glass white board
  • Markers
  • Masking tape on the walls
  • Paper notes tacked to the walls
  • Graph paper
  • Wooden blocks
  • Pens

Results of Physical Prototyping-

  • Cut down scope from 9 levels to 5 levels as the minimum to carry the Ouroboros circular theme of our narrative far enough
  • Nailed the key mechanics
  • Refined the narrative
  • Determined scale and placement of graphical elements
  • Limited overall scope

Naturally we ran into design roadblocks and needed to revise and adapt a few times. Physical prototyping once again sped up that process and moved us along to completion.

QA-

Global Game Jam 2012 - Photo Courtesy Michael Carriere

We enlisted a few play testers on the second night and final hours of the game jam to help us gauge the following:

  • Playability
  • Comprehension of the narrative
  • Recognition of the lo-res art assets
  • Overall player experiences
  • Feelings about the game
  • Suggestions
  • Bugs

We did wind up having to revise the art, level design and narrative slightly to reach a better balance and game after play testing.

Deadline -

Walking Backwards - C64 - Global Game Jam 2012

An hour and a half before the game jam was to end, it was pencils down: time to upload to the IGDA Global Game Jam website and any other host servers, and on to the site presentation computer. Out of the total 48 hours allotted to the game jam, we only had about 25 working lab hours. Much time was spent on logistics like the keynote video, brainstorming, pitching, uploading and presenting. Our site was also only open from 9 am to midnight, so there was no 24-hour access. With 25 hours of lab time, all 11 games at my site were uploaded and ready for presentation.

Presentations -

Global Game Jam - Singapore-MIT GAMBIT Game Lab Games

The best part ever! The presentations were so exciting. Many of the jammers were so focused on their work they were not aware of what other teams were up to. One by one teams went up and presented their games in whatever the current game state was at the deadline.

Most were pretty innovative, experimental and funny. Titles such as The Ouroboros Hangover and Hoop Snake had the jammers in stitches. Fire farting dragons, Hoop Snakes, drunk Ouroboros and so on were big hits. Unity, HTML 5, Flash, Flex, XNA, Comma8 and Flixel were used to create the great games in under 48 hours.

Take Aways -

My teammates and I consider the game we made, Walking Backwards, to be a success. We accomplished our goals:

Walking Backwards Team - Global Game Jam 2012- Photo courtesy Michael Carriere

  • Experimental game
  • A compelling narrative
  • Awesome audio composition
  • We achieved most of the functionality we wanted
  • Runs on an original Commodore 64 with Joysticks
  • Can be played with a Java emulator
  • Got to work together under pressure and have a blast

Would have liked-

  • Avatar to animate properly (we had bi-directional sprites made but not implemented)
  • More audio for sound effects

The final takeaway I had, besides feeling simultaneously exhilarated and exhausted, is how essential networking at the game jam is for greater success. Beyond just meeting new people, networking at the jam made or broke some games. Some teams didn’t take time to walk around and talk to other teams. In one instance, a team didn’t figure out an essential ghost mechanic by the end of the jam; they realized at presentation time that another team had implemented, in the same engine, the very mechanic they had failed to nail down. Networking also provided mutual feedback, play testing, critique, advice, friendships and rounds of beer after the event ended. Many of the jammers now have a better sense of each other’s strengths and weaknesses, their performance under stress, and their abilities to collaborate, lead and follow.

I, for one, will be a life long game jammer, ready to collaborate while pushing into both familiar and new territories of game development with various teams, themes and dreams.

Follow this link to see all the games created at my site, hosted by the Singapore-MIT GAMBIT Game Lab.

——

Elliott Mitchell

Technical Director- Infrared5

Twitter: @Mrt3d


Creating 2nd UV sets in Maya for Consistent and Reliable Lightmapping in Unity 3d

January 11th, 2012 by Elliott Mitchell

Lightmaps in the Unity Editor - Courtesy of Brass Monkey - Monkey Golf

Have you ever worked on a game that was beautifully lit in the Unity editor but ran like ants on molasses on your target device? Chances are you might benefit from using lightmaps. Ever worked on a game that was beautifully lit with lightmaps but looked different between your Mac and PC in the Unity editor? Chances are you might want to create your own 2nd UV sets in Maya.

Example of a lightmap


If you didn’t know, lightmaps are 2D textures pre-generated by baking (rendering) lights onto the surfaces of 3D objects in a scene. These textures are additively blended with the 3D model’s original textures to simulate illumination and fine shadows without the use of realtime lights at runtime. The number of realtime lights rendering at any given time can make or break a 3D game when it comes to optimal performance. By reducing the number of realtime lights and shadows your games will play through more smoothly. Using fewer realtime lights also allows for more resources to be dedicated to other aspects of the game like higher poly counts and more textures. This holds true especially when developing for most 3D platforms including iOS, Android, Mac, PC, Web, XBox, PS3 and more.

Since the release of Unity 3 back in September 2010, many Unity developers have been taking advantage of Beast Lightmapping as a one-stop lightmapping solution within the Unity editor. At first glance, Beast is a phenomenal time-saving and performance-enhancing tool. Rather quickly, Beast can automate several tedious tasks that would otherwise have needed to be performed by a trained 3D technical artist in an application like Maya. Those tasks, being mostly UV related, are:

UVs in positive UV co-ordinate space

  • Generating 2nd UV sets for lightmapping 3D objects
  • Unwrapping 3D geometry into flattened 2D shells which don’t overlap in the 0 to 1 UV co-ordinate quadrant
  • Packing UV shells (arranging the unwrapped 2D shells to optimally fit within a square quadrant with room for mipmap bleeding)
  • Atlasing lightmap textures (combining many individual baked textures into larger texture sheets for efficiency)
  • Indexing lightmaps (linking multiple 3D model’s 2nd UV set UV co-ordinate data with multiple baked texture atlases in a scene)
  • Additively applying the lightmaps to your existing model’s shaders to give 3D objects the illusion of being illuminated by realtime lights in a scene
  • Other UV properties may be tweaked in the Advanced FBX import settings, influencing how the 2nd UVs are unwrapped and packed; these settings may drastically alter your final results and do not always transfer through version control

Why is this significant? Well, your 3D object’s original UV set is typically used to align and apply textures like diffuse, specular, normal and alpha maps onto the 3D object’s surfaces. There are no real restrictions on laying out your UVs for texturing: UVs may be stretched to tile a texture, they can overlap, be mirrored… Lightmap texturing requirements in Unity, on the other hand, are different and require:

  • A 2nd UV set
  • No overlapping UVs
  • UVs must be contained in the 0 to 1, 0 to 1 UV co-ordinate space

Unwrapping and packing UVs so they don’t overlap and are optimally contained in 0 to 1 UV co-ordinate space is tedious and time consuming for a tech artist. Many developers without a tech artist purchase 3D models online to “save time and money”. Typically those models won’t have 2nd UV sets included. Beast can Unwrap lightmapping UVs for the developer without much effort in the Unity Inspector by:

Unity FBX import settings for Lightmapping UVs

Advanced Unity FBX import settings for Lightmapping UVs

  • Selecting the FBX to lightmap in the Unity Project Editor window
  • Set the FBX to Static in the Inspector
  • Check Generate Lightmap UVs in the FBXImporter Inspector settings
  • Change options in the Advanced Settings if needed

Atlasing multiple 3D model’s UVs and textures is extremely time consuming and not always practical especially when textures and models may change at a moment’s notice during the development process.  Frequent changes to atlased assets tend to create overwhelming amounts of tedious work. Again, Beast’s automation is truly a great time saver allowing flexibility in atlasing for iterative level design plus scene, object and texture changes in the Unity editor.

Sample atlases in Unity

Beast’s automation is truly great, except for when your team is using both Mac and PC computers on the same project with version control. Sometimes lightmaps will appear to be totally fine on a Mac and look completely messed up on a PC, and vice versa. It’s daunting to remedy this, and it may require, among several tasks, re-baking all the lightmaps for the scene.

Why are there differences between the Mac and PC when generating 2nd UV sets in Unity? The answer is that Mac and PC computers have different floating point precision, which is used to calculate and generate 2nd UV sets for lightmapping upon import into the Unity editor. The differences between Mac- and PC-generated UVs are minute but can lead to drastic visual problems. One might assume that with version control like Unity Asset Server or Git the assets would be synced and exactly the same, but they are not. Metadata and version control issues are for another blog post down the road.

What can one do to avoid issues with 2nd UV sets across Mac and PC computers in Unity? Well, here are four of my tips to avoid lightmap issues in Unity:

Inconsistent lightmaps on Mac and PC in the Unity Editor - Courtesy of Brass Monkey - Monkey Golf

  1. Create your own 2nd UV sets and let Beast atlas, index and apply your lightmaps in your Unity scene
  2. Avoid re-importing or re-generating 2nd UV assets if the project is being developed in Unity across Mac and PC computers when you’re not creating your own 2nd UV sets externally
  3. Use external version control like Git with Unity Pro with metadata set to be exposed in the Explorer or Finder to better sync changes to your assets and metadata
  4. Use 3rd party editor scripts like Lightmap Manager 2 to help speedup the lightmap baking process by empowering you to be able to just re-bake single objects without having to re-bake the entire scene

Getting Down To Business – The How To Section

If your 3D model already has a good 2nd UV set and you want to enable Unity to use it:

  • Select the FBX in the Unity Project Editor window
  • Simply uncheck Generate Lightmap UVs in the FBXImporter Inspector settings
  • Re-bake lightmaps

How to add or create a 2nd UV set in Maya to export to Unity if you don’t have a 2nd UV set already available?

Workflow 1 -> When you already have UV’s that are not overlapping and contained within the 0 to 1 co-ordinate space:

  1. Import and select your model in Maya (be sure not to include import path info in your namespaces)
  2. Go to the Polygon Menu Set
  3. Open the Window Menu -> UV Texture Editor to see your current UVs
  4. Go to Create UVs Menu -> UV Set Editor
  5. With your model selected click Copy in the UV Set Editor to create a 2nd UV set
  6. Rename your 2nd UV set to whatever you want
  7. Export your FBX with its new 2nd UV set
  8. Import the Asset back into Unity
  9. Select the FBX in the Unity Project Editor window
  10. Uncheck Generate Lightmap UVs in the FBXImporter Inspector settings.
  11. Re-bake Lightmaps

Workflow 2 -> When you have UV’s that are overlapping and/or not contained within the 0 to 1 co-ordinate space:

  1. Import and select your model in Maya (be sure not to include import path info in your namespaces)
  2. Go to the Polygon Menu Set
  3. Open the Window menu -> UV Texture Editor to see your current UVs
  4. Go to Create UVs menu -> UV Set Editor
  5. With your model selected click either Copy or New in the UV Set Editor to create a 2nd UV set depending on whether or not you want to try to start from scratch or to work from what you already have in your original UV set
  6. Rename your 2nd UV set to whatever you want
  7. Use the UV layout tools in Maya’s UV Texture Editor to layout and edit your new 2nd UV set being certain to have no overlapping UV’s contained in the 0 to 1 UV co-ordinate space (another tutorial on this step will be in a future blog post)
  8. Export your FBX with its new 2nd UV set
  9. Import the Asset back into Unity
  10. Select the FBX in the Unity Project Editor window
  11. Uncheck Generate Lightmap UVs in the FBXImporter Inspector settings.
  12. Re-bake Lightmaps

Workflow 3 -> Add a second UV set from models unwrapped in a 3rd party UV tool like Headus UV or Zbrush to your 3D model in Maya

  1. Import your original 3D model into the 3rd party application, like Headus UV, and lay out your 2nd UV set, being certain to have no overlapping UVs, all contained in the 0 to 1 UV co-ordinate space (tutorials to come)
  2. Export your model with a new UV set for lightmapping as a new version of your model named something different from the original model.
  3. Import and select your original Model in Maya (be sure not to include import path info in your namespaces)
  4. Go to the Polygon Menu set
  5. Open the Window Menu -> UV Texture Editor to see your current UVs
  6. Go to Create UVs Menu -> UV Set Editor
  7. With your model selected click New in the UV Set Editor to create a 2nd UV set
  8. Select and rename your 2nd UV set to whatever you want in the UV Set Editor
  9. Import the new model with the new UV set being certain to have no overlapping UV’s all contained in the 0 to 1 UV co-ordinate space
  10. Make sure your two models are occupying the exact same space, with all transform values like translation, rotation and scale being exactly the same
  11. Select the new model in Maya and be sure its UV set is selected in the UV Set Editor
  12. Shift-select the old model in Maya (you may need to do this in the Outliner) and be sure its 2nd UV set is selected in the UV Set Editor
  13. In the Polygon Menu Set go to the Mesh Menu -> Transfer Attributes Options
  14. Reset the Transfer Attributes Options settings to default via File -> Reset Settings within the Transfer Attributes menus
  15. Set Attributes to Transfer all to -> Off except for UV Sets to -> Current
  16. Set Attribute Settings to -> Sample Space Topology with the rest of the options at default
  17. Click Transfer at the bottom of the Transfer Attributes Options
  18. Delete non-deformer history on the models (or the UVs will break) by going to the Edit menu -> Delete by Type -> Non-Deformer History
  19. Select the original 3D model’s 2nd UV set in the UV Set Editor window and look at the UV Texture Editor window to see if the UVs are correct
  20. Export your FBX with its new 2nd UV set
  21. Import the Asset back into Unity
  22. Select the FBX in the Unity Project Editor window
  23. Uncheck Generate Lightmap UVs in the FBXImporter Inspector settings.
  24. Re-bake Lightmaps

Once you have added your own 2nd UV sets for Unity lightmapping, there will be no lightmap differences between projects in the Mac and PC Unity editors! You will have ultimate control over how 2nd UV space is packed, which is great for keeping down vertex counts from your 2nd UV sets, minimizing mipmap bleeding and maintaining consistent lightmap results!

Keep an eye out for more tutorials on UV and Lightmap troubleshooting in Unity coming in the near future on the Infrared5 blog! You can also play Brass Monkey’s Monkey Golf to see our bear examples in action.

-Elliott Mitchell

@mrt3d on Twitter

@infrared5 on Twitter


Android Graphics and Animation Part III – Handling the Accelerometer

September 19th, 2011 by Keith Peters

It’s been a while, but we finally come to part 3 of this series.

In part 1 we learned the basics of setting up an Android project in Eclipse and drawing to the canvas.

In part 2 we covered animation and threading.

In this episode, we will look at handling the accelerometer, allowing you to control the animation by tilting your Android device.

Set up

We’ll continue on with the same project we created last time, which had a circle moving from left to right across the screen. Since we’ll be allowing the user to tilt the phone in any direction, we want to disable the auto-rotation feature that will change the orientation when the phone is tilted. This is done in the Android manifest xml file. We want to add the following line to the activity tag:

This will force the device to remain in portrait orientation no matter how it is tilted. The whole manifest will now look like this:

Listening for Accelerometer Events

The next thing we want to do is listen for accelerometer events, which will occur whenever the device is tilted. In reality, it’s not as though you only get events when the device is moving; you will get a steady stream of accelerometer events even if the device is sitting by itself on a table. But different types of applications may need to get these events more or less often. For example, a game may need to be very responsive and very up to date on the events coming in, whereas in another type of application it may not be so vital, and you can opt to get the events less often to save on processing. Android allows you to listen for these events using the following values to control how often you will get them:
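The standard rate constants, from fastest to slowest, are listed here for reference since the original listing isn’t shown:

    SensorManager.SENSOR_DELAY_FASTEST  // as fast as the system can deliver
    SensorManager.SENSOR_DELAY_GAME     // suitable for games
    SensorManager.SENSOR_DELAY_UI       // suitable for user interface updates
    SensorManager.SENSOR_DELAY_NORMAL   // the default, suitable for orientation changes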

These are all static values on the SensorManager class. We’ll see how they are used in a moment.

First we’ll go into our AnimView class which starts out like this right now:

In addition to implementing the SurfaceHolder.Callback interface, we now want to implement the android.hardware.SensorEventListener interface. So import that interface and add it to the class signature:

This, of course, will cause the compiler to complain that you have not implemented the required methods of that interface. Triggering a quick fix will add the following method stubs:

Before we do anything with these, we need to write the code that listens for the sensor events. We’ll do that right in the constructor.

First we get an instance of the SensorManager class. This is done by calling the getSystemService on the context that is passed into the view’s constructor. We tell it which service we want: the Context.SENSOR_SERVICE.

Once we have the SensorManager, we need to check if there is indeed an accelerometer on this device. If so, we get the first available one. I’m not sure if any existing device has more than one accelerometer, but the API leaves that possibility open.

Finally, we register this class as a listener to the accelerometer, passing in the sensor delay you need, as covered earlier. At this point, we should start receiving events in the two methods we just added.
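The original constructor listing isn’t shown, but the registration described above generally looks like the sketch below; names carried over from earlier parts of this series (such as the AnimView constructor) are assumptions.

    // Inside the AnimView constructor:
    SensorManager sensorManager =
            (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);

    // Check whether this device has an accelerometer and take the first one available.
    java.util.List<Sensor> sensors =
            sensorManager.getSensorList(Sensor.TYPE_ACCELEROMETER);
    if (!sensors.isEmpty()) {
        Sensor accelerometer = sensors.get(0);
        // Register this view as the listener, using the delay that fits your needs.
        sensorManager.registerListener(this, accelerometer,
                SensorManager.SENSOR_DELAY_GAME);
    }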

All we are interested in now is the onSensorChanged method, which will give us the data on how the device is currently oriented. As you can see, this method gets passed an instance of SensorEvent. This object has a property called values, which is a simple array of floats. For accelerometer events, the tilt on the x, y and z axes are represented by the first 3 elements of the array. i.e.:

event.values[0] is the degree of tilt on the x axis
event.values[1] is the degree of tilt on the y axis
event.values[2] is the degree of tilt on the z axis

All of these values will be in the range of -9.81 to +9.81. For a thorough explanation of why these values are used, see the SensorEvent class documentation here:

http://developer.android.com/reference/android/hardware/SensorEvent.html

For our purposes, we only care about the x and y axes. We’ll pass those values to the AnimThread class with a method called setTilt. This method doesn’t exist yet, but we’ll create it soon.

Note that the method will probably be called before and/or after the thread is created, so we’ll test to make sure it exists before calling any methods on it. For this simple demo, we won’t do any high or low pass filtering as described in the SensorEvent documentation, but if you wanted to do so, this would be a good place to do it.
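A sketch of the two listener methods, passing the x and y readings through to the thread as described; the thread field name is an assumption.

    @Override
    public void onSensorChanged(SensorEvent event) {
        // x and y tilt only; z is ignored for this demo, and no filtering is applied.
        if (thread != null) {
            thread.setTilt(event.values[0], event.values[1]);
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) {
        // Not needed for this example.
    }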

Handling the Tilt Values

Now we need to create the setTilt method in AnimThread, but first let’s create a few properties there. We’ll need something to hold the raw tilt values, and some properties to hold the current position and velocity of the ball. The top of that class should now look like this:

Now we can create the setTilt method, which will be pretty simple:

Now all we have to do is make use of those values. The strategy is to add the tilt values (or at least a part of them) to the velocity values, then add the velocity to the position values, and finally draw the circle at the final x, y point. Here’s the run method in full:
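The original run method isn’t reproduced here, but a simplified sketch of the strategy (tilt into velocity, velocity into position, then draw) might look like this; the scaling factor, sign handling and field names are all assumptions.

    @Override
    public void run() {
        while (running) {
            android.graphics.Canvas canvas = null;
            try {
                canvas = surfaceHolder.lockCanvas();
                if (canvas != null) {
                    synchronized (surfaceHolder) {
                        // Fold a fraction of the raw tilt (-9.81..9.81) into the velocity.
                        vx += -tiltX * 0.1f; // sign may need flipping depending on orientation
                        vy += tiltY * 0.1f;
                        x += vx;
                        y += vy;
                        canvas.drawColor(android.graphics.Color.BLACK);
                        canvas.drawCircle(x, y, radius, paint);
                    }
                }
            } finally {
                if (canvas != null) {
                    surfaceHolder.unlockCanvasAndPost(canvas);
                }
            }
        }
    }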

We’re now at a point where you can test the app. Hold the phone flat when you start it. The ball should “roll” in the direction you tilt the phone. Of course, it will roll out of sight if you’re not careful, so we need to fix that next.

Handling Screen Edges – Bouncing

The following stuff I’ve covered in a number of books and tutorials on my personal site, www.bit-101.com, so I’m not going to belabor the point. We’re just going to see if the ball has gone past any edge of the screen and if so, place it on the edge and reverse the velocity on that axis. Here’s the final AnimThread class in full:
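The full final listing isn’t shown here, but the edge handling it describes amounts to a few checks added inside the run loop. A sketch, with an assumed damping factor to give the slightly weaker bounce described in the next paragraph:

    // After updating x and y each frame: clamp to the screen edge and reverse the velocity,
    // losing a little energy so each bounce is slightly weaker than the impact.
    if (x - radius < 0) {
        x = radius;
        vx = -vx * 0.9f;
    } else if (x + radius > canvasWidth) {
        x = canvasWidth - radius;
        vx = -vx * 0.9f;
    }
    if (y - radius < 0) {
        y = radius;
        vy = -vy * 0.9f;
    } else if (y + radius > canvasHeight) {
        y = canvasHeight - radius;
        vy = -vy * 0.9f;
    }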

Now as you tilt the device, the ball will bounce off the edges of the screen, with just a little less force than it hit.

Summary

We now have a working, accelerometer-based interactive animation. This isn’t meant to be a perfect example in terms of best practices. I’d probably pull out a lot of hard-coded values into variables, extract some of the code into separate methods, etc. I’d also get a time delta between updates so that devices running at different speeds would run the animation at the same rate. Perhaps I’ll be able to cover some of that in a future tutorial. But this gives you a good idea of the structure, what happens where, and how to get started.
