How-To: Converting Recorded Files to MP4 for Seamless Playback via Red5 Pro

December 18th, 2015 by Chris Allen

A lot of developers have been asking us detailed questions about recording streams using Red5 Pro. The one thing that comes up most frequently is how to create an MP4 file for easy playback on mobile devices. While you can use our SDK to play back the recorded file as is (Red5 records in FLV format), this requires that the user always use your native app, and sometimes it would just be nice to provide a link to an MP4 for progressive download.

So with that, here’s a quick guide to get you up and running with a custom Red5 application that converts recorded files to MP4 for easy playback. You can find the source for this example on GitHub.

Install ffmpeg

The first step is to install ffmpeg on your server. ffmpeg is a great command-line utility for manipulating video files, such as converting between formats, which is exactly what we want to do for this example.

You can find precompiled packages for your platform on ffmpeg’s website, although if you need to convert your recordings to formats that require an extra module on top of ffmpeg, you might need to compile it from scratch. Once you have ffmpeg on your server, just issue a simple command to make sure it’s working. The simplest thing you can do is call ffmpeg with no params, like this: ./ffmpeg. That should output something like this:

(Screenshot: ffmpeg version and configuration output)

If you see it output the version number and other info, it means you have a working version installed on your machine. Note that I’m running on my local Mac, but the same concept applies to any other OS. Also note that for production use you would want to put the binary in a better place than ~/Downloads, and you would want to update your PATH so you can call the ffmpeg application from anywhere.

 

Start Your Own Red5 Application

Now that you have ffmpeg on your server, the next step is to build your own custom server-side application for Red5, extending the live example here. You will use this as the beginning of your custom app that will convert your recorded files to MP4.

 

Override MultiThreadedApplicationAdapter.streamBroadcastClose()

Now on to calling ffmpeg from your custom Red5 Application.

In your MultiThreadedApplicationAdapter class you will now need to override the streamBroadcastClose() method. As is pretty obvious from the name, this method is called automatically by Red5 Pro once a stream broadcast has finished and closed. Even though the broadcast is finished, that doesn’t mean the file has finished being written. So, in order to handle that, we need to create a thread that waits and allows the file writer to finish the FLV write-out. Then we access the file and convert it to an MP4 using ffmpeg.

 

Let’s see what that looks like in code (a consolidated sketch of the whole method follows the numbered steps):

1. Override the method.

 

2. Check to make sure there is a file that was recorded.

 

3. Set up some variables to hold the data about the recorded file.

 

4. Create a thread to wait for the FLV file to be written. Five seconds should be plenty of time.

 

5. Next we need to get a reference to the FLV file.

 

6. Make sure that there is a file before continuing.

7.  Get the path to the file on the file system and the path to where you want the MP4 version saved.

 

8. Delete any previously created file, otherwise ffmpeg will fail to write the new one.

 

9. Tell your application where to find ffmpeg. Note in this case we’ve added the app to our PATH so we can call it from anywhere.

10. Create the FFMPEG  command.

 

11. Run the process.

 

12. Read the output from FFMPEG  for errors and other info.

 

13. Create threads for each of these and wait for the process to finish.

 

14. Make sure that the process created the file and that it didn’t log any errors.

 

15. Finally catch any potential exceptions and close out the thread you created.
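
Putting the steps above together, a rough consolidated sketch might look like the following. The Red5-specific calls (getSaveFilename(), the location of the streams folder) and package paths are written from memory and are assumptions; treat this as a sketch and refer to the linked example project for the working code.

```java
import java.io.BufferedReader;
import java.io.File;
import java.io.InputStreamReader;

import org.red5.server.adapter.MultiThreadedApplicationAdapter;
import org.red5.server.api.stream.IBroadcastStream;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Application extends MultiThreadedApplicationAdapter {

    private static final Logger log = LoggerFactory.getLogger(Application.class);

    @Override
    public void streamBroadcastClose(final IBroadcastStream stream) {
        super.streamBroadcastClose(stream);
        // 2-3. make sure something was actually recorded and grab its name
        final String flvName = stream.getSaveFilename();
        if (flvName == null || flvName.isEmpty()) {
            return;
        }
        final String scopeName = stream.getScope().getName();
        // 4. wait in a separate thread so the FLV writer can finish flushing the file
        new Thread(new Runnable() {
            public void run() {
                try {
                    Thread.sleep(5000); // five seconds should be plenty
                    // 5-7. locate the FLV on disk (this path layout is an assumption)
                    File flv = new File(System.getProperty("red5.root"),
                            "webapps/" + scopeName + "/streams/" + flvName);
                    if (!flv.exists()) {
                        return;
                    }
                    String flvPath = flv.getAbsolutePath();
                    String mp4Path = flvPath.replace(".flv", ".mp4");
                    // 8. ffmpeg will not overwrite an existing file by default
                    File mp4 = new File(mp4Path);
                    if (mp4.exists()) {
                        mp4.delete();
                    }
                    // 9-11. build and run the ffmpeg command (ffmpeg assumed to be on the PATH)
                    Process proc = new ProcessBuilder("ffmpeg", "-i", flvPath,
                            "-c:v", "libx264", "-c:a", "aac", mp4Path)
                            .redirectErrorStream(true)
                            .start();
                    // 12-13. read ffmpeg's output and wait for it to finish
                    BufferedReader reader = new BufferedReader(
                            new InputStreamReader(proc.getInputStream()));
                    String line;
                    while ((line = reader.readLine()) != null) {
                        log.debug("ffmpeg: {}", line);
                    }
                    int exitCode = proc.waitFor();
                    // 14. verify the MP4 was actually created
                    if (exitCode != 0 || !mp4.exists()) {
                        log.warn("MP4 conversion failed, exit code {}", exitCode);
                    }
                } catch (Exception e) {
                    // 15. catch anything so a failed conversion never breaks the app
                    log.error("Error converting {} to MP4", flvName, e);
                }
            }
        }).start();
    }
}
```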

 

That’s it! You’ve now created an MP4 file from the recording. Rather than typing that out line by line we’ve provided the example project here:

https://github.com/red5pro/red5pro-server-examples/tree/develop/FileConversion

Now that you know how to intercept one of the methods of the MultiThreadedApplicationAdapter in Red5 Pro, you can start to look at other ones you could override to add custom behavior to your app. Many people who record their files want to move them to an Amazon S3 bucket for storage. Adding in a few lines using Amazon’s API to do that wouldn’t be too challenging.

What other things would you want to do with a custom Red5 application? We would love to hear from you.

-Chris

 

Build Periscope in 10 Minutes

November 30th, 2015 by Chris Allen


With live streaming becoming increasingly prevalent in 2015, developers are focused on creating applications to address the public’s fascination with streaming media. Periscope is the prime example of such an application and the sheer size of Periscope’s user base and class-leading engagement metrics validate its dominance in the space.

But what does it take to build a live streaming and communication platform such as Periscope, with the capability to broadcast to one hundred thousand or even one million subscribers? What if I told you that you could build a live streaming application with Periscope-like functionality and scalability in just 10 minutes?

Before we created Red5 Pro, it took some serious effort to build this kind of server-side infrastructure, and just as much to tackle the complexity of building native Android and iOS video encoders/decoders that work with the server. We saw this trend of a new kind of mobile app that connects people in real time, and we saw these early adopters cobble together inefficient software, wasting tons of time and energy. We couldn’t let that continue, so we decided to make it easy for developers. With Red5 Pro, you truly have the ability to build the guts of the next live streaming phenomenon in a matter of minutes, and here’s how:

Let’s first start with all the pieces, and what you would need to build if you were to do this from scratch.

The Fundamentals

1. Publish from the mobile client:

  • Access the camera

  • Encode the video

  • Encode microphone data

  • Negotiate a connection with a media server

  • Implement a low-latency streaming protocol for transport

  • Stream the data to the server

2. Intercept with a media server

  • Intercept the stream

  • Relay to other clients

      and/or

  • Re-stream to a CDN (adds latency)

  • Record the stream (optional)

3. Implement client side subscribing:

  • HLS in WebView (even more latency)

and/or

  • Setup connection with media server

  • Implement streaming protocol

  • Mix the audio and video

  • Decode video/audio

  • Render video and play the audio

 

*Note: this is actually a simplified list of all the tasks involved. Try doing this on multiple threads and getting it to perform well; it is complicated! It’s truly a rabbit hole that most developers don’t want to venture down. Given the awesome tools and libraries that exist for us developers, we thought it was ridiculous that an easy-to-use and extensible live streaming platform just didn’t exist. That’s why we built Red5 Pro.

 

Red5 Pro to the Rescue

Let’s uncomplicate this. The Red5 Pro Streaming SDKs provide what we think is an intuitive and flexible API to remove the complexity while retaining tremendous control if you need it. Let’s take a look at the classes our SDKs provide. (note that they are the same on Android and iOS).


Let’s step through an example using these classes, piece by piece.

The Publisher

R5Configuration:


The first step is to create an R5Configuration. This class holds the various data used by your streaming app. It contains things like the address of your server, the ports used, protocols, etc. In this example we are connecting to a server running at 192.168.0.1 on port 8554 via the RTSP protocol. This Red5 Pro server has an app called “live” running on it, and that is what we want to connect to based on the context name. And finally we are setting the buffer time to half a second.

iOS

 

Android
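
As a rough sketch of what this might look like on Android in Java (the class names and constructor signature are assumptions based on the 2015-era Red5 Pro Android SDK, imports omitted):

```java
// RTSP to 192.168.0.1 on port 8554, app context "live", half-second buffer
R5Configuration config = new R5Configuration(R5StreamProtocol.RTSP,
        "192.168.0.1", 8554, "live", 0.5f);
```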

 

R5Connection:


Next, you create an R5Connection object, passing in your configuration. This establishes a connection to the Red5 Pro media server.

 

iOS

 

Android
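
Continuing the hedged Android sketch from the configuration step (same caveats about exact SDK signatures):

```java
// Wrap the configuration in a connection to the Red5 Pro media server
R5Connection connection = new R5Connection(config);
```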

 

 

R5Stream:


Now you create a stream object passing in the connection object you just created. Note that the R5Stream is also used for incoming streams, which we will get to in a bit.

 

iOS

 

 

Android
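
Again as a rough Android fragment:

```java
// The same R5Stream class handles both outgoing and incoming streams
R5Stream stream = new R5Stream(connection);
```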

 

R5Camera:


Next we create a camera object and attach it to the R5Stream as a video source.

 

iOS

 

 

Android
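
A hedged Android fragment; android.hardware.Camera was the camera API of the day, and the R5Camera constructor arguments (camera, width, height) are an assumption:

```java
// Open the default camera and attach it to the stream as the video source
Camera camera = Camera.open();
R5Camera videoSource = new R5Camera(camera, 640, 360);
stream.attachCamera(videoSource);
```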

 

 

R5Microphone:


Then we create a microphone object and attach it to the stream as an audio source.

 

iOS

 

 

Android
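
Roughly, on Android:

```java
// Attach the device microphone to the stream as the audio source
R5Microphone mic = new R5Microphone();
stream.attachMic(mic);
```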

 

 

R5VideoView:

 


While it’s not a requirement for publishing a live stream, we find it useful to show the user a preview of the video being streamed from the camera. This is how you set that up.

 

iOS

 

 

Android
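
A rough Android fragment; the R5VideoView is assumed to be declared in the Activity’s layout, and the id videoPreview is a placeholder:

```java
// Show the user a preview of the outgoing camera feed
R5VideoView preview = (R5VideoView) findViewById(R.id.videoPreview);
preview.attachStream(stream);
```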

 

R5Stream.publish():


Finally the last step for the publisher is to publish the stream using a unique name that the subscriber can subscribe to.

 

iOS

 

 

Android
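
Roughly, on Android (the stream name "mystream" is a placeholder and the RecordType enum name is an assumption):

```java
// Publish under a unique name; the live record type means nothing is recorded server-side
stream.publish("mystream", R5Stream.RecordType.Live);
```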

 

The record type parameter tells the server which recording mode to use. In this example we are setting it to live, meaning it won’t record the stream on the server.

 

Here are your other choices.

R5RecordTypeLive – Stream, but do not record.

R5RecordTypeRecord – Stream and record under the given file name, replacing any existing recording.

R5RecordTypeAppend – Stream and append to any existing recording.

If you compiled and ran this app with it configured to point to a running Red5 Pro server, you would be able to see it running in your browser. Open a browser window and navigate to http://your_red5_pro_server_ip:5080/live/streams.jsp to see a list of active streams. Click on the Flash version to subscribe to your stream.

 


The Subscriber

Now that we’ve built the publisher, we have a live stream being published to the server. Yes, we did see the stream in Flash, but in order to consume that stream on mobile we need to build the subscriber client. Let’s dig into that now.

 

R5Configuration:


Just as before, we setup a configuration object holding the details of our connection and protocols.

 

iOS

 

 

Android

 

 

R5Stream:


Then, like in the publisher, we set up a stream by passing in the configuration into the constructor.

 

iOS

 

 

Android

 

R5VideoView:


This is the step where things deviate just a little from the publisher. We still set up an R5VideoView, but this time we use it to display the incoming stream.

 

iOS

 

 

Android

 

 

R5Stream.play():


Finally, we tell the stream to play by using the play method and passing in the unique stream name that the publisher is using.

 

iOS

 

Android
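
As a rough Android fragment covering the subscriber side end to end (the configuration is built exactly as on the publisher; the view id and stream name are placeholders and the SDK signatures are assumptions):

```java
// config is created the same way as on the publisher side
R5Stream stream = new R5Stream(new R5Connection(config));
R5VideoView display = (R5VideoView) findViewById(R.id.videoView);
display.attachStream(stream);   // render the incoming stream in the view
stream.play("mystream");        // must match the name used by the publisher
```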

 

Voilà, you can now build your own one-to-many live streaming experience, all within minutes, with the help of Red5 Pro. What do you think, are there ways we could make this even easier? We love hearing feedback, so let us know in the comments or email us directly. Happy coding!


Boid Flocking and Pathfinding in Unity, Part 3

July 23rd, 2012 by Anthony Capobianchi

In this final installment, I will explore how to set up a ray caster to determine a destination object for the Boids, and how to organize a number of different destination points for your Boids so that they do not pile on top of each other.

Organizing the Destinations -

The idea is to create a marker for every Boid that will be placed near the destination, defined by the ray caster. This keeps Boids from running past each other or pushing each other off track.

For each Boid in the scene, a new Destination object will be created and managed. My Destination.cs script looks like this:

This is very similar to the Boid behaviors we set up in Boid.cs. We create coherency and separation vectors just as before, except this time the two vectors are applied to a rigid body. I am using the rigid body’s velocity property to determine when the destination objects are finished moving into position.

Positioning and Managing the Destinations -

Now we create a script that handles instantiating all the destination objects we need for our Boids, placing each one in relation to a Boid, and using each destination’s Boid behaviors to organize them. I created a script called DestinationManager.cs where this will be housed.

First off we need to set up our script:


We need to create our ray caster that will tell the scene where to place the origin of our placement nodes. Mine looks like this:


The ray caster shoots a ray from the camera’s position to the ground, setting the Boid’s destination where it hits.

Next, we take the destinations that were created and move them together using the Boid behaviors we gave them.


The Boid system is primarily used for the positioning of the Destination objects. This method ensures that the Boid system will not push your objects off of their paths, confusing any pathfinding you may be using.


Boid Flocking and Pathfinding in Unity, Part 2

July 5th, 2012 by Anthony Capobianchi

In my last post, we worked through the steps to create a Boid system that will keep objects together in a coherent way, and a radar class that will allow the Boids to detect each other. Our next step is to figure out how to get these objects from point “A” to point “B”, by setting a destination point.

Pathfinding -

For this example, I used Aron Granberg’s A* Pathfinding Project to handle my pathfinding. It works great and uses Unity’s CharacterController to handle movement, which helps with this example. A link to download this library can be found at http://www.arongranberg.com/unity/a-pathfinding/download/ and a guide to help you set it up in your scene can be found here: http://www.arongranberg.com/astar/docs/getstarted.php

In Boid.cs I have my path finding scripts as such:

Applying the Forces -

Once we have the calculated force of the Boid behaviors and the pathfinder, we have to put those together and apply the result to the character controller. We use Unity’s Update function in Boid.cs to constantly apply the forces.

In my next post, we will look at using a ray caster to set a destination point in the scene for the Boids, as well as how to organize a number of different destination points for the Boids to keep them from piling on top of each other.

Read part one here.

Boid Flocking and Pathfinding in Unity

June 20th, 2012 by Anthony Capobianchi

The quest for creating believable, seemingly intelligent movement among groups of characters is a challenge many game developers encounter. A while back, I had an idea for an app that required units of moveable objects to be able to coordinate and get from point ‘A’ to point ‘B’. I’ve never built anything like an RTS before, so this concept was something new to me. I searched forums and articles looking for answers on how people achieve this sort of behavior. The majority of the help I could find pointed to a system referred to as “Boids” or “flocking”. A Boid system can be set up to simulate flocks or herds, allowing moving units to animate independently, giving the illusion of artificial intelligence. Over the next three blog posts, I will outline solutions to the following problems:

  1. Boid System – Creating a system to keep objects together in a coherent way.
  2. Radar Class – Creating a class that will detect if any Boids are within a certain distance from another Boid.
  3. Path and Obstacle Avoidance – Getting an object to follow a path as well as avoid obstacles that keep them from getting to their destination.
  4. Ray Caster – Setting up a ray caster that will be used to place a destination object in the scene for the Boids.
  5. Destination Points – Organizing a number of different destination points for the Boids that will prevent them from piling on top of each other.

BOID SYSTEM

Normally, I would split up the functions into different scripts depending on their purpose: for instance, a script for calculating the Boid behavior force, a script for the radar, and a script for path calculation. However, to keep the script count down and to avoid the confusion of not always knowing which script a function goes into, I consolidated it into only a few scripts -

I.    Boid.cs
II.    Destination.cs
III.    DestinationManager.cs

NOTE: All properties and variables should go at the top of your classes; in my code examples I am putting the properties above the methods to show you which properties you need in your script and why.

•    Boids

The Boid system is accomplished by creating a script (which I named Boid.cs) that controls the basic movement behaviors. These include coherency, which is the will to stick together, and separation, which is the will to keep apart. If you are unfamiliar with the concept of Boids or flocking, a great article about it can be found at http://www.vergenet.net/~conrad/boids/pseudocode.html and a C# Unity example can be found here: http://virtualmore.org/wiki/index.php?title=SimpleBoids

To set up my Boid.cs script, I set up these properties:

My Boid behaviors are set up like this:

• RADAR CLASS

We need a way for every Boid to know if there are other Boid objects surrounding it within a certain radius. In order to create this effect, we will create functions that handle radar scans and define what the scanner is looking for. The radar will be called to scan a few times every second; it is not called from the Update function. If it were, a collision check using Physics.OverlapSphere would run every frame. This could cause frame rates to drop, especially if you have a lot of Boids in the scene.

In my Boid.cs script my Radar functions are set up like this:

In my next post, I will explain how I solved the problem of getting an object to follow a path while avoiding obstacles. In addition, I will explain what will be needed to apply the forces that are calculated by the Boid and pathfinding systems to get our characters moving.
Anthony Capobianchi


Red5 Authentication

May 7th, 2012 by Paul Gregoire

How to implement CRAM authentication in Red5

In this post we will set up a challenge-response authentication mechanism (CRAM) in a Red5 application using two different methods: the first one is very simple and the other utilizes the powerful Spring Security libraries. A basic challenge-response process works like so:

  • Client requests a session
  • Server generates a unique, random ChallengeString (e.g. a salt or GUID) as well as a SessionID and sends both to the client
  • Client gets the UserID and Password from the UI, hashes the password once and calls it PasswordHash, then combines PasswordHash with the random string received from the server in step 2 and hashes them together again; call this ResponseString
  • Client sends the server the UserID, ResponseString and SessionID
  • Server looks up the user’s stored PasswordHash based on the UserID, and the original ChallengeString based on the SessionID. It then computes the ResponseHash by hashing the PasswordHash and ChallengeString together. If it equals the ResponseString sent by the user, authentication succeeds (a minimal sketch of this final check follows the list).
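
As a minimal, generic Java sketch of that final server-side comparison (this is not the Red5 auth plugin’s actual code; the hash algorithm and hex encoding are placeholders):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public final class CramCheck {

    /** True when hash(storedPasswordHash + challenge) matches the client's ResponseString. */
    public static boolean verify(String storedPasswordHash, String challenge, String responseString)
            throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256"); // placeholder algorithm
        byte[] digest = md.digest((storedPasswordHash + challenge).getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b & 0xff)); // hex-encode the digest
        }
        return hex.toString().equals(responseString);
    }
}
```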

Before we proceed further, it is assumed that you are somewhat familiar with Red5 applications and the Flex SDK. For those who would like a quick set of screencasts to get up-to-speed, we offer the following:

Implementation

Implementing a security mechanism is as simple as adding an application lifecycle listener to your application. Red5 supports a couple of types of CRAM authentication via an available auth plugin. The first one implements the FMS authentication routine and the other is a custom routine developed by the Red5 team. In this post we will use the Red5 routine. An ApplicationLifecycle class implementing the Red5 routine may be found in the Red5 source repository; this code only validates against the password “test”. While this class would not be useful in production, it may certainly be used as a starting point for a real implementation. Red5AuthenticationHandler Source

To enable the usage of your Red5AuthenticationHandler class or any other ApplicationLifecycle type class for that matter, you must add it to the listeners in your Application’s appStart method.
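
A minimal sketch of that registration in the application adapter (the handler is constructed inline here for clarity, whereas the demo wires it up as a Spring bean in red5-web.xml; the addListener call and package paths should be checked against your Red5 version):

```java
import org.red5.server.adapter.MultiThreadedApplicationAdapter;
import org.red5.server.api.scope.IScope; // older Red5 versions use org.red5.server.api.IScope

public class Application extends MultiThreadedApplicationAdapter {

    @Override
    public boolean appStart(IScope app) {
        // Register the CRAM handler before any client connections are accepted
        addListener(new Red5AuthenticationHandler());
        return super.appStart(app);
    }
}
```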

The reason for putting it in the appStart method is to ensure that the handler is added when your application starts and before it starts accepting connections. There is no other code to add to your application adapter at this point since the lifecycle methods will fire in your handler. Putting the authentication code within a lifecycle handler serves to keep the adapter code cleaner and less confusing. The authentication handler is defined in the red5-web.xml like so:


At this point, your application would require authentication before a connection would be allowed to proceed beyond “connect”. Entering any user name and the password “test” or “password” (depending on the class used in the demo) would allow a client to connect. As stated previously, this first “simple” implementation is not meant for production but is offered as a means to understand the mechanism at work.

Adding Spring Security

Once we add security layers such as Spring Security, the application and authentication features become much more robust. The first thing we must do is add the Spring Security namespace to our application’s red5-web.xml.

Replace this node:

With this node:


Add the authentication manager bean and configure it to use a plain text file. The users file contains our users, passwords, and their credentials.

To demonstrate how users are handled, we will create three users: 1 admin, 1 regular user, and 1 user without a role. The plain text users file follows this pattern: username=password,grantedAuthority[,grantedAuthority][,enabled|disabled]. A user can have more than one role specified; granted authority and role are synonymous. Below are the contents of our users file for this demo.


In addition to the authentication handler, an authentication manager must be added when using Spring Security. The manager should be added in the appStart method prior to adding the handler, as shown below.

The Spring security version of the authentication handler will replace the simple version in your red5-web.xml like so:


Lastly, an aggregated user details service is used for storage and lookups of user details; this is essentially an interface to the internal in-memory datastore holding the user details or credentials. The user details service may be configured to retrieve details from our local properties file, databases, LDAP, or Active Directory. Our aggregated service is fairly simple, as you can see below.

It should be noted that Spring security makes use of an additional Spring framework library that is not included in Red5; the transaction library provides DAO and transaction implementations which do not require an external database or related dependencies. All the libraries required for the demo are included in the project zip file.

Client code

Creating an authentication-enabled client will require a single library not included in Flex / Flash Builder, called as3crypto. This AS3 cryptography library will provide the hashing functions necessary for authentication in our demo. Download the as3crypto.swc from http://code.google.com/p/as3crypto/ and place it in the “libs” folder of our client project.

The following functions will be needed in your client to support authentication:

The sendCreds method is called “later” from the sendCredentials method to prevent issues in the event thread.

These are the imports that need to be added before beginning.

In your connect function you will need to determine which authentication mode to employ. The following block will show how to set up the connect call based on the type selected.

You may notice that the type is simply a string in the url denoting which mode to use.

In your net status event handler, you will need to add handling for the authentication routines. The following block demonstrates how to do so when a NetConnection.Connect.Rejected status is received.

Once you are successfully connected, there are two methods in the demo code for testing access. The helloAdmin method will return a positive result if the authenticated user has the admin permission. In the helloUser method the routine is similar, but only the user permission is required. The included users file contains an admin user and two regular users; the second regular user is set up to have no permissions. The user without permissions may only connect to the application and call unprotected methods.

Protected methods in the application contain an isAuthorized check against preselected permissions.

If the user does not qualify, the call fails.

In a future post, I will explain how to add Java Authentication and Authorization Service (JAAS) to Red5.

Download
Project Source Client and server


Android – Live Wallpaper part 2

March 22nd, 2012 by Paul Gregoire

Let’s make the star that we created in part 1 rotate in just a few simple steps. Before we start, though, I’d like to note that we are technically rotating the entire canvas and not just the star itself; adding more than one star to the display will clearly expose this minor detail. I’ve done some research on rotating multiple individual items at once, but I have not yet found a solution that fits; feel free to comment if you are aware of how to accomplish it.

The code modifications below are made in the LiveWallpaper.java source file; we are not using the only other source file, GoldstarActivity.java, at this time.

1. Add a variable where we keep track of the first frame drawn. This will prevent some calculations from being executed more than once per instance.

2. To rotate the star, we will use degrees. In steps 2 and 3, two methods of generation are shown. In this section we use a float counter for degrees of rotation.

3. In this section we use a fixed array for degrees of rotation. To do so, we also must create an index counter and an array of 360 floats; again, it’s up to you which option to use.

4. If using the array method, the onCreate method is modified to pre-fill our array when the application is first initialized.

Note: I’ve tried both methods for degree cycling and they seem equally fast on my device.

5. Modify our drawing code from part 1. The code block is here for reference.

Before

After (using float counter)
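
A rough sketch of what the float-counter variant of drawFrame might look like (svg, handler, drawRunner, visible and frameRate are the engine fields from part 1, and the exact names are assumptions):

```java
// Inside StarEngine
private float degrees = 0f;         // step 2: running rotation counter
private boolean firstFrame = true;  // step 1: one-time setup flag
private float centerX, centerY;

private void drawFrame() {
    final SurfaceHolder holder = getSurfaceHolder();
    Canvas canvas = null;
    try {
        canvas = holder.lockCanvas();
        if (canvas != null) {
            if (firstFrame) {
                // compute the pivot point only once
                centerX = canvas.getWidth() / 2f;
                centerY = canvas.getHeight() / 2f;
                firstFrame = false;
            }
            canvas.drawColor(Color.BLACK);              // clear the previous frame
            canvas.save();
            canvas.rotate(degrees, centerX, centerY);   // rotate the whole canvas around the center
            canvas.drawPicture(svg.getPicture());       // draw the star from part 1
            canvas.restore();
            degrees = (degrees + 1f) % 360f;            // advance the rotation for the next frame
        }
    } finally {
        if (canvas != null) {
            holder.unlockCanvasAndPost(canvas);
        }
    }
    // queue the next frame, as in part 1
    handler.removeCallbacks(drawRunner);
    if (visible) {
        handler.postDelayed(drawRunner, 1000 / frameRate);
    }
}
```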

After (using float array)

6. As an aside to what we’ve done thus far, you can easily add another star with a couple of extra lines, like so:

The additional star will be down and to the right of the primary star. This star will not rotate per se, but will instead “orbit” the primary star.

7. Last step, build and run in the emulator. Right click on the goldstar project in Package Explorer and select Run As -> Android Application. If you haven’t created any AVD (virtual devices), you’ll be prompted to create one. Creation of an AVD is covered here.

When everything works, you’ll see this in the Eclipse console:

I recorded the emulator running the apk in this screencast

Running on my Galaxy Nexus

In the sample code, I’ve refactored the original class a little to make things clearer.
Project Source

End of part two.


Android – Live Wallpaper

March 17th, 2012 by Paul Gregoire

Herein I shall walk you through the steps for creating a live wallpaper in Android. Before we begin any Android development, the SDK and ADT plugin for Eclipse will need to be installed. The best installation guide is here; disregard this if you already have the SDK and plugin installed. For this first part we will simply display a graphic, and in the follow-ups we will do some animation. Without further ado, let’s get started.

1. The first step is to create the new project
File -> New -> Android Project

We will call it “goldstar” and target Android 2.1 (API 7); this version of Android was the first to support Live Wallpapers.

2. Open up the AndroidManifest.xml file and add the nodes that we will need to support our application. Here is the manifest before our additions were made:

This is the “after” version, where we added our feature, permission, and service nodes:

3. Create a metadata file for our service. This is accomplished by making an xml directory within the res folder of the project. Create a new file named “metadata.xml” in this folder with these contents:

4. Add a description for our application. Open the strings.xml file and add a string with a name of “wallpaper_description” and a value of “Goldstar Live”. You may actually use whatever value suits you; this one is just for the example.

5. Get the svg library and place it in the “libs” folder; this folder must be created manually, if it does not already exist in the project.
We are using svg-android library from http://code.google.com/p/svg-android/ for this example. This library was also used in the popular “Androidify” application.

6. Locate an SVG image file to use in our application, preferably one that is not copyrighted. Remember that Google is your friend.

https://www.google.com/search?q=star%20svg&orq=star+filetype:+svg

Here’s a gold star on wikimedia that you can use: https://upload.wikimedia.org/wikipedia/commons/9/9c/Golden_star.svg

Once you have a suitable file, save it into the “raw” directory within the “res” directory of the project. Note that your resource may only contain this range of characters in its name: a-z0-9_.

7. Now for some code; create a new class in the wallpaper package and name it LiveWallpaper. Set the super class to android.service.wallpaper.WallpaperService and click Finish. Your new class should appear like this:

8. Create an internal class named StarEngine which extends Engine. The result of this should appear like so:

9. Right-click on StarEngine and select “Source -> Override/Implement Methods”. Now select the following methods:

onCreate
onDestroy
onVisibilityChanged
onSurfaceChanged

then click ok. This will create the method stubs that we are interested in.

10. Modify the onCreateEngine method to create a new instance of our engine.

We have also added two static variables for the frame rate and scene width.

11. Load our svg asset. Create a local engine variable and modify the onCreate method like so:

This will read the file resource and parse it to create an SVG image object.

12. A thread and handler must now be set up to take care of drawing on the canvas. We modify the engine like so:

13. Drawing on the canvas. In our drawFrame method we will use our svg asset and draw it into view.
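
Putting steps 10–13 together, a rough consolidated sketch might look like this (the svg-android call, the resource id R.raw.golden_star and the field names are assumptions; the Project Source link below has the real code):

```java
import android.graphics.Canvas;
import android.os.Handler;
import android.service.wallpaper.WallpaperService;
import android.view.SurfaceHolder;

import com.larvalabs.svgandroid.SVG;
import com.larvalabs.svgandroid.SVGParser;

public class LiveWallpaper extends WallpaperService {

    // step 10: a couple of static values used by the engine
    private static final int FRAME_RATE = 25;    // frames per second
    private static final int SCENE_WIDTH = 480;  // nominal scene width (unused in this minimal sketch)

    @Override
    public Engine onCreateEngine() {
        return new StarEngine();                 // hand Android our custom engine
    }

    class StarEngine extends Engine {

        private final Handler handler = new Handler();
        private final Runnable drawRunner = new Runnable() {
            public void run() {
                drawFrame();                     // step 12: draw loop driven by the handler
            }
        };
        private SVG svg;
        private boolean visible = true;

        @Override
        public void onCreate(SurfaceHolder surfaceHolder) {
            super.onCreate(surfaceHolder);
            // step 11: read and parse the raw SVG resource once
            svg = SVGParser.getSVGFromResource(getResources(), R.raw.golden_star);
            handler.post(drawRunner);
        }

        @Override
        public void onVisibilityChanged(boolean visible) {
            this.visible = visible;
            if (visible) {
                handler.post(drawRunner);
            } else {
                handler.removeCallbacks(drawRunner);  // stop drawing while hidden
            }
        }

        @Override
        public void onDestroy() {
            super.onDestroy();
            handler.removeCallbacks(drawRunner);
        }

        // step 13: draw the parsed SVG onto the wallpaper surface
        private void drawFrame() {
            SurfaceHolder holder = getSurfaceHolder();
            Canvas canvas = null;
            try {
                canvas = holder.lockCanvas();
                if (canvas != null) {
                    canvas.drawPicture(svg.getPicture());
                }
            } finally {
                if (canvas != null) {
                    holder.unlockCanvasAndPost(canvas);
                }
            }
            handler.removeCallbacks(drawRunner);
            if (visible) {
                handler.postDelayed(drawRunner, 1000 / FRAME_RATE);
            }
        }
    }
}
```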

14. Build and run in the emulator; you should see something like this:

15. Lastly, if you want nicer launcher images for your application, there are free services you can use, such as this one:

http://android-ui-utils.googlecode.com/hg/asset-studio/dist/icons-launcher.html

Just upload your image and do a little configuration and you get a zip containing all the launcher images you need.

Project Source

End of part one; for part two we will cover animation.


Creating 2nd UV sets in Maya for Consistent and Reliable Lightmapping in Unity 3d

January 11th, 2012 by Elliott Mitchell

Lightmaps in the Unity Editor - Courtesy of Brass Monkey - Monkey Golf

Have you ever worked on a game that was beautifully lit in the Unity editor but ran like ants on molasses on your target device? Chances are you might benefit from using lightmaps. Ever worked on a game that was beautifully lit with lightmaps but looked different between your Mac and PC in the Unity editor? Chances are you might want to create your own 2nd UV sets in Maya.

Example of a lightmap


If you didn’t know, lightmaps are 2D textures pre-generated by baking (rendering) lights onto the surfaces of 3D objects in a scene. These textures are additively blended with the 3D model’s original textures to simulate illumination and fine shadows without the use of realtime lights at runtime. The number of realtime lights rendering at any given time can make or break a 3D game when it comes to optimal performance. By reducing the number of realtime lights and shadows your games will play through more smoothly. Using fewer realtime lights also allows for more resources to be dedicated to other aspects of the game like higher poly counts and more textures. This holds true especially when developing for most 3D platforms including iOS, Android, Mac, PC, Web, XBox, PS3 and more.

Since the release of Unity 3 back in September 2010, many Unity developers have been taking advantage of Beast Lightmapping as a one-stop lightmapping solution within the Unity editor. At first glance Beast is a phenomenal time-saving and performance-enhancing tool. Rather quickly, Beast can automate several tedious tasks that would otherwise need to be performed by a trained 3D technical artist in an application like Maya. Those tasks, being mostly UV related, are:

UVs in positive UV co-ordinate space

  • Generating 2nd UV sets for lightmapping 3D objects
  • Unwrapping 3D geometry into flattened 2D shells which don’t overlap in the 0 to 1 UV co-ordinate quadrant
  • Packing UV shells (arranging the unwrapped 2D shells to optimally fit within a square quadrant with room for mipmap bleeding)
  • Atlasing lightmap textures (combining many individual baked textures into larger texture sheets for efficiency)
  • Indexing lightmaps (linking multiple 3D model’s 2nd UV set UV co-ordinate data with multiple baked texture atlases in a scene)
  • Additively applying the lightmaps to your existing model’s shaders to give 3D objects the illusion of being illuminated by realtime lights in a scene
  • Other UV properties may be tweaked in the Advanced FBX import settings, influencing how the 2nd UVs are unwrapped and packed; these settings can drastically alter your final results and do not always transfer through version control

Why is this significant? Well your 3D object’s original UV set is typically used to align and apply textures like diffuse, specular, normal, alpha texture maps, etc, onto the 3D object’s surfaces. There are no real restrictions on laying out your UVs for texturing. UV’s may be stretched to tile a texture, they can overlap, be mirrored… Lightmap texturing requirements in Unity, on the other hand, are different and require:

  • A 2nd UV set
  • No overlapping UVs
  • UVs must be contained in the 0 to 1, 0 to 1 UV co-ordinate space

Unwrapping and packing UVs so they don’t overlap and are optimally contained in the 0 to 1 UV co-ordinate space is tedious and time consuming for a tech artist. Many developers without a tech artist purchase 3D models online to “save time and money”. Typically those models won’t have 2nd UV sets included. Beast can unwrap lightmapping UVs for the developer without much effort in the Unity Inspector by:

Unity FBX import settings for Lightmapping UVs

Advanced Unity FBX import settings for Lightmapping UVs

  • Selecting the FBX to lightmap in the Unity Project Editor window
  • Set the FBX to Static in the Inspector
  • Check Generate Lightmap UVs in the FBXImporter Inspector settings
  • Change options in the Advanced Settings if needed

Atlasing multiple 3D model’s UVs and textures is extremely time consuming and not always practical especially when textures and models may change at a moment’s notice during the development process.  Frequent changes to atlased assets tend to create overwhelming amounts of tedious work. Again, Beast’s automation is truly a great time saver allowing flexibility in atlasing for iterative level design plus scene, object and texture changes in the Unity editor.

Sample atlases in Unity

Beast’s automation is truly great, except when your team is using both Mac and PC computers on the same project with version control, that is. Sometimes lightmaps will appear to be totally fine on a Mac and look completely messed up on a PC, and vice versa. It’s daunting to remedy this and may require, among several tasks, re-baking all the lightmaps for the scene.

Why are there differences between the Mac and PC when generating 2nd UV sets in Unity? The answer is that Mac and PC computers have different floating point precision used to calculate and generate 2nd UV sets for lightmapping upon import in the Unity editor. The differences between Mac and PC generated UVs are minute but can lead to drastic visual problems. One might assume that with version control like Unity Asset Server or Git the assets would be synced and exactly the same, but they are not. Metadata and version control issues are for another blog post down the road.

What can one do to avoid issues with 2nd UV sets across Mac and PC computers in Unity? Well, here are four of my tips to avoid lightmap issues in Unity:

Inconsistent lightmaps on Mac and PC in the Unity Editor - Courtesy of Brass Monkey - Monkey Golf

  1. Create your own 2nd UV sets and let Beast atlas, index and apply your lightmaps in your Unity scene
  2. Avoid re-importing or re-generating 2nd UV assets if the project is being developed in Unity across Mac and PC computers when you’re not creating your own 2nd UV sets externally
  3. Use external version control like Git with Unity Pro with metadata set to be exposed in the Explorer or Finder to better sync changes to your assets and metadata
  4. Use 3rd party editor scripts like Lightmap Manager 2 to help speedup the lightmap baking process by empowering you to be able to just re-bake single objects without having to re-bake the entire scene

Getting Down To Business – The How To Section

If your 3D model already has a good 2nd UV set and you want to enable Unity to use it:

  • Select the FBX in the Unity Project Editor window
  • Simply uncheck Generate Lightmap UVs in the FBXImporter Inspector settings
  • Re-bake lightmaps

How to add or create a 2nd UV set in Maya to export to Unity if you don’t have a 2nd UV set already available?

Workflow 1 -> When you already have UV’s that are not overlapping and contained within the 0 to 1 co-ordinate space:

  1. Import and select your model in Maya (be sure not to include import path info in your namespaces)
  2. Go to the Polygon Menu Set
  3. Open the Window Menu -> UV Texture Editor to see your current UVs
  4. Go to Create UVs Menu -> UV Set Editor
  5. With your model selected click Copy in the UV Set Editor to create a 2nd UV set
  6. Rename your 2nd UV set to whatever you want
  7. Export your FBX with its new 2nd UV set
  8. Import the Asset back into Unity
  9. Select the FBX in the Unity Project Editor window
  10. Uncheck Generate Lightmap UVs in the FBXImporter Inspector settings.
  11. Re-bake Lightmaps

Workflow 2 -> When you have UV’s that are overlapping and/or not contained within the 0 to 1 co-ordinate space:

  1. Import and select your model in Maya (be sure not to include import path info in your namespaces)
  2. Go to the Polygon Menu Set
  3. Open the Window menu -> UV Texture Editor to see your current UVs
  4. Go to Create UVs menu -> UV Set Editor
  5. With your model selected, click either Copy or New in the UV Set Editor to create a 2nd UV set, depending on whether you want to start from scratch or work from what you already have in your original UV set
  6. Rename your 2nd UV set to whatever you want
  7. Use the UV layout tools in Maya’s UV Texture Editor to lay out and edit your new 2nd UV set, being certain there are no overlapping UVs and that everything is contained in the 0 to 1 UV co-ordinate space (another tutorial on this step will be in a future blog post)
  8. Export your FBX with its new 2nd UV set
  9. Import the Asset back into Unity
  10. Select the FBX in the Unity Project Editor window
  11. Uncheck Generate Lightmap UVs in the FBXImporter Inspector settings.
  12. Re-bake Lightmaps

Workflow 3 -> Add a second UV set from models unwrapped in a 3rd party UV tool like Headus UV or Zbrush to your 3D model in Maya

  1. Import your original 3D model into the 3rd party application like Headus UV and lay out your 2nd UV set, being certain to have no overlapping UVs contained in the 0 to 1 UV co-ordinate space (tutorials to come)
  2. Export your model with a new UV set for lightmapping as a new version of your model named something different from the original model.
  3. Import and select your original Model in Maya (be sure not to include import path info in your namespaces)
  4. Go to the Polygon Menu set
  5. Open the Window Menu -> UV Texture Editor to see your current UVs
  6. Go to Create UVs Menu -> UV Set Editor
  7. With your model selected click New in the UV Set Editor to create a 2nd UV set
  8. Select and rename your 2nd UV set to whatever you want in the UV Set Editor
  9. Import the new model with the new UV set being certain to have no overlapping UV’s all contained in the 0 to 1 UV co-ordinate space
  10. Make sure your two models are occupying the exact same space, with all transform values like translation, rotation and scale being exactly the same
  11. Select the new model in Maya and be sure its new UV set is selected in the UV Set Editor
  12. Shift select the old model in Maya (you may need to do this in the Outliner) and be sure its 2nd UV set is selected in the UV Set Editor
  13. In the Polygon Menu Set go to the Mesh Menu -> Transfer Attributes Options
  14. Reset the Transfer Attributes Options settings to default via File -> Reset Settings within the Transfer Attributes menus
  15. Set Attributes to Transfer all to -> Off except for UV Sets to -> Current
  16. Set Attribute Settings to -> Sample Space Topology with the rest of the options at default
  17. Click Transfer at the bottom of the Transfer Attributes Options
  18. Delete non-deformer history on the models by going to the Edit menu -> Delete by Type -> Non-Deformer History, or the UVs will break
  19. Select the original 3D model’s 2nd UV set in the UV Set Editor window and look at the UV Texture Editor window to see if the UVs are correct
  20. Export your FBX with its new 2nd UV set
  21. Import the Asset back into Unity
  22. Select the FBX in the Unity Project Editor window
  23. Uncheck Generate Lightmap UVs in the FBXImporter Inspector settings.
  24. Re-bake Lightmaps

Once you have added your own 2nd UV sets for Unity lightmapping, there will be no lightmap differences between projects in the Mac and PC Unity editors! You will have ultimate control over how the 2nd UV space is packed, which is great for keeping down vertex counts from your 2nd UV sets, minimizing mipmap bleeding, and maintaining consistent lightmap results!

Keep an eye out for more tutorials on UV and Lightmap troubleshooting in Unity coming in the near future on the Infrared5 blog! You can also play Brass Monkey’s Monkey Golf to see our bear examples in action.

-Elliott Mitchell

@mrt3d on Twitter

@infrared5 on Twitter


Aerial Combat in Video Games: A.K.A Dog Fighting

September 27th, 2011 by John Grden

A while back, we produced a Star Wars title for Lucasfilm Ltd. called “The Trench Run”, which did very well in iPhone/iPod sales and was later converted to a web game hosted on StarWars.com. Thanks to Unity3D’s ability to let developers create with one IDE while supporting multiple platforms, we were able to produce these two versions seamlessly! Not only that, but this was one of our first releases that included the now famous Brass Monkey™ technology, which allows you to control the game experience of The Trench Run on StarWars.com with your iPhone/Android device as a remote control. [Click here to see the video on YouTube]

Now, the reason for this article is to make good on a promise I made while speaking at Unite2009 in San Francisco.  I’d said I would go over *how* we did the dog fighting scene in The Trench Run, and I have yet to do so.  So, without further delay…

Problem

There are a few problems wrapped up in this:

  1. How do you get the enemy to swarm “around you”?
  2. How do you get the enemy to attack?
  3. What factors into convincing AI?

When faced with a dog fight challenge for the first time, the first question you might have is how to control the enemies so that they keep flying around you and engaging you; achieving the dog fight AI and playability is one of the most important pieces. The issue was more than just the simple mechanics of how to deal with dog fighting; it also included having an “endless” scene, performance issues on an iDevice, and seamlessly introducing waves of enemies without interrupting game flow and performance.

Solution

In debug mode - way point locations shown as spheres

The solution I came up with was to use what I would call “way points”. Way points are just another term for GameObjects in 3D space. I create around 10-15 or so, randomly place them within a spherical area around the player’s ship, and anchor them to the player’s ship so that they’re always placed relative to the player (but don’t rotate with the player – position only). I use GameObjects and parent them to a GameObject that follows the player, and this solves my issue of having vectors always positioned relative to the player. The enemies each get a group of 5 way points and continually fly between them. This solves the issue of keeping the enemies engaged with the player no matter where they fly and allows the enemy the opportunity to get a “lock” on the player to engage. Since the way points move with the player’s position, this also creates interesting flight patterns and behavior for attacking craft, and now we’ve officially started solving our AI problem.


Check out the demo – the camera changes to the next enemy that gets a target lock on the player ship (orange exhaust).  Green light means it has a firing lock, red means it has a lock to follow the player.

Download the project files and follow along.

Setting up the Enemy Manager

The Enemy manager takes care of creating the original way points and providing an API that allows any object to request a range of way points. Its functionality is basic and to the point in this area. But it also takes care of creating the waves of enemies and keeping track of how many are in the scene at a time (this demo does not cover that topic; I leave that to you).

First, we’ll create random way points and scatter them around.  Within a loop,  you simply use Random.insideUnitSphere to place your objects at random locations and distances from you within a sphere.  Just multiply the radius of your sphere (fieldWidth) by the value returned by insideUnitSphere, and there you go – all done.

Now, the method for handing out way points is pretty straightforward. What we do here is give each enemy craft a random set of way points. By doing this, we’re trying to avoid giving every enemy an identical set and order of way points.

NOTE: You can change the scale of the GameObject that the way points are parented to and create different-looking flight patterns. In the demo files, I’ve scaled the GameController GameObject on the Y axis by setting it to 2 in the IDE. Now the flight patterns are more vertical and interesting, rather than flat/horizontal and somewhat boring. Also, changing the fieldWidth to something larger will create longer flight paths and make it easier to shoot enemies. A smaller fieldWidth means that they’ll be more evasive and drastic with their moves. Coupled with raising the actualSensitivity, you’ll see that it becomes more difficult to stay behind and get a shot on an enemy.

Setting up the enemy aircraft

The enemy needs to be able to fly on their own from way point to way point. Once they’re in range of their target, they randomly select the next way point. To make this look as natural as possible, we continually rotate the enemy until they’re within range of “facing” the next target, and this usually looks like a nice arc/turn, as if a person were flying the craft. This is very simple to do, thankfully.

First, after selecting your new target, update the rotationVector (Quaternion) property for use with the updates to rotate the ship:

Now, in the updateRotation method, we rotate the ship elegantly toward the new way point, and all you have to do is adjust “actualSensitivity” to achieve whatever aggressiveness you’re after:

As you’re flying, you’ll need to know when to change targets.  If you wait until the enemy hits the way point, it’ll likely never happen since the way point is tied to the player’s location.  So you need to set it up to see if it’s “close enough” to make the change, and you need to do this *after* you update the enemy’s position:

Enemy flying to his next way point



You can also simply change the target for an enemy on a random timer – either way would look natural.

NOTE: Keep the speed of the player and the enemy the same unless you’re providing acceleration controls to match speeds. Also, keep in mind that if your actualSensitivity is low (slow turns) and your speed is fast, you will have to make the bufferDistance larger, since there is an excellent chance that the enemy craft will not be able to make a tight enough turn to get to a way point and will continue to do donuts around it. This issue is fixed if the player is flying around, and is also remedied by using a timer to switch way point targets. You can also make the AI more convincing by adding code that switches way points very often if the enemy is being targeted by the player (as well as increasing the actualSensitivity to simulate someone who is panicking).

Targeting the Player

Red means target acquired : Green means target lock to fire

The next thing we need to talk about is targeting, and that’s the 2nd part of the AI.  The first part is the enemy’s flight patterns, which we solved with way points.  The other end of it is targeting the player and engaging them.  We do this by checking the angle of the enemy to the player.  If that number falls within the predefined amount, then the currentTarget of the enemy is set to the player.

The nice part about this is that, if the player decides to fly in a straight path, then eventually (very soon, actually) all of the baddies will be after him and shooting at him because of the rules above. So, the nice caveat to all of this is that it encourages the player to fly evasively. If you become lazy, you get shot 0.o

You can also change the property “actualSensitivity” at game time to reflect an easy/medium/hard/jedi selection by the player.  If they choose easy, then you set the sensitivity so that the enemy reacts more slowly to the turns.   If it’s Jedi, then he’s a lot more aggressive and the “actualSensitivity” variable would be set to have them react very quickly to target changes.

Firing

And finally, the 3rd part of the AI problem is solved by having yet another angle variable called “firingAngle”. “firingAngle” is the angle that has to be achieved in order to fire. While the angle for changing targets is much wider (50), the angle required to fire and hit something is much tighter (15). So we take the “enemyAngle” and check it against “firingAngle”, and if it’s less, we fire the cannons at the player. You could also adjust the “firingAngle” to be bigger for harder levels so that the player’s ship falls into a radar lock more frequently.

In the sample, I added an ellipsoid particle emitter/particle animator/particle renderer to the enemy ship object, set the references to the “leftGun / rightGun” properties, and unchecked “Emit” in the inspector. Then, via the Enemy class, I simply set emit to true on both when it’s time to fire:

Conclusion

So, we’ve answered all 3 questions:

  1. How do you get the enemy to swarm “around you”?
  2. How do you get the enemy to attack?
  3. What factors into convincing AI?

With the way point system, you keep the enemy engaged around you and the game play will be very even and feel like a good simulation of a dog fight.  The way points keep the enemy from getting unfair angles and provide plenty of opportunity for the player to get around on the enemy and take their own shots, as well as provide flying paths that look like someone is piloting the ship.  And adjusting values like “actualSensitivity”, “fieldWidth” and “firingAngle” can give you a great variety of game play from easy to hard.  When you start to put it all together and see it in action, you’ll see plenty of room for adjustments for difficulty as well as getting the reaction and look you want out of your enemy’s AI.

Have a Bandit Day!
