Android Graphics and Animation: Part II – Animation

July 19th, 2011 by Keith Peters

Android Animation

In the first part of this series, we covered basic graphics in Android – starting a new Android project, creating a custom view and displaying it, and using that view to draw custom graphics in its onDraw method. To recap, the drawing occurred only when the onDraw method was called by the system when it determined that the app needed to refresh its display. This generally occurs once when the app starts and only occasionally, if ever, thereafter. For animation, we need to be able to trigger redraws on a regular basis. This is quite a bit more complex than drawing a static image, but not horribly so, so let’s dive in.


In the last example, we extended View for our custom view class. That was fine for the purpose, but will not be adequate for drawing multiple times like we need to do for animation. For View, the onDraw method is triggered by the system when it knows that the Canvas is safe to draw on. It can set things up for us before calling onDraw, and then clean up when it is done executing. Since we need to do drawing on our own schedule, we need a view class that will let us do this set up and clean up ourselves. That class is called SurfaceView. So that’s what our new view will extend.

To get started, create a new Android project the same way we did last time. Call the project and activity “Animation”. Again, in the main activity class replace the call to setContentView with a custom view. We’ll call this AnimView:
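The updated activity might look like this – a sketch, assuming the activity class is named Animation to match the project, with animView as my field name:

```java
public class Animation extends Activity {

    private AnimView animView;

    /** Called when the activity is first created. */
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        animView = new AnimView(this);
        setContentView(animView);
    }
}
```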

Of course, AnimView does not exist, so we’ll get an error. Trigger a quick fix, which will offer to create the AnimView class. Before accepting the defaults in the New Java Class dialog, change the superclass field to “android.view.SurfaceView”. When the class is created, trigger another quick fix to create the constructor. You should end up with the following:
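The generated class should look roughly like this (the quick fix creates the constructor taking a Context):

```java
public class AnimView extends SurfaceView {

    public AnimView(Context context) {
        super(context);
    }
}
```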

At this point, the app should compile and run, but naturally will show just a black screen.


Again, since we will be a lot more in control of when things get drawn, we need to go a little more low level in what we are doing. When using onDraw, you are automatically passed a Canvas object that you are safe to draw on. When using SurfaceView though, you need to get your canvas from something called a SurfaceHolder. This can be retrieved by simply calling getHolder() from the SurfaceView instance. That’s easy enough, but there’s another bit of complexity coming up.

You can’t draw to a surface of a SurfaceView/SurfaceHolder until the surface is created. And you should not draw to it after it has been destroyed. So we need to know when these things happen. To do that, we can let the holder know that we want to handle related events. To do this, we call surfaceHolderInstance.addCallback(viewInstance). But one more catch – the object you pass to this method must implement an interface defined as SurfaceHolder.Callback. So our class definition starts out as:

public class AnimView extends SurfaceView implements SurfaceHolder.Callback {

When you do that, you’ll be informed that you are not implementing the required methods of that interface. Use a quick fix to add them. With all that done, you should have the following:
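Something like this – a sketch that also registers the view as the holder’s callback in the constructor, as described above:

```java
public class AnimView extends SurfaceView implements SurfaceHolder.Callback {

    public AnimView(Context context) {
        super(context);
        // tell the holder we want to handle surface lifecycle events.
        getHolder().addCallback(this);
    }

    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
    }

    @Override
    public void surfaceCreated(SurfaceHolder holder) {
    }

    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {
    }
}
```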


Now we can start animating. In animation, you generally have some kind of model of what you are animating, with some kind of rules on how that model changes. You update the model, then render that model to the display, then update the model again, render again, and so on.

If you are used to animating in Flash you’re familiar with doing this via enterFrame, or perhaps with timers. Timers are also used in JavaScript animation. In Android though, we generally use threads.

Threads can be a bit scary as they are a bit more complex than a simple timer. If you’re not familiar with threads, the concept is just that you are starting another path of execution that runs independently of your app’s main one. This is useful for operations that might take a long time or will not return immediately. The new thread does its own thing in its own time frame, and the main part of your app continues to do what it needs to do, remaining responsive, etc.

The scary part of threads is that they run separately, but are able to access the same variables and objects in a non-synchronized way. Thus, one thread might be performing some procedure on a given object, and right in the middle of that procedure, the other thread might step in and change the state of that object or even delete it. So you have to take some extra steps to guard against these types of situations.

Our view will use a separate thread to perform its animation. We will create and start the thread running in the surfaceCreated method, and we will stop the thread in the surfaceDestroyed method. There are a number of different ways to use threads. The way we’ll do it is to subclass the Thread class and put the custom functionality in that class.

Here’s the start of our custom thread class:

In order to draw to a canvas, we’ll need the surface holder to get the canvas from, so we’ll pass that in in the constructor and save it. We’ll also need a variable that will indicate whether or not the thread is currently running and a way to set that.

When we create an instance of this thread class and call start() on it, its run method will be executed in a separate thread. We’ll actually use a while loop to do our animation. This may seem odd if you’re coming from the Flash or JavaScript world, where an infinite while loop would just lock things up. But because this is in a separate thread, it works out fine.

The pseudocode for what we will do is like this:
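```
while running is true:
    update the model (e.g. move the circle)
    get the canvas from the surface holder
    draw the model to the canvas
    release the canvas so the result is displayed
```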

This will just run forever. Well, until we set running to false anyway. As you might have guessed, we’ll create and start the thread in the surfaceCreated method and we’ll set running to false in the surfaceDestroyed method. There are a few more details to it, but we’ll get there eventually.

Locking and Unlocking

To get a canvas from a surface holder, we actually call holder.lockCanvas(). This prevents anything from happening to the canvas while we are using it. When we are done with our drawing, we call holder.unlockCanvasAndPost(canvas), passing in the canvas instance we just drew to. This frees it up and displays what was just drawn.

Here is the final code with some actual animation going on:
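A sketch of the thread’s run method – the circle’s position, radius, and speed are hypothetical values of my choosing:

```java
private float x = 0;
private Paint paint = new Paint();

@Override
public void run() {
    paint.setColor(Color.WHITE);
    while (running) {
        Canvas canvas = null;
        try {
            canvas = holder.lockCanvas();
            synchronized (holder) {
                // clear to black, then draw a white circle at the current x.
                canvas.drawColor(Color.BLACK);
                canvas.drawCircle(x, 100, 20, paint);
                x += 5; // move the circle to the right each frame.
            }
        } finally {
            // always unlock, even if drawing threw an exception.
            if (canvas != null) {
                holder.unlockCanvasAndPost(canvas);
            }
        }
    }
}
```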

Here you can see we declare the canvas variable, then we enter a try block where we get the canvas and do the drawing. This allows us to unlock the canvas in a finally block, so that even if an exception is thrown while drawing, we won’t leave the canvas in locked state.

Note that the drawing is done in a synchronized block. This puts a lock on the holder so that nothing else can change it from another thread while we are using it. In this block we set the background to black and draw a white circle. The x value will be incremented on each loop, moving the circle across the screen.

Starting and stopping the thread

All we have to do now is create, start, and stop this thread. We’ve already said that we’ll do that in surfaceCreated and surfaceDestroyed methods. So let’s see what this looks like. First the created:
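A sketch of surfaceCreated, assuming a thread field declared on the view (private AnimThread thread):

```java
@Override
public void surfaceCreated(SurfaceHolder holder) {
    thread = new AnimThread(getHolder());
    thread.setRunning(true);
    thread.start();
}
```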

Simple enough. We create the thread, passing in the surface holder, set running to true, and start it. This will wind up executing the run method, which will run that while loop in a separate thread.

The destroyed method is a bit more complex:
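A sketch of what it might look like:

```java
@Override
public void surfaceDestroyed(SurfaceHolder holder) {
    thread.setRunning(false);
    boolean retry = true;
    while (retry) {
        try {
            // block until the animation thread has fully exited.
            thread.join();
            retry = false;
        } catch (InterruptedException e) {
            // join was interrupted; try again until it succeeds.
        }
    }
}
```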

First of all, we set running to false. This will allow the while loop in the run method to exit. But since that’s happening in another thread, we don’t know exactly when that’s going to happen. So we want to make sure that it’s really fully complete before we leave here. We do that with the join method of the thread. That will cause execution to stop and wait for that thread to end. However, this will sometimes result in an InterruptedException. So we throw that whole thing in a try/catch statement and keep retrying it until the join finally successfully returns. Here’s the final AnimView class:
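A reconstruction of the full class under the assumptions above – AnimThread as an inner class, and hypothetical values for the circle’s position and speed:

```java
public class AnimView extends SurfaceView implements SurfaceHolder.Callback {

    private AnimThread thread;

    public AnimView(Context context) {
        super(context);
        getHolder().addCallback(this);
    }

    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
    }

    @Override
    public void surfaceCreated(SurfaceHolder holder) {
        thread = new AnimThread(getHolder());
        thread.setRunning(true);
        thread.start();
    }

    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {
        thread.setRunning(false);
        boolean retry = true;
        while (retry) {
            try {
                thread.join();
                retry = false;
            } catch (InterruptedException e) {
                // keep retrying until the thread has fully stopped.
            }
        }
    }

    class AnimThread extends Thread {

        private SurfaceHolder holder;
        private volatile boolean running = false;
        private float x = 0;
        private Paint paint = new Paint();

        public AnimThread(SurfaceHolder holder) {
            this.holder = holder;
            paint.setColor(Color.WHITE);
        }

        public void setRunning(boolean running) {
            this.running = running;
        }

        @Override
        public void run() {
            while (running) {
                Canvas canvas = null;
                try {
                    canvas = holder.lockCanvas();
                    synchronized (holder) {
                        canvas.drawColor(Color.BLACK);
                        canvas.drawCircle(x, 100, 20, paint);
                        x += 5;
                    }
                } finally {
                    if (canvas != null) {
                        holder.unlockCanvasAndPost(canvas);
                    }
                }
            }
        }
    }
}
```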


Android Graphics and Animation: Part I

June 27th, 2011 by Keith Peters

This is the start of a series of tutorials on graphics and animation on the Android platform. There is plenty of information out there on how to create general form-based, controls-and-layout type of Android apps, but very little on how to do more creative drawing and animation. So this series will cover the following topics:

1. Android graphics.
2. Android animation.
3. Android input: Accelerometer.
4. Android input: Touch.

Today we’ll get started with simple graphics. There are actually a few different ways to draw graphics on the screen in Android.

First, there is the Canvas class, which gives you a nice basic drawing API to create lines, circles, rectangles, fills, strokes, deal with bitmaps, etc.

Then there’s OpenGL. If you’re going to do 3D or just need more raw graphics and animation power, you’ll probably want to use OpenGL, or more likely use one of the various 3rd party libraries that make it a bit easier to use.

And then there is something called RenderScript, which was introduced in Android 3.0 (which, at the time of this writing is supported by only a few devices).

For this set of articles, we’ll be using the simplest and most widely available option, Canvas.

Setting up an Android coding environment

Of course, before we can even get started, you’ll need to have an Android coding environment set up and a connected Android device. You could use the Android emulator, and you should use it for testing different device resolutions and capabilities, but in general day-to-day dev, you’ll probably find it faster and easier to deploy and test on a device.

I’m not going to go into very deep detail about this, only because Google has covered it in far more depth than I ever could. So I’ll just point you to the right place.

Here you’ll find links to the SDK, Developer’s Guide, References, Resources, Videos, and a blog. Within all that, you’ll find step by step instructions on how to set up your environment. But in a nutshell, you’ll need to:
1. Install Eclipse (or another editor of your choice, but this tutorial will assume you’re using Eclipse).
2. Download the Android SDK. This is just a folder of files and tools used in developing Android apps.
3. Install the ADT Plugin, Android Development Tools. This is an Eclipse plugin that will set up your Eclipse install to build Android apps.
4. Add Android platforms and components.

These steps are all covered in more detail here:

Connecting a device or creating a virtual device (emulator)

Next you’ll need to have someplace to run your code. Again, I recommend using a real device as much as you can. Setting up a device for development is covered here:

If you don’t have a physical device, or are at a point where you need to test some different resolutions or features your device doesn’t have, this link will walk you through setting up a virtual device on the emulator:


OK, let’s make an app. Assuming you have everything installed and working, and are using Eclipse as your editor, fire it up and create a new workspace. Then create a new project by using the menu File -> New -> Android Project. This will bring up the “New Android Project” dialog.

Give your project a name, “Drawing” and choose a Build Target. We’ll stick with Android 2.2 since that’s a pretty common one.

Going further down, we need an application name, package name, and activity name. The application name is what will show up on the device. For now, think of the activity name as the name of the main class of the app. The package is the class package as in any Java project. Finally we need to specify the minimum SDK version. We’ll choose 8 here to coincide with the Android 2.2 SDK. The whole numbering system for SDKs and SDK versions is a bit confusing. I’ll leave it to you to figure it out more on your own. But the above settings will work for now.

Now we can click “Finish” and our project will be created. Your package explorer view should look like this:

There you can see your src folder with your package and main activity class. Opening that class you should see the following code:
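Something very close to the standard activity template that the ADT plugin generates:

```java
public class Drawing extends Activity {

    /** Called when the activity is first created. */
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
    }
}
```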

Since this is the only activity in this application, this class will be instantiated when the app is run, and the onCreate method will be called. This is where you want to hook into to initialize pretty much everything.

Right now, onCreate calls super.onCreate and then setContentView, passing in something called R.layout.main. If you’re curious what this is, look in the folder res/layout and you’ll see main.xml, which will look like this:

If you’ve done any work with Flex, Silverlight, or any other XML-based layout systems (or even HTML) this will look pretty familiar. It creates a layout with a single child that is a TextView. The TextView’s text property is set to “@string/hello”. If you want to see what that is, look in res/values/strings.xml.

The Android build tools compile all the stuff in the res folder into resource IDs or embeddable assets as appropriate. So res/layout/main.xml becomes R.layout.main, a resource ID that setContentView uses to inflate the layout into a View hierarchy and set it as the activity’s content view.

Now, if you’ve set everything up correctly, you should be able to run or debug this project on your device and/or in the emulator and see something like the following:

If this is not working, stop here and get it debugged. This is the bare bones of project setup, and everything else depends on this.

Custom Views

OK, that’s all very interesting, but we’re not going to use much in the res folder or any of that xml-based layout stuff here. We’re going down to the metal and writing our own drawing code.

But since we aren’t relying on the compiler to create a view from xml for us, we’ll have to make our own view class. We can even use some of the ADT plugin’s shortcuts to let it do a bunch of the work for us. Change to look like this:

Here we’ve created a new class member, drawingView of type View, instantiated it as a new DrawingView, passing in this to the constructor, and set it as the content view.

Of course, Eclipse will complain because DrawingView does not exist yet. But if we click on that error it will offer to create the class for you. It will even know that it should extend View. So go ahead and let it create that class. It should look like this:
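Roughly this – an empty class extending View:

```java
public class DrawingView extends View {

}
```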

Now it’s going to complain again because it wants a constructor that takes an argument. Again, use the quick fix feature to let it create the constructor it wants. Now you’ll have this:

We’re at a stable point here, so go ahead and run that on your device/emulator and make sure it launches. You shouldn’t see anything but a black screen with the app name at the top, but it should compile and deploy.

OK, now we have a view we can draw in. The View class is designed so that all the drawing will be done in an onDraw method. This method will be automatically called whenever the view needs to be redrawn. To create this method, type “onDraw”, trigger auto-complete, and accept the first choice. You should wind up with an onDraw method like you see below (or you could go all old school and actually type it by hand).
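The auto-completed override looks like this:

```java
@Override
protected void onDraw(Canvas canvas) {
    // TODO Auto-generated method stub
    super.onDraw(canvas);
}
```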

You see this method has given us a Canvas to draw on. If you trigger autocomplete on canvas, you’ll see that it has all kinds of drawing methods. Let’s add a call to drawLine right after the super.onDraw call:
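Something like the following – the start and end coordinates here are hypothetical values of my choosing:

```java
@Override
protected void onDraw(Canvas canvas) {
    super.onDraw(canvas);
    // draw a diagonal line from the top-left corner.
    canvas.drawLine(0, 0, 200, 200, paint);
}
```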

As you probably guessed, the first arguments for this are the x, y values of an initial and an ending 2d point. The last argument, paint, is a Paint object that tells the system what to make this line look like (color, width, etc.). Since we haven’t defined paint yet, it will give you an error. Trigger a quick fix to create a field named paint. Then in the constructor we’ll instantiate it and give it some properties. Here’s the result:
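A sketch of the resulting class – the specific Paint properties (stroke width, anti-aliasing) are my choices:

```java
public class DrawingView extends View {

    private Paint paint;

    public DrawingView(Context context) {
        super(context);
        paint = new Paint();
        paint.setColor(Color.WHITE);
        paint.setStyle(Paint.Style.STROKE);
        paint.setStrokeWidth(2);
        paint.setAntiAlias(true);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        canvas.drawLine(0, 0, 200, 200, paint);
    }
}
```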

Don’t forget the imports for Color and Style. You can run or debug this now and you should have an utterly fascinating diagonal white line on your device’s screen. When you’ve calmed down and gotten yourself under control, we’ll move on.

Setting the Background Color

Perhaps you want to change the background color. You can do that with canvas.drawColor, passing in the color you want to use. Note that this will actually clear the screen, so you’ll want to do this before drawing anything important.

Specifying Colors

In addition to the constants on the Color class, like Color.BLACK, Color.WHITE, Color.RED, etc. you can specify exact colors with Color.rgb(red, green, blue) where each parameter is an int from 0 to 255, or Color.argb(alpha, red, green, blue) if you need transparency.

So to set the background to a kind of light purple, do something like this:
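For example (the exact channel values are my guess at "light purple"):

```java
canvas.drawColor(Color.rgb(210, 190, 255));
```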

Other Shapes

As mentioned, there are lots of other options on Canvas for drawing various things. A few examples:
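```java
canvas.drawCircle(cx, cy, radius, paint);
```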

Here cx and cy are the center point to draw a circle with the given radius.
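```java
canvas.drawRect(rect, paint);
```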

Here rect is a Rect object or a RectF object (which would use floats rather than ints for its measurements).

Pretty obvious.

Then there are drawOval, drawArc, drawRoundRect, and many others.

Putting it all together

Just to implement a few things all at once, we’ll do something like this for a final demo:
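A sketch of such a demo – the grid dimensions and square size are hypothetical values of my choosing:

```java
@Override
protected void onDraw(Canvas canvas) {
    super.onDraw(canvas);
    paint.setStyle(Paint.Style.FILL);
    // draw a 10 x 10 grid of squares, each filled with a random color.
    for (int i = 0; i < 10; i++) {
        for (int j = 0; j < 10; j++) {
            paint.setColor(Color.rgb((int) (Math.random() * 255),
                                     (int) (Math.random() * 255),
                                     (int) (Math.random() * 255)));
            canvas.drawRect(i * 40, j * 40, i * 40 + 38, j * 40 + 38, paint);
        }
    }
}
```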

Here we’ve set the style to FILL instead of STROKE, then use some fancy math and a couple of for loops to draw a grid of squares, each with a random color. Nothing amazing, but assuming you have some previous experience with any kind of drawing API from any other language, this should set you up to create all kinds of custom graphics in your Android app or game.


Here we’ve seen how to set up a new Android project and create a custom view that we can draw into. The view class is instantiated and added as the activity’s main content view, and the onDraw method is called when it’s ready to display.

Of course, since generally speaking this is only called the one time near the start of the app, it’s just a static drawing. In the next installment of this series, we’ll dive into animation and making things move in Android.


Beast Lightmapping in Unity3D

March 22nd, 2011 by John Grden

One of the coolest features of Unity3D is the addition of the Beast Lightmap Engine! In short, you can do global illumination (bake shadows/light) right there in the Unity IDE. And if you haven’t heard about this feature yet, then you’re about to have a “moment” (get some tissues, for slobbering/weeping etc).

Here’s the basic video for Beast in Unity3D:

And check out their in-depth explanation of the Lightmapping interface here

This is very very cool indeed! With some basic settings, you can really increase the appeal of your game’s scenes using Beast within Unity3D. Not only that, but in terms of performance, especially on an iPad/iPhone, it’s invaluable. Ok, great – so now that I have your attention, what’s this post about? It’s about lightmapping, haven’t you been paying attention?!? Ok, more directly, this post focuses on how to get quick bakes and what has the most impact on a bake time.

The *why*

When you first jump into lightmapping, you really just want to “see” something immediately to get a sense of what you can adjust to get what you want out of it.

Without Lightmapping

With Lightmapping

I’m going to do a simple scene with a helicopter from my new game called “Stunt-Copter” to give you an idea of what impacts your wait time, and what gives you the quality you might be after.

What takes so long?

There are two things that affect the bake time most:  1) Resolution and 2) Final Gather Rays.  I’ve personally found that Resolution affects the bake time more than Final Gather Rays does.  Obviously, the higher the resolution, the better quality you’ll get with the shading – but you’ll also wait longer. ;)  Waiting longer is fine for final game touches, but during the development time, it’s necessary to get a scene with some lighting going so that you can make your best decisions as you go along.  Or maybe you’re just tired of looking at your unlit scene, like the one above. ;)

Getting a Quick Bake

Let’s start off with how this scene is set up, then we can take a look at some basic settings. First, the models you’ve imported have to have “Generate Lightmap UVs” checked and be reimported. What this does is add a second UV channel to your model’s existing set of UVs. For lightmapping to work properly, the faces of the model can’t have any overlapping or shared areas in the UVs. Second, the building, ground, landingPad and helicopter are all marked as “static”. This is how the lightmap engine identifies what will be baked and what won’t. Now, in this scene, I’ve marked the helicopter as static so that we can see the nice shadow on the ground and the ambient occlusion on the heli itself. In one of the other screenshots, you’ll also see how the color of the body and the green from the grass is baked into the underside of the blades on top, which is extremely cool – but that’s another discussion.

Now, the other thing we need to do is put a directional light into the scene and mark it as “BakedOnly” in the Lightmapping selection at the bottom of the light’s property inspector panel. Then, you’ll need to select the “Shadow Type” and set it to “soft shadows”. The only other thing I changed here was the quality – instead of using the settings in the quality settings, I changed it to “Low Resolution”. This actually saved me 8 seconds in the bake time and I couldn’t tell a difference in the shadows. See below:

High Quality - 2:12

Low Quality - 2:04

Bake Settings

Now we need to set the lightmapping settings. Here’s a screenshot (which I always appreciate) of the settings I used in this example. I’ve set the Final Gather Rays to 200 and the Resolution to 10. I’ve also set my skylight intensity to .25 and changed the color from the blue tint to a gray tint. I know *why* they put it as blue, I just don’t think it looks good, so I set it to a shade of light gray. That’s probably just me though :) Ambient Occlusion is set all the way to 1, as is Bounces. Other than that, I didn’t touch anything else. At this point, if you’ve set everything up correctly, you should be able to get a quick and dirty bake in a very reasonable amount of time.

Saving time like this is a big deal when you’re trying to make “best guess” decisions on your project in the earliest stages.

Now, in the final output, you’ll notice the ambient occlusion on the pillars of the building as they meet the floor and ceiling. Even at 10 texels this looks fairly decent and certainly gives us a good enough hint about how the final render will look. While the building looks good, the helicopter unfortunately doesn’t look nearly as good.

Resolution : 10 texels

Let’s take a look at the texels first, then take a look at the helicopter closely. In this next shot, we see the building in the background and the helicopter up close with the resolution squares showing. The building looks to have many more than the helicopter. The helicopter’s texels are much larger across its faces as well. So when this scene is baked, the building’s shadows actually look fairly decent, but the helicopter’s look really pretty terrible. If you look closely, there’s no sign of ambient occlusion and the shadows are not distinct at all. Which is what we asked for with all of our low quality settings for the sake of speed, right?

This really is ok for now since all we really needed to get was a fair indication of how the scene was going to look lightmapped. One other example I’ll show you is “Copteropolis” from Stunt-Copter. This was a perfect example of needing to get a quick bake on a very large city scene. In all, this bake took 1 hour. It was well worth the wait so that I could continue working on other aspects of the game, especially considering that one “high quality” bake took well over 6hrs! I may have gone off the deep end with some of the settings, but you get the point ;)

Copteropolis, the city of Stunt-Copter (iPad Game)

Higher Quality

200 Final Gather Ray count

Ok, so now that we know how to get a quick one-off bake, let’s look at this scene with a bit more focus on the helicopter and its details. In this next shot, we’re much closer to the helicopter so we can see how big the texels are as we go along. Note also the light/color emission of the yellow body onto not only the white blades above, but its own body where the tail meets the main part of the body.

But one thing we’re missing as I said before is the ambient occlusion on the joints where there are hard angles, as well as fairly clear shading on the body and shadows from things like the blades and foils in the rear.

Now, just to prove my point about “Final Gather Rays” not being as big a culprit in bake time as Resolution, I went ahead and bumped the ray count to 1000 from 200 and did another bake.

The time was only 30 seconds longer than 2:04, and it looks identical if you ask me:

2mins, 34secs

In this final lightmap attempt, the Final Gather Rays is set to 2000, and the Resolution is set to 250 texels. The total time was 28:54, but as you can see, the effect it has on the helicopter is very nice indeed. Notice the ambient occlusion on the hard angles as well as the yellow / green emission from the helicopter body and grass on the body where the tail and body come together, as well as on the top rotors. The rotors from a top view look incredible as well, although you’ll never see them during the game ;)

Final Lightmapping - 28:54

Top view of rotors

250 texels
