New Red5 Pro Release With Rebuffering!

January 20th, 2016 by Chris Allen

We wanted to let you know that we have just released some big updates to the Red5 Pro Server and Streaming SDKs!

I’ve included the release notes, the Server and SDK downloads, and the link to our GitHub repos with the updated examples and sample apps.

We hope this will help overcome some of the issues you were seeing, and we are here if you have any questions!

Best,

Chris

Release Notes:

Server 0.3.0 (Server)

Re-buffering streams – allows for graceful degradation of streams for client connections that don’t have enough bandwidth to support the publisher’s stream quality

  – Provide messaging to the clients about the state of the stream

  – Frame dropping for congested streams

General stabilization of the server and performance gains, including a new threading model

HLS stream improvements

Metadata – stream events are now reported for HLS streams over WebSockets

 

iOS Streaming SDK 0.8.44.1 (SDKs)

Re-buffering

  – drops extremely late packets.

  – resets stream if it becomes too latent

  – notification API from server on connection state

  – supports audio-only playback when the video bitrate is too high for the connection

  – stream playback speed is adjusted automatically to keep it in real time

Ability to capture a UIImage from stream upon request

Bug fix for encoding on A9-based chipsets (iPhone 6S, iPad Pro, etc.)

Memory leaks fixed

Scaling mode – ability to scale video stream view in multiple modes:

  – Scale to fit, Crop to fit, or Stretch to fit

PixelBuffer support – ability to support custom video sources other than the device’s Camera (bringing this feature in sync with Android)

 

Android Streaming SDK 0.8.44.1 (SDKs)

Re-buffering

  – drops extremely late packets.

  – resets stream if it becomes too latent

  – notification API from server on connection state

  – supports audio-only playback when the video bitrate is too high for the connection

Ability to capture a Bitmap from stream upon request

Memory Leaks fixed

Scaling mode – ability to scale video stream view in multiple modes:

  – Scale to fit, Crop to fit, or Stretch to fit

Bug fix to set the Microphone to 32 kbps by default, and added the ability to set a custom bitrate on the Microphone.


What YOU Need to Know About HLS: Pros and Cons

January 15th, 2016 by Chris Allen


HLS, or HTTP Live Streaming, is a video streaming format first introduced by Apple in May 2009. It breaks streams into small file-based segments that are made available for download over plain HTTP. It is now a widely supported format for viewing streams in almost real time. I say almost only because the protocol, by its very nature, introduces a lot of latency.
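
For reference, an HLS media playlist is just a small text file that points at the current segments; the player downloads it, fetches the listed segments over plain HTTP, and re-fetches the playlist as new segments appear. A minimal illustrative example (the segment names and durations here are made up):

    #EXTM3U
    #EXT-X-VERSION:3
    #EXT-X-TARGETDURATION:10
    #EXT-X-MEDIA-SEQUENCE:142
    #EXTINF:10.0,
    stream_142.ts
    #EXTINF:10.0,
    stream_143.ts
    #EXTINF:10.0,
    stream_144.ts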

We’ve been getting a lot of questions and inbound interest from our customers about HLS. What are the advantages of using it? What are the disadvantages? How does HLS compare to WebRTC? Will Apple approve apps that don’t use HLS for streaming? This post is an attempt to address some of these questions.

Advantages

There are many reasons you would want to use HLS for your live streams, and this is why we recently added support for it to Red5 Pro.

Ubiquity

First of all, HLS is widely supported. Although originally conceived by Apple for QuickTime, iOS, and the Safari browser, it’s now implemented on virtually every browser out there, and all of the leading browsers support it. Of course, most of them also support a comparable standard called MPEG-DASH, but since Safari and iOS devices don’t support DASH, we think HLS is currently the better choice.

Adaptive Bitrates

Another huge advantage of HLS is that it allows clients to choose from a variety of stream qualities depending on the available bandwidth.
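
This works through a master playlist that lists each available rendition; the player measures its throughput and switches between them. A sketch of what that looks like (the bitrates and paths are only examples):

    #EXTM3U
    #EXT-X-STREAM-INF:BANDWIDTH=350000,RESOLUTION=320x180
    low/index.m3u8
    #EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
    medium/index.m3u8
    #EXT-X-STREAM-INF:BANDWIDTH=1800000,RESOLUTION=1280x720
    high/index.m3u8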

 

Disadvantages

So if it does all that, why wouldn’t I want to use HLS for my live streaming app?

Terrible Latency

It turns out that while HTTP Live Streaming was designed to deal efficiently with multiple quality streams, it wasn’t built for delivering video quickly. Practically speaking, HLS introduces at least 20 seconds of latency (often more!) in your streams.

Here’s why: HLS requires three segments in the queue before it will allow playback, and segments are divided at the keyframes in the video. The only way to create a super-low-latency stream (let’s say under one second) with HLS is to encode the outgoing video with a keyframe every 250 ms. The HLS playlist’s moving window would then be four items long, with a segment duration of a quarter of a second. This would, of course, produce high-bandwidth video, increase the number of HTTP calls (at least four per second), and put additional load on the server.
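
To make the trade-off concrete, here is roughly what forcing quarter-second segments would look like with a stock ffmpeg build. This is purely illustrative and not a recommendation: the GOP size of 8 assumes 30 fps input, and the file names are placeholders.

    ffmpeg -i input.flv -c:v libx264 -g 8 -keyint_min 8 -sc_threshold 0 \
           -c:a aac -f hls -hls_time 0.25 -hls_list_size 4 lowlatency.m3u8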

The whole point of keyframes in video codecs like H.264 is to reduce the number of times you need to send a full frame of image data. With the scenario above, you might as well be sending the frames as a series of JPEG images. There’s a lot more to this, like the fact that the media has to be packaged into 188-byte MPEG-TS packets, which adds real overhead when segments are this short, but hopefully you’ve got the gist of it: HLS is a poor choice when it comes to low-latency video streaming.

No Publishing

HLS is a subscriber-only protocol. Unlike WebRTC, which has a spec for publishing from a browser, HTTP Live Streaming only supports playing streams. If you want to publish a live video stream from a device, you have to look to other technology. Luckily, Red5 Pro provides mobile SDKs that let you build publishing apps using RTP; you can then relay those streams over HLS so people can watch them right in their browsers. You can check out our HLS player example using Video.js on GitHub. We are also developing full WebRTC support for Red5 Pro that will include a JavaScript SDK. That implementation will feature out-of-the-box tools like a WebRTC publisher and a player that supports WebRTC, HLS, and RTMP (as a fallback), so stay tuned for that update as well.

Apple iOS HLS App Rules

Another question that comes up when our customers get ready to submit their iOS apps to the App Store is: will Apple reject my app if it isn’t using HLS? As many of you know, our SDK uses RTP streaming on iOS, and Apple has a reputation for requiring that every app use HLS for streaming. That’s not quite true, however. Apple states the following in its App Store submission guidelines:

  • “9.3 Audio streaming content over a cellular network may not use more than 5MB over 5 minutes.”

  • “9.4 Video streaming content over a cellular network longer than 10 minutes must use HTTP Live Streaming and include a baseline 192 kbps or lower HTTP Live stream.”

What we’ve also found is that if the app is a communication app (meaning it has some form of two-way communication, the way Periscope does with live chats), then Apple tends to group it in a different category. Apple also considers video calling apps like Skype to be in a different category, and the live streaming restriction of having to use HLS doesn’t apply. The other good news is that as apps like Periscope and Meerkat continue to grow in popularity, Apple is getting used to the idea of live real-time streaming apps and is gradually becoming more flexible about these restrictions.

So with that in mind, because of HLS’s high latency, Apple will approve apps that use other protocols if there’s a need for real-time communication. We simply recommend making a note of why you can’t use HLS when you submit your app.

Summary

As you can see, the ubiquitous support for HLS across browsers, mobile phones, and operating systems makes it a great choice for distributing your streams to the largest possible audience. However, since it’s a rather slow protocol, if you are building any kind of app that relies on near real-time communication, you should look at other options. Finally, while Apple’s rules do seem quite rigid when it comes to iOS streaming requirements, they are actually flexible when the need for something else is justified. What are your thoughts? Are you currently using HLS in your apps? Have you submitted a non-HLS streaming app to Apple’s App Store? How did that go? Let us know in the comments.

 


How-To: Converting Recorded Files to MP4 for Seamless Playback via Red5 Pro

December 18th, 2015 by Chris Allen

A lot of developers have been asking us detailed questions about recording streams with Red5 Pro. The thing that comes up most frequently is how to create an MP4 file for easy playback on mobile devices. While you can use our SDK to play back the recorded file as is (Red5 records in FLV format), that requires your users to always use your native app, and sometimes it would just be nice to provide a link to an MP4 for progressive download.

So with that, here’s a quick guide to get you up and running with a custom Red5 application that converts recorded files to MP4 for easy playback. You can find the source for this example on GitHub.

Install ffmpeg

The first step is to install ffmpeg on your server. ffmpeg is a great command-line utility for manipulating video files, including converting between formats, which is exactly what we want to do in this example.

You can find precompiled packages for your platform on ffmpeg’s website, although if you need to convert your recordings to formats that require an extra module on top of ffmpeg, you might need to compile it from source. Once you have ffmpeg on your server, issue a simple command to make sure it’s working. The simplest check is to call ffmpeg with no parameters, like this: ./ffmpeg. That should output something like this:

[Screenshot: ffmpeg prints its version number, build configuration, and linked library versions]

If you see it output the version number and other build info, you have a working installation on your machine. Note that I’m running this on my local Mac, but the same concept applies on any other OS. Also note that for production use you would want to put the binary somewhere better than ~/Downloads, and you would want to update your PATH so you can run ffmpeg from anywhere.
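
For example, on a Linux or macOS server a reasonable setup might look like the following. The paths are just placeholders, and the stream copy assumes the recording already contains H.264 video and AAC audio (which is what the Red5 Pro mobile SDKs typically publish), so no re-encode is needed:

    # move the binary somewhere sensible and put it on the PATH
    sudo mv ~/Downloads/ffmpeg /usr/local/bin/ffmpeg
    export PATH=/usr/local/bin:$PATH

    # sanity check
    ffmpeg -version

    # remux a recorded FLV to MP4 without re-encoding
    ffmpeg -i mystream.flv -c:v copy -c:a copy mystream.mp4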

 

Start Your Own Red5 Application

Now that you have ffmpeg on your server, the next step is to build your own custom server-side application for Red5, extending the live example here. You will use this as the starting point for your custom app that converts recorded files to MP4.

 

Override MultiThreadedApplicationAdapter.streamBroadcastClose()

Now on to calling ffmpeg from your custom Red5 Application.

In your MultiThreadedApplicationAdapter class you now need to override the streamBroadcastClose() method. As the name suggests, this method is called automatically by Red5 Pro once a stream broadcast has finished and closed. Even though the broadcast is finished, that doesn’t mean the file has finished being written. To handle that, we create a thread that waits long enough for the file writer to finish the FLV write-out; then we access the file and convert it to an MP4 using ffmpeg.

 

Let’s see what that looks like in code. The individual steps are listed below, and a condensed sketch of the whole method follows the list.

1. Override the method.

 

2. Check to make sure there is a file that was recorded.

 

3. Set up some variables to hold the data about the recorded file.

 

4. Create a thread to wait for the FLV file to be written. Five seconds should be plenty of time.

 

5. Next we need to get a reference to the FLV file.

 

6. Make sure that there is a file before continuing.

7. Get the path to the file on the file system and the path where you want the MP4 version saved.

 

8. Delete any previously created output file; otherwise ffmpeg will fail to write the new one.

 

9. Tell your application where to find ffmpeg. Note that in this case we’ve added it to our PATH so we can call it from anywhere.

10. Create the ffmpeg command.

 

11. Run the process.

 

12. Read the output from ffmpeg for errors and other info.

 

13. Create threads for each of these and wait for the process to finish.

 

14. Make sure that the process created the file and that it didn’t log any errors.

 

15. Finally catch any potential exceptions and close out the thread you created.
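
Putting the fifteen steps together, here is a condensed sketch of what the override might look like. The Red5 class and method names (MultiThreadedApplicationAdapter, streamBroadcastClose(), getSaveFilename()) are real; everything else is an assumption you may need to adjust, including the package name, the path layout under red5.root, the five-second wait, the stream copy (which assumes H.264/AAC recordings), and the single combined output reader (the GitHub example reads stdout and stderr on separate threads).

    package com.example.fileconversion; // hypothetical package name

    import java.io.BufferedReader;
    import java.io.File;
    import java.io.InputStreamReader;

    import org.red5.server.adapter.MultiThreadedApplicationAdapter;
    import org.red5.server.api.stream.IBroadcastStream;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class Application extends MultiThreadedApplicationAdapter {

        private static final Logger log = LoggerFactory.getLogger(Application.class);

        @Override
        public void streamBroadcastClose(IBroadcastStream stream) {
            super.streamBroadcastClose(stream);
            // 2. Only continue if this broadcast was actually being recorded.
            final String saveName = stream.getSaveFilename();
            if (saveName == null) {
                return;
            }
            // 3. Data about the recorded file; assumes the default "live" webapp layout.
            final String streamsDir = System.getProperty("red5.root") + "/webapps/live/streams/";
            // 4. Wait in a separate thread so the writer can finish flushing the FLV.
            new Thread(new Runnable() {
                public void run() {
                    try {
                        Thread.sleep(5000); // five seconds should be plenty
                        // 5-6. Locate the FLV and make sure it exists before continuing.
                        File flv = new File(streamsDir, saveName); // assumes saveName is the plain .flv file name
                        if (!flv.exists()) {
                            return;
                        }
                        // 7-8. Build the MP4 path and delete any stale output, or ffmpeg will refuse to write it.
                        File mp4 = new File(flv.getAbsolutePath().replace(".flv", ".mp4"));
                        if (mp4.exists()) {
                            mp4.delete();
                        }
                        // 9-11. Build and run the ffmpeg command; ffmpeg is assumed to be on the PATH.
                        ProcessBuilder pb = new ProcessBuilder("ffmpeg", "-i", flv.getAbsolutePath(),
                                "-c:v", "copy", "-c:a", "copy", mp4.getAbsolutePath());
                        pb.redirectErrorStream(true);
                        Process process = pb.start();
                        // 12-13. Drain ffmpeg's output and wait for the process to finish.
                        BufferedReader reader = new BufferedReader(new InputStreamReader(process.getInputStream()));
                        String line;
                        while ((line = reader.readLine()) != null) {
                            log.debug("ffmpeg: {}", line);
                        }
                        int exitCode = process.waitFor();
                        // 14. Confirm the conversion actually produced a file.
                        if (exitCode != 0 || !mp4.exists()) {
                            log.warn("MP4 conversion failed for {}", flv.getName());
                        }
                    } catch (Exception e) {
                        // 15. Log anything unexpected; the thread simply ends afterwards.
                        log.error("Error converting recording to MP4", e);
                    }
                }
            }).start();
        }
    }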

 

That’s it! You’ve now created an MP4 file from the recording. Rather than typing it all out line by line, we’ve provided the full example project here:

https://github.com/red5pro/red5pro-server-examples/tree/develop/FileConversion

Now that you know how to intercept one of the MultiThreadedApplicationAdapter methods in Red5 Pro, you can start to look at others you could override to add custom behavior to your app. Many people who record their files want to move them to an Amazon S3 bucket for storage, and adding a few lines that use Amazon’s API to do that wouldn’t be too challenging.
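
For instance, with the AWS SDK for Java the upload itself is only a few lines. This is a hedged sketch: the bucket name and region are placeholders, and credentials are assumed to come from the SDK’s default provider chain.

    import java.io.File;

    import com.amazonaws.regions.Regions;
    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;

    public class RecordingUploader {

        // Call this from the conversion thread once the MP4 has been written.
        public static void uploadToS3(File mp4) {
            AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                    .withRegion(Regions.US_EAST_1)   // placeholder region
                    .build();
            // The file name is used as the object key; adjust for your own bucket layout.
            s3.putObject("my-recordings-bucket", mp4.getName(), mp4);
        }
    }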

What other things would you want to do with a custom Red5 application? We would love to hear from you.

-Chris

 

New Red5 Pro Server 0.2 Release Featuring Clustering and HLS

December 11th, 2015 by Jamie Maynard


We have been working hard on a number of features that we are super excited about, including clustering and HLS support. You can get the new release here; make sure you are signed in with your account first. Expect new SDKs to follow soon as well. Cheers!

RED5 PRO SERVER 0.2 RELEASE NOTES

This release has two new features our customers have been requesting.

CLUSTERING
This version of the server allows you to configure multiple servers together to achieve virtually unlimited scale. This is the first version of the feature and is limited to round-robin load balancing of edge servers against an origin server. Many configurations are possible with this setup, including support for multiple origins and edges that can connect across origins. If there’s a setup you are looking for that our solution doesn’t seem to support, please let us know.

OUT OF THE BOX HLS SUPPORT
In addition to clustering, the new server now supports HLS. Whereas before you had to install the open-source Red5 HLS plugin, Red5 Pro now ships with the feature out of the box. Along with it, we have also created an HTML5 example using Video.js that connects to our server via HLS.

Let us know if you have any questions. We look forward to seeing what you do with this!