The Basics of HTTP Live Streaming [u]

Posted by Larry

UPDATE – Oct. 11, 2020 is cloud-based Digital Asset Management software. It handles storage, processing and delivery of images, videos and rich-media files for modern web apps. With its URL-based API, you can resize, format, transform, watermark your media files and more. Recently, they released an article explaining HLS streaming, an essential protocol for mobile media. This is a good backgrounder. Here’s the link.

This provides a good follow-on to the information in my article, below.

UPDATE – Dec. 2, 2022

The Porch published “Live Stream Setup and Tech Guide for Beginners,” which covers getting gear together for live streaming at the consumer level. Their article includes tips on how to take advantage of live streaming, including how to set up your studio at home, plus a comprehensive overview of cameras and mics. Here’s the link.

UPDATE – April 29, 2023

Updated Apple Compressor screen shots to version 4.6.3 and made text corrections for current practices in the Compressor section. The rest of this article was not touched.

The research for this article started when some of my subscription users started complaining that they could only see a few minutes of one of my longer webinars before they needed to reset their browser. At first, I thought this was caused by bad programming on our part. But, further research made me realize that iOS devices only stream about 10 minutes of continuous video content when they are connected to a cellular data network, then they stop.


This article explains why. (If you want a more technical explanation, read this Apple Support Note.)

NOTE: If you host your videos on YouTube, Vimeo or another commercial streaming service, they handle all of this for you and you can ignore this article.

Understanding Live Streaming isn’t easy, but it isn’t impossible, and this article provides a cookbook you can follow that makes much of it fairly simple.


There are two types of web video:

Progressive downloads are single files that are stored on a website. When someone wants to view them, the files are downloaded via a browser to a computer or mobile device for viewing. Downloads can be stored locally on the device doing the watching.

Streaming video is fed from a server and watched via a browser without the file actually being transferred from the server to the viewer. YouTube and Vimeo are two excellent examples of streaming video.

In the early days of web video, in the mid-1990s, creating a high-quality progressive download video required rocket science. Today, though, we have the process nailed. However, as I am discovering, this is not the case for streaming video, which is what this article covers.


The problem is mobile devices. Here are two scenarios involving a 30-minute training video, which is stored in a 120 MB file:

Scenario 1: Watch on a computer. When you fire up your browser to watch the program, the request goes out over your Internet connection and the file starts downloading. Except, you get a phone call about five minutes in, so you put the program on hold. Realizing that the call is going to take a while, you quit the browser and put your computer to sleep.

Scenario 2: Watch on a mobile device. Same program, same five minutes, same phone call.

The problem is that your computer has unlimited access to the Internet, while your phone most often has a data plan that charges by the megabyte transferred.

You only watched 5 minutes, so your phone should have downloaded only 20 MB of the program. But it didn’t. It downloaded the whole thing, because it was a progressive download. You were just charged for 100 MB of data that you never used.

Worse, the phone network just carried 100 MB of data that was never used. Now, multiply that by the number of users on your cell phone data network watching video and you begin to see the size of the problem. MASSIVE amounts of data are downloaded and not being used.
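The arithmetic behind those numbers is simple enough to sketch, using the figures from the scenario above:

```python
# Figures from the scenario above: a 30-minute training video stored
# as a single 120 MB progressive-download file, watched for 5 minutes.
file_size_mb = 120
duration_min = 30
watched_min = 5

used_mb = file_size_mb * watched_min / duration_min   # data actually viewed
wasted_mb = file_size_mb - used_mb                    # downloaded, never watched
```

With these figures, `used_mb` works out to 20 MB and `wasted_mb` to 100 MB — data the phone network carried for nothing.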

To prevent this problem, iOS devices will only stream the first ten minutes of video material, then they stop. If your video is shorter than ten minutes, no problem. Everything is fine. But if your video is longer than ten minutes, the user needs to open the video again in their browser and navigate to where it left off. Annoying.


The solution lies in compressing and segmenting your video into very short chunks, about ten seconds each, then streaming those chunks sequentially to your phone. Because you are not downloading the entire file, when you stop viewing, or simply pause the video, no additional data is transferred. This frees up huge amounts of network bandwidth. When you restart the stream, the chunks pick up where you left off.

All the heavy lifting for this to work happens at the web server, so end-users don’t need to do anything different, provided they have a reasonably current browser. Even better, this new streaming protocol senses the speed of your data connection and sends files that are optimized for that speed. This minimizes stuttering and choppy playback.

This new video protocol is called HTTP Live Streaming and was invented by Apple. Google and Microsoft each have something similar, though under different names. The protocol is supported by any HTML5-compliant browser and requires no changes on the part of the end user.

NOTE: It is more accurate to call HTTP Live Streaming a “media streaming communications protocol.” First, you compress your files, then the Live Streaming protocol defines how to segment them and generates a playlist to adaptively choose a stream based on available bandwidth. Apple has publicly documented the protocol so it is available to all browsers and devices.
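Concretely, that playlist is a plain-text file in the documented m3u8 format: a master playlist lists each variant stream with the bandwidth it requires, and the player picks among them. A minimal, illustrative example — the file names and bitrates below are made up, not Compressor’s actual output:

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=4540000,RESOLUTION=1920x1080
webinar_high.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1240000,RESOLUTION=1280x720
webinar_medium.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=64000,CODECS="mp4a.40.2"
webinar_audio.m3u8
```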

While all the streaming is handled by the web server, supporting this format requires major changes in how we compress videos.

NOTE: For those who want a technical briefing, here’s the Wikipedia link:

The good news is that we can use Apple Compressor to create the files required for HTTP Live Streaming and the quality is potentially indistinguishable from a progressive download file. Another good thing is that this protocol can be viewed by both computers and mobile devices so that once you’ve created media files to support this protocol, you can use it for all your viewers.

But there are a lot of changes you need to make to get this to work:


You need to recompress your master file using Apple Compressor for HTTP Live Streaming. This creates the small chunks that get streamed by the server.

You need to transfer the folder containing these files to the website that is hosting your videos, then, following the instructions that Apple provides as part of the compression process, change the links on the webpage to point to a specific file in this compressed files folder.
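To give a rough idea of what that link change can look like: Safari and iOS play HLS natively from a standard HTML5 video tag pointed at the playlist file. The path and file name below are purely hypothetical — the real links come from the ReadMe that Compressor generates:

```html
<!-- Hypothetical path: use the exact link from Compressor's ReadMe. -->
<video controls width="640">
  <source src="/videos/my-webinar/my-webinar.m3u8"
          type="application/vnd.apple.mpegurl">
</video>
```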

For videos hosted and streamed from your own website, this works great. For videos hosted in the cloud – for example, on Amazon servers – I haven’t figured out how to get this to work yet.


  1. HTTP Live Streaming does not work with video-only or audio-only source files. You must have audio in your clips, even if it is silence. The compression process fails when compressing video-only clips.
  2. HTTP Live Streaming can also be used for live events, which this article does not cover.


There are two types of files that need to be created: standard MP4 files, and the short segment files used for streaming.

While any compression software can create the MP4 files, the only application I’ve found that also creates the segment files – the “HTTP Live Streaming” (HLS) format – is Apple Compressor 4.

Here’s how this works in Apple Compressor 4.6.3.

Start Compressor 4 (Compressor 3 does not support HLS).

Here, for example, I’ve loaded one of my recent webinars into Compressor for compression.

In the Settings tab, open the BUILT-IN folder and locate the HTTP Live Streaming folder.

NOTE: There are two versions. The one indicated by the red arrow (above) compresses using H.264. The one displaying “(HEVC)” compresses using HEVC. H.264 is appropriate for most situations because of its wide compatibility. Only use HEVC compression if you are instructed to do so. HEVC will also take longer to compress.

Drag the entire folder on top of the name of the clip you want to compress (red arrow above). Seven different settings are applied to the master clip.

NOTE: These seven versions of your file range from extremely high-quality, large frame size high-bandwidth files to small frame size, low-bandwidth audio only. This allows the server to instantly vary the quality of video playback based on the connection speed of your phone. If your viewer suddenly drives into a dead spot, the server switches to low-bandwidth chunks until the data speed improves; at which point the server automatically switches back to the higher-quality version. Each chunk runs ten seconds.
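The switching logic the note describes can be sketched like this. The variant names and bitrates are invented for illustration; they are not Compressor’s actual seven settings:

```python
def choose_variant(measured_bps, variants):
    """Pick the best variant the current connection can sustain.

    variants: list of (name, required_bps) sorted from lowest to highest.
    Falls back to the lowest entry (e.g. audio-only) when nothing fits.
    """
    best = variants[0]
    for name, required_bps in variants:
        if required_bps <= measured_bps:
            best = (name, required_bps)
    return best[0]

# Illustrative ladder, from audio-only up to high-quality broadband.
VARIANTS = [
    ("audio_only", 64_000),
    ("cell_low", 150_000),
    ("cell_high", 440_000),
    ("wifi", 1_240_000),
    ("broadband_high", 4_540_000),
]
```

A player in a dead spot (say, 50 kbps measured) drops to `audio_only`; once the connection recovers to several Mbps, the same call returns `broadband_high` — exactly the round trip described above.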

The compression settings range from audio only – at the top – to high-speed, high-quality broadband – second from the top – then into other image resolutions and data rate options.

NOTE: You MUST use all seven of these settings; you cannot pick and choose among them.

EVEN BIGGER NOTE: Do not alter any of these settings in any way. I’ll have more on that in a bit. They are specifically optimized for HTTP Live Streaming.

This process generates seven different compressed versions of your movie in 10-second chunks. Each version starts with the same name, but adds a specific numbered extension to differentiate the versions.

Each of these seven formats creates a folder containing hundreds of chunks which need to be stored in a master folder.

NOTE: Yup, you guessed it. DO NOT change these names!

So, the next step is to create that master folder. I long ago standardized on naming this folder the same as the webinar. You can name this folder anything and store it anywhere. However, this folder is what you will ultimately transfer up to the web, so make sure its name doesn’t contain any spaces and uses only ASCII characters.
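A tiny check that captures that naming advice — no spaces, ASCII only — so the folder name survives untouched in a URL. The helper name here is mine, not part of any tool:

```python
def is_web_safe(name: str) -> bool:
    """True if a folder/file name has no spaces and only ASCII characters."""
    return (" " not in name) and name.isascii()

# Examples:
# is_web_safe("Webinar-2023-04")  -> True
# is_web_safe("My Webinar")       -> False (contains a space)
# is_web_safe("Séminaire")        -> False (non-ASCII character)
```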

Create one master folder for each source video.

The files displayed in Compressor are all work files. Compressor needs to create these in order to create the HLS folder, but it doesn’t need them once the HLS folder is complete. So I store them in my customary compression location, then delete them as soon as the HLS folder is processed.


Before you click the Start Batch button, you have one more setting to configure. You need to tell Compressor the name and location of the HLS master folder.

In Compressor, click the Job tab at the top of the Inspector, then scroll all the way to the bottom to the Action section. All these defaults are fine, except click Choose to display the standard Mac file picker. Navigate to and open the HLS master folder.

With the HLS master folder open, enter a file name at the top which becomes the master file name for all the thousands of chunks you are about to create. For my webinars, I name the master file name the same as the folder.

Remember, no spaces and only use ASCII characters. Click the Save button when you’ve entered a file name and selected the destination folder.

This file name and destination will now be used for the job. The rest of the settings in this section are fine.

NOTE: If you don’t complete this step, all the little chunks won’t be created and you’ll need to start all over again. I learned this from personal experience.

Then, as usual, click the Start Batch button to start compression. On my M1 Pro MacBook Pro, it takes a couple of hours to create all these files. So be patient.


Once compression is complete, if you were to open the HLS master folder (which you don’t need to do unless you are curious) you would see a folder for each compression format.

Inside each of those subfolders, you will see hundreds of clips – each 10 seconds long – spanning the entire duration of your video.
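The structure inside each variant folder is easy to show in text: each variant’s own m3u8 file is simply a list of segment durations and file names. A minimal, illustrative example (the segment names are hypothetical):

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10.0,
fileSequence0.ts
#EXTINF:10.0,
fileSequence1.ts
#EXTINF:10.0,
fileSequence2.ts
#EXT-X-ENDLIST
```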


Also, as shown in the screen shot above, a special file is added to the master folder: the “.m3u8” file. This is the playlist controller that tells the server which files to play and where they are located.


Here’s what the m3u8 file looks like. Notice, on the right side, how the image size varies allowing your movie to play even when cell bandwidth drops.

Also in this folder is a ReadMe file that provides very specific directions on how to post this file to your website. For example:

In case you are curious, this is what the weblinks look like for my movie. This ReadMe file should be given to your webmaster so they know exactly which links to use when posting.


Once all compression is complete, it is time to upload the HLS version of your movie. However, in this case, you upload the ENTIRE FOLDER that contains all these files to wherever you store videos on your website.

BIG NOTE: Just a reminder that this process is NOT necessary if you are using YouTube or another commercial video streaming service for your videos. It is only needed for video streamed from your own website.


So far, everything is working great. However, remember that I said not to change any of the compression settings? Well, ah, that causes me a problem. Because I want to add a watermark, maybe scale the image. And none of that is possible given the default settings in Compressor.

In the past, I would use Job Chaining, which allows the output of the first job to become the input of the second job. The problem is that while this is a great idea, it isn’t reliable. I’ve struggled with Job Chaining for years and finally given up.

Instead, I create an intermediate file of my movie with all watermarks added; plus any other effects. I save this as a ProRes 4444 file, which is, essentially, lossless. Then, I use this intermediate file to create the HLS files.

The good news is this works perfectly. The bad news is that Compressor takes a LONG time to create that intermediate file. I don’t know why. But it takes as long, or longer, to create an intermediate ProRes 4444 file as it does to create the HLS chunks.

So, I start the job and do something else for a few hours until it is complete.


HTTP Live Streaming is specifically designed to reduce network bandwidth required for users of cellular mobile devices to watch videos on their phones, while at the same time allowing video quality to fluctuate to compensate for changing data rates across the network.

If your viewers watch your videos on a computer with a high-speed internet connection, they will see top-quality images without worrying about data rates or any other technical issues.

However, like all web videos, compression is only half the story. The other half is working with your web team to post the videos. The best place to start, after the files are transferred, is the ReadMe created by Compressor when the job is complete. When I followed its instructions and posted the video to my website, everything worked great.

As always, I’m interested in your comments.


30 Responses to The Basics of HTTP Live Streaming [u]

  1. Egon Freeman says:

    Judging from the number of comments, it seems that many a person has had this problem in the past. And will probably continue to do so in the future.

    Great article, I learned a few things. Thanks!

    But I do have my own fly in the proverbial ointment. Many people fall into these pitfalls because we’re trying to obscure the fact that web/streaming video is, in essence, HARD. And I mean Nintendo Hard. Not every scenario can be known, and not every situation can be prepared for. The Internet, in all its glory, is a fickle and unpredictable thing. Connections drop, routes get switched… This all is, IMO, a bit of a crutch, and the average naive Citizen is setting him-/herself up to fall into many holes by assuming complete understanding of the subject. Not to mention that we’re still waiting on a unified streaming experience… I wouldn’t hold my breath, though.

    I guess, what I’m trying to say is – prepare for the unpredictable, and expect the unexpected, as much of an oxymoron that may seem like. There are as many hardware/software combinations out there as there are pebbles on a rocky beach, and unlike pebbles, that number is growing with each second.

    And as far as the “won’t work without the audio” – it is my (possibly wrong) educated guess that it has to do with sync operations. An audio track is a very convenient clock source, you see (all AVI files, to my knowledge, sync on audio), and even without this – it’s still the common point between all these files. For Flash video, for instance, I’ve run into the problem of not being able to seek through .flv files – at all – if I set my audio output to Null Device. It just won’t seek, period, and this goes for stand-alone players like Media Player Classic (on the PC) as well.

    Using a single-sync point in the form of the audio file is convenient, though – it doesn’t need recompressing for various bitrates (in this case), and on top of that, it can be kept streaming even if video can’t (that would be the “sound-only fallback”). It also likely has the smallest footprint of the bunch. So it’s actually easier, technically, to do. Benefits all around.

  2. bigCDN says:

    HLS is useful for delivery to iOS, iPhone, iPad etc..
    HDS is useful for delivery to the web, you can even use a flash player (OSMF).

    We now offer HLS and HDS so for help and more info just ask the team at


  5. Rich N says:

    Thank you for the tutorial. 🙂

    I am wondering how to do a real-time live broadcast using HLS. In this tutorial, it seems it only works on a pre-existing .mov. I use QuickTime Broadcaster to record a video to a .sdp file, and it works fine with the RTSP protocol. But how do I broadcast live using HTTP?

    Thanks again.

    • Larry Jordan says:


      This is a great question, but the wrong question.

      Live video streaming is not the same as downloading a file. By definition, a live stream only exists in real time, whereas downloads transfer an entire program faster than real time. Therefore HTTP Live Streaming is not needed, because there is no download going on in the background.

      A good place to go to learn more about live video streaming is Telestream Wirecast.


    • John West says:

      [disclaimer: paid employee of Wowza Media Systems]


      I take slight issue with what Larry said. HLS is required because the HTML5 video tag and Apple’s native video+audio multimedia rendering widgets on iOS can’t receive RTSP, RTMP, or other earlier network protocols.

      As Larry notes, you can take a look at the Wirecast application software sold by our friends at — some related tutorials to use Wirecast with our Wowza Streaming Engine software are at (an alternative is to go from Wirecast directly to a CDN company’s service such as the one mentioned above).

      However, if you’re a big fan of QuickTime Broadcaster, you can run the RTSP signal into our Wowza Streaming Engine software to produce the proper HLS output (and HDS, MPEG-DASH, and various other network protocol outputs). You can also activate our product’s real time transcoding feature to produce ABR compliant streams at multiple lower bitrates. Tutorials at

      Note: There’s also a variety of other technology which can be used to deliver a live video+audio signal to your favorite types of viewing client. Some examples include:
      -Your mobile phones & tablets can run our GoCoder app for this.
      -You can purchase stand-alone IP cameras.
      -You can also purchase a dedicated hardware solution with a video input and an ethernet output.
      Our website has some of these options listed in our partners section. If you have a specific need that isn’t addressed there, send us an email and I’ll try to point you in the best direction.

      Yes, as Mr. Rickard notes, we do charge for our software (however, the price he quotes is outdated, it didn’t include the live stream transcoding feature).

      Regarding charges, as Mr. Freeman notes, in our niche, one must prepare for inevitable, unexpected industry “evolution.” It’s our goal as a company to offer commercial software & services which evolve to help you meet the challenge of the next great feature in video and audio delivery (ie: MPEG-DASH, h.265, vp9, and etc). It’s our goal as employees of Wowza to help you meet your own business goals while continuing to be active participants in the economy.


      John West

      • Larry says:


        Thanks for a detailed and helpful reply. I’m always happy to hear other opinions.

        As I’ve discovered, web streaming is easy to grasp in concept and very tricky as you get into the details.


        • John West says:

          Larry, I couldn’t agree more!

          My customers who handle live events as a professional service tell me that web streaming from different venues each week is even more tricky and requires nerves of steel.


  6. […] Larry Jordan | The Basics of HTTP Live Streaming – This article is from Larry Jordan, the godfather of Apple Final Cut training. He has a great style and is easy to pay attention to. […]


