How I Create My Weekly Webinars – A Workflow

Posted by Larry

I’m closing in on my 300th webinar! And I just realized I’ve never written about how I create these.

Let me show you the workflow behind each episode and go into some detail so you can also see the tools and settings I use, in addition to the process.

AN OVERVIEW

My webinars provide training. I made the decision many years ago that the “star” of my training is the software, rather than me. This is why you don’t see my face in my webinars – it provides more room for the software. Each webinar runs an hour or less; the average is about 50 minutes.

While all my in-depth training movies are pre-recorded, all my weekly webinars are live. I spent my life creating live TV, so this is an environment where I’m comfortable. Having a real-time deadline also solves the issue of motivation: it gives me a clear reason to get my work done. I respond well to goals.

Another key decision was to create my training at a 1280 x 720 frame size. This provided several benefits:

My final deliverables include:

PRE-PRODUCTION

The planning for each show varies from one day to two weeks, depending upon the subject. However, by show time, I have:

PRODUCTION

Live shows originate on GoToWebinar. I like the registration, interface, and control it provides. While the audiences would be larger on social media, I am not a fan of those platforms.

Currently, my webinars originate on a 2017 27″ iMac with a second LG monitor attached. The main screen is streamed; the second monitor holds my notes along with the GoToWebinar control interface.

Storage is a Synology 1517+ server, located in another room to keep drive noise out of my recordings. All my training media is stored on the server; the roughly 100 MB/second real-world bandwidth of Gigabit Ethernet is more than enough to support anything I want to do with 720p media.
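
If you want to check that math, here is a rough back-of-the-envelope sketch in Python. The ProRes data rate below is an estimate for 4444 at 1280 x 720, 30 fps, not a measured figure:

    # Rough sanity check: can Gigabit Ethernet feed 720p ProRes 4444 playback?
    # The stream data rate is an assumption, not a measurement.
    prores_4444_720p30_mbps = 130        # estimated stream data rate, Mbit/s
    gige_throughput_mb_per_s = 100       # practical Gigabit Ethernet throughput, MB/s

    stream_mb_per_s = prores_4444_720p30_mbps / 8
    print(f"One 720p30 ProRes 4444 stream: ~{stream_mb_per_s:.0f} MB/s")
    print(f"Streams the network can carry: ~{gige_throughput_mb_per_s / stream_mb_per_s:.0f}")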

My mic is an AKG C520 headset, converted to a USB signal by a Focusrite Scarlett 2i2 A/D interface.

I record the screen using Telestream ScreenFlow. This program is far more than I need, because I don’t use it for editing. However, it is the only screen capture program I’ve found that records and outputs ProRes 4444 video.

ProRes 4444 is essential to precisely match the colors and grayscale of the original images on screen. H.264 recording creates too many artifacts and color shifts.

All my screen captures record at 30 frames per second.

I generally present two shows on Wednesday to allow flexibility in editing. Not that I make any mistakes. Nope, that never happens. Rather, let’s just say that it allows me to pick the clearest explanations from each show.

When recording is complete, I:

NOTE: I retain all editing source files for at least 18 months and all masters forever.

POST-PRODUCTION

All video editing is done in Apple Final Cut Pro X. As the screen shot above illustrates, I import the slides and screen recordings into their own events in a new library in Final Cut. Each webinar has its own library.

NOTE: For years, I edited my weekly podcast – Digital Production Buzz – in Adobe Premiere, and my webinars in Final Cut. Different shows require different tools.

These are the project settings for a typical webinar.



During editing I clean up mistakes and stumbles, add titles (I prefer to add my own titles rather than use ScreenFlow’s, because I want to control both content and placement), and remove pauses. In general, I can remove about ten minutes from the presentation, without altering the content, simply by cleaning things up.

Also, during editing, I add chapter markers to the start of each section. This provides online navigation, as well as a screen shot I use in my store to indicate the range of the content in the program.

Editing a one-hour presentation generally takes me six to seven hours.

AUDIO MIX

When editing is done, I do an audio mix in Adobe Audition to make the audio sound as good as possible. I continue to be deeply disappointed with the audio tools in Final Cut, a fact that I’ve shared with Apple on many occasions.

NOTE: I used Pro Tools years ago, but over the last five years I’ve had an utter lack of success getting a Pro Tools dongle to work, so I gave up. Audition is optimized for mixing audio to picture and I like it a lot.

From Final Cut, I export an XML file to provide the instructions for the edit, without including any media.

This XML file is converted for Audition using Intelligent Assistance’s XtoCC utility. I’ve used this for six years or so and have never had a problem with it. It’s an essential tool.

For best results, I turn off the video options and keep all the audio default settings.



Next, I import the converted XML file into Audition. All clips and tracks are preserved, allowing the greatest flexibility in smoothing edits and removing any clicks or pops that crept in.

As you can see in the screen shot, my voice is on Track 1, synced mouse clicks are on Track 2, and wild clicks, added during editing, are on Track 3. This track separation makes mixing easy, which is another reason I use ScreenFlow.



For audio filters, I apply:

I mix all levels so that peaks are around -3 dB, with an average audio level of -16 LKFS, measured with the Loudness Radar filter on the Master track.

NOTE: Here’s an article on my settings.

When the mix is complete, I export a 48 kHz, 16-bit stereo WAV file. (Final Cut works better with stereo mixes than mono. The finished edit is converted to mono during compression.)
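
NOTE: If you want a quick way to double-check the finished mix outside Audition, ffmpeg’s ebur128 filter reports integrated loudness and true peak. This is just a verification sketch, not part of my mix workflow; it assumes ffmpeg is installed and uses an illustrative file name:

    # Optional check (assumes ffmpeg is installed): report integrated loudness
    # (LUFS/LKFS) and true peak for the exported mix.
    import subprocess

    result = subprocess.run(
        ["ffmpeg", "-hide_banner", "-nostats", "-i", "webinar_mix.wav",
         "-af", "ebur128=peak=true", "-f", "null", "-"],
        capture_output=True, text=True,
    )
    # ffmpeg writes the loudness summary to stderr; show the last few lines.
    print("\n".join(result.stderr.splitlines()[-15:]))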

FINAL OUTPUT



The finished audio WAV file is imported into Final Cut and a new Role – called “Final Mix” – is applied to the audio mix. (Indicated by the orange color.) I then play both the old and new audio together to make sure there isn’t any sync drift.

NOTE: I had a problem with sync drift a while back on a few shows, which I fixed by re-outputting the WAV file. Since then, audio sync has been perfect.

Using Roles, I turn off the dialogue and turn on the Final Mix.

I export the finished webinar as a ProRes 4444 Master file, with stereo audio and chapter markers.

COMPRESSION

The finished master file is then compressed into a variety of formats using two different compression tools: ffWorks and Apple Compressor.

ffWorks creates:

Compressor creates:
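
NOTE: Since ffWorks is a front-end for FFmpeg (more on that in the comments below), a roughly equivalent command-line pass looks something like the sketch below. The file names and encode settings are illustrative assumptions, not my actual presets:

    # Illustrative FFmpeg pass (assumed settings): ProRes 4444 master -> H.264 MP4.
    import subprocess

    subprocess.run([
        "ffmpeg", "-i", "webinar_master.mov",                # ProRes 4444 master
        "-c:v", "libx264", "-preset", "slow", "-crf", "20",  # quality-targeted H.264
        "-pix_fmt", "yuv420p",                               # broadest player support
        "-c:a", "aac", "-b:a", "128k",                       # AAC stereo audio
        "-movflags", "+faststart",                           # web-friendly MP4
        "webinar_720p.mp4",
    ], check=True)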

FINAL STEPS

After the webinar is complete, I create excerpts from the main program for posting to YouTube and as articles for my website.

These are compressed as MPEG-4 files, with mono audio and watermarks, and posted to the appropriate website.
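
In FFmpeg terms, an excerpt encode might look like this sketch: overlay a watermark image and downmix the audio to mono. Again, the file names and settings are assumptions for illustration:

    # Illustrative excerpt encode (assumed names and settings):
    # watermark pinned to the lower-right corner, audio downmixed to mono.
    import subprocess

    subprocess.run([
        "ffmpeg", "-i", "excerpt.mov", "-i", "watermark.png",
        "-filter_complex", "overlay=W-w-20:H-h-20",          # 20 px inset from corner
        "-c:v", "libx264", "-crf", "22",
        "-c:a", "aac", "-b:a", "96k", "-ac", "1",            # mono audio
        "-movflags", "+faststart",
        "excerpt_watermarked.mp4",
    ], check=True)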

SUMMARY

Production and post, from first performance to final post, generally take two ten-hour days.

This process, from start to finish, works smoothly, for which I’m grateful. It took a while to figure out the best tools, but now, the greatest challenge is figuring out what to talk about each week.

As always, let me know if you have questions.



6 Responses to How I Create My Weekly Webinars – A Workflow

  1. Oscar Estrada says:

    Hi Larry,

    Thanks for the article! It was quite insightful. Why do you use 2 transcoding tools, rather than 1 for the final delivery?

    Best,

    Oscar

  2. Nick Rains says:

    Hi Larry. Have you ever used Handbrake? Seems to work magic on reducing file size for upload to Youtube and Vimeo etc.
    Cheers
    Nick
    Brisbane, Australia

    • Larry says:

      Nick:

      HandBrake, like ffWorks, is a front-end for FFmpeg. Both do an excellent job of compression, though Apple has deep concerns about their ProRes implementation.

      Larry

      • Nick says:

        Thanks Larry. For making MP4 files to upload to Vimeo etc. it seems to do a good job. A one-hour M4V out of FCPX (720p) is about 900 MB; run through Handbrake, it comes out at about 1/6 of that. Not broadcast quality, but totally acceptable (IMO) for webinar recordings and tutorials.

        • Larry says:

          Nick:

          When it comes to video compression, file size is TOTALLY dependent upon bit rate. The lower (smaller) the bit rate, the smaller the file.

          Image quality, however, is dependent upon:

          * Bit rate
          * Codec
          * Frame size
          * Frame rate
          * The amount of movement between frames

          FCP X defaults to a high bit rate for MP4, while HandBrake uses a lower bit rate. This would principally account for the differences in file size and quality.
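
          To put rough numbers on it, using your figures above: 900 MB is about 7,200 megabits, which spread over a one-hour (3,600-second) program works out to an average of roughly 2 Mbps. A file one-sixth that size implies an average bit rate of roughly 0.33 Mbps, everything else being equal.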

          Larry
