[ This article was first published in the October, 2009, issue of Larry’s Final Cut Pro Newsletter. Updated December 2009. ]
Earlier this week, Apple announced a new video format – iFrame – designed specifically to improve the import and editing of HD video.
Apple’s support site states:
The iFrame Video format is designed by Apple to speed up importing and editing by keeping the content in its native recorded format while editing. Based on industry standard technologies such as H.264 and AAC audio, iFrame produces small file sizes and simplifies the process of working with Video recorded with your camera.
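For the technically curious, “based on H.264 and AAC audio” means an iFrame clip should carry standard H.264 video and AAC audio streams that you can inspect with free tools. Here is a minimal sketch of such a check, assuming you have FFmpeg’s ffprobe installed and a camera file named clip.mp4 on disk; the file name and the check itself are purely illustrative, not part of anything Apple or Sanyo ships:

```python
# Minimal sketch: use FFmpeg's ffprobe to list a clip's codecs.
# Assumes ffprobe is installed and "clip.mp4" is a file copied from the
# camera (the file name is only an example).
import json
import subprocess

def stream_codecs(path):
    """Return (stream type, codec name) pairs for every stream in the file."""
    out = subprocess.check_output([
        "ffprobe", "-v", "error",
        "-print_format", "json",
        "-show_streams",
        path,
    ])
    return [(s["codec_type"], s["codec_name"])
            for s in json.loads(out)["streams"]]

if __name__ == "__main__":
    for kind, codec in stream_codecs("clip.mp4"):
        print(kind, codec)  # an iFrame clip should report roughly: video h264, audio aac
```

If a clip reports something other than H.264 and AAC, a conversion has already happened somewhere along the way.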
Currently, only Sanyo has announced cameras that support the new format:
SANYO North America Corporation (SANYO) today introduces its high-end Dual Cameras, the VPC-HD2000A and the VPC-FH1A, as the world’s first camcorders to offer compatibility with iFrame, a next generation video format designed specifically to allow users to easily import, edit and share high quality videos.
There are several important notes about this announcement:
I am very curious to see if other camera manufacturers adopt the format – and whether Apple decides to support it within Final Cut Pro.
For me, the big news is that this is the first of a long series of steps needed to reconcile the differences between video and computer formats to reduce the conversions we need to make when editing video on our Macs.
UPDATE – DECEMBER, 2009
A blogger ostensibly covering the video industry took Apple to task for this format with a diatribe that was both ill-informed and unnecessarily harsh. (Note: If you don’t know what you are talking about, standing around yelling doesn’t make you more believable.)
Anyway, JJ Semple sent me his take on this new format (I’ve edited this a bit for space reasons):
I think he (the blogger) is very shortsighted. The point here, it seems to me, is to break with legacy holdover habits that have complicated film, video, and computer editing for years. SIMPLIFY.
I come from a film editing background, worked for NBC News in the 60s and features in the 70s on 16 and 35mm. It was labor intensive, but much simpler workflow-wise. Cut up the scenes, hang them in barrels. Throw everything on a synchronizer and a Moviola. Edit, Match and Mix.
In 1995, I directed a short film in 35mm. I had not cut film myself for many years and wanted to work with a young editor who knew and used all the new techniques. Suffice it to say I was appalled at the convoluted process he had to use: transferring the 35mm to videotape via telecine, digitizing it to work with on a computer, 23.976 and all the makeshift workarounds – pulldown, progressive vs. interlaced, bit rate. Heck, even back then I saw that digital video would eventually replace film. And if it did, computers would rule the day.

What’s more, because everything would be handled by computers, there would be no reason to retain oddball frame rates like 23.976 and 29.97. You could set a computer to do anything you wanted, so why not, right from the beginning, set it to some sensible, logical standard that makes it easy in terms of resolution and frames per second? Computers can work at any speed you choose (11 fps, why not?); all you need to do is be consistent down the line, as long as it offers quality and simplicity. Here is an opportunity for fewer standards, not more.
The first issue is not resolution, but frame rate consistency. Make a camera that shoots at 60p, or whatever frame rate is best, capture to hard drives, and drag the files over to computers that understand 60p. If you wanted other resolutions or frame rates, just make sure they had logical ties to that base standard.
If it’s done correctly, in sixty years people will no longer remember all this mess; they’ll be happy working with simple drag-and-drop tools. What separates the pros from the amateurs won’t be technical knowledge, but experience and ability.
Case in point: I do most of my work for the web (http://www.jordansemple.net/my_2008-09_video_highlights.html). I love the video quality of my Canon HV30 and XH-A1, but I hate tape-based capture. It makes me hark back to my 16mm days. As soon as the workprint came back from the lab, I’d sync it up and start separating the clips in a barrel. It was actually much easier than tape capture. Why? Greater reliability.
So if you could capture at the optimal resolution on a removable HD, insert said HD into a computer slot, drag the files into FCP, and not worry about frame rates and pulldown and interlacing, wouldn’t that make everyone’s life and workflow a lot easier? And speaking of workflow, where did the term come from? It only appeared with the dawn of the digital age. Back then, we didn’t need no stinkin’ workflow; we just watched what the editors did, and when we got a chance to do it, we copied them. We didn’t have to stop and think about what to do next. Now I have notes to myself all over the place with workflow diagrams, trying to simplify my every process.
Moreover, if we were eventually able to shoot and edit iFrame at any resolution, not only would capture cards become extinct, but FCP wouldn’t need to be as complicated. The whole process would consist of a reliable shooting device, reliable storage, editing tools that emphasized editing over technology, and much simpler distribution. Imagine a utility that would break a file into clips: drag the file off the SD card onto the utility, and it would put the clips into bins, because in the camera you were able to set logical scene-based markers. You open the software (FCP or something better) and start arranging the clips on the timeline. The whole thing is over in seconds…
Make it simple and the rest will follow.
Larry replies: Thanks, JJ, for your thoughts.
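For readers who like to tinker, here is a rough, purely hypothetical sketch of the kind of clip-sorting utility JJ imagines: drag clips off a card and have them land in scene-based bins. No shipping camera writes the “scenes.txt” sidecar file assumed here; the file name, its layout, and the folder paths are all made up for illustration.

```python
# Hypothetical sketch of a "clip sorter": copy clips from a card into
# per-scene bins (folders). Assumes the camera wrote a sidecar file named
# scenes.txt with one "clip-name<TAB>scene-label" entry per line -- no real
# camera format is implied; every name here is made up for illustration.
import os
import shutil

def sort_into_bins(card_folder, dest_folder, sidecar="scenes.txt"):
    # Read the hypothetical sidecar: which clip belongs to which scene.
    scenes = {}
    with open(os.path.join(card_folder, sidecar)) as f:
        for line in f:
            if not line.strip():
                continue
            clip, scene = line.rstrip("\n").split("\t")
            scenes[clip] = scene

    # Copy each clip into a folder ("bin") named after its scene marker.
    for clip, scene in scenes.items():
        bin_path = os.path.join(dest_folder, scene)
        os.makedirs(bin_path, exist_ok=True)
        shutil.copy2(os.path.join(card_folder, clip), bin_path)

if __name__ == "__main__":
    # Example paths only; point these at your own card and destination.
    sort_into_bins("/Volumes/SD_CARD", os.path.expanduser("~/Movies/Bins"))
```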