[Updated May 21, 2018, with more details about specifying hard drives and links to other servers.]
If you are a one-person shop, the best storage system to use for audio or video editing is a RAID that’s directly connected to your computer; this is called “Direct-Attached Storage.” The benefits of direct-attached storage are, generally, that it’s the fastest, cheapest and easiest to use.
If you are an editor in a large shop, the IT department has already configured both hardware and software to save and access both media and projects on the corporate server. The benefit of the corporate server is that all you need to do is edit; keeping the system running is someone else’s job.
However, if you are part of a small- to medium-sized workgroup that needs to share media between multiple editors, there has never been a better time to migrate to a server. The purpose of this article is to showcase some best practices for integrating shared storage with Final Cut.
NOTE: Here’s an article that covers how to integrate a server with Adobe Premiere Pro CC.
SERVER BACKGROUND
There are two types of servers: SAN and NAS. “Storage area networks (SANs) and network attached storage (NAS) both provide networked storage solutions. A NAS is a single storage device that operates on data files, while a SAN is a local network of multiple devices.” (Lifewire.com) SAN devices tend to be found in the enterprise, while NAS devices tend to be found in smaller workgroups. Also, in general, NAS devices are much less expensive than SAN systems and easier to set up.
NOTE: Servers today can include spinning hard disks, SSDs or a combination of both. For what we do, spinning hard disks (called “spinning media”) offer the best performance with the best capacity at a reasonable price. Network speeds are so slow, compared to the speed of an SSD, that we aren’t able to take advantage of the speed SSDs provide. They are best used in direct-attached storage.
When you start integrating a server into your editing workflow, you need to be concerned about four things:
Storage capacity is the number we are most familiar with. It measures how much data the server can hold, in terabytes (TB).
Bandwidth is the speed at which data transfers between the computer and the server. This is measured in megabytes per second (MB/second).
Latency is the amount of delay between the time you press the spacebar inside your NLE and when the clip starts playing. Less latency is better, and, in general, we want it to be less than a quarter-second. (While I can’t measure the precise latency on my server, I have not found it objectionable during editing.)
The fourth point is one we’ll discuss more during this article.
NOTE: One other point: when you invest in a server, be sure to also get hard drives that are rated for NAS or server use. These tend to be 5,400 rpm units, which is fine for a server. Slower drives still deliver great performance and they last longer than 7,200 or 10,000 rpm drives.
CONNECTIVITY AND BANDWIDTH
How you connect to the server has a significant impact on the bandwidth you can expect; faster connection types (1-Gigabit vs. 10-Gigabit Ethernet, for example) deliver proportionally faster transfer rates.
To attain these speeds, three key pieces of hardware must all support the same bandwidth: the network adapter in your computer, the switch, and the ports on the back of the server.
As with all things, the faster the speed, the greater the cost. Most buildings are wired with Cat 5e cables, which makes 1 Gigabit Ethernet the default network speed for many of us.
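To put those connection types in perspective, here is a quick back-of-the-envelope conversion from raw link speed to usable transfer rate. The 0.88 efficiency factor is an assumption for protocol overhead; it roughly matches the 108-110 MB/second I measure later in this article, but your numbers will vary.

```python
# Rough conversion from Ethernet link speed to usable transfer rate.
# EFFICIENCY is an assumed factor for protocol overhead (Ethernet framing,
# TCP/IP, SMB); real-world results vary.

EFFICIENCY = 0.88  # assumption; roughly matches the 108-110 MB/sec measured below

def usable_mb_per_sec(link_gbps: float) -> float:
    theoretical = link_gbps * 1000 / 8   # gigabits/sec -> megabytes/sec
    return theoretical * EFFICIENCY

for gbps in (1, 2.5, 5, 10):
    print(f"{gbps:>4} Gb/s Ethernet = about {usable_mb_per_sec(gbps):.0f} MB/sec usable")

# Prints roughly: 1 -> 110, 2.5 -> 275, 5 -> 550, 10 -> 1100 MB/sec
```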
DRIVES ARE IMPORTANT [Update]
It wasn’t until I published this report that I realized I left out a critical component of any server: the hard drives. Most of the servers on the market ship without drives, which means we need to add them ourselves. And determining which drives to buy, I discovered, can be very confusing.
Here are some suggestions:
NOTE: Several readers took issue with my recommending 5400 RPM drives, feeling that these were too slow for media work. Instead, they recommend 7200 RPM drives, especially as the number of users on the server increases. The difference in price is minor. If I were to do this again, I’d probably get 7200 RPM drives.
These are the criteria I used to determine which drives to buy:
I ended up buying five Western Digital 8 TB RED NAS drives, which spin at 5,400 RPM. (The Western Digital 8 TB RED Pro NAS versions spin at 7,200 RPM.) I formatted these into a RAID 5 to provide 32 TB of online storage. They’ve been running continuously for seven months, so far, with no problems. And I haven’t noticed any issues with latency.
NOTE: Servers should always be formatted as RAID 5 or 6, not RAID 0 or 1. Here’s an article that explains RAID levels.
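As a sanity check on that 32 TB figure, here is a minimal sketch of how usable capacity works out for parity RAID: RAID 5 gives up one drive’s worth of space to parity, RAID 6 gives up two.

```python
# Usable capacity for parity RAID: RAID 5 sacrifices one drive's worth of
# space to parity, RAID 6 sacrifices two.

def raid_usable_tb(drive_count: int, drive_tb: float, level: int) -> float:
    parity_drives = {5: 1, 6: 2}[level]
    return (drive_count - parity_drives) * drive_tb

print(raid_usable_tb(5, 8, level=5))   # five 8 TB drives in RAID 5 -> 32.0 TB
print(raid_usable_tb(5, 8, level=6))   # five 8 TB drives in RAID 6 -> 24.0 TB
```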
A SPECIAL CONNECTION
Bandwidth is fixed. For example, if I have a single Ethernet cable between the server and a 1 Gigabit switch, that means that the maximum data transfer rate is about 120 MB/second. If I have two users accessing the server at the same time, each user gets 60 MB/second (120 / 2 = 60). If three users access the server at the same time, each user gets 40 MB/second (120 / 3 = 40).
Suddenly, that single Ethernet cable becomes a serious bottleneck. To avoid this, many servers provide multiple Ethernet connections on the back of the server. Each connection acts as a separate “port,” each with its own IP address and providing the full bandwidth for that port. This allows different computers to access different ports on the server, avoiding the bottleneck of squeezing all those data requests through a single Ethernet cable. Spreading the load reduces performance bottlenecks.
NOTE: While I could connect the server to the switch using a 10-Gigabit connection, that would require getting a new switch and additional ports on the server. When budgets are tight, that may not be a good option. Separate ports are cheaper and achieve similar results.
For example, here at the office, I’m using a NAS server from Synology. The back of the Synology has four Ethernet ports. I connect each of these to the switch, then, using the switch control software, I assign a different port – with its own IP address – to each connection. Now, when editor 1 needs to access the server, they use a different IP address than editor 2.
The internal bandwidth of the server is FAR faster than a single Ethernet connection, so this provides maximum performance to each member of the editorial team.
NOTE: Even though computers connect through different ports, they all have access to the same data. This server provides file-level sharing, which is what you want for video editing, not separate volumes for each editor.
We can take this one step further using “port aggregation,” also called “port bonding” and “port doubling.” Rather than limit myself to the speed of a single Ethernet connection, I can “tie” or “bond” two of the ports together to improve the file transfer speed between the server and the switch. This means, under a heavy load, I’m using two connections to completely fill the Ethernet “pipe” between the server, the switch and my computer.
NOTE: The specific switch configuration settings vary by manufacturer and switch. Consult the user manual for guidance.
Even with this setup, I still can’t exceed the speed of 1 Gigabit Ethernet, but I can make sure it goes as fast as possible. Port aggregation combined with a server that provides multiple Ethernet ports is a very effective way to make sure your editors have the bandwidth they need.
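To see why multiple server ports and port aggregation matter, here is a minimal sketch of the arithmetic. It assumes every link runs at 1 Gigabit Ethernet with roughly 110 MB/second of usable throughput, and that the server’s internal storage is never the bottleneck; both are simplifications of my actual setup.

```python
# Per-editor bandwidth under different server configurations, assuming each
# 1 GbE link carries roughly 110 MB/sec and the server's internal storage
# is never the bottleneck (both simplifying assumptions).

LINK_MB_SEC = 110  # approximate usable throughput of one 1 GbE connection

def per_editor_mb_sec(editors: int, server_ports: int) -> float:
    """Each editor has their own 1 GbE connection to the switch; the server
    exposes `server_ports` 1 GbE connections (separate or bonded)."""
    server_total = server_ports * LINK_MB_SEC
    return min(LINK_MB_SEC, server_total / editors)

print(per_editor_mb_sec(editors=3, server_ports=1))  # one shared port:  ~37 MB/sec each
print(per_editor_mb_sec(editors=3, server_ports=2))  # two bonded ports: ~73 MB/sec each
print(per_editor_mb_sec(editors=3, server_ports=4))  # four ports:       110 MB/sec each
```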
NOTE: WiFi speeds are improving but, for video editing, I don’t recommend using a WiFi connection. Speeds fluctuate based on the load on the wireless access point, and interference can also slow things down. If you need to edit, it is much faster and more reliable to run a wire between the server and your computer.
HOW MUCH BANDWIDTH?
Different codecs require different amounts of bandwidth.
NOTE: Here’s a table that goes into the bandwidth requirements for a variety of codecs.
The best way to determine how much bandwidth you need is to measure it. And Activity Monitor (Utilities > Activity Monitor) is a great tool for doing exactly that.
Open Activity Monitor, then open Final Cut and play a typical project. Click the Network tab at the top of Activity Monitor and watch the graph at the bottom. Data received (in blue) shows the amount of data flowing from the server to the computer. Data sent (in red) shows the amount of data being sent from the computer to the server.
In this screen shot, I’m measuring the bandwidth while playing a four-image split screen in camera native format, without first rendering the scene. While the bandwidth fluctuates, at its most intense, FCP only needs 28 MB/second of data in this example. However, I’ve done other projects that need close to 80 MB/second. Every project is different, and some video codecs require hundreds of megabytes per second!
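If you’d rather log these numbers from a script than watch the graph, here is a rough equivalent using the third-party psutil package (an assumption on my part; it isn’t part of macOS or Final Cut). It samples the same totals Activity Monitor reports on its Network tab.

```python
# Rough command-line equivalent of watching Activity Monitor's Network tab:
# sample total bytes received/sent (across all interfaces) once per second
# while a project plays. Requires the third-party psutil package
# (pip install psutil).

import time
import psutil

prev = psutil.net_io_counters()
while True:
    time.sleep(1)
    now = psutil.net_io_counters()
    recv_mb = (now.bytes_recv - prev.bytes_recv) / 1_000_000
    sent_mb = (now.bytes_sent - prev.bytes_sent) / 1_000_000
    print(f"received {recv_mb:6.1f} MB/sec   sent {sent_mb:6.1f} MB/sec")
    prev = now
```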
These stats are from my current network, as measured using AJA System Test Lite. Given my setup of multiple server ports and port bonding, I can fully “saturate,” or fill, a 1 Gigabit Ethernet connection. While the theoretical maximum bandwidth is 125 MB/second, we can only expect about 108 – 110 MB/second in real life, due to overhead in the Ethernet protocol.
As you can see from the screen shot above, my network, switch and server support both reads and writes close to that practical maximum of 110 MB/second.
So, what video formats will this bandwidth support? A lot, actually, as you can see from this table from Blackmagic Disk Speed Test. A properly configured 1-Gigabit Ethernet network can support virtually all camera native formats, frame sizes and frame rates – including all ProRes variations – up to 2K frame sizes.
NOTE: The “How Fast?” column describes the maximum frame rate supported for different frame sizes and codecs at this network bandwidth.
For frame sizes larger than HD, you will need to configure your computers and network to support 10-Gigabit Ethernet. While there are excellent 10-gig converters that connect to the Thunderbolt port on both current and older MacBook Pros and iMacs, you’ll also need to change your network cabling, switch and connections on the back of the server to support this faster protocol.
Still, though more expensive, 10-Gigabit Ethernet provides 10 times the bandwidth of 1-Gigabit Ethernet. This allows you to support more editors from a single server or work with more complex video formats.
SHARING IN FINAL CUT PRO X
My workgroups tend to be small – two to three editors with a fourth computer system reserved solely for video compression. Given that, let me set expectations.
The current version of Final Cut Pro X (10.4.2) does not allow two editors to work in the same library or project at the same time. However, FCP X DOES allow multiple editors to share the same media at the same time, up to the bandwidth limit of your storage system and network.
Final Cut supports editing libraries directly from a server IF the server supports the SMB3 protocol and is configured as an Xsan. This is not an easy hurdle to clear. I have been able to configure my Synology to support SMB3, mostly, but not Xsan. So, I can’t edit libraries directly on the server.
NOTE: There are custom servers that support this capability; for example, servers from LumaForge and 1Beyond, as well as others.
Media, on the other hand, can be shared between editors from any storage system that can be mounted to the Mac desktop. This is VERY easy to achieve with virtually all servers.
For example, here in the Media Import window, you see four devices:
Selecting a server and importing media is as easy as working with a local hard disk.
HOW THIS WORKS IN PRACTICE
Here are some more things you need to know about Final Cut:
NOTE: On a 1-Gigabit network, copying a 10 GB file takes less than two minutes (10,000 MB at roughly 110 MB/second is about 90 seconds).
So, here’s my workflow:
This allows me to keep the library small, while maximizing my use of the server.
By default, FCP X stores all generated media in the library. You can change this by selecting the library and then, in the Inspector, clicking Modify Settings for Storage Locations.
On the server, create a folder that you want to use to hold all generated media; this means optimized and proxy files. Then, change the Media setting from In Library to the folder you just created. (You can name the folder anything that makes sense to you and your project.) As long as you import media into Final Cut with “Leave files in place” checked, the only thing this folder will store is media generated by Final Cut.
Because these generated files are referenced in the library and stored on the server, all other editors can use these same files, without having to re-create them or copy them to their local storage.
NOTE: If you want render files stored on the server, as well, set the Cache to the same server folder. (Not to worry, FCP X will keep all these different formats safely separate.) If you have existing render files, FCP X will move them to the new location. This option is a good idea if you have enough bandwidth, as it saves other editors from having to re-render the same footage.
I’ve found this provides excellent performance, while maximizing what both Final Cut and the server do best.
SUMMARY
Setting up a server for the first time is VERY intimidating. I know; it took me a long while to figure this out. But the benefits of sharing media between multiple editors make the work worthwhile. And once a server is set up and mounted to the desktop, using it is as easy as using any “normal” hard disk. Even better, once you understand how this system works, creating a new library takes just a few seconds.
As with all things in tech, experiment with this new workflow to see how it works before jumping into a deadline-driven paying project. And let me know what you discover or if I left anything out. There’s still a lot here for all of us to learn.
LARRY’S SERVER SYSTEM
Server: Synology 1517+ (32 TB)
Drives: Western Digital 8 TB RED (a set of 5)
Switch: Cisco SG200-18 18-port Gigabit Smart Switch
Cabling: Cat-5e
14 Responses to Server-Based Video Editing with Final Cut Pro X
Glad to have read further that you discovered the WD Red Enterprise drives, which I use on my server as well as my desktop NAS. I edit to the NAS and back up to the server, which is an IBM 3560.
I work as a network engineer so it was easy for me to setup my racks in the garage and run cat5e.
Which file system are you using on your Synology NAS? If I copy my media onto the Synology server, it’s telling me to use a SAN or SMB file system.
Synology 918+, Hybrid RAID, EXT4
Markus:
My main Synology system is configured as a single ext4 storage pool. Computers running Mojave access the system via SMB, and those running High Sierra via AFP. I have not been able to configure this as an Xsan, which is necessary to store FCP X libraries on it, but I can use the system for storing media for both FCP X and Premiere.
One of these days, I’ll spend more time learning about Synology to see if I need to optimize more settings.
Larry
Have a look at this video if you want to access libraries located on a NAS:
https://www.youtube.com/watch?v=pb6F5lqcg5I
Salva.
Very cool. But not for the faint of heart.
My standard recommendation is keep FCPX libraries stored locally, and store media on the server. You’ll get better performance.
UNLESS you have a 10Gb Ethernet connection, at which point editing off the server makes sense.
Larry
Hey Larry – thanks for such incredible detail in all of your posts/videos.
Have you considered trying a QNAP system? I’ve been using the QNAP TVS-871T for 3-4 years now and have been able to keep/edit with my FCPX project files on the server using their NFS (for FCPX) connection – plus you can take advantage of the Thunderbolt speeds, using it as a Thunderbolt Ethernet connection. I tend to get 600-800 mbps r/w with the 8-bay in RAID 5.
It’s been pretty amazing.
JJ:
Thanks for the tip.
I bought into Synology a few years ago, and was not aware of these latest features from QNAP. Both companies make excellent products – but these features of QNAP are definitely worth considering.
Larry
I know I am commenting under an aged article. Thank you for it; it is packed with valuable information.
There is however one thing that needs to be corrected – NEVER use RAID as “data protection.” Especially not RAID 5. The problem with RAID 5 is the big disk sizes we can buy right now. It is almost a certainty that if you experience one disk failure, the whole array will fail because of an unrecoverable read error. RAID 6 nowadays seems to have a similar issue (URE).
What is a URE and how does it happen? Imagine you have a RAID 5 array consisting of 3 16TB drives. One of them fails, so you replace it and now the NAS has to recalculate the parity (which is the method of “protection” used with RAID 5, 6 and 7). It needs to read all the data blocks on all disks left from the original disk array and write what is missing (be it data or parity information). The disks have a parameter named “Non-recoverable read errors per bit read.” Typically it is <1 in 10^14, or <1 in 10^16 in higher quality disks. There is a calculator here, where you can calculate your odds of a successful rebuild of the array.
You are recommending against RAID 1 (mirror), which doesn’t have this issue. I looked up the WD RED 8TB disk’s datasheet and put it in there. In case you are still running the RAID 5, your probability of successfully completing the rebuild of your array is 7.7%.
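A rough sketch of the arithmetic behind that figure, assuming every bit read during the rebuild is an independent chance of hitting a URE at the drive’s rated error rate (a simplifying model, not a description of real-world drive behavior):

```python
# Probability of reading every bit without an unrecoverable read error (URE)
# during a RAID 5 rebuild, assuming independent errors at the drive's rated
# URE rate. This is a simplified model of the risk, not a guarantee.

URE_RATE = 1e-14  # WD RED datasheet: <1 error per 10^14 bits read

def rebuild_success_probability(drive_count: int, drive_tb: float) -> float:
    surviving_drives = drive_count - 1                  # drives read in full during rebuild
    bits_to_read = surviving_drives * drive_tb * 1e12 * 8
    return (1 - URE_RATE) ** bits_to_read

print(rebuild_success_probability(5, 8))  # ~0.077, the 7.7% quoted above
print(rebuild_success_probability(4, 8))  # ~0.147, the 14.7% that comes up later in this thread
```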
That being said, it may still be ok, if you have a proper backup plan (ideally following the 3-2-1 rule). But in any case, you should be aware that if one drive fails for you, you will end up with a new storage array build and you will need to restore your data from the backup.
The proper storage setup really depends on the use case; a home user will have different needs than a production studio, obviously (although I think no one creating home videos of their children would say it is OK to lose the data).
If you need to store bigger volumes of data, the obvious option is to use RAID 6 (when using at least 6 disks). This should be combined with a filesystem that checks data validity and performs something called a “data scrub,” which is, in general, a way of finding out whether the data on the disks is still valid and readable. Off the top of my head – something like ZFS/BTRFS, now mostly represented by QNAP with their QTS hero, and Synology.
For smaller usage I would advise to actually avoid RAID 6 and just use RAID 1 (a mirror of 2 disks) or a triple mirror. Both give you a nice read boost, and the data is safe from one drive failure, as there is no fancy parity to calculate, which takes ages.
No matter what solution it is – the drives fail, the enclosures fail (especially with these cheap SOHO units loaded with cheap SOHO drives), so BACKUP, BACKUP, BACKUP. Ideally 3-2-1: three copies of your data, two local (on-site) but on different media (read: devices), and at least one copy off-site.
Taildrop:
Thanks for your comment. I’m not sure I agree with you. But you are absolutely correct on the value of backups. I will investigate the parity issue you write about.
Larry
You will find what you need if you google around for RAID 5 URE rebuild issues. I mean, RAID 6 should generally be more OK for HDD, but one should always look at the drive array as something that will fail eventually and think about what he’s going to do when it happens. It is a risk calculation in the end, and the asset that has value is the data.
What I don’t like about these parity RAIDs is the rebuild time and the stress it puts on the drives and CPU. It may be OK for home users to have a useless NAS for a week or two, but is it OK for a small studio? Not sure. I mean, everyone’s use case is different -> “it depends!” 🙂
I just found your page looking for the setup on the FCPX side of things, and I like the content, so I will browse around for a nice while. I am not a professional editor; it is just my hobby, but I am an IT architect, so I thought it would be good to share some knowledge.
Vladislav
I forgot to add the link to the calculator. It is here:
https://www.raid-failure.com/raid5-failure.aspx
Taildrop:
Just a note. This calculator specifies the risk of another drive in a RAID dying before a rebuild is complete, which is not the same as calculating URE. For example, a 4-drive RAID containing 8 TB drives has a 99.4% chance of completely rebuilding before a drive fails. This, to me, is a totally acceptable risk.
Also, RAIDs are still usable while a rebuild is reintegrating a drive. The speed of the RAID is slowed, but you can still access your data.
Larry
Larry, I am not sure how you got to that number.
If I put in 4 drives, 8000GB drive size and the REDs’ <1 in 10^14 error rate, I get a 14.7% probability of success.
Taildrop:
Ah… my error. I misread the table and entered “8” when I should have entered “8000.”
Assuming this formula is correct – which I don’t, yet – that indicates a 14% chance of a successful rebuild. I have already contacted engineers who know far more about this than I do to learn more. I appreciate you bringing this to our attention.
Larry