[ This article was first published in the May, 2010, issue of Larry’s Monthly Final Cut Studio Newsletter. ]
NOTE: Here’s an earlier article I wrote that also discusses 64-bit memory addressing.
With the release of Adobe’s CS5 suite of products, there followed a flood of upgrades from a variety of other vendors, all of whom were touting their new support for 64-bit RAM memory addressing.
NOTE: 64-bit memory addressing affects how you use the RAM you have installed on your computer. Memory refers to RAM. Storage refers to hard disks. Memory disappears when you turn the power off; storage does not.
While Final Cut Studio does not yet support 64-bit RAM addressing, I thought a basic primer on why 64-bit is such a big deal might be helpful. My understanding has come from reading Apple’s website and many discussions on The Buzz and in person with a variety of software engineers.
To get started, we need to understand that the operating system determines how much RAM an application can access. Until recently, that limit was 4 GB. Engineers used the term “32-bit memory addressing”: “32-bit” is shorthand for 2 to the 32nd power.
A 32-bit limit means that any Macintosh application that does not support 64-bit RAM addressing (which is everything except very recent applications) is limited to 4,294,967,296 bytes of RAM. (This is where the 4 GB term came from.)
You can take advantage of installing more RAM in your system when you run multiple applications, because each application can live in its own 4 GB section of RAM; but no single application can use more than 4 GB. This means that applications that need lots of RAM – think Photoshop or video processing – have to spend a lot of time swapping temporary files between RAM and the hard disk. Swapping files works fine (OS X is especially good at it), but it takes time and decreases performance.
However, when we move to 64-bit RAM addressing, these limits are VASTLY increased.
The new limit means that a single application can theoretically access 18,446,744,073,709,551,616 bytes of RAM! (That’s 2 to the 64th power – Excel rounds this number off, so don’t trust a spreadsheet here.) That translates to roughly 18 EXAbytes of data! Huge!! Huge BEYOND HUGE!!! So vast that RAM access is essentially unlimited for the foreseeable future.
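For the curious, both limits fall straight out of integer arithmetic. Here’s a quick sketch in Python (Python is just an illustration here, not something Final Cut uses):

```python
# Address-space limits as exact powers of two.
bits32 = 2 ** 32   # 4,294,967,296 bytes = the familiar 4 GB ceiling
bits64 = 2 ** 64   # 18,446,744,073,709,551,616 bytes = roughly 18 exabytes

print(f"32-bit limit: {bits32:,} bytes")
print(f"64-bit limit: {bits64:,} bytes")
```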
NOTE 1: Storage goes: Bytes, Kilobytes, Megabytes, Gigabytes, Terabytes, Petabytes, Exabytes and more beyond that…
NOTE 2: Having ACCESS to all this memory is only part of the solution. The next step is that you need to have more RAM installed on your computer system. Currently, the maximum amount of RAM that a Mac Pro can hold is 32 GB. So, even though we can address 18 Exabytes of RAM, we can’t yet install anywhere close to that much in our systems!
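The unit ladder in NOTE 1 can be turned into a small helper. This is an illustrative Python sketch (the function name and the decimal-units assumption are mine, not from the article):

```python
# Walk the byte-unit ladder: bytes, KB, MB, GB, TB, PB, EB.
UNITS = ["bytes", "KB", "MB", "GB", "TB", "PB", "EB"]

def human_size(n_bytes, base=1000):
    """Express a byte count in the largest sensible decimal unit."""
    n = float(n_bytes)
    for unit in UNITS:
        if n < base or unit == UNITS[-1]:
            return f"{n:.1f} {unit}"
        n /= base

print(human_size(2 ** 32))  # the 32-bit limit: "4.3 GB"
print(human_size(2 ** 64))  # the 64-bit limit: "18.4 EB"
```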
With this expansion in addressable memory come two other benefits. First, by increasing the amount of the application that is stored in memory, the operating system needs to do fewer disk swaps, loading in different parts of the program depending upon what you are doing. Second, many plug-ins that emphasize video processing are also adding support for rendering on the Graphics Processing Unit (GPU), rather than the CPU. Together, these two changes have the potential to make applications run significantly faster.
Additionally, according to Apple, 64-bit addressing allows CPUs to crunch twice the data per clock cycle, which means numeric calculations speed up considerably.
So, 64-bit RAM addressing is a good thing. It’s here now with Adobe CS5 and Avid Media Composer 5, and I feel pretty confident it will be coming in the next release of Final Cut Studio – whenever Apple decides the new version is ready to release.
8 Responses to “What Does ‘64-bit’ Really Mean?”
Mr. Jordan, do you know a simple way to discover whether my MacBook Pro is a 32- or 64-bit version? Where can I see this?
This isn’t as much of a hardware issue as it is an operating system issue. If you are running OS X 10.6.x, you are running in 64-bit mode.
Larry
You are right. I made an upgrade (hardware and software). My old 2006 MacBook Pro now runs 10.6.8 with FCP 7 and 4 GB RAM. That is as high as it will go – nothing more can be done with this notebook to improve results (so my support said). So now I think I’ll work one more year, and after that all this stuff will be recycling garbage (sniff).
I’m still grappling with the 64-bit thing. Intellectually, I understand that widening the data path from 32 to 64 bits can boost speed because you are executing more (double) instructions in the same time, if the hardware is built to handle it.
So, I’m running a Mac Pro 4,1 (early 2009 Nehalem) 8-core with 14 GB of RAM. I am also running Snow Leopard 10.6.8. When I look in “About This Mac” under Extensions, the last column shown is “64-bit,” meaning capable. The only place it says “no” is next to BSD kernel 6.0, and yet it says “yes” in the line above it that says BSD Kernel.
From your answer to Walter (above) that would mean I’m running in 64 bit mode? I also recall that I can start the machine in either 64 bit or 32 bit.
Sorry for my confusion, but does this mean I still can’t access more than 4 GB of memory when I’m running FCP 7?
@Ken Ackerman: I’m sorry, but it does. FCP 7 was just not designed with 64 bits in mind (it’s a 32-bit-only app). “64-bit compatible” means only that something can understand values larger than 32 bits. In order to take advantage of 64 bits, three requirements MUST be met:
a) the HARDWARE must support 64-bit
b) the OS must support 64-bit, and
c) the PROGRAM must support 64-bit.
As long as any one of these is not met, 64-bit addressing is NOT available – either for that program (c) or at all (a or b).
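For readers who want to poke at requirements (a) and (b) on their own machine, here is an illustrative Python check. It inspects the Python process itself – whether a specific app like FCP 7 is 64-bit (requirement c) depends on how that application was built:

```python
# Check two of the three layers from inside the current process.
import platform
import sys

hardware_arch = platform.machine()        # e.g. "x86_64" or "arm64"
process_is_64bit = sys.maxsize > 2 ** 32  # True when this interpreter is a 64-bit build

print("CPU architecture reported:", hardware_arch)
print("This process is 64-bit:", process_is_64bit)
```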
@ Larry:
“This isn’t as much of a hardware issue as it is an operating system issue. If you are running OS X 10.6.x, you are running in 64-bit mode.”
It is very much a hardware issue, just not on the Mac. In the PC world, it is VERY much a hardware issue.
Also, I know that on Windows/Unix systems, RAM is divided into kernel space and user space (in 32-bit, both of these are 2 GB each). On top of that, all hardware memory (like GPU memory) ALSO factors into this limit, because hey – it HAS to be addressed SOMEWHERE on the system to be useful (the 32 bits is an “address limit,” not just a RAM one). Ditto for the drivers. I’m not sure how it is on the Mac, but on 32-bit Windows/Linux a program is typically limited to 2 GB of RAM (or 3 GB with a special boot-time switch).
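A minimal sketch of that 32-bit address-space split, in plain Python arithmetic (the default 2 GB/2 GB split and the 3 GB boot switch are as described above; exact figures vary by OS configuration):

```python
# The 32-bit address-space split - these are address figures, not installed RAM.
total_32bit = 2 ** 32                      # 4 GB of addresses in total
kernel_space = 2 ** 31                     # 2 GB reserved for the kernel by default
user_space = total_32bit - kernel_space    # 2 GB left for each process

# With the "3 GB" boot-time switch, the split shifts:
user_space_3gb = 3 * 2 ** 30               # 3 GB for the process
kernel_space_1gb = total_32bit - user_space_3gb  # 1 GB left for the kernel

print(user_space // 2 ** 30, "GB user space by default")
print(user_space_3gb // 2 ** 30, "GB with the boot switch")
```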
Egon,
Thanks for confirming that. I think that leaves me with either going with a 64 bit NLE or upgrading to an SSD to increase performance.
Ken
@Ken:
There’s really only so much one can do to increase performance before jumping to 64-bit. An SSD drive may well help, but I’d really suggest looking into 64-bit NLE options. I know first-hand that Final Cut Pro X doesn’t yet support many, many things we’re used to, but it’s getting there and could become a viable option. And one thing I can tell you – it’s blazing fast, even with all its flaws (think 20 minutes vs. 6 hours of render time).
And yes, the main advantage of going 64-bit is that it can handle more RAM, but there are other aspects to consider. For one, 16-bit has been dead and buried for years now – the same will happen to 32-bit at some point; it has to. With the way GPUs are increasing in VRAM size, for example, there’s really no getting out of this – staying 32-bit would mean falling behind the competition. Like I said, FCPX isn’t up to snuff yet, but for $300 you can probably only gain (there’s no reason not to use both FCPX and FCP 7 on the same machine – just not at the same time).
Also, really keep this in mind: 32-bit applications just plain CAN’T take advantage of 64 bits, and there’s no setting for that – it’s just not possible. And even in 64 bits, with loads of RAM and the like, most programs will run out of free RAM eventually (FCPX is esp. good at this, so that hasn’t changed).
As for SSD drives… Don’t have first-hand experience, but I’d guess that disk speed isn’t a major bottleneck as much as it used to be. Most of us already have RAID arrays (even small, cheap ones, like the SmartStor DS4600) which deliver nice performance. And what most SSD drives gain in read speed – they lose in write speed, so I’m not sure an expense like that is justified, at least until the technology matures some more.
There’s a Final Cut Pro X trial available from Apple. The download should be around 1.5 GiB, or thereabouts. Give it a spin, see how it behaves and what it can and can’t do. Maybe it DOES have what you need?
One last thing – it’s not “executing double the instructions,” it’s doing calculations on larger data chunks at a time. And the benefit of that is extremely workload-dependent by its very nature. For one, I don’t see the apparent benefit when most of our data is at most 10 or 12 bits in size. But maybe we should look at it this way: it can’t calculate much faster, but it certainly can move more data around. And then, most 64-bit CPUs have integrated memory controllers nowadays (less per-cycle lag, higher possible clock speeds), so there’s that…
Also, @Larry:
You wrote that “according to Apple, 64-bit addressing allows CPUs to crunch twice the data per clock cycle, which means numeric calculations speed up considerably.”
Like I said in my earlier comment – I think that claim is very iffy. For starters, most of the programs I know are totally comfortable with 32-bit sizes and less (like 16-bit). I mean data sizes, of course – variables and the like. Even if we assign 8 bits to each of the 4 channels (R, G, B and Alpha), we still end up with only 32 bits. With 12 bits per channel of RGB, that’s still just 36 bits, and it can easily be handled in three 16-bit numbers (and probably is, even on 64-bit – using 64-bit variables on that data would be wasteful). An interesting fact to support my theory: Compressor is still 32-bit, even though the comment makes it seem that it could benefit the most from this transition… So there’s much more to 64-bit than moving things around in larger chunks, wherever possible.
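To put numbers on the channel-packing point, here’s a small Python sketch showing four 8-bit channels fitting comfortably in a single 32-bit word (the function names are mine; real applications do this in optimized native code):

```python
# Pack four 8-bit channels (R, G, B, Alpha) into one 32-bit word.
def pack_rgba(r, g, b, a):
    """Combine four 0-255 channel values into a single 32-bit integer."""
    return (r << 24) | (g << 16) | (b << 8) | a

def unpack_rgba(word):
    """Split a 32-bit word back into its four 8-bit channels."""
    return (word >> 24) & 0xFF, (word >> 16) & 0xFF, (word >> 8) & 0xFF, word & 0xFF

word = pack_rgba(255, 128, 64, 255)
assert word < 2 ** 32                            # no 64-bit arithmetic required
assert unpack_rgba(word) == (255, 128, 64, 255)  # round-trips losslessly
```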
One thing that DOES benefit outright from 64-bit processing is floating-point operations. These can now be much more precise, and a tad faster (the question is – do these get truncated back to 32-bit for compatibility reasons?). Cryptography is another thing that comes to mind, but that’s perfectly fine with 32 bits of bus width OR LESS – we’ve had strong 256-bit crypto on 32-bit machines; people solved the bit-shuffling problem on their own.
It’s worth noting that a few technological advances arrived alongside 64-bit and multi-core CPUs. One of them is hardware virtualisation – the ability to run different pieces of code side by side efficiently (for example, a secondary operating system, without the overhead of old-style software emulation).
Let’s be quite honest – a lot of our work still gets done 8 or 16 bits at a time (consider text, communications and the like). 64-bit processing is a great thing, but let’s not get misled by J. Jones the Marketing Guy. It’s not the color TV of computing.
I think the most under-hyped event of the past decade is the arrival of DDR memory, which is twice as fast as SDR because it can send and receive data on BOTH the rising AND falling clock edge (not to mention simultaneous reads/writes to two memory modules, possible with Dual-Channel). It effectively doubled the speed of memory operations, and nobody ever mentions it…
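The double-data-rate arithmetic is easy to check. This sketch uses DDR-400 (PC-3200) as a worked example:

```python
# Back-of-envelope DDR bandwidth, illustrating the "both clock edges" point.
# Example figures for DDR-400 (PC-3200); 8 bytes = a 64-bit module width.
bus_clock_mhz = 200        # I/O bus clock
transfers_per_clock = 2    # DDR: one transfer on each clock edge
bus_width_bytes = 8        # 64-bit memory module

bandwidth_mb_s = bus_clock_mhz * transfers_per_clock * bus_width_bytes
print(bandwidth_mb_s, "MB/s")  # 3200 MB/s - hence the "PC-3200" name
```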