David Hallberg

Fragmentation’s Biggest Rift? The Generation Gap

My last blog covered deleting old files from your Millennium® application nodes. This week I’ll discuss the second step in keeping Millennium’s speed consistent over time when you’re tuning filesystems and directories: defragmentation. Just as users disagree over the need to purge unneeded files, they also disagree over the need to defragment them.

I’ve had some interesting conversations about this issue over the past year. People weaned on mainframes and older operating systems universally say, “Defragment!” Those who grew up on Linux, HP-UX or AIX say, virtually to a person, “Defragmentation is only needed on older, slower operating systems.” With the latter group, I have to remember to stay calm and remind myself that their ideas probably come from the websites and authors who claim that, except on Windows, fragmentation is a thing of the past. But I also remember the documentation from the vendors who wrote the operating systems and their supporting filesystems. Plus I remember my work in Cerner’s Advanced Technology Group and what we benchmarked on AIX, VMS and Windows. Only then do I begin the conversation.

Filesystem and directory fragmentation occurs on all of the operating systems supporting Millennium®: VMS, AIX, HP-UX, Linux and Windows. There are generally two types of fragmentation: file fragmentation and space fragmentation.

In the first type, a file, say dic.dat, becomes too large to fit into a single disk block, so it has to be split across two or more blocks. If dic.dat is then modified, it may need a third or fourth block. By then, code upgrades or custom CCL changes will likely have added more files to the filesystem where dic.dat lives, and they will have been saved to the blocks right next to dic.dat’s first two. That means the third and fourth blocks for dic.dat cannot be written right next to (contiguous with) the first two. Now the dic.dat file is fragmented. With the file split into two chunks, the operating system has to perform two reads to read dic.dat in full. When the disk drive or RAID set has to do more reads (disk I/Os), performance suffers.
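
If you want to see this splitting for yourself, Linux gives you an easy way to do it (filefrag ships with e2fsprogs; the path to dic.dat below is purely illustrative, not where your copy lives):

    # List the extents (fragments) a file occupies and where they sit on
    # disk. A healthy, contiguous file reports "1 extent found."
    filefrag -v /cerner/mgr/dic.dat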

Space fragmentation occurs when you delete a file, typically a temporary one. Once it’s deleted, the space it occupied leaves a gap on the filesystem. As many files are deleted, the gaps multiply and shrink until it becomes impossible to write a new file contiguously.
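
You can measure how chopped up your free space is before deciding whether to act. As a quick sketch on Linux (the device name is an example; point it at whatever device backs the filesystem you care about):

    # Summarize the sizes of the free-space extents on an ext2/3/4
    # device. A histogram dominated by tiny extents means the free
    # space is badly fragmented and new files can't be written
    # contiguously.
    e2freefrag /dev/sda2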

The filesystems/directories I discussed in my last post ($CCLSCRATCH, $CCLUSERDIR, $cer_temp, $cer_fifo, $cer_usock, $cer_log, /var, /tmp) are all designed to hold temporary files, such as print files or logging files, that will be deleted as soon as they are no longer needed, e.g., daily or possibly hourly.

The downstream impact of this design is that all current operating systems need both types of fragmentation corrected. Vendors provide tools for this defragmentation: defragfs for AIX; fsadm for HP-UX; and fsadm again for CAMM (Millennium’s PACS) and the VxFS filesystem it uses. VMS has the DFU tool on the freeware CDs, which has to be installed before it can be used. Also for VMS, HP offers the DFO tool, and Diskeeper offers Diskeeper for VMS, which has been around since 1986. You’ll have to pay for these latter two tools, but it might be worth investing in one of them if you’re trying to keep your VMS system. For Windows, you can use the built-in tool called Disk Defragmenter; a third-party tool such as Diskeeper 2010 or PerfectDisk 11, in their various versions; or one of the plethora of freeware/shareware tools, my favorite being Defraggler from Piriform. I encourage you to spend an hour or so looking through the numerous white papers, blogs and forums discussing these options.
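
None of these command-line tools is hard to drive. Here’s a minimal sketch of what a run looks like on each platform; the mount point and drive letter are placeholders, so check each vendor’s manual page before touching a production node:

    # AIX (JFS/JFS2): report the fragmentation state, then consolidate
    defragfs -q /cer_temp
    defragfs /cer_temp

    # HP-UX (VxFS): report extent and directory fragmentation (-E, -D),
    # then reorganize extents and directories (-e, -d)
    fsadm -F vxfs -EeDd /cer_temp

    # Windows (Vista/2008 and later): analyze the volume, then defragment
    defrag C: /A
    defrag C: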

My first team in Cerner’s Advanced Technology Group was tasked with completing client conversions successfully no matter what technical hurdles we faced, and one of our biggest obstacles was severe fragmentation of both files and free space. The fragmentation at several sites was bad enough that we added a drastic step to our normal recommendations: in addition to deleting lots of files daily and defragmenting the filesystem at least once per week, the clients had to set up a schedule to point each filesystem, say $cer_print, to an entirely new filesystem every three months. Miraculously, their systems became much more stable.
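
For clients who asked what the daily and weekly pieces looked like in practice, two root crontab entries on the UNIX nodes covered it. This is only a sketch; the path, the age threshold and the defragfs location are assumptions you’d adapt to your own site:

    # Every night at 02:00, delete temporary files more than a day old
    0 2 * * * find /cer_temp -type f -mtime +1 -exec rm -f {} \;
    # Every Sunday at 03:00, defragment the filesystem (AIX example)
    0 3 * * 0 /usr/sbin/defragfs /cer_temp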

Why did it help to move the filesystem? Each filesystem keeps an index that tracks where each file starts, the size of the file, where it ends, and so on. As thousands of files were created and deleted every day, that index grew and fragmented, and the operating system slowed down keeping track of it all. Periodically refreshing the inode table (on UNIX) or INDEXF.SYS (on VMS) brought performance back to a better level. With today’s improved drive technologies, this type of severe fragmentation is less likely, but it still needs to be reviewed. Some of you might find it necessary to recreate your filesystems on another set of disks and benefit from having contiguous files and free space.
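
If you decide to try the fresh-filesystem approach, the outline below shows the general shape of the swap on a Linux node. Every name here (device, mount points, filesystem type) is a placeholder, and you’d do this in a maintenance window with Millennium processes stopped and /etc/fstab updated afterward:

    # Build a brand-new, empty filesystem on a spare device
    mkfs -t ext3 /dev/sdc1
    # Mount it at a temporary point and copy over anything still needed
    mount /dev/sdc1 /cer_print_new
    cp -Rp /cer_print/. /cer_print_new/
    # Swap the mounts so the new device now serves $cer_print
    umount /cer_print_new
    umount /cer_print
    mount /dev/sdc1 /cer_print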

Prognosis: Using defragmentation tools on a regular basis to reduce filesystem fragmentation on all operating systems will increase Millennium’s speed, consistency and reliability.