The best way to deal with problems is to stop them from happening in the first place. That’s where preventative maintenance comes in.
A good preventative maintenance program incorporates a comprehensive backup plan, measures to secure the system against malicious exploits, periodic hardware and software maintenance, and steps to maintain general system tidiness. The goals of preventative maintenance are to reduce the likelihood of hardware failures, extend the useful life of the system, minimize system crashes caused by outdated drivers and other software problems, secure the system against viruses and other malware, and prevent data loss.
The following sections outline a basic preventative maintenance program that you can use as a basis for developing a program that fits your own and your system’s needs.
Backing up the system
Maintaining a good set of backups is a critical part of preventative maintenance.
The availability of inexpensive hard drives and motherboards that support RAID 1 mirroring has led many people to depend solely on RAID 1 to protect their data. That’s a very bad idea. RAID 1 protects only against the failure of a hard drive, which is partial protection at best. RAID 1 does nothing to protect against:
- Data being corrupted by viruses or hardware problems
- Accidentally deleting, overwriting, or modifying important files
- Catastrophic data loss, such as fire or theft of your equipment
To protect against those and other threats, the only reliable solution is to make backup copies of your data periodically to some form of removable media, such as tapes, optical discs, or removable hard drives.
In the past, there weren’t any really good hardware choices for backing up home and SOHO systems. Tape drives were expensive, complex to install and configure, used fragile and expensive media, and were painfully slow. CD writers, although reasonably fast and inexpensive, stored such a small amount of data that many people who used them for backing up were reminded of the Bad Olde Days of swapping floppy disks. External hard drives were expensive and of dubious reliability.
Things have changed. Consumer-grade tape drives are still expensive and slow, although it’s easier to install a modern ATAPI tape drive than it was in the days when tape drives used SCSI or proprietary interfaces. CD writers are still reasonably fast and inexpensive, and are a good solution if your data fits on one or two CDs. The most significant change in consumer-grade backup hardware has been the introduction of inexpensive DVD writers and external or removable hard drives. Table 3-1 lists the important characteristics of the types of backup hardware used for home and SOHO backups.
Table 3-1: Important characteristics of backup hardware
In addition to cost considerations, you face two issues in choosing backup hardware: capacity and speed. Ideally, the hardware you choose should be capacious enough to store the entire contents of your hard drive or at least all of your user data on one disc or tape. Just as important, the backup hardware should be fast enough to complete a full backup and verify in whatever time you have available for backups. It’s easy to meet both those requirements if you have an unlimited budget, but most of us have to compromise one or the other to avoid breaking the bank.
For most home and SOHO users, a DVD writer is the best compromise. For $100 or less (possibly much less), you can buy an internal DVD writer and a supply of discs sufficient to implement a comprehensive backup plan. If you have multiple non-networked systems or notebooks to back up, you can use an external USB/FireWire DVD writer to back them all up individually.
The capacity of a writable DVD (4.4 GB for a single-layer disc, 8.5 GB for dual-layer) suffices for many systems (we’ll explain why shortly). Writing and verifying a full disc takes only a few minutes, which makes it practical to back up frequently, even several times during a work day. The only downside to writable DVD is that optical discs have much less robust error correction than tapes, which means there’s a small chance that a file won’t be recoverable from a backup DVD. That’s an easy problem to solve, though. Simply back up more frequently and keep your older backup discs. If you can’t recover the file from the current disc, you’ll be able to recover it from the one immediately preceding.
DISC VERSUS TAPE
We’re belt-and-suspenders types when it comes to protecting our data. Before affordable DVD writers became available, we backed up our own systems every day with Travan and DDS tape drives. And we admit that the less robust error correction of optical discs initially gave us pause. But we converted a couple years ago to using DVD+R and DVD+RW for backups, and we haven’t looked back. We use top-quality discs (Verbatim premium) and have never had a problem recovering a file. Tape still has its place in corporate data centers, but as far as we’re concerned, it’s obsolete for home and SOHO users.
If DVD isn’t capacious enough, consider using external or removable hard drives, which store from 80 GB to 500+ GB. In either case, think of the hard drive as the media rather than as a drive. In other words, an external or removable hard drive is really just a funny-looking tape or disc, which you treat just as you would any other removable backup medium. Just as you need several discs or tapes for a good backup rotation, you’ll also need several external or removable hard drives. In terms of reliability, hard drives are intermediate between tapes and optical discs. Hard drives have more robust error detection and correction than optical discs, but less robust than tape. Once again, this needn’t be of concern if you back up to multiple external/removable hard drives. If you can’t recover a file from one, you’ll be able to recover it from another.
ADVICE FROM RON MORSE ABOUT BACKUPS
Make sure your latest hardware and software upgrades don’t leave your archived data behind. At one time I did most of my backing up to an external CDC SCSI hard drive. At 80 MB, it wouldn’t hold the system or application files (I had the original installation media for that) but it was big enough to hold my personal data until things got to the point where it wasn’t. The drive got demoted to archive status and fell out of regular service. Didn’t think about it too much.
One day I built myself a new machine that didn’t have a SCSI adapter, because the new machine didn’t have any SCSI devices. The old machine got sold to some unsuspecting party. Then one day I needed to access the archive. I really needed to access the archive. Duh. Expensive lesson. This applies to software, too. If you have a lot of important data in a proprietary file format, possession of the files themselves is only half the challenge. You need to be able to read them, too. (Insert commercial for open file standards here.)
Organizing your data directory structure
If you back up to hard drives, you can back up your entire drive every time. If you use a DVD writer, you’ll probably do full backups infrequently, with routine backups only of your data files. In that case, it’s important to organize your data directories to make it as easy as possible to back up only your data while making sure that you back up all of your data. The trick here is to segregate your data into groups that can be backed up with different frequencies.
For example, our data, excluding audio and video files, totals about 30 GB. Obviously, it’s impractical to back up that much data to DVDs routinely. Fortunately, it’s not necessary to back it all up every time. Much of that data is historical stuff: books we wrote years ago (and that we may update sometime), old email, and so on. That all needs to be backed up, but it’s not necessary to back it up every day or even every month. So we segregate our data into subdirectories of three top-level directories:
Working data
This top-level directory contains our current working data: email, current book projects, recent digital camera images, and so on. This directory is backed up every day to DVD, and frequently throughout the day to mirror directories on other systems on our network. We never allow this directory to grow larger than will fit on one DVD.
Archive data
This top-level directory contains all of our old data: files that we may not need from one month to the next, or even from one year to the next. This directory is backed up to multiple redundant sets of DVDs, two of which are stored off-site. Each backup set currently requires six DVDs. Every time we add data to the archive directories, which doesn’t happen often, we burn several new sets of backup DVDs. (We keep the old discs, too, but then we’re packrats.)
Holding data
This top-level directory is intermediate between our working data directories and our archive directories. When the size of our working data directories approaches what will fit on one DVD, usually every two or three months, we sweep older files to the holding directory and burn new copies of the holding directory to DVD. By doing this, we can keep our working data directory at a manageable size, but not have to redo the archive directory backups very often. We also keep the size of this directory to what will fit on one DVD. When it approaches that size, we sweep everything in the holding directory to the archive directory and burn a new set of archive DVDs.
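If you like, you can script the periodic sweep from working to holding. Here’s a minimal Python sketch; the function name, the age cutoff, and the directory arguments are our own illustrative choices, not part of any particular backup tool:

```python
import shutil
import time
from pathlib import Path

def sweep_old_files(working, holding, max_age_days, now=None):
    """Move files older than max_age_days from working to holding,
    preserving the relative directory structure. Returns the relative
    paths of the files that were moved, sorted."""
    working, holding = Path(working), Path(holding)
    cutoff = (time.time() if now is None else now) - max_age_days * 86400
    moved = []
    # Snapshot the file list first so moves don't disturb the walk
    for path in list(working.rglob("*")):
        if path.is_file() and path.stat().st_mtime < cutoff:
            dest = holding / path.relative_to(working)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.move(str(path), str(dest))
            moved.append(str(path.relative_to(working)))
    return sorted(moved)
```

After a sweep, burn the holding directory to DVD as usual; the script only does the file shuffling.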
When you plan your data directory structure, it’s also important to consider these aspects:
- The importance of the data
- How difficult it would be to reconstruct the data
- How often the data changes
In combination, these three factors determine how often data needs to be backed up, how many generations of backup copies you’ll want to retain, and therefore where the data belongs in your directory structure. For example, your financial records and digital photographs are probably critically important to you, difficult or impossible to reconstruct if lost, and change frequently. Those files need to be backed up frequently, and you’ll probably want to maintain several generations of backup copies. Those files belong in your working data directories.
Conversely, if you’ve ripped your CD collection to MP3s, those files are neither important nor difficult to reconstruct because you can simply re-rip the CDs if necessary. Although these files might reasonably be classified as data, chances are you’ll categorize them as data that never needs to be backed up and therefore locate them somewhere in your directory structure outside the directories that are routinely backed up.
Developing a backup rotation scheme
Whatever backup hardware you use, it’s important to develop an appropriate backup rotation scheme. A good rotation scheme requires half a dozen or more discs, tapes, or drives, and allows you to:
- Recover a recent copy of any file easily and quickly
- Recover multiple generations of a file
- Maintain multiple copies of your data for redundancy and historical granularity
- Store at least one copy of your data off-site to protect against catastrophic data loss
The most popular backup rotation scheme, and the one most suitable for backups to DVD+RW discs, is called Grandfather-Father-Son (GFS). To use this backup rotation, label the following discs:
- Five (or six) daily discs, labeled Monday through Friday (or Saturday).
- Five weekly discs, labeled Week 1 through Week 5.
- Twelve monthly discs, labeled January through December.
Back up each working day to the appropriate daily disc. On Sunday, back up to whichever numbered weekly disc corresponds to the number of that Sunday in the month. The first (or last) of each month, back up to the monthly disc. This method gives you daily granularity for the preceding week, weekly granularity for the preceding month, and monthly granularity for the preceding year. For most home and SOHO users, that scheme is more than sufficient.
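The disc-selection rule above is easy to mechanize if you script your backups. A minimal Python sketch (the label strings and function name are ours, chosen to match the scheme described; we treat the 1st of the month as the monthly backup day):

```python
from datetime import date

def gfs_disc_label(d):
    """Return the GFS disc label to use for a backup on date d.

    Monthly discs are written on the 1st of the month, weekly discs
    on Sundays, and daily discs Monday through Saturday."""
    if d.day == 1:
        return d.strftime("%B")        # monthly disc: "January" .. "December"
    if d.weekday() == 6:               # Sunday
        week_number = (d.day - 1) // 7 + 1
        return "Week %d" % week_number # which Sunday of the month this is
    return d.strftime("%A")            # daily disc: "Monday" .. "Saturday"
```

For example, a backup on the second Sunday of the month lands on the "Week 2" disc, and a backup on an ordinary Tuesday lands on the "Tuesday" disc.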
You can, of course, modify the standard GFS rotation in whatever way is suitable to your needs. For example, rather than writing your weekly or monthly backups to a DVD+RW disc that will eventually be overwritten, you can write those backups to DVD+R (write-once) discs and archive them. Similarly, there’s nothing to prevent you from making a second backup disc every week or every month and archiving it off-site.
If you’re backing up to external or removable hard drives, you probably won’t want to use the standard GFS rotation, which would require 22 hard drives. Fortunately, you can use fewer drives without significantly compromising the reliability of your backup system. Most removable hard drives have room for at least two or three full backups, if you back up your entire hard drive, or a dozen or more data-only backups.
You still don’t want to keep all your eggs in one basket, but it’s reasonable to limit the number of baskets to as few as two or three. The trick is to make sure that you alternate the use of the drives so that you don’t end up with all of your recent backups on one drive and only older backups on another. For example, if you decide to use only two external or removable hard drives for backup, label one of them M-W-F and the other Tu-Th-S, and alternate your daily backups between the two drives. Similarly, label one of the drives 1-3-5 and the other 2-4 for your weekly backups, and one drive J-M-M-J-S-N and the other F-A-J-A-O-D for your monthly backups.
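The two-drive alternation above reduces to a simple rule: odd slots go to one drive, even slots to the other. A Python sketch of that rule, using the drive labels from the text (the function name is ours):

```python
from datetime import date

def backup_drive(d):
    """Pick one of two backup drives so that consecutive backups of
    each kind (daily, weekly, monthly) alternate between drives."""
    if d.day == 1:                     # monthly backup on the 1st
        # Odd months (Jan, Mar, ...) on one drive, even months on the other
        return "J-M-M-J-S-N" if d.month % 2 == 1 else "F-A-J-A-O-D"
    if d.weekday() == 6:               # Sunday: weekly backup
        week_number = (d.day - 1) // 7 + 1
        return "1-3-5" if week_number % 2 == 1 else "2-4"
    # Daily backup: Mon/Wed/Fri on one drive, Tue/Thu/Sat on the other
    return "M-W-F" if d.weekday() % 2 == 0 else "Tu-Th-S"
```

However you label the drives, the point is that no single drive ever holds all of your recent backups.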
Choosing backup software
There are four broad categories of software that can be used for backing up. Each has advantages and drawbacks, and which is best for you depends on your needs and preferences.
System utilities
System utilities such as xcopy are free, flexible, easy to use, can be scripted, and create backups that are directly readable without a restore operation. They do not, however, typically provide compression or any easy means of doing a binary compare on each file that has been copied, and they can write only to a mounted device that’s visible to the operating system as a drive. (In other words, you can’t use them to write to an optical disc unless you’re running packet-writing software that causes that disc to appear to the operating system as a drive.)
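Because they can be scripted, system utilities handle file selection criteria easily. As an illustration, here’s a Python sketch of an xcopy-style incremental copy, copying only files modified since a given timestamp (all names here are our own, not part of any standard tool):

```python
import shutil
import time
from pathlib import Path

def copy_changed_since(src, dst, since):
    """Copy files under src modified at or after timestamp `since` to dst,
    preserving the relative directory structure (roughly xcopy /s /d)."""
    src, dst = Path(src), Path(dst)
    copied = []
    for path in src.rglob("*"):
        if path.is_file() and path.stat().st_mtime >= since:
            dest = dst / path.relative_to(src)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(path, dest)   # copy2 preserves timestamps
            copied.append(str(path.relative_to(src)))
    return sorted(copied)
```

Run daily with `since` set to the previous run's time, this gives you a crude but directly readable incremental backup.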
CD/DVD burning applications
CD/DVD burning applications, such as Nero Burning ROM (http://www.nero.com) and K3b (http://www.k3b.org), are fast, can create directly readable backup copies, and generally offer robust binary verify features, but may not offer compression. Most also have little or no ability to filter by file selection criteria, such as “back up only files that have changed today.” Of course, CD/DVD burning applications have other uses, such as duplicating audio CDs and video DVDs, and chances are that you already have a burning application installed. If so, and if the burning application suits your requirements, you can use it rather than buying another application just for backing up.
Traditional backup applications
Traditional backup applications such as BackUp MyPC (http://www.stompsoft.com) do only one thing, but they do it very well. They are fast and flexible, have robust compression and verification options, support nearly any type of backup media, and allow you to define standard backup procedures using scripting, detailed file selection criteria, and saved backup sets. If your needs are simple, the bundled Windows backup applet, which is a stripped-down version of Veritas Backup Exec (since sold and renamed BackUp MyPC), may suffice. Otherwise, we think the commercial BackUp MyPC is the best option for Windows users.
Disk imaging applications
Disk imaging applications, such as Acronis True Image (http://www.acronis.com) produce a compressed image of your hard drive, which can be written to a hard drive, optical disc, or tape. Although they are less flexible than a traditional backup application, disk imaging applications have the inestimable advantage of providing disaster recovery features. For example, if your hard drive fails and you have a current disk image, you needn’t reinstall Windows and all your applications (including the backup application) and then restore your data. Instead, you simply boot the disaster recovery disc and let ‘er rip. Your system will be back to its original state in minutes rather than hours.
We use three of these four software types on our own network. Several times a day, we do what we call “xcopy backups” (even though we now run Linux instead of Windows) to make quick copies of our current working data to other systems on the network. We use a CD/DVD burning application, K3b for Linux in our case, to run our routine backups to DVDs. And, when we’re about to tear a system down to repair or upgrade it, we run an image backup with Acronis True Image, just in case the worst happens.
YOU CAN NEVER BE TOO WELL BACKED UP
Whatever backup means and methods you use, keep the following in mind and you won’t go far wrong:
- Back up frequently, particularly data that is important or hard to reconstruct
- Verify backups to ensure that they are readable and that you can recover the data from them
- Maintain multiple backup sets, for redundancy and to permit recovering older versions of files
- Consider using a data-rated firesafe or media safe for on-site storage
- Store a recent backup set off-site, and rotate it regularly
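Verifying a backup means more than checking that the files exist; you want to know the contents match. If your backup tool lacks a binary verify, a short script can fill the gap. A Python sketch (function names are ours) that compares a backup directory against its source by hash:

```python
import hashlib
from pathlib import Path

def file_digest(path):
    """Return the SHA-1 digest of a file, read in chunks."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backup(source_dir, backup_dir):
    """Return files that are missing from the backup or differ from
    the source, as sorted relative paths. An empty list means the
    backup verifies clean."""
    source_dir, backup_dir = Path(source_dir), Path(backup_dir)
    problems = []
    for path in source_dir.rglob("*"):
        if not path.is_file():
            continue
        copy = backup_dir / path.relative_to(source_dir)
        if not copy.is_file() or file_digest(copy) != file_digest(path):
            problems.append(str(path.relative_to(source_dir)))
    return sorted(problems)
```

Note this only proves the backup matched the source at verify time; it doesn't protect files that were already corrupted before the backup ran.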
Although online backup services (including using Google’s Gmail for ad hoc backup storage) are reasonable choices for supplemental backups, we suggest that you not use them as your primary form of backup. There are too many things that can go wrong, from your (or their) Internet connection being down to server problems at the hosting company, to the company going out of business with no notice. When you need your backups, you need them right now. Keep your primary backups within easy reach.
Software Security Isn’t
Although many people depend on software firewalls, such as ZoneAlarm (http://www.zonealarm.com) or Norton Internet Security (http://www.symantec.com), we think that’s a mistake. Among security experts, it’s a truism that software cannot protect the system that is running it. Any software firewall may be compromised by exploits that target it directly or the underlying operating system. In our opinion, a software firewall is better than nothing, but not much better.
Securing the system
The most important step that you can take to secure your system against worms and other malicious intruders is to install a hardware router/firewall between your system and the Internet. A properly configured router/firewall blocks malicious scans and probes, and makes your system effectively invisible to the millions of infected systems on the public Internet that are constantly trying to infect it. Hardware router/firewall devices typically sell for only $30 to $50, so they are cheap insurance against your system being compromised by malicious intruders.
We much prefer cable/DSL routers made by D-Link, such as the DI-604 (wired only) or the DI-624 (wired/wireless), but similar broadband routers made by NETGEAR and Linksys are also popular. All current models we are familiar with use default settings that provide adequate security, but it’s still worth taking a few minutes to study the manual to make sure that your router is configured to provide a level of security that is acceptable to you.
WEP Security Isn’t
If you install a wireless router and enable wireless networking, be sure to secure it properly. The standard used by early 802.11 wireless devices, called WEP (Wired Equivalent Privacy), is now hopelessly insecure. WEP can be cracked in literally minutes or even seconds using utilities that anyone can download. The newer WPA (Wi-Fi Protected Access) standard, when configured properly, is secure against all but the most sophisticated attacks. If your current wireless adapters and access points support only WEP, replace all of them immediately with devices that support WPA. Otherwise, you might as well run your wireless network with no security at all.
After you install and configure your firewall/router, visit the Gibson Research Corporation web site (http://www.grc.com) and use their Shields UP! service to test your security. Shields UP! probes your system and reports on the status of the ports most commonly attacked by worms and other malicious exploits. Figure 3-17 shows the results of running Shields UP! against one of our Windows XP testbed systems.
Figure 3-17: Gibson Research Shields UP! showing an (almost) fully stealthed system
Shields UP! flags open ports (very bad news) in red. Closed ports, those that do not accept connections but do acknowledge being present when probed, are flagged in blue. Stealthed ports, those that do not respond to probes at all, are flagged in green. Ideally, we’d like all our ports to be flagged green, because that in effect makes our system invisible to intruders. But for practical reasons, we’ve stealthed all ports except port 113 (ident), which responds to probes as being closed.
USE nmap FOR SERIOUS TESTING
For more rigorous testing, try using nmap (http://www.insecure.org/nmap/). Because you can run it within your network, you can use it to test individual systems, rather than just your network as a whole. It’s useful to test both your router and your computers so that you know what vulnerabilities exist. You might be surprised to learn that you somehow managed to enable a web server that you never use, or that you have an unpatched version of SQL Server (the vector for the well-publicized Slammer/Sapphire worm) running that got installed along with some other software package.
What Is Port 113?
Port 113 is used for ident requests, which allow remote servers to discover the user name associated with a given connection. The information discovered via ident is rarely useful, and not trustworthy. However, when a remote server tries to connect back to your computer to issue an ident request, a closed port tells it, “Sorry, I’m not running ident.” A stealthed port, on the other hand, may lead the remote server to conclude that your computer doesn’t exist. A remote server is more likely to permit your connection (such as FTP, HTTP, or TELNET) if it believes that you are really there.
Most hardware routers by default stealth all ports except 113, which they configure as closed. A few routers stealth port 113 as well. That’s usually a bad idea, because it can cause slow response or no response at all from some servers. If Shields UP! reports that port 113 is stealthed, we suggest using the router’s configuration utility to change port 113 to closed rather than stealthed.
Here are some other steps you should take to secure your Windows systems:
Replace Internet Explorer.
One of the most important steps you can take to secure a Windows system is to replace the buggy, insecure Internet Explorer with a different default browser. The most popular alternative browser is Firefox (http://www.mozilla.org). We suggest that you install Firefox immediately and begin using it as your default browser. Ignore the Microsoft-inspired FUD that argues that Internet Explorer is just as secure as Firefox. It isn’t. Firefox is an order of magnitude more secure.
Install ad-blocking software.
Although most banner ads and pop-ups are not malicious, they are annoying. And some ads contain links to malicious sites where merely clicking on a link or even simply viewing the page may install malware on your system via a “drive-by download.” Using ad-blocking software minimizes the problem. We use Ad Block (http://extensionroom.mozdev.org), but there are many alternatives, including Privoxy (http://www.privoxy.org), WebWasher (http://www.cyberguard.com), and AdSubtract (http://www.intermute.com).
Secure Internet Explorer.
Unfortunately, it’s impossible to remove Internet Explorer completely from a Windows system. And IE is dangerous just sitting there on your hard drive, even if you never run it. You can minimize the danger by configuring IE to be as secure as possible. To do so, run IE and choose Tools → Internet Options → Security tab. Select a security zone, click the Custom Level button, choose “High security” from the drop-down list, and click the Reset button. Repeat the process for each security zone. Once you have done that, Internet Explorer is pretty much unusable, but it is at least as secure as it’s possible for it to be.
Disable Windows Scripting Host.
Even if you secure Internet Explorer, Windows Scripting Host (WSH) remains installed and dangerous. For best security against VBS viruses, we recommend removing WSH entirely, although doing so means that Windows can no longer run any .vbs script. Depending on the version of Windows you run, you may be able to remove WSH by using the Add or Remove Programs applet in Control Panel.
If there is no option to remove WSH from Control Panel, you can remove WSH manually by deleting the files cscript.exe and wscript.exe, but you must do so in the proper sequence. Windows stores two copies of these files, the active copies in WINDOWS\system32, and backup copies in WINDOWS\system32\dllcache. Delete the backup copies first, and then the active copies. If you delete the active copies first, Windows immediately detects their absence and restores them automatically from the backup copies. After you delete both copies, Windows pops up a warning dialog that you can simply dismiss.
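The sequence matters, so if you script the removal, compute the deletion order first. A hedged Python sketch (Windows-only paths; the function name and default Windows directory are our illustrative choices, and actually deleting these files requires appropriate permissions):

```python
from pathlib import Path

def wsh_deletion_order(windir=r"C:\WINDOWS"):
    """Return the WSH executables to delete, in the order the text
    requires: the dllcache backup copies first, then the active
    copies in system32. Deleting in this order prevents Windows File
    Protection from silently restoring the files."""
    windir = Path(windir)
    names = ["cscript.exe", "wscript.exe"]
    backups = [windir / "system32" / "dllcache" / n for n in names]
    active = [windir / "system32" / n for n in names]
    return backups + active
```

You would then delete each returned path in order, dismissing the Windows File Protection warning dialog when it appears.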
DE-WSHING THE EASY WAY
You can also use Noscript.exe from Symantec (http://www.symantec.com/avcenter/noscrip…), which removes WSH automatically.
Replace Outlook.
Although recent versions are more secure than older versions, Outlook is still a virus magnet. If possible, we recommend replacing it with Mozilla Thunderbird or another alternative mail client.
The measures we’ve described thus far protect your system against being infected by worms and other exploits that do not require user intervention. Unfortunately, such automated exploits are not the only security dangers. Your system is also at risk from exploits that require your active (if unknowing) participation. The two major threats are viruses, which ordinarily arrive as attachments to email messages, and spyware, which often piggybacks on “free” software, such as P2P clients, that you install voluntarily.
New viruses are constantly written and released into the wild, so it’s important to run a virus scanner regularly and keep it updated with the latest virus signatures. Although Norton AntiVirus (http://www.symantec.com) and McAfee VirusScan (http://www.mcafee.com) are two of the most popular antivirus scanners, we don’t use either. Instead, we recommend installing Grisoft AVG Anti-Virus (http://www.grisoft.com), shown in Figure 3-18. AVG is as effective as any competing product we’ve used, places few demands on system resources, and is free for personal use.
Figure 3-18: Grisoft AVG Anti-Virus Free Edition
Until a few years ago, viruses were the major security threat. Nowadays, malware is at least as great a threat. The least malicious form of malware is adware, which displays pop-up ads during browsing sessions and may report your web browsing habits back to a central server (usually anonymously, and without reporting personal information that identifies you individually) to help the adware display ads it thinks will be of interest to you. More malicious forms of adware, generally called spyware, collect and report information about you that may be of use to identity thieves and other malefactors. The most malicious forms of spyware go much further, using keystroke loggers and similar techniques to collect passwords, credit card and bank account numbers, and other critically sensitive information.
Even if you never install software that doesn’t come from a trusted source, you may be victimized by spyware. Sometimes, all it takes is visiting a malicious web page that invisibly downloads and installs spyware on your system. The only way to protect yourself against such malicious software is to install a malware scanner, keep it updated, and run it regularly. There are numerous malware scanners available, many at no cost. Unfortunately, some of them are actually spyware Trojans. If you install one of those, it will indeed scan your system and report on any “foreign” malware it detects. It may even be kind enough to remove that malware, leaving your system free to run the spyware that it installs itself.
Fortunately, there are two trustworthy malware scanners we can recommend, both of which are free for personal use. Spybot Search & Destroy (http://www.safer-networking.org), shown in Figure 3-19, is donation-ware. Spybot is fast and extremely effective, and we use it as our first line of defense. (If you install it, please send the guy a few bucks; software this good should be encouraged.) We run Spybot daily on our Windows systems. As good as it is, even Spybot sometimes misses something. As a backup, we run AdAware (http://www.lavasoftusa.com) weekly. What Spybot doesn’t catch, AdAware does. (The paid version of AdAware includes a real-time ad-blocking and popup blocking application that works well.)
Figure 3-19: Use SpyBot Search & Destroy to detect and remove malware
By default, Windows runs many unnecessary background services. Disabling unneeded services has the dual benefit of reducing system resource consumption and eliminating potential entry points for security exploits. You can configure the startup behavior of Windows XP services using the Services policy editor. To do so, click Start → Run, type services.msc in the Run dialog box, and press Enter. The Services policy editor appears, as shown in Figure 3-20.
Figure 3-20: Windows XP Services policy editor
Double-click the name of any service to display the property sheet for that service, as shown in Figure 3-21. Use the “Startup type” drop-down list to set the startup type to Automatic, Manual, or Disabled, as appropriate. If the service is currently running, click the Stop button to stop it. If other services depend on that service, Windows displays a warning dialog to tell you that stopping that service will also stop dependent services. Once you have reconfigured the startup settings for all services, restart the system to put your changes into effect.
Figure 3-21: Property sheet for the Alerter service
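If you have many services to reconfigure, you can also change startup types from the command line with Windows XP’s sc.exe utility (for example, `sc config Alerter start= disabled`; note that sc requires the space after `start=`). A minimal Python sketch that wraps this; the function names are ours, and running it requires Windows and administrative rights:

```python
import subprocess

# Map the Services console's startup types to sc.exe's keywords.
START_TYPES = {"Automatic": "auto", "Manual": "demand", "Disabled": "disabled"}

def sc_config_command(service_name, startup_type):
    """Build the sc.exe command line that sets a service's startup type.
    The separate "start=" and value arguments preserve the space
    that sc.exe requires."""
    return ["sc", "config", service_name, "start=", START_TYPES[startup_type]]

def set_startup_type(service_name, startup_type):
    """Run the command (Windows only; needs administrative rights)."""
    subprocess.run(sc_config_command(service_name, startup_type), check=True)
```

As with the Services console, restart the system after reconfiguring everything so the changes take full effect.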
For a typical Windows XP system used routinely in a residential or SOHO environment, we recommend enabling the following Microsoft services:
- Automatic Updates
- Cryptographic Services
- DHCP Client
- Event Log
- Help and Support
- HID Input Service
- Plug and Play
- Print Spooler
- Protected Storage
- Remote Access Auto Connection Manager
- Remote Access Connection Manager
- Remote Procedure Call (RPC)
- Remote Procedure Call (RPC) Locator
- Script Blocking Service
- Security Center
- Shell Hardware Detection
- Windows Audio
- Windows Image Acquisition (WIA)
- Windows Installer
- Windows Management Instrumentation
- Windows Management Instrumentation Driver Extensions
Disable all other Microsoft services, except possibly those listed in Table 3-2. Some of these services, particularly System Restore Service and Themes, use significant system resources, and are best disabled unless you require the functionality they provide.
Table 3-2: Recommended Windows XP Services startup settings
In addition to the scores of services that Microsoft includes with Windows XP, many systems run third-party services. Determining which services are non-Microsoft is difficult with the Services policy editor. Fortunately, there’s another alternative, the System Configuration Utility. To run it, click Start → Run, type msconfig in the Run dialog box, and press Enter. Click the Services tab to display installed services. Mark the Hide All Microsoft Services checkbox to show only non-Microsoft services, as shown in Figure 3-22.
Figure 3-22: Windows XP System Configuration Utility displaying non-Microsoft services
In Figure 3-22, three non-Microsoft services are running. Two of them are a part of the AVG antivirus software we run on this system, and one is used by the NVIDIA video adapter. None of these are suspicious, so no action is needed. However, there are many other third-party services that may be malicious, including those installed by spyware. If you see a third-party service running and don’t recognize its purpose, investigate further. If in doubt, clear the checkbox to disable the service and test the system to see whether disabling that service breaks anything.
You can also view the System Configuration Utility Startup page to list the executable programs that Windows runs at startup, as shown in Figure 3-23.
Figure 3-23: Windows XP System Configuration Utility displaying programs run at startup
In this case, four of the five executable programs that Windows runs at startup on this system are clearly innocuous. NvCpl is the NVIDIA control panel utility. nwiz is the executable for WhizFolders Organizer Pro, a file management program we use. The two remaining entries belong to our AVG Anti-Virus software. But the highlighted item in the middle of the list concerned us, because no executable program name is shown for it. That's suspicious in itself: it's exactly the behavior one might expect from a startup executable installed by a virus, worm, or spyware, so it's worth taking a closer look.
To do so, fire up the Registry Editor by clicking Start → Run, typing regedt32 (or regedit, if you prefer a simpler editor) in the dialog, and pressing Enter. Navigate through the registry structure to the key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run, where startup executables are listed. Figure 3-24 shows the contents of that key. The mystery item was obviously installed by the program Registry Mechanic, and is no cause for concern. If a startup executable is clearly a malicious program, simply delete it with Registry Editor. If you're unsure about it, use Google to search for the executable name rather than simply deleting it.
Figure 3-24: Viewing startup programs in the Registry Editor
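If you'd rather check the Run key from a script than click through Registry Editor, you can capture the output of the Windows reg query command and parse it. Here's a minimal Python sketch of that idea; the sample output below is hypothetical (run `reg query "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Run"` yourself to get the real thing), and we're assuming the usual reg.exe layout of one indented name/type/data line per value:

```python
def parse_run_key(reg_output):
    """Return a dict mapping startup-entry names to their command lines,
    given the text output of reg query on the Run key."""
    entries = {}
    for line in reg_output.splitlines():
        line = line.strip()
        # Value lines contain a REG_* type token between the name and the data
        for reg_type in ("REG_EXPAND_SZ", "REG_SZ"):
            if reg_type in line:
                name, _, data = line.partition(reg_type)
                entries[name.strip()] = data.strip()
                break
    return entries

# Hypothetical sample of reg query output, for illustration only
sample = """\
HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Run
    NvCplDaemon    REG_SZ    RUNDLL32.EXE C:\\WINDOWS\\system32\\NvCpl.dll,NvStartup
    AVG7_CC    REG_SZ    C:\\PROGRA~1\\Grisoft\\AVG7\\avgcc.exe /STARTUP
"""

if __name__ == "__main__":
    for name, command in parse_run_key(sample).items():
        print(f"{name}: {command}")
```

An entry whose name or command you don't recognize is exactly the sort of thing worth a Google search before you delete anything.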
IT’S OKAY TO PLAY
Don’t hesitate to experiment with your startup configuration. There’s nothing you can disable here that will harm the system. At worst, a program may not work properly with a startup executable disabled. Unless you’re sure you need a particular startup program to be running (such as your antivirus and malware scanners, or your PIM), go ahead and disable it. Reboot the system and see if anything is broken. If so, re-enable whatever it was you disabled, and play around some more.
Finally, we recommend periodically running a registry cleaner, such as CleanMyPC (http://www.registry-cleaner.net) or Registry Mechanic (http://www.pctools.com), shown in Figure 3-25. We include registry maintenance as an element of securing the system, because registry exploits are becoming increasingly common. Even if your system is never infected by any malicious software, though, it’s still worth pruning and compacting the registry periodically to increase performance and reliability of the system.
Figure 3-25: Use Registry Mechanic or a similar product to scan and clean the registry
There are numerous registry tools available. Most are commercial or shareware products, although many are available as crippled demos for free downloading. Some perform only one aspect of registry maintenance, such as enhanced registry editing, removing unused entries, or defragging the registry heaps. Others combine many registry-related functions into one product. We suggest you download and try one or both of the two products we mention first. If neither suffices, a Google search for “registry cleaner” turns up dozens of other possibilities.
Hard drive housekeeping
As we started to write this section, we checked one of our hard drives. It had 185,503 files in 11,607 folders. It’s anyone’s guess as to what they all are. Some are programs and system files, of course. We know there are hundreds of documents and spreadsheets, and thousands of audio files, images, and so on. But the majority of those 185,503 files are probably temporary and backup files, duplicates and older versions of current datafiles, browser cache files, and similar garbage. All they do is clutter up the hard drive, wasting space and harming disk performance. They need to be pruned from time to time, if only to keep them from eating you out of house and home.
ORGANIZING YOUR TEMP(ORARY) FILES
You can set a few environment variables to cause temporary files to be stored in one location rather than being buried in a hidden folder under your Documents and Settings directory. To do so, create the folder C:\TEMP and then do the following:
- Right-click My Computer, choose Properties, and click the Advanced tab.
- Click the Environment Variables button and change the TEMP and TMP values to C:\TEMP by highlighting each, clicking the Edit button, and replacing the ridiculously long default path with C:\TEMP.
- Use the New button to add another value called TMPDIR and set its path to C:\TEMP as well.
- Do the same thing in the System variables box below the User variables box, again adding a variable called TMPDIR and setting its value to C:\TEMP.
No matter what you’ve set these environment variables to, you can navigate quickly to any of them by opening Windows Explorer, typing the name surrounded by percent signs (such as %TEMP%) into the Address field, and pressing Enter or Return. You should periodically visit this directory and delete any files and folders that are more than a few weeks old. Windows installer programs are notoriously bad about leaving large temporary files behind.
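That periodic visit can itself be scripted. The following Python sketch lists, rather than deletes, everything under your TEMP directory that is older than a chosen cutoff, so you can review the list before removing anything; the 21-day cutoff is simply our assumption about what "a few weeks old" means, and you should adjust it to taste:

```python
import os
import time

def stale_files(root, max_age_days=21):
    """Return paths under root whose modification time is older than max_age_days."""
    cutoff = time.time() - max_age_days * 86400
    stale = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.path.getmtime(path) < cutoff:
                    stale.append(path)
            except OSError:
                pass  # file vanished or is locked; skip it
    return stale

if __name__ == "__main__":
    # %TEMP% on Windows; fall back to /tmp elsewhere
    temp_dir = os.environ.get("TEMP", "/tmp")
    for path in stale_files(temp_dir):
        print(path)  # review the list, then delete by hand (or with os.remove)
```

Printing first and deleting second is deliberate: installers occasionally stash files in TEMP that they expect to find on the next run.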
Clearing your browser’s cache is a good first step. After doing that, you may find that your file count has dropped by thousands of files, and, depending on the size of your browser cache, you may recover a gigabyte or more of disk space. You might then go to a command prompt and issue commands like:
del *.bak /s
del *.bk! /s
del *.tmp /s
and so on. This brute-force approach might eliminate thousands of unneeded files and recover gigabytes of disk space, but it’s an imperfect solution at best. First, you’ll probably leave a lot of unneeded files on the drive because you didn’t think to look for every extension. Second, you may end up deleting some files you’d really rather have kept, and you may not even be aware that you’ve done so until you find yourself searching fruitlessly for them later. Third, if you’re not paying attention, a slip of the finger can have disastrous results.
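One way to blunt that third risk is to script the sweep with a dry-run pass, so you see exactly what would be deleted before anything is removed. Here's a minimal Python sketch of that idea; the extension list is just the set used in the commands above, and you'll want to extend it:

```python
import os

# Extensions commonly safe to prune; adjust to taste
JUNK_EXTENSIONS = {".bak", ".bk!", ".tmp"}

def find_junk(root):
    """Walk root and return paths with a junk extension, deleting nothing."""
    matches = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if os.path.splitext(name)[1].lower() in JUNK_EXTENSIONS:
                matches.append(os.path.join(dirpath, name))
    return matches

if __name__ == "__main__":
    # Point this at the drive or folder you want to sweep
    junk = find_junk("C:\\")
    for path in junk:
        print(path)  # dry run: review this list first
    # Once satisfied:  for path in junk: os.remove(path)
```

The dry run costs you one extra pass but eliminates the slip-of-the-finger problem entirely.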
It’s better to use a utility designed for file pruning. Microsoft includes an applet for this purpose, but, as is usually true of Microsoft applets, it’s feature-poor. The Windows Disk Cleanup applet, shown in Figure 3-26, does nothing you can’t do manually yourself in about 30 seconds flat.
Figure 3-26: The Windows XP Disk Cleanup utility
Fortunately, there are better alternatives available as commercial utilities. Our favorite is ShowSize (http://www.showsize.com), shown in Figure 3-27, which provides all of the tools you need to keep your hard drive clean and organized.
Figure 3-27: ShowSize disk cleanup utility
Once you’ve pruned unneeded files from your hard drive, it’s time to run a disk defragger. As you write, modify, and delete files on your hard drive, Windows attempts to keep every file stored contiguously on the drive. Unfortunately, Windows isn’t very good at that task, so pieces of various files end up scattered here, there, and everywhere about the drive, a phenomenon known as file fragmentation or disk fragmentation.
Fragmentation has several undesirable effects. Because the drive heads must constantly be repositioned to read and write files, hard disk performance suffers. Read and write performance on a badly fragmented drive is much slower than on a freshly defragmented drive, particularly if the drive is nearly full. That extra head movement also contributes to higher noise levels, and may cause the drive to fail sooner than it otherwise would. Finally, when a drive does fail, it is much easier (and less costly) to recover data if the drive has been defragmented recently.
NTFS and Fragmentation
For years, Microsoft claimed that NTFS was not subject to fragmentation. As Figure 3-28 shows, that’s not true, even on a sparsely populated drive. With only 13% of this drive in use, Windows has still fragmented the majority of the occupied space. Even after the Windows Disk Defragmenter utility has finished running, some fragmentation remains. The thin green bars are system files (the Master File Table and the paging file) that are always open while Windows is running, and so cannot be defragged by the bundled Windows utility. As for the blue bar that remains out in the middle of nowhere after defragging: we have no idea why Windows does this, but it always seems to leave at least a few files on their own rather than consolidating everything.
The solution to disk fragmentation is to run a defragging utility periodically. A defragging utility reads each file and rewrites it contiguously, making file access much faster. The Disk Defragmenter utility bundled with Windows, shown in Figure 3-28, is slow, inefficient, and feature-poor. But, hey, it’s free, and it’s (usually) good enough to do the job.
Figure 3-28: The Windows XP Disk Defragmenter utility
If you need a defragger with more features and better performance, consider buying a commercial defragging utility. The two best known commercial defraggers are Vopt (http://www.vopt.com) and Diskeeper (http://www.diskeeper.com). We’ve used both for years, and have never had a problem with either of them.
One major failing of the Windows XP Disk Defragmenter utility is that it cannot defrag the paging file, at least not without jumping through hoops. Windows uses the paging file to store applications and data for which there is no room in main memory. If you run many applications simultaneously or use large data sets, main memory inevitably becomes full. When that occurs, Windows temporarily swaps out inactive applications and data to the paging file. Because the paging file undergoes a lot of “churn,” it invariably becomes heavily fragmented, which in turn causes increased fragmentation of user programs and data.
Figure 3-29: The Windows XP Virtual Memory dialog
Unfortunately, the design of Windows makes it impossible to defragment the paging file while Windows is running, but there are two ways around that. First, you can use a commercial defragger like Diskeeper, or the free PageDefrag (http://www.sysinternals.com/Utilities/Pa…), which defrags the paging file at boot time, before Windows loads. Alternatively, you can use the Windows XP Disk Defragmenter utility to defrag the paging file by taking the following steps:
- Right-click My Computer and choose Properties to display the System Properties dialog.
- Click the Advanced tab.
- In the Performance pane, click the Settings button to display the Performance Options dialog.
- Click the Advanced tab.
- In the Virtual memory pane, click the Change button to display the Virtual Memory dialog, shown in Figure 3-29.
- Write down or memorize the current paging file size, which you’ll use later when you restore the paging file.
- Mark the “No paging file” radio button, and click the Set button to change the paging file size to zero.
- Restart the computer, which will now operate without a paging file.
- Run the Windows XP Disk Defragmenter utility to defrag the hard drive.
- When defragging completes, repeat steps 1 through 5 to display the Virtual Memory dialog.
- Reset the paging file size to the original value.
- Restart the system, which will now operate with a defragged paging file of the original size.
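For the curious, the settings you change in the Virtual Memory dialog are stored in the registry under the Memory Management key, in the PagingFiles value. A hypothetical 1,536 MB fixed-size paging file on drive C: would look something like this (the exact data on your system will differ, so treat this as illustration, not a value to copy):

```
Key:   HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\
       Session Manager\Memory Management
Value: PagingFiles (REG_MULTI_SZ)
Data:  C:\pagefile.sys 1536 1536
```

The two numbers are the initial and maximum sizes in megabytes; setting them equal gives a fixed-size paging file, which itself resists fragmentation.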
Keeping your system updated
Hardware and software companies periodically release updated software, device drivers, and firmware. These updates may be security related, or they may add support for new features or compatibility with new devices. We recommend that you keep yourself informed about such updates, but the Golden Rule when it comes to installing updates is, “If it ain’t broke, don’t fix it.”
Security Through Insecurity
Ironically, to use Microsoft’s automatic update services, you must use Internet Explorer, the least secure browser on the planet.
Evaluate each update before you install it. Most updates include release notes or a similar document that describes exactly what the update does and what problems it fixes. If a particular update solves a problem you’re experiencing or adds support for something you need, install the update. Otherwise, be very leery. More than once, we’ve installed an update for no good reason and found that the update broke something that used to work. It’s often possible to recover from a failed update by uninstalling the update and reverting to the original version, but sometimes the only solution is to format the drive and reinstall everything from scratch.
Operating system and application software updates
Operating system and application software updates are one exception to our general rule of caution. Windows in particular is under constant attack from worms and other malicious software, so it’s generally a good idea to apply critical Windows patches as soon as possible.
Microsoft provides the Microsoft Update service (http://update.microsoft.com/microsoftupd…) to automate the process of keeping Windows and Office patched. To configure Microsoft Update to download and install patches automatically, display Control Panel and choose Security Center. At the bottom of the Security Center dialog, in the “Manage security settings for:” pane, click the Automatic Updates link to display the Automatic Updates dialog, shown in Figure 3-30.
Figure 3-30: The Windows XP Automatic Updates configuration dialog
The recommended (and default) setting is Automatic, which causes Windows to download and install updates without user intervention. That’s a bit too trusting for our taste. We’ve been burned many times by Microsoft patches that in retrospect we wish we’d never installed. We recommend choosing the second option, which causes updates to be downloaded automatically in the background, but not to be installed until you approve them, or the third option, which merely notifies you when updates are available.
Managing applications software is more problematic, because, for Windows at least, there is no central location where you can check for available updates. (Linux is far superior in this regard. Most modern Linux distributions can automatically check one central repository for available updates for the operating system and most or all installed applications.) With Windows, you have to search out updates for each application yourself.
Fortunately, most major applications nowadays, and many minor ones, automatically check periodically for updates, or at least prompt you to do so. We recommend keeping a close eye on applications that use the Internet heavily; for example, browsers, email clients, and P2P packages. Exploits against such applications are relatively common and have potentially severe consequences. Other applications, while not risk-free, don’t require such close supervision. It’s less likely, for example, that your CD burning application or a file viewer will suffer a severe security hole. (It’s not unheard of, though; Adobe’s Acrobat Reader has been patched several times to fix serious security holes.)
Device driver updates
Windows, Linux, and all other modern operating systems use an extensible architecture that allows loadable device drivers to add support for devices that are not supported directly by the OS kernel. Your system uses device drivers to support your video adapter, sound adapter, network adapter, and other peripheral devices.
Other than BIOS and other firmware code, device driver code is the most carefully debugged software running on your PC, so even old drivers are unlikely to have significant bugs. It’s still a good idea to keep an eye out for updated device drivers, though, because updated drivers may improve performance, add support for additional features, and so on. In general, we recommend updating your device drivers any time you install new hardware.
Video adapter drivers (and, to some extent, audio adapter drivers) are a special case, particularly if you play 3D games on your PC. Video adapter makers update their drivers frequently to add support for new games and to tweak performance for existing games. In many cases, the performance improvements can be substantial, even if you’re using an older model video adapter. If you game, check for video adapter updates every month. Otherwise, every three to six months is sufficient.
Firmware updates
Firmware is halfway between hardware and software: software that is semi-permanently stored on nonvolatile memory chips inside your PC. The main system BIOS, for example, is firmware, but it is by no means the only firmware on your system. Nearly every peripheral, from video and audio adapters to network cards to RAID controllers to hard drives and optical drives, has its own firmware.
We recommend keeping an eye out for updates to your motherboard BIOS and other firmware, but use caution in deciding whether to apply those updates. Again, in general: if it ain’t broke, don’t fix it. To some extent, the decision depends on how old the device is. It’s quite common for newly introduced components to have several firmware updates made available early in their life cycles. As time passes, firmware updates typically become less frequent, and tend to be minor fixes or feature additions rather than significant updates.
The major exception is optical writers. The firmware in CD and DVD burners includes write schema that allow the drive to use the optimum write strategies for different brands and types of media. As new brands of media are introduced, optical drive makers update their firmware to support the new types of media. We recommend checking for firmware updates for your optical writer every time you buy a new batch of discs.
BURNING YOUR BRIDGES
It’s usually easy to recover from a bad firmware update. If you update the firmware in your DVD writer, for example, and it stops working properly, you can usually just update the drive again using the older firmware revision and be back to where you started. When you update your motherboard BIOS, it’s a different story. A failed BIOS upgrade may render the board unusable, necessitating returning it to the factory for repair. The most frequent cause of failed motherboard BIOS upgrades is a power failure during the update process. If at all possible, connect your system to a UPS before you update the motherboard BIOS.
Better motherboards avoid this problem in one of two ways. Some have two BIOSs installed. If you bork one during a failed update, you can start the system using the backup BIOS and then recover the primary BIOS. Intel uses a different but equally effective method. If the BIOS update process fails on an Intel motherboard, you simply set a jumper to the BIOS recovery position. Even after a failed update, an Intel BIOS has enough smarts to attempt to boot from the floppy drive. You can simply copy the BIOS datafile to a floppy disk, set the jumper to the recovery position, reboot the system, and allow the BIOS update to install automatically.
Curing Windows Rot
Microsoft made two very bad design decisions for Windows. Well, actually, they made a lot more than two bad decisions, but two are of primary concern.
The concept of using DLLs (dynamically linked libraries or dynamic link libraries) was flawed from the start, as millions of Windows users can attest. Old and new versions of the same DLL with the same name can co-exist on a system, and Windows provides no rigorous management of these diverse versions. A newer version of an application often doesn’t work with an older version of a DLL it requires, which is bad enough, but older versions of an application may not work with newer versions of the DLL. That means that something as simple as installing an update for one program may break another. Welcome to DLL Hell.
The Windows Registry, apparently patterned after the bindery used in antique versions of Novell NetWare, is the second part of the double whammy. With the introduction of Windows NT, Microsoft abandoned the use of simple, plain-text configuration files for the dubious benefits of a central registry. Although the registry concept might have worked had it been implemented properly, with rigorous controls and powerful management tools, Microsoft did none of that. Instead, the registry is a gigantic heap of spaghetti that even experts have trouble deciphering. The registry on a typical Windows box grows like Topsy, with obsolete data left cluttering up the place and new data added willy-nilly without consideration for conflicts or backward compatibility. Microsoft provides only the most basic tools for maintaining the registry, and even the best commercial registry maintenance software can do only so much to eliminate the mess.
The upshot is that any Windows system contains the seeds of its own destruction. Over the months and years, as new software is installed and old software deleted, Windows gradually becomes more and more unstable. DLL conflicts become increasingly common, and performance slows. This phenomenon is universally known as Windows Rot. Careful installation practices and periodic registry cleaning can slow Windows Rot, but in our experience nothing can stop it completely.
Microsoft claims that Vista will solve the Windows Rot problem, this time for sure. They may even be right, but we doubt it. Unfortunately, the only sure cure we know of for Windows Rot, short of Microsoft rewriting Windows from the ground up or you switching to another operating system, is to strip the hard drive down to bare metal, reinstall Windows and all applications, and restore your data. Most power users do this every six months to a year, but even casual users will probably benefit from doing a fresh install every year or two.
One sure indication that it’s time for a clean install is that your system begins behaving strangely in ways that aren’t attributable to a virus or a hardware problem, particularly if that occurs immediately after you’ve installed new software, updated drivers, or made other significant changes to your system. But Windows Rot can manifest in much more subtle ways. If you’ve been using your Windows system for a year or two without a reinstall and it seems much slower than it used to be, that’s probably not your imagination. Besides slow performance, Windows Rot can cause a variety of problems, from severe memory leaks to random reboots.
Because it’s so difficult to pin down the particulars of Windows Rot, or even to know the extent to which a particular system suffers from it, we recommend simply doing a fresh install once a year, whether you think you need it or not.
A periodic maintenance checklist
Table 3-3 summarizes the procedures we recommend for periodic maintenance.
Table 3-3: Periodic maintenance checklist