Sunday, January 29, 2012

Module 4

In today's world, electronic data is growing and compounding at ever-increasing rates. Most of us deal with electronic data daily, whether it be songs, pictures, and documents on our own computers; bank account information and electronic transactions; websites, e-mail, and Facebook; or all the data that businesses produce every day. The challenge is how to protect that data from potential loss.

Most computer users are familiar with the problem of lost data. Fortunately, most incidents are relatively inconsequential, representing only a few minutes of lost work or the deletion of unnecessary files. Sometimes, however, the lost data is critical and the cost of losing it is substantial. As reliance on information and data as economic drivers for businesses continues to increase, owners and managers are exposed to new risks. One study reports that a company that experiences a computer outage lasting more than 10 days will never fully recover financially, and that 50% of companies suffering such a predicament will be out of business within five years.

The causes of data loss vary, but according to a study done at Pepperdine University they break down into six distinct categories. The most significant cause is hardware failure, which accounts for approximately 40% of all data loss incidents; as you might expect, most of these losses result from hard drive failures that leave the data unreadable. In second place is human error, which accounts for 29% of all losses. After human error, the remaining four causes drop off considerably: 13% of losses are caused by software corruption, 9% by theft, 6% by viruses, and 3% by hardware destruction.

Being in the IT industry and the person primarily responsible for data protection at the Credit Union where I work, I understand the challenges and the immense responsibility of protecting our data from the variety of scenarios that may arise. Fortunately, I have had many hours of training in this area, and I am lucky enough to have a good friend, Micah Marsden, a graduate of Weber State University with a BA in telecommunications administration, whom I was able to interview for this project. He is now a Sustaining Engineer at Quantum Corporation and helps IT managers like myself deal with backup issues and design on a daily basis.

He stated that data protection is a key issue of concern and cost for businesses. Not only does it consume a large amount of companies' resources and time, but the potentially devastating impact of poor design or implementation is all too widespread. He pointed out a problem far too many companies neglect to take seriously: they simply set up their backup plan, implement it, and forget about it. Protecting data is an ongoing job and requires daily monitoring and intervention.

First, he said one of the simplest things any backup administrator can do is review the backup logs every day; jobs that are supposed to run daily can run into problems, or simply not run at all (a minimal sketch of this kind of check appears below). Another aspect that is neglected far too often is disaster recovery drills. He encourages IT staff to perform mock disaster recovery drills at least annually, and preferably more often. Practice makes perfect, and in a stressful situation where data must be recovered quickly, there is no substitute for having actually performed the job at hand. Finally, he noted that the technologies available are numerous and can be tailored to fit a budget, a recovery-time objective, and a staff's expertise. They range from a simple home backup plan of copying personal computer data to a second hard drive, to backing up to the cloud, to complicated and expensive disk- and tape-based solutions.
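To make Micah's first point concrete, here is a minimal sketch of the kind of daily log check he recommends. Everything in it is an assumption for illustration: the log path, the one-line-per-job format, and the job names are hypothetical, not the output of any particular backup product.

```python
import datetime
import pathlib

# Hypothetical log location and format (one line per job run), e.g.:
#   2012-01-29 02:00:13 NIGHTLY-FULL SUCCESS
#   2012-01-29 02:45:02 EXCHANGE-DB FAILED
LOG_FILE = pathlib.Path("/var/log/backup/jobs.log")
EXPECTED_JOBS = {"NIGHTLY-FULL", "EXCHANGE-DB", "FILE-SERVER"}

def check_todays_jobs(log_path=LOG_FILE):
    """Return problems found today: jobs that failed or never ran."""
    today = datetime.date.today().isoformat()
    seen = {}
    for line in log_path.read_text().splitlines():
        date, _time, job, status = line.split()
        if date == today:
            seen[job] = status  # last status wins if a job ran twice
    # Flag jobs that failed AND jobs that never ran at all.
    return [f"{job}: {seen.get(job, 'DID NOT RUN')}"
            for job in sorted(EXPECTED_JOBS)
            if seen.get(job) != "SUCCESS"]

if __name__ == "__main__":
    for problem in check_todays_jobs():
        print("ATTENTION:", problem)
```

The point is not the script itself but the habit: a check like this takes seconds to run or schedule, and it catches the quiet failure mode Micah warned about, the job that simply stopped running and nobody noticed.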

Micah's favorite technology in use today is what is referred to as data deduplication. This technology allows companies to back up to local disk over either Gigabit Ethernet (Cat 5) or Fibre Channel, using what is called block-level deduplication. Block-level deduplication operates below the file level: as the name implies, each file is broken down into segments that are examined for redundancy against previously stored information, and only the new segments are written. This makes backups much more efficient in terms of retention and cost. I have been so impressed with this technology that I implemented it in my own environment and have been very happy with the results. I am now able to keep approximately four months' worth of backups on a single unit without using an exorbitant amount of disk space. Not only does this save storage, I am also able to duplicate the data to tape for off-site storage.
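To illustrate the idea (and only the idea; this is a toy sketch, not how Quantum's appliances actually work), here is a minimal block-level deduplication example. It uses fixed-size blocks and SHA-256 fingerprints; the in-memory store and the chunk size are assumptions for demonstration.

```python
import hashlib

CHUNK_SIZE = 4096  # fixed-size blocks; real products often use variable-size chunking

def deduplicate(stream: bytes, store: dict) -> list:
    """Split a byte stream into blocks, storing each unique block only once.

    `store` maps a block's SHA-256 digest to its bytes. The returned list
    of digests (a "recipe") is all that is needed to rebuild the stream.
    """
    recipe = []
    for i in range(0, len(stream), CHUNK_SIZE):
        block = stream[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:   # new block: write it once
            store[digest] = block
        recipe.append(digest)     # repeated block: just reference it
    return recipe

def reassemble(recipe: list, store: dict) -> bytes:
    return b"".join(store[digest] for digest in recipe)

# Two "backups" that share most of their data cost little extra space.
store = {}
backup1 = deduplicate(b"A" * 8192 + b"B" * 4096, store)
backup2 = deduplicate(b"A" * 8192 + b"C" * 4096, store)
print(len(store))  # 3 unique blocks stored, though 6 blocks were processed
assert reassemble(backup1, store) == b"A" * 8192 + b"B" * 4096
```

Because the second backup reuses blocks already in the store, it adds only one new block. That is exactly why long retention becomes affordable: each additional backup of largely unchanged data consumes only the space of what actually changed. (Commercial systems refine this with variable-size chunking, so that inserting a byte near the start of a file does not shift every later block boundary.)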

3 comments:

  1. Really liked reading that drills were being run. They are so important in so many fields. Glad to hear that some people are doing them. My personal experience with IT people has left me with little confidence, so it's good to hear that some are doing things right.

  2. I really liked reading your essay. I find myself getting frustrated with my lack of knowledge about computers. It is amazing what it takes to keep everything safe and secure. I am now backing up my laptop; I have had two scares where I could have lost all the data on it. I am impressed with your knowledge of computers and glad that you are promoting the importance of backing up your computer.

  3. Funny, I know Micah Marsden. And it is so important that certain data is protected. Your job seems interesting, but it also carries a lot of responsibility. Take, for instance, the people who hacked into the Medicaid records and retrieved private individuals' records. Your position frightens me.
