In this video, W. Curtis Preston, independent backup expert, discusses how to implement data deduplication technology in a data storage environment, and how disk builds a strong foundation for a data backup system.
Data deduplication is a method of reducing the amount of data stored in a storage environment by eliminating redundant data. Many organizations want to implement data deduplication technology, but few know which approach best fits their environment, and every backup vendor you talk to will tell you its approach is best.
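The core idea of eliminating redundant data can be sketched in a few lines. This is a minimal, hypothetical illustration, not any vendor's implementation: real products chunk variable-length streams and persist their fingerprint index, but the principle of storing each unique chunk once and referencing it thereafter is the same.

```python
import hashlib

def dedupe_chunks(chunks):
    """Store each unique chunk once; return (store, references)."""
    store = {}       # fingerprint -> chunk bytes, written only once
    references = []  # per-chunk pointers into the store
    for chunk in chunks:
        fingerprint = hashlib.sha256(chunk).hexdigest()
        if fingerprint not in store:
            store[fingerprint] = chunk  # new data: keep it
        references.append(fingerprint)  # duplicates just point back
    return store, references

# Three logical chunks, two identical: only two are physically stored.
store, refs = dedupe_chunks([b"block-A", b"block-B", b"block-A"])
print(len(refs), len(store))  # prints: 3 2
```

The savings come from the gap between logical chunks (what the backups contain) and unique chunks (what actually lands on disk).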
Preston explains several different approaches to implementing data dedupe and which data storage environments best match each approach: source vs. target dedupe, local vs. global deduplication, hashing vs. delta differentials, and inline vs. post-processing deduplication.
In target dedupe, you're using the same backup software you already have, but you are sending the data to a target, which then dedupes it. This approach is best for large-scale data centers. With source deduplication, you have to use different backup software that dedupes at the source. It reduces the load on the IP network from the very beginning, and is best suited for remote offices and branch offices.
Preston also explains how data deduplication technology differs from similar technologies such as data compression and single-instance storage. Plus, he discusses the different behaviors and data elements that change data deduplication ratios and the importance of restores and copy speed in data dedupe technology.
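Dedupe effectiveness is typically quoted as a ratio of logical data (what the backup software sends) to physical data (what lands on disk after deduplication). A minimal sketch, with hypothetical figures chosen only for illustration:

```python
def dedupe_ratio(logical_bytes, physical_bytes):
    """Ratio of data sent to data stored; 20.0 means a 20:1 ratio."""
    return logical_bytes / physical_bytes

# Hypothetical example: 10,000 GB of backups reduced to 500 GB on disk.
ratio = dedupe_ratio(10_000, 500)
print(f"{ratio:.0f}:1")  # prints: 20:1
```

As Preston notes, the achievable ratio depends heavily on how the data changes between backups, which is why vendors' quoted ratios vary so widely.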