Erasure coding tradeoffs include additional storage, disk update needs

Erasure coding tradeoffs include the need for additional data storage and the need to update extra disks to maintain redundancy, a UC professor cautions.

Erasure coding that protects against a large number of device failures and a significant amount of data loss may not be the best choice for most enterprise IT shops, according to a university professor who researches storage technology.

Ethan Miller, who teaches computer science at the University of California, Santa Cruz, said the tradeoff is that IT organizations will require additional storage for the redundant information, and they'll need to update all of the disks they use for redundancy.

Professor Miller does research on erasure coding and how to use it in storage systems. He also works part-time at Pure Storage Inc., which sells enterprise solid-state storage arrays. But erasure coding is not the primary focus of his work at the company, and Pure Storage's products currently use no erasure coding beyond combinations of RAID 5 and RAID 6.

In this podcast interview with TechTarget senior writer Carol Sliwa, Professor Miller also discussed the impact of erasure coding on backups, the minimum amount of data at which erasure coding becomes a serious consideration, the decision point on how much erasure coding to do, and the long-term potential of erasure coding to effect change in storage systems.

What does an IT professional need to know about erasure coding?

Ethan Miller: I think that it's important for an IT professional to understand what erasure codes can do and what they can't do, and to understand what the tradeoffs are between using erasure codes that protect against a lot of device failures or a lot of missing data and erasure codes that don't protect against as much loss.

Why might you want to choose one that protects against a lot of loss, and why might you not want to choose one that protects against a lot of loss? Obviously, people think protecting against more loss is better, but there are drawbacks to doing so.

What are the drawbacks and, also, what are the upsides of erasure coding?

Miller: The big drawback to protecting against more loss is that typically protecting against additional loss means additional redundancy disks -- in other words, additional space taken up with redundancy information instead of taking it up with your actual data. So, it costs a little bit more in terms of storage space, but that's only part of the problem.

The second issue is that when you make a change to data -- in other words, when you write data to your array -- you have to update not just the one disk you write to, let's say for a small write, but you have to update all of the redundancy disks that correspond to that one little piece that you updated.

So, for RAID 5, every time you write a small piece of data on your array, you have to update the parity disk. For RAID 6, every time you make one small write, you have to update two parity disks. If you have eight redundancy disks, every time you make one small write -- that's one disk operation -- you have to make eight more disk operations to update the redundancy information.

Now, there are approaches where you can batch this stuff up and log it and everything else, but the bottom line is that you increase the number of writes you have to make if you have more and more redundancy information. So, clearly, unless you need it, it's a waste both of money for the extra storage and of bandwidth on your storage array. That's why you wouldn't necessarily want to have lots and lots of extra redundancy.

Again, this doesn't matter on reads. Assuming your system is working correctly, you don't read the redundant information. But it does matter on writes. So, if you have a workload that has a lot of writes, this can be an issue.
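
To put rough numbers on that, here is a minimal Python sketch using the same simplified counting Miller uses: one disk operation for the data device plus one for each redundancy device on a small write. It ignores read-modify-write overhead and the batching and logging approaches he mentions, and the configurations are illustrative only.

```python
def small_write_ops(redundancy_disks):
    """Disk operations for one small write: one for the data disk,
    plus one for each redundancy disk that must be updated.
    (Simplified counting: ignores read-modify-write and batching.)"""
    return 1 + redundancy_disks

# Illustrative configurations, not tied to any particular product.
for name, m in [("RAID 5", 1), ("RAID 6", 2), ("8 redundancy disks", 8)]:
    print(f"{name}: {small_write_ops(m)} disk operations per small write")
```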

Now, as far as the upside of doing so, obviously you can survive more failures. If I had three or four redundant data [pieces] stored with, let's say, 10 data disks and four redundancy disks, I could survive any four failures.

The reason though that most IT shops don't have to worry about this is that, at some point, the chance of failure due to losing a single disk is much smaller than the chance of failure due to your entire data center getting destroyed [by] a fire or something else. No amount of redundancy in one data center will protect you if your building burns down. So, once you get the failure rate to be very, very low, other things take over -- human error or losing the entire data center. And that's why a lot of the cloud providers don't go much beyond RAID 5 and RAID 6 for an individual data center because they know it's much more likely that they're going to lose connectivity to a data center than to that fourth disk. And they just make that tradeoff based on that.
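
That comparison against other risks can be sketched numerically. The sketch below is a deliberately crude model with hypothetical numbers, not a real reliability analysis: it assumes independent drive failures at a fixed annual rate, ignores rebuilds, uses a binomial estimate of seeing more simultaneous failures than the code tolerates, and puts that next to an assumed rate of losing the whole site.

```python
from math import comb

def p_too_many_failures(n_disks, tolerated, annual_fail_rate):
    """Probability that more than `tolerated` of `n_disks` drives fail
    within a year, assuming independent failures and no rebuilds
    (a deliberately crude, pessimistic model)."""
    p = annual_fail_rate
    return sum(comb(n_disks, k) * p**k * (1 - p)**(n_disks - k)
               for k in range(tolerated + 1, n_disks + 1))

# Hypothetical numbers: 10 data + 4 redundancy drives, 2% annual failure rate,
# and an assumed one-in-a-thousand-years chance of losing the whole site.
p_disks = p_too_many_failures(n_disks=14, tolerated=4, annual_fail_rate=0.02)
p_site = 1 / 1000

print(f"P(more than 4 drive failures in a year): {p_disks:.1e}")
print(f"P(losing the entire site in a year):     {p_site:.1e}")
```

With these assumed inputs, the drive-failure term comes out orders of magnitude smaller than the site-loss term, which is the point Miller is making about where additional redundancy stops paying off.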

Can erasure coding eliminate the need for backups?

Miller: It depends on what you're doing a backup for. One reason you might be doing a backup is to guard against human error. If you have a human error and you destroy data that way, erasure coding won't help.

The second kind of thing that at least a lot of erasure coding doesn't help with is loss of an entire site. So, suppose that I have a very good erasure code at one site. I'm going to lose data one in every 10 million years. That's great until the site catches fire and burns down, which might happen only once every thousand years.

On the other hand, erasure coding can make it much less likely that you will lose data if you use it properly, either going across sites or using snapshots to keep old copies of data in case of human error. So, it can't eliminate the need for backups, but if you use it with other techniques, it can greatly reduce the need for backups.

What's the minimum threshold of data at which an IT shop should consider erasure coding?

Miller: Unfortunately, that depends on how averse to data loss you happen to be. For a company that doesn't mind losing data, or that doesn't want to pay lots of extra money to guard against losing data, you're probably OK with RAID 6 for now unless you go to very, very large data sizes, on the order of a petabyte or so.

If you're somebody who is much more concerned about losing data, you might want to start using erasure coding at a smaller size. Of course, people like that will also often consider things like mirroring in combination with RAID 5 or RAID 6.

So, it really depends on the workload. If you have a read-mostly workload, maybe you want to use erasure codes sooner than if you're somebody who has a lot of writes. It may also depend on how much your data is worth. In other words, what risk of data loss are you willing to accept? And again, all of this is typically relative to other causes of data loss. As I said earlier, all the erasure coding in the world won't do you any good in a single data center if the data center burns down. So you have to balance all of these factors against one another to decide when to start using erasure coding and when not to.

But the thing to be aware of is that unless your storage comes with it, it can often be a lot harder to implement erasure coding yourself. So, buying an off-the-shelf solution is probably the right idea unless you have people on staff who are very experienced with erasure coding.

How does an end user go about making the decision of how much erasure coding to do?

Miller: It's a tradeoff between performance -- writes, in particular, are slower with erasure coding -- and storage costs. The more erasure coding you use, the lower you can make your storage cost, because you can survive more and more losses without necessarily storing more and more redundant data. So, the tradeoff, again, is that if you have more reliability, it's probably going to cost you more in write performance. Less reliability means your write performance is better. As I mentioned earlier, read performance isn't typically affected by erasure coding unless you actually have a failure, at which point, well, you're glad you had erasure coding to begin with.
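
To illustrate the storage-cost side of that tradeoff, here is a small sketch comparing the space overhead of layouts that all tolerate two device failures. The specific layouts are illustrative examples, not drawn from any particular product, and the overhead is computed simply as redundancy devices divided by data devices.

```python
def overhead(data_devices, redundancy_devices):
    """Extra storage as a fraction of usable capacity (m / k)."""
    return redundancy_devices / data_devices

# Each of these tolerates the loss of any two devices,
# but the space overhead differs dramatically (illustrative layouts only).
schemes = {
    "3-way mirroring (1 data + 2 copies)":    (1, 2),
    "RAID 6 (8 data + 2 parity)":             (8, 2),
    "Wide erasure code (20 data + 2 parity)": (20, 2),
}
for name, (k, m) in schemes.items():
    print(f"{name}: {overhead(k, m):.0%} overhead")
```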

So, as far as how much erasure coding you should do, you should be calculating the likelihood of data loss for different forms of erasure coding. This isn't a complex calculation, but it requires a little bit of training. It has to do with combinations -- how likely you are to lose data, and so on -- but basically you have to decide: What's my tolerance for risk? And how much erasure coding do I need to meet that tolerance? You also have to consider the other risk factors -- things like losing a data center, human error and so on. And that's how you come up with a decision about how much erasure coding to use: How likely a data loss event am I willing to accept?
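
The "combinations" calculation Miller refers to can be sketched along these lines. This is a hypothetical helper built on the same crude binomial estimate as the earlier sketch (independent failures, no repair), so treat its output as illustrative rather than as a real reliability model: it searches for the smallest number of redundancy devices whose estimated annual data-loss probability falls below a chosen tolerance.

```python
from math import comb

def p_data_loss(n_data, n_redundancy, annual_fail_rate):
    """Annual probability of more device failures than the code tolerates,
    assuming independent failures and no repair (a crude upper bound)."""
    n = n_data + n_redundancy
    p = annual_fail_rate
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n_redundancy + 1, n + 1))

def redundancy_needed(n_data, annual_fail_rate, tolerance):
    """Smallest number of redundancy devices whose estimated annual
    data-loss probability falls below the chosen risk tolerance."""
    m = 0
    while p_data_loss(n_data, m, annual_fail_rate) > tolerance:
        m += 1
    return m

# Hypothetical inputs: 10 data drives, 2% annual failure rate,
# and a tolerance of at most one data-loss event per million years.
print(redundancy_needed(n_data=10, annual_fail_rate=0.02, tolerance=1e-6))
```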

What's your vision on how erasure coding will be used in enterprise IT shops in the short term and in the long term?

Miller: I think that surviving the loss of no more than two drives is going to be sufficient in the relatively short term, especially as we transition to solid-state drives that have much lower drive failure rates. Disks fail at a rate of a couple percent a year. Solid-state drives don't fail at anywhere near that rate, so the likelihood that you have to recover from drive failure is much lower, meaning that RAID 6 may be sufficient across drives.

Now, we're going to have to have more redundancy within drives, so you're seeing a lot of vendors go to using RAID 5 or RAID 6 on a single drive, where they'll store data, and then they'll store redundancy information on the same drive in case you lose a single sector. So, the combination of RAID 5 or RAID 6 across drives and RAID 5 or RAID 6 within a single drive is going to be enough redundancy in the near term for what I'll call active workloads, workloads that have a lot of reads and a lot of writes.
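
Intra-drive redundancy of the kind described here can be as simple as RAID 5 style XOR parity computed across a group of sectors on the same drive. The sketch below illustrates that idea only and is not how any particular vendor implements it: it builds one parity sector for a hypothetical group of data sectors and rebuilds a single lost sector from the survivors.

```python
def xor_parity(sectors):
    """RAID 5 style parity: byte-wise XOR across a group of sectors."""
    parity = bytearray(len(sectors[0]))
    for sector in sectors:
        for i, byte in enumerate(sector):
            parity[i] ^= byte
    return bytes(parity)

def rebuild(surviving_sectors, parity):
    """Recover a single lost sector by XOR-ing the parity with the survivors."""
    return xor_parity(list(surviving_sectors) + [parity])

# Hypothetical group of four data sectors stored on one drive.
sectors = [bytes([i] * 8) for i in (1, 2, 3, 4)]
parity = xor_parity(sectors)

lost = sectors[2]                                  # pretend this sector went bad
recovered = rebuild(sectors[:2] + sectors[3:], parity)
assert recovered == lost
print("recovered sector matches the lost sector")
```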

Where you're going to see a lot of use of heavier erasure codes -- erasure codes that can survive three, four, five, even six device failures -- is when you look at long-term storage, archival storage, because with archival storage, you write the data once and in big chunks. You pay the overhead of writing your erasure code, and then you never write it again. So, it doesn't cost you anything except a little bit of extra storage, and that little bit of extra storage you pay for gets you much higher reliability. If you have a very large data set, maybe petabytes to exabytes, you would want that extra reliability because you want it to be around for 20 or 30 years. So, I think you'll start to see erasure codes like that used a lot more in the read-mostly, write-once-and-read-maybe world of archival storage, but you won't see them used as heavily in very active, transactional kinds of storage.
