Server blades and storage
Many IT shops are moving from traditional rack-mounted servers to blade configurations in hopes of reducing power and floor space requirements in their data centers. But combining blade architectures with server virtualization can cause problems with I/O and storage systems.
Blades offer many advantages over traditional servers, but they create some unique storage and application issues.
PETER CHAU is the infrastructure architect at North Shore Credit Union (NSCU), a North Vancouver, British Columbia, organization with 44,000 members, 12 branches and a new banking software system that runs on a collection of Hewlett-Packard (HP) Co. blades.
From Chau's vantage point, his blade infrastructure is finally ready for prime time, approximately two years after NSCU got into the blade market with HP's first iteration of blades, the p-Class. That move shrank his infrastructure's footprint and cut power costs, but it didn't change his life. That came later, with HP's c-Class and its Virtual Connect Enterprise Manager, says Chau. By creating bay-specific I/O profiles with unique MAC addresses, Virtual Connect allows network and storage administrators to establish all LAN and SAN connections once, during deployment, and avoid having to do it again even when additional servers are deployed or existing ones are changed.
"If the blade on slot 1 fails, all I need to do is replace it with a new one," says Chau. "The back-end connections to the SAN and the network remain intact; it doesn't change. Previous to [Virtual Connect], I would have to engage the SAN admin to re-establish the LUN connections and map to a new WWN."
The technology, he says, has not only made his IT shop more efficient, but freed up some precious time for him to work on new projects. Chau's p-Class blades did leverage a SAN back end but, from an admin perspective, he might as well have been using standard rack-mountable servers.
"With the c-Class and its Virtual Connect Onboard Administrator, we feel it's ready to facilitate our banking system and BI [business intelligence] infrastructure," he notes.
The increasing popularity of blades in the data center has helped fuel serious competition among vendors who are all trying to package, or stuff, the same capabilities of a traditional four-socket rack-mount server into a blade form-factor. However, many users and analysts agree that blades, while just another server form-factor, present unique challenges from a storage management perspective. Managing aggregated I/O for multiple blades, proprietary technology, and the mistaken assumption that blades will automatically lessen your power and cooling costs are among the chief challenges for storage pros considering blade strategies.
James Jancewicz rejected two blade server products. Jancewicz, who took over the storage duties at a large insurance organization 10 years ago, says his firm (which he asked not to be named due to company policy) brought in competing HP and IBM Corp. blade centers and dumped a workload on them to see how they would perform. As far as I/O is concerned, they didn't overload the individual blades, he says. "But it was much more difficult to manage a box when you had aggregated I/O for multiple blades running over common interfaces--both fibre and IP," he explains.
For Jancewicz's shop, the pizza-box servers they already used offered more flexibility. "When we max out the VM, we don't worry about adding any additional guests to it," he explains. "We still get the benefits of virtualization."
Blade technology is proprietary, despite what you might hear about industry-standard servers. Matthew Ushijima, director of network operations at Northlake, IL-based Empire Today LLC, issued this warning to his IT peers: "Make sure your storage vendor has done thorough testing on the very specific technology you are implementing." Being aware of the proprietary nature of your hardware is crucial, he says. "Look for a vendor that shares parts commonality between their rack mount and blade hardware."
And the savings aren't guaranteed. Plenty of users are surprised by the cost of a blade project, says Anne Skamarock, research director at Boulder, CO-based Focus Consulting and co-author of Blade Servers and Virtualization: Transforming Enterprise Computing While Cutting Costs. "A fully populated blade system may require more cooling and power than a fully loaded rack system," she cautions. And if you don't have a SAN in place, you should establish one as the foundation for a blade project, adds Skamarock.
"That's an upfront investment you should make as you prepare for a growth strategy with blades," she says. Many analysts will tell you that real returns on blades can only be seen when you buy enough blades--more than eight--to cancel the price of the chassis.
"What we're finding is that people are asking us 'How would we go about transitioning from a siloed environment to a virtualized, bladed environment?'" says Skamarock.
While blade projects can't be guaranteed to always lower your power and cooling costs, many blade enthusiasts say the smaller data center footprints make them a worthwhile investment. For Gentry Ganote, CIO at Golf & Tennis Pro Shop (GTPS) Inc., the consolidation resulted in obvious cost savings as they relate to staffing.
A couple of years ago, GTPS started growing quickly due to the expansion of its PGA Tour Superstores at key golf locations across the country. "We were growing and had a lot of one-use servers that we were putting in the data center. We're basically a [Microsoft] shop, and every time we needed to do something we would add a new server," says Ganote. Each one of those provided about half a gig of storage, and he says they finally realized they didn't want to choke on server sprawl.
Ganote needed to get away from directly attaching storage to one-use servers, so he went shopping for a new blade strategy. He started with a new SAN, and the IT team evaluated different SAN vendors before deciding on Compellent Technologies Inc. Switches (built into the backplane) from Brocade Communications Systems Inc. were installed and Cisco Systems Inc. was chosen as their network equipment provider. "From a cabling standpoint, we took this spider web of cabling down to a minimum," he says. Ganote took the project to the next step and virtualized four of the blades. "We actually have more than 12 of our servers virtualized," he explains. (That translates to three or four virtual servers per blade.)
"The timing was perfect for us from a financial perspective," says Ganote, whose shop settled on Virtual Iron Software Inc.'s server virtualization product. When it comes to staffing, he notes, the savings are obvious. "If we didn't have a blade infrastructure and have virtualization in place, I'd definitely need four or five people just to manage the servers," he says.
Charles Falcone is president at Devon Health Services Inc., a preferred provider organization (PPO) in King of Prussia, PA. He was fairly sure blades were too expensive for his long-term IT strategy, and when he first saw the cost of the IBM BladeCenter package designed for small businesses, "pricey" was his first thought.
"But at the end of the day, we thought that, overall, it was a better product than its competitors," he says. For Devon Health Services, it was a matter of available power, says Falcone. "Literally, we were maxed out as to what we had available in our data centers. To bring more power in, I'd have to upgrade the backup generator and that would have been a fairly costly initiative," he says. Like many blade users, the "green movement" motivated Falcone, as well as a data center that was a source of shame.
"It was ugly," says Falcone. "It wasn't something we showed people. We had to clean it up." The company isn't running its BladeCenter at full capacity yet. It holds up to 14 servers, and Falcone is currently running nine of them. "That's taken over for 30-something odd servers" from various vendors, he explains.
Just like NSCU's Chau brags about the power of HP's Virtual Connect, Devon Health Services' lead storage architect Michael Salerno raves about IBM BladeCenter's remote management tool. "With some other systems, you have to run a separate network cable if you're running any kind of remote management system. What's nice about the BladeCenter is that it's already plugged in. You can remotely manage any given server in the BladeCenter via a built-in KVM console, even on a Saturday," says Salerno, who's a member of an eight-person team. "The BladeCenter makes it easy enough for us to be able to focus on new initiatives."
Blades and the back end
But with systems that share network modules (the equivalent of a network card on blades) across multiple blades (i.e., 10 blades across four network modules), many IT pros will ask "Who gets the access?" says Stephen Foskett, director of data practice at Mountain View, CA-based consultancy Contoural Inc. "People are very interested in the impact of VMware on storage because it makes use of a lot of back-end I/O," he says. "You can get a four-port NIC card and, in some cases, you can even give each blade its own port. That leads users to start looking at 8Gb Fibre Channel [FC] and solutions like virtualized InfiniBand. Those two things--8Gb or 10Gb Ethernet cards, and InfiniBand--come from this idea that we're pushing all this through the same pipe in the back end."
With multiple virtualized servers sharing the same FC attachments, Contoural's Foskett is predicting a surge in popularity in N_Port ID Virtualization (NPIV) technology, which essentially lets users virtualize their host bus adapters (HBAs). "If you have more than one instance of an operating system running on the other side of the port--as you do with virtualization--suddenly it might not be a good idea to have one, two or 20 servers using the same N_Port ID. There are ways you can keep them from seeing each other's storage," he says. "But, basically, it's a bottleneck problem."
With NPIV, a virtual N_Port ID is assigned to each server. This is important because a virtual server can move to another physical port and its port identity will follow. Indeed, Foskett is predicting a "perfect storm" collision of 8Gb FC or 10Gb FC over Ethernet (FCoE), N_Port ID virtualization and virtualized servers as more users adopt blade strategies and combine them with virtualization. This technology storm is gathering strength around blades, rather than standalone servers, because of the way blades share resources, he says.
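NPIV as Foskett describes it can be modeled as several virtual port identities logged in through one physical N_Port, with each identity free to move between physical ports. The classes and WWPN values below are an illustrative model, not a real Fibre Channel library:

```python
# Model of NPIV: one physical HBA port carries extra virtual WWPNs, so
# each virtual server gets its own SAN identity that zoning and LUN
# masking can target -- and that identity can move with the VM.
class PhysicalNPort:
    def __init__(self, wwpn):
        self.wwpn = wwpn
        self.virtual_ports = {}   # vm name -> virtual WWPN

    def assign_vport(self, vm_name, virtual_wwpn):
        self.virtual_ports[vm_name] = virtual_wwpn

    def release_vport(self, vm_name):
        return self.virtual_ports.pop(vm_name)

hba_a = PhysicalNPort("20:00:00:25:B5:00:00:0A")
hba_b = PhysicalNPort("20:00:00:25:B5:00:00:0B")
hba_a.assign_vport("sql-vm", "20:00:00:25:B5:01:00:01")

# Migrate the VM: its virtual WWPN follows it to the other physical
# port, so the fabric sees the same identity and the same storage view.
moved_wwpn = hba_a.release_vport("sql-vm")
hba_b.assign_vport("sql-vm", moved_wwpn)
print(hba_b.virtual_ports["sql-vm"])  # unchanged WWPN after the move
```

This is the property that makes NPIV attractive for blade-plus-virtualization shops: zoning follows the guest, not the slot it happens to be running in.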
"Blades share physical resources on the back end," says Foskett. "With blades and server virtualization, you can have more than one server or more than one instance of the operating system all passing through a single interface. Therefore, they can oversubscribe those resources and make demands on them that can't be met."
Even though NSCU's Chau is ready to run a new banking system on his blades, he cautions anyone investigating blade strategies that it "definitely brings more complexity to the environment." NSCU is revamping the hardware architecture to support its BI initiative by implementing four HP ProLiant BL680c G5 Server blades. One runs Microsoft SQL Server 2005, and there is one each for SQL Server 2005 Analysis Services, Reporting Services and Integration Services. (Each blade has four sockets of quad-core processors and 64GB of RAM, and all connect back into an HP StorageWorks EVA 8100 array.)
"There has to be some sort of application profiling," says Chau. "For example, SQL is I/O-intense and file printers are not that intense. So you have to mix and match. You can't just rely on the hardware infrastructure and what it can do. You have to do some performance profiling or there comes a point of diminishing return," he adds. (See "Performance tips," below.)
Blades on the cutting edge
Take Verari Systems Inc., which has a product that supports two quad-core processors (from Intel Corp. or Advanced Micro Devices Inc.) on a single 1U vertical blade and up to 96 blades.
Verari's selling pitch is that the HPs and IBMs of the world are offering relatively low capacity in dense packaging, while Verari builds its blade servers from standard-sized motherboards tipped up on their ends and slipped into a proprietary rack that matches the floor space density of an IBM BladeCenter or HP c-Class.
Perhaps what's most interesting about Verari's systems is its SB5165XL StorageServer and disk blades, which are essentially used to create NAS and iSCSI SAN appliances within the rack. These can be mixed in with its server blades in the same rack. Verari bills its SB5165XL StorageServer product as the industry's first high-density, blade-based, all-in-one storage appliance.
"[Verari's] systems are designed and tailored for the ultra-large environments, people with very large data centers," says Greg Schulz, founder and senior analyst at StorageIO Group, Stillwater, MN. "This is what HP had in mind when they said they were going to take their c-Class and attach PolyServe to it. What they're really following is the bulk storage market," he says.
Jim Damoulakis, CTO at Framingham, MA-based GlassHouse Technologies Inc., says many vendors are anticipating the I/O problems users might have when they begin testing the limits of their blade infrastructures. A key development, he says, is that some blades are now shipping with PCI Express ports. "That's high-bandwidth, back-end ports. You can plug in an InfiniBand card and go to a concentrator. Essentially, the software presents virtual NICs and it's InfiniBand on the back end, and Ethernet and Fibre Channel on the front end," he says.
Last month, Verari announced a partnership with Xsigo Systems Inc. to help make it easier to virtualize I/O between blade servers and storage. The partnership is designed to let customers reduce the number of storage and network connections, and make it simpler to manage the rising number of virtual servers connected to networks and storage.
When talking about blade centers, many industry analysts agree, it's the connectivity issues that have users looking to vendors for help. That's what blade switch specialist Blade Network Technologies Inc. and 10Gb Ethernet vendor NetXen Inc. were anticipating last year when they announced what they called "the industry's first solution offering 10Gb connectivity for blade servers." In April, Blade Network Technologies announced its new RackSwitch. The devices are 1U 1Gb and 10Gb Ethernet switches the company says will manage network connectivity for server blades from competing vendors.
Martin MacLeod, a London-based consultant who authors a blog on blades in his spare time, says users are being inundated with an impressive range of options from vendors trying to take advantage of the consolidation and virtualization craze. He has dedicated a spot in his blog for "peripheral blade vendors" as technologies converge. In his practice, he often fields questions about purchasing and chargeback, he says. "The first blade encounters all of the cost: the new enclosure, the power supplies and the switches," he says. "So if HR requires another blade, do we need to buy another enclosure? Is that an HR cost or an IT cost?" (See "Buying tips," below.)
Scott Lowe specializes in virtualization for ePlus Technology Inc., a reseller headquartered in Herndon, VA. For many of his customers, says Lowe, blade servers have no real impact on storage unless they're being coupled with virtualization. Because blades are so popular, and so many of the sales pitches around them highlight the simplicity of buying and using them, Lowe sees some users underestimate their impact.
"I've seen a lot of customers not think about it," he says. "You need to realize that a blade is a bona fide, fully fledged, fully independent piece of hardware. You need to incorporate that into your design just like you would any other switch or server."