NYU's cost-effective open source network powers research
At ONUG's AI Networking Summit, NYU presented how it replaced its network with open networking to achieve low latency and higher speeds for research applications.
Network teams typically focus on the factors that burden a network and keep it from performing well. Sometimes, however, the network itself is the bottleneck.
NYU experienced this firsthand in 2017 when its network couldn't handle the massive amounts of data required for its research applications. The university needed to create a network that could accommodate massive amounts of data transfer, said Robert Pahle, senior research scientist of research technology at NYU, during a presentation at ONUG's AI Networking Summit conference.
When NYU began the project, it initially tried to build the network with traditional practices. The university soon realized, however, that it needed greater flexibility. For example, traditional switches don't expose the detailed information about network traffic that researchers need to collaborate with partner organizations. This need to connect with partners more freely led NYU to create an open source network, Pahle said.
What is open networking?
Open networking is an approach that decouples network hardware from software. Instead of proprietary, vertically integrated systems, organizations can use commodity hardware -- cheaper, interchangeable components -- and open standards. This gives enterprises greater flexibility to choose hardware that better suits their specific needs.
According to Gavin Cato, CTO and head of platform solutions at Celestica, networking is undergoing a transformation -- similar to the telecom industry -- from modular systems with proprietary hardware to customizable, open networks with commodity hardware. Networks with these capabilities better suit an organization's use cases and requirements, such as high speeds and low latency.
"If you have a massive pipe, you don't need a bunch of functions for [quality of service] to deliver your performance. The pipe itself will deliver the performance; it's how you structure that architecture," Cato said.
This directly relates to NYU's situation. Rather than adding new capabilities to restructure the existing network to match their needs, the university built a new open network architecture to support massive data transfer and meet their low-latency requirements.
NYU's transition to open networking
When NYU planned to create a new network for its research applications, it faced two significant problems with traditional networking:
- Scalability. Most traditional networking hardware can't scale efficiently and wasn't sufficient for the university's high-bandwidth, low-latency applications.
- Cost. Large, proprietary switches that could better fulfill those needs weren't cost-effective for the upgrade project.
The core issue was that NYU needed a flexible, scalable and high-performing network, and it needed to build that network cost-effectively. This led to the decision to build an open network with small, lean devices. The simplicity and reduced cost of these devices enabled NYU to deploy them across the network quickly and strategically, Pahle said, and to adjust capacity where necessary.
Open networking addresses implementation challenges
According to Pahle, NYU's transition to open networking helped the organization support its research applications in a few ways.
Overcoming hardware limitations
Before NYU built its open network, the organization struggled to provide high-speed networking for researchers. For example, it could provide 100 Gbps ports, but NYU's desktop computers couldn't support such high speeds, Pahle said. Most consumer-level network interface cards capped out at around 35 Gbps, and many vendors didn't support higher-speed cards in desktops due to cooling issues.
When NYU created its open network, the organization built a lab to experiment and see how the architecture would respond to changes. Because the network was based on open source hardware, NYU could use advanced chips to enable research applications.
Adding additional services to scale
NYU's architecture, built using small, lean devices, resulted in an extremely fast network. To address the scaling problem, NYU distributed multiple storage services across the infrastructure. According to Pahle, round-trip latency across the network measures 400 microseconds.
"We're [distributing] storage systems [across] campus, and because the network is so fast and scalable, it doesn't really matter where they're located," Pahle said. "[We] don't have latency problems."
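The microsecond-scale round trips Pahle cites can be illustrated with a simple echo measurement. The sketch below is hypothetical, not NYU's tooling: it times a one-byte TCP echo round trip in Python, using a local echo server as a stand-in for a remote storage node.

```python
import socket
import threading
import time

def echo_server(sock):
    # Accept one connection and echo bytes back until the peer closes.
    conn, _ = sock.accept()
    with conn:
        while data := conn.recv(1024):
            conn.sendall(data)

def measure_rtt(host, port, samples=100):
    """Return the median round-trip time, in microseconds, of a
    1-byte echo over an established TCP connection."""
    with socket.create_connection((host, port)) as s:
        # Disable Nagle's algorithm so each tiny write goes out immediately.
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        times = []
        for _ in range(samples):
            start = time.perf_counter()
            s.sendall(b"x")
            s.recv(1)
            times.append((time.perf_counter() - start) * 1e6)
    times.sort()
    return times[len(times) // 2]

if __name__ == "__main__":
    # Measure against a local echo server; on a real network, point this
    # at a storage node's address instead.
    srv = socket.create_server(("127.0.0.1", 0))
    host, port = srv.getsockname()
    threading.Thread(target=echo_server, args=(srv,), daemon=True).start()
    print(f"median RTT: {measure_rtt(host, port):.0f} us")
```

On a flat, low-latency fabric like the one NYU describes, numbers in the hundreds of microseconds are what make storage placement effectively location-independent.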
In addition to storage services, NYU is adding edge computing nodes in each building. This cost-effective approach lets NYU deploy one high-powered server per building as shared infrastructure, Pahle said. All NYU researchers can use it in their studies to get an immediate analysis of the environment, then process the data in real time in a closed-loop system to analyze and make decisions based on the information.
"It's all enabled by this super flexible open source infrastructure," Pahle said.
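The closed-loop pattern Pahle describes -- collect readings at the edge, analyze them in real time, and act on the result -- can be sketched in a few lines. Everything below is a hypothetical illustration (the sensor, threshold and "adjust"/"hold" actions are invented), not NYU's actual system.

```python
import random
import statistics

def read_sensor():
    # Placeholder for an instrument in the building; a real edge node
    # would read from lab equipment or environmental sensors.
    return 20.0 + random.gauss(0, 0.5)

def closed_loop(window=10, threshold=21.0, steps=50):
    """Collect readings, analyze a rolling window in real time, and
    emit a decision at each step based on the analysis."""
    readings, decisions = [], []
    for _ in range(steps):
        readings.append(read_sensor())        # collect
        recent = readings[-window:]           # immediate analysis
        if statistics.mean(recent) > threshold:
            decisions.append("adjust")        # act on the environment
        else:
            decisions.append("hold")
    return decisions

if __name__ == "__main__":
    print(closed_loop()[:5])
```

The point of the low-latency fabric is that the "analyze" step can run on a shared building server rather than on the instrument itself, without breaking the real-time loop.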
Deanna Darah is site editor for Informa TechTarget's SearchNetworking site.