Linkerd vs. Istio battle heats up as service mesh gains steam
Service mesh has gone from a relatively unknown entity to a mainstream topic in 2019, but it's too early to declare a TKO in the Linkerd vs. Istio prizefight.
SAN DIEGO -- A year has made a huge difference in enterprise maturity around service mesh, but there's no dominant, Kubernetes-like tool set that acts as an industry standard.
There is, however, one central battle for that title: Linkerd vs. Istio, a contest that captured attention among KubeCon attendees here this week. Linkerd was first to market with service mesh and is governed by the Cloud Native Computing Foundation, while Istio faces community trepidation about its governance. But with powerful industry backing for Istio from Google and IBM, Linkerd is clearly the underdog -- for now.
"As Kubernetes scales up, the next thing people think of is Istio," said Suresh Visvanathan, distinguished architect at Verizon subsidiary Yahoo, which has about 5% of its production workloads connected to Istio. "After that, they think of serverless and Knative, and that heavily relies on Istio for routing. They seem like a package."
Istio vs. Linkerd: An early container security edge
A year ago, IT pros were still learning the basic concepts of service mesh and what problems it might solve for them. This year, enterprises such as the U.S. Air Force, Nordstrom, Yahoo and Freddie Mac discussed their first production deployments of the technology, as well as the initial pros and cons of Linkerd vs. Istio.
Linkerd and its commercial backer, Buoyant, were the first to offer service mesh, but Kubernetes, containers and microservices security were what really put service mesh on the map. Istio took an early lead in support for those architectures, particularly in security, and is still closely associated with Kubernetes. Istio is sold as a package alongside Kubernetes and Knative by Google and IBM, and was first to offer features such as mutual TLS (mTLS) and distributed tracing for Kubernetes workloads.
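For readers unfamiliar with the feature, mesh-wide mTLS in Istio of this era was switched on with a single policy object rather than per-app code changes. The following is a minimal sketch based on Istio's pre-1.5 authentication API; it is illustrative only and not drawn from any deployment described in this article.

```yaml
# Mesh-wide policy (Istio 1.4-era authentication API): require mTLS
# for all workload-to-workload traffic in the mesh.
apiVersion: authentication.istio.io/v1alpha1
kind: MeshPolicy
metadata:
  name: default
spec:
  peers:
  - mtls: {}
---
# Client side: tell the Envoy sidecars to originate Istio mutual TLS
# when calling other services in the mesh.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: default
  namespace: istio-system
spec:
  host: "*.local"
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
```

Applying the server-side policy without the matching DestinationRule is a classic source of outages in this version line, which is part of the operational complexity users describe later in this article.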
"We're most interested in the feature set around security -- we're not really looking at the performance for now," said Lixun Qi, senior tech lead at Freddie Mac, in an interview after a KubeCon + CloudNativeCon North America 2019 presentation here this week. "We're different from other customers, where CPU and memory [metrics] are more critical."
Freddie Mac has five or six apps in production with service mesh and began its Istio deployment ahead of its broader effort to containerize all its apps on Kubernetes. In addition to mTLS, Qi mentioned Istio's integration with Secure Production Identity Framework for Everyone (SPIFFE), which supports identity management among heterogeneous environments such as Freddie Mac's on-premises VMware and Amazon Elastic Container Service for Kubernetes deployments. SPIFFE support is on the Linkerd version 2 roadmap, according to Buoyant CEO William Morgan, but it isn't available yet.
Linkerd vs. Istio: Closing the gap, plus performance and ease of use
While Linkerd offered the first service mesh, it ran on a Java virtual machine, whose memory consumption was too heavy to be practical in Kubernetes and container environments. Linkerd addressed this with version 2, released in September 2018, but it took until version 2.3 in April for mTLS to reach a stable release. This month, version 2.6 added support for distributed tracing data collection with Jaeger; Istio has supported distributed tracing with OpenTracing since at least early 2018, but its upstream Jaeger support remains in beta. The Linkerd vs. Istio feature gap has begun to close.
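In Linkerd 2, mTLS is enabled by default for meshed traffic, while the distributed tracing support added in version 2.6 is opt-in via a pod annotation. The following is a hedged sketch of that opt-in; the workload name, image and collector address ("oc-collector.tracing:55678", an OpenCensus collector) are assumptions for illustration, not details from any deployment in this article.

```yaml
# Sketch: opting a workload's pods into Linkerd 2.6 span export.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                        # illustrative workload name
spec:
  selector:
    matchLabels: {app: web}
  template:
    metadata:
      labels: {app: web}
      annotations:
        # Assumed collector address; point this at your own
        # OpenCensus collector service, which forwards to Jaeger.
        config.linkerd.io/trace-collector: oc-collector.tracing:55678
    spec:
      containers:
      - name: web
        image: example/web:latest  # placeholder image
```

Because the annotation sits on the pod template, tracing can be rolled out one workload at a time rather than mesh-wide.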
Even before Linkerd began to close that feature gap, however, early service mesh adopters most interested in the technology's role in advanced IT monitoring and observability found that Linkerd version 2 introduced less latency into their environments than Istio did.
"We performance tested Istio and Linkerd just over a year ago, and chose Linkerd," said Cody Vandermyn, senior engineer at Nordstrom, a large retailer headquartered in Seattle. "At the time, our tests found that for us, Linkerd introduced less latency and met our requirements around memory footprint."
A published benchmark test that pitted Linkerd against Istio in December 2018 produced similar results. Istio maintainers attributed the latency to a problem in the project's Mixer control plane software, which was removed from Istio's telemetry implementation in the version 1.4 release Nov. 14.
Another early knock on Istio has been the complexity of its deployment and management. For users who need advanced features, these costs are worth paying, but in some shops, they haven't been.
"Istio was a pain -- it kind of got thrown in [to our Kubernetes environment] before we had a lot of experience, and we weren't ready yet," said Wes Kanazawa, senior DevOps engineer at Primerica, a financial services firm in Duluth, Ga., which eventually removed Istio. Kanazawa said he planned to evaluate Linkerd at KubeCon this week.
Initial service mesh choices not set in stone
The U.S. Air Force, which has rolled out Istio widely in a DevSecOps environment over the last two years, also hasn't ruled out adding Linkerd and other service mesh tools to its arsenal as it matures.
"We wanted to move fast, and [Istio's] Envoy [proxy] was the clear winner," when the Air Force first looked at service mesh projects in 2018, said Nicolas Chaillan, chief software officer for the military branch, in a Q&A session after a presentation here. "But in the long run, we want to use two or three service mesh technologies, and we're looking at other options, including commercial products."
While Istio is closely associated with Knative, another priority for Chaillan's team, he said in a follow-up interview that he's had some difficulty installing Istio for use with containers and serverless functions within the same cluster, and hopes the community will make that task easier.
Yahoo's Visvanathan also said he planned to evaluate Linkerd 2 in 2020 as his company's service mesh deployment expands, and there are many items on his wish list for Istio.
"The amount of configuration that gets pushed to the [Envoy] proxy [in Istio] is huge, and I'd like to see them reduce that," he said. "We're also hoping to learn more about how we can use Istio in federated clusters spread out over different data centers without going through too many load balancers."
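Istio does ship a mechanism aimed at exactly this complaint: the Sidecar resource, which limits the set of services whose configuration is pushed to a namespace's Envoy proxies, instead of the default behavior of pushing configuration for the entire mesh. The following is a minimal sketch; the namespace name is illustrative and not taken from Yahoo's deployment.

```yaml
# Restrict what Istio pushes to the Envoy proxies in one namespace:
# only configuration for that namespace's own services plus the
# Istio control plane, rather than every service in the mesh.
apiVersion: networking.istio.io/v1alpha3
kind: Sidecar
metadata:
  name: default
  namespace: yahoo-apps   # illustrative namespace
spec:
  egress:
  - hosts:
    - "./*"               # services in the same namespace
    - "istio-system/*"    # Istio control plane services
```

In large clusters, scoping proxy configuration this way can significantly cut both sidecar memory use and the volume of updates Envoy must process.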