Is network traffic monitoring still relevant today?

New privacy-focused variants of the DNS protocol are making traditional network traffic monitoring harder. The SANS Institute's Johannes Ullrich explains what this means for enterprises.

The internet started as an open and collaborative project, and security was a feature that was bolted on over time. Many of the internet's core protocols -- in particular, the domain name system and HTTP -- were not designed with privacy in mind.

With HTTP, we saw a surge in HTTPS (HTTP over TLS) deployments over time; but even with current versions of Transport Layer Security (TLS), not everything a user does is private. More recently, extensions to TLS and variants of the domain name system (DNS) protocol have been developed to close the last few remaining holes.

This, however, can throw network traffic monitoring into crisis mode. Most organizations have a mixed track record for securing endpoints. To make up for gaps in endpoint security -- in particular as IoT and BYOD initiatives have become more popular -- network traffic monitoring has been used to supplement incomplete endpoint controls. These new security standards will make you question the validity of that approach.

Let's start with DNS. In many ways, DNS is a big success story when it comes to internet protocols. Developed in the late 1980s, it has shown its ability to scale by orders of magnitude as the number of hosts and the amount of traffic DNS deals with has exploded. Even completely new protocols like IPv6 are able to use DNS with only relatively minor adjustments.

But DNS has two big weak points: All DNS queries and responses are sent in the clear, and the authentication of DNS responses is weak.

The first attempt at adding security to DNS was the Domain Name System Security Extensions (DNSSEC) suite. While DNSSEC is very good at validating the authenticity of a response, it is also a very complex protocol, and it does not protect the confidentiality of DNS messages.
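
To make that distinction concrete, here is a minimal sketch of a DNSSEC-aware lookup using the third-party dnspython library; the library choice and the resolver address are assumptions for the example, not anything the protocol prescribes:

```python
import dns.message
import dns.query
import dns.rdatatype

# Ask a resolver for an A record and set the DNSSEC OK (DO) bit so that
# signatures (RRSIG records) are included in the response.
query = dns.message.make_query("example.com", dns.rdatatype.A, want_dnssec=True)
response = dns.query.udp(query, "8.8.8.8", timeout=2)  # example resolver

# If the zone is signed, the answer section carries RRSIG records alongside
# the A records -- but the query and response still travel in the clear.
signed = any(rrset.rdtype == dns.rdatatype.RRSIG for rrset in response.answer)
print("RRSIG records present:", signed)
```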

The DNS infrastructure distinguishes between recursive and authoritative name servers. Users typically connect to a small number of recursive name servers operated by their network provider. These recursive name servers then find answers by connecting to one of the many authoritative name servers distributed around the internet.

From a privacy perspective, the connection from the user to the recursive name server is critical. Anybody able to perform network traffic monitoring between the user and a recursive name server can catalog all the DNS lookups performed by the user and may also be able to manipulate the responses returned to specific users.

A properly configured TLS client and server can withstand state-of-the-art attempts to eavesdrop on network traffic. However, DNS -- and, in some cases, TLS itself -- does still release some sensitive information; for example, the websites someone is visiting or the operating system and browser they are using to visit the site. As a result, DNS has become a major source of network traffic intelligence.

Two new transport options have been developed to improve the confidentiality of DNS. Traditionally, DNS uses simple unencrypted User Datagram Protocol (UDP) packets. While UDP lacks the reliability of the alternative transport option, Transmission Control Protocol (TCP), it is simple and perfectly suited for the small queries and responses typically used for DNS.
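
To see just how simple and exposed traditional DNS is, the following sketch hand-builds a query and sends it over UDP using only the Python standard library; the resolver address is an arbitrary example:

```python
import socket
import struct

def build_query(name: str, txid: int = 0x1234) -> bytes:
    # 12-byte DNS header: ID, flags (recursion desired), QDCOUNT=1, rest zero.
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # QNAME is a sequence of length-prefixed labels ending in a zero byte.
    qname = b"".join(bytes([len(l)]) + l.encode() for l in name.split(".")) + b"\x00"
    # QTYPE=A (1), QCLASS=IN (1).
    return header + qname + struct.pack(">HH", 1, 1)

query = build_query("example.com")
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(2)
sock.sendto(query, ("8.8.8.8", 53))  # example recursive resolver
response, _ = sock.recvfrom(512)

# Both datagrams cross the network unencrypted and unauthenticated.
print(f"{len(query)}-byte query, {len(response)}-byte response")
```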

But UDP source addresses can easily be spoofed and, as a result, DNS has often been abused for distributed reflective denial-of-service (DoS) attacks. DNS requests can be much smaller than DNS responses, so an attacker can use a number of small spoofed requests to direct large amounts of traffic to a victim.
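
The arithmetic behind the amplification is simple; the byte counts below are illustrative assumptions, not measurements:

```python
# A small spoofed query elicits a much larger response aimed at the victim.
query_size = 60       # bytes: typical DNS query over UDP (assumed figure)
response_size = 3000  # bytes: large response, e.g., ANY with DNSSEC data (assumed)
print(f"amplification factor: roughly {response_size / query_size:.0f}x")
```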

One quick fix for the spoofed DNS query problem is DNS cookies, which are starting to appear in commonly used DNS implementations. These cookies prevent some spoofing attacks and are supposed to add a level of spoofing protection to DNS similar to what TCP provides.
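
As a rough sketch of the mechanism, a client attaches a random cookie as an EDNS option (option code 10, per RFC 7873) and a supporting server echoes it back; the dnspython library and the resolver address below are assumptions for illustration:

```python
import os
import dns.edns
import dns.message
import dns.query

# Attach a random 8-byte client cookie (EDNS option code 10, RFC 7873).
# A supporting server echoes it back together with its own server cookie,
# so an off-path attacker who never saw the cookie cannot forge answers.
client_cookie = os.urandom(8)
query = dns.message.make_query("example.com", "A")
query.use_edns(edns=0, options=[dns.edns.GenericOption(10, client_cookie)])

response = dns.query.udp(query, "8.8.8.8", timeout=2)  # example resolver
for option in response.options:
    print("EDNS option", option.otype, "data:", option.data.hex())
```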

The advantage of DNS cookies over DNSSEC is that they not only protect against some spoofing attacks, but they also address the problem of reflective DoS attacks, which have been a big problem in recent years. DNS cookies are also very easy to deploy. On the other hand, they do not provide the strong cryptographic authentication DNSSEC provides and -- just like DNSSEC -- they do not protect the confidentiality of DNS traffic.

The first protocol for protecting DNS confidentiality to find somewhat widespread support was DNS over TLS. DNS over TLS does for DNS what HTTPS did for HTTP: It uses the existing DNS protocol but transmits requests and responses through a protected TLS tunnel. This provides confidentiality between clients and recursive resolvers. The protocol uses TCP port 853 -- not the default DNS port 53. As a result, its use can easily be detected and blocked.
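
A minimal sketch of a DNS over TLS lookup, again assuming the dnspython library and using Quad9 as an example resolver:

```python
import ssl
import dns.message
import dns.query

# Same DNS wire format as before, but tunneled through TLS on TCP port 853.
context = ssl.create_default_context()
query = dns.message.make_query("example.com", "A")
response = dns.query.tls(
    query, "9.9.9.9", ssl_context=context, server_hostname="dns.quad9.net"
)  # Quad9 used as an example DoT resolver

# The lookup is now confidential in transit, but the dedicated port makes
# the protocol easy to detect and block on the network.
print(response.answer)
```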

To prevent the connection from being blocked, HTTPS can be used as a transport mechanism. DNS over HTTPS (DoH) uses HTTPS requests and responses to specifically configured URLs to carry DNS queries. This protocol not only obscures the content of the DNS queries and responses, but it also hides the fact that DNS is being used at all. DNS traffic becomes more or less indistinguishable from other HTTPS traffic. The only hint that a user might be using DoH is the IP address of the endpoint.
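
In its binary form (RFC 8484), the ordinary DNS wire format simply rides inside an HTTPS request. A minimal sketch using the requests and dnspython libraries, with Cloudflare's public endpoint as an example:

```python
import requests
import dns.message

# RFC 8484 binary DoH: POST the standard DNS wire format inside an HTTPS
# request. On the network this looks like any other HTTPS POST.
query = dns.message.make_query("example.com", "A")
resp = requests.post(
    "https://cloudflare-dns.com/dns-query",  # example DoH endpoint
    data=query.to_wire(),
    headers={"content-type": "application/dns-message"},
    timeout=5,
)
answer = dns.message.from_wire(resp.content)
print(answer.answer)
```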

One of the more popular DoH endpoints is operated by Cloudflare Inc. Cloudflare terminates HTTPS for many websites, which makes it even harder to distinguish ordinary HTTPS traffic from DoH. And DoH endpoints can be set up with any cloud provider, making them difficult to identify.

Critics of DoH have noted that it significantly increases the traffic volume generated by DNS, as each DNS query is now encapsulated in an HTTPS request. This may be less of an issue if newer versions of the protocol, like HTTP/2 and QUIC, are used, but so far, DoH endpoints appear to prefer the older HTTP/1.1.

The client can send either binary data encoded just like a traditional UDP DNS query or JSON-encoded data, which adds substantial additional traffic and requires more processing power to parse. Networks then won't be able to extract the information they need to defend themselves without a robust TLS interception solution.
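
For contrast with the binary form shown earlier, here is the JSON flavor, sketched against Cloudflare's documented JSON API as an example; the answer comes back as verbose JSON rather than the compact wire format:

```python
import requests

# JSON-style DoH: a plain HTTPS GET with an accept header; the response is
# human-readable JSON, easier to consume but bulkier than binary DNS.
resp = requests.get(
    "https://cloudflare-dns.com/dns-query",
    params={"name": "example.com", "type": "A"},
    headers={"accept": "application/dns-json"},
    timeout=5,
)
for answer in resp.json().get("Answer", []):
    print(answer["name"], answer["type"], answer["data"])
```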

For most standard operating systems, all it takes is installing a TLS proxy and adding its certificate authority to the endpoint's trust store. In Windows, for example, this can be done easily with a group policy. For IoT devices, on the other hand, it can be very difficult to accomplish, especially at scale.
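
One way to see interception at work is to inspect who issued the certificate a connection actually receives; this standard-library sketch (hostname chosen arbitrarily) prints an enterprise CA when a TLS proxy is in the path and a public CA otherwise:

```python
import socket
import ssl

# Inspect the issuer of the certificate presented for a host. Behind a TLS
# interception proxy, the issuer is the enterprise CA added to the endpoint's
# trust store rather than a public certificate authority.
context = ssl.create_default_context()
with socket.create_connection(("example.com", 443), timeout=5) as raw:
    with context.wrap_socket(raw, server_hostname="example.com") as tls:
        cert = tls.getpeercert()
        issuer = {key: value for rdn in cert["issuer"] for key, value in rdn}
        print("issuer:", issuer.get("organizationName"), issuer.get("commonName"))
```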

But it's not just network traffic monitoring that is affected by these new protocols. There is also an often-overlooked risk to the end users adopting these technologies. With providers like Cloudflare concentrating more and more traffic, there are now chokepoints ideally positioned to do exactly what these protocols are supposed to prevent: inspect network traffic and put users' privacy at risk.

While Cloudflare specifically states that they will not use customer data, there are no technical guarantees. Even if the providers aggregating the traffic are not using the data, the security of their networks is critical. A breach of one of these providers could have far-reaching consequences for anybody using their services.

What is missing at this point is a method to diversify these services in order to prevent this concentration of traffic. Tor is probably the most well-known effort to accomplish that, but it isn't a viable solution for most internet users.

New technologies like TLS 1.3, DNS over TLS and DNS over HTTPS pose significant challenges to traditional passive network monitoring approaches. Endpoint security and configuration will become more important to control these technologies, and robust TLS interception will be an important tool to prevent the abuse of these technologies -- in particular, DoH.

Editor's note: Dr. Ullrich will give a detailed talk on "The New Internet" at SANS Cyber Defense Initiative 2018 taking place in December in Washington, D.C.

About the author: Dr. Johannes Ullrich is a SANS Institute fellow and director of the SANS Internet Storm Center (ISC), which is dedicated to monitoring malicious activity and cyberattacks. In 2000, he founded DShield, a collaborative firewall log correlation system that would later become the foundation for ISC's data collection.
