Cisco Performance Routing (PfR)
Cisco Performance Routing (PfR) is a technology for forwarding network packets based on intelligent path control. Packets are units of data routed between an origin and a destination on the internet or other packet-switched networks. Normally, a routing protocol determines the shortest path between an origin and destination; however, the shortest path could be heavily trafficked. Instead of picking the shortest path, Cisco PfR -- which Cisco does not consider a routing protocol in the strict sense -- picks the best path based on its performance.
PfR measures metrics such as throughput, delay, packet loss and reachability to find the best-performing path over which to send packets. Beyond performance metrics, PfR can also choose paths based on path status, policies and application type. PfR can likewise be used to improve application delivery and WAN efficiency.
Factors such as WAN latency and insufficient bandwidth can degrade performance. PfR helps administrators address network performance issues by allowing a router to choose the best-performing path while maintaining application performance.
Cisco originally built PfR on its earlier Optimized Edge Routing (OER) technology. OER could select different paths in a network but was not considered a routing protocol.
How performance routing works
Performance routing is a form of intelligent path control, which performs path selection based on real-time performance metrics. These metrics are obtained by monitoring application performance and by collecting traffic statistics, typically at a primary data center. This differs from classic routing protocols, which cannot monitor performance issues and switch to alternative paths in response to conditions such as changing traffic flow.
Because PfR is based on Cisco's OER, it uses the same phases for path selection. The learn (profile) phase examines traffic flow, throughput and delay, and creates monitored traffic classes. The measure phase collects and calculates performance metrics for those traffic classes. The apply policy phase then compares the performance metrics against configured thresholds and priorities. The control/enforce phase enacts policy-based routing, while the verify phase monitors the traffic classes again to confirm performance.
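As a rough sketch, these phases correspond to settings under PfR master controller configuration on IOS. The following fragment is illustrative only -- command availability and exact syntax vary by IOS release:

```
! PfR master controller -- phase-related settings (illustrative sketch)
pfr master
 ! Learn phase: profile traffic classes by throughput and delay
 learn
  throughput
  delay
 ! Measure phase: combine passive (NetFlow) and active (IP SLA) monitoring
 mode monitor both
 ! Control/enforce phase: let PfR move traffic rather than only report
 mode route control
```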
When one path starts to experience performance issues -- due to something like dropped packets or loss of reachability -- PfR moves traffic to an alternative path according to policies the user has configured. This helps improve application performance and availability. PfR can also load-balance traffic across the available paths.
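A failover policy of this kind can be expressed as a PfR map. The prefix list, map name and thresholds below are placeholders, and exact syntax differs between OER and later PfR releases:

```
! Traffic classes PfR should watch (example prefix)
ip prefix-list APP-PREFIXES seq 10 permit 10.20.0.0/16
!
pfr-map APP-POLICY 10
 match traffic-class prefix-list APP-PREFIXES
 ! Declare the current exit out of policy if delay exceeds 150 ms
 set delay threshold 150
 ! Move out-of-policy traffic to a compliant exit automatically
 set mode route control
!
! Attach the policy on the master controller
pfr master
 policy-rules APP-POLICY
```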
PfR uses two device roles in a network -- border routers (BRs) and a master controller (MC). BRs are routers with one or more interfaces connected to external networks; they relay performance metrics to the MC. The MC communicates with all the border routers to coordinate monitoring and enforce policy. The MC and BR roles can run on the same router or on separate routers.
PfR also uses three interface types: internal (which connects to internal networks), external (which connects to external networks) and local (which is used for communication between the MC and BRs).
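A minimal role assignment might look like the following sketch, where the addresses, key chain and interface names are all placeholders:

```
! Shared key chain authenticating MC-to-BR communication
key chain PFR-KEY
 key 1
  key-string pfrsecret
!
! Master controller: register one border router and classify its interfaces
pfr master
 border 10.1.1.2 key-chain PFR-KEY
  interface GigabitEthernet0/0 internal
  interface GigabitEthernet0/1 external
!
! Border router: 'local' is the interface used to talk to the MC
pfr border
 local Loopback0
 master 10.1.1.1 key-chain PFR-KEY
```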
Configuring performance routing
There are many ways a user can configure PfR, and they can do so for a variety of reasons. Some example configurations are as follows:
- The PfR master controller can be configured to automatically learn prefixes based on the highest outbound traffic volume or delay, using NetFlow Top Talker functionality. The learn command is used to enter this configuration mode on the MC.
- The master controller can be configured to aggregate learned prefixes by type. This is done with the aggregation-type (PfR) command in PfR Top Talker and Delay learning mode.
- PfR can be configured to simplify the learning of traffic classes through a learn list configuration mode.
- Users can manually configure PfR to create traffic classes for monitoring and optimization. This allows users to define exact prefixes.
- Prefix Traffic Class Configuration can be used to select a prefix or range of prefixes for monitoring.
- Reachability -- the relative percentage or maximum number of unreachable hosts -- can be specified on the master controller using the unreachable (PfR) command.
- Users can configure PfR to engage in passive, active, or passive and active monitoring.
- The range of utilization across all exit links can be configured and calculated for both egress and ingress traffic.
- To solve potential overlapping policies, users can run a resolve function, which allows them to set a priority to PfR policies. This can be done using the resolve (PfR) command in PfR master controller configuration mode.
- Users can also configure PfR policies based on the cost of each exit link in their network. This can be done by configuring the MC to send traffic over exit links that are the most cost-effective in terms of bandwidth.
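Several of the options above come together under master controller configuration. The fragment below is a hedged sketch -- the prefix list BRANCH-NETS is assumed to be defined elsewhere, the thresholds are arbitrary examples, and learn-list syntax varies by IOS release:

```
pfr master
 ! Automatic learning of top-talker and top-delay prefixes
 learn
  throughput
  delay
  aggregation-type prefix-length 24
  ! Learn list: limit learning to a named set of traffic classes
  list seq 10 refname LEARN-BRANCH
   traffic-class prefix-list BRANCH-NETS
   throughput
 ! Reachability policy: how many unreachable flows are tolerated
 unreachable relative 50
 ! Keep utilization across all exit links within a 20 percent range
 max-range-utilization percent 20
 ! Resolve overlapping policies by priority (lower number wins first)
 resolve delay priority 1 variance 10
 resolve utilization priority 2 variance 15
```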
Users who want to learn more about PfR configuration can consult Cisco's documentation.
Benefits of performance routing
Some benefits of Cisco Performance Routing are as follows:
- improves network and application performance;
- can improve application availability;
- requires only a small amount of configuration;
- offers a large number of policy options;
- reduces WAN operating expenses; and
- efficiently distributes traffic based on load, performance and other metrics.
Performance routing versions
Cisco PfR has gone through several iterations since it was originally built on Optimized Edge Routing (OER). Each version has focused on ease of deployment and scalability. OER offered basic WAN path selection, with provisioning and configuration applied per site and per policy.
The next version, PfRv2, added the ability to scale up to 500 sites, application-based path selection, simplified policies and more configuration options.
The current version, PfRv3, focuses on scalability up to 2,000 sites, centralized provisioning, automatic discovery, support for multiple data centers and multiple next hops per DMVPN network.