By: Craig S. Wright
Service provider takeaway: Regulatory and standards compliance can pose several challenges from both a business and a technical perspective. This section of the chapter excerpt from the book The IT Regulatory and Standards Compliance Handbook: How to Survive Information Systems Audit and Assessments focuses on validating firewalls.
Firewall configurations should be validated before they are put into production (a live environment). Validation means checking that the configuration enables the firewall to perform the security functions we expect of it and that it complies with the security policy of the organization. You cannot validate a firewall by looking at the policy alone. The policy is an indicator, but not the true state. The only way to ensure that a firewall is behaving correctly is to test it with the very thing it is meant to control: packets. To validate a firewall, you need to fire packets at it.
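As a sketch of that idea, the following Python fragment probes TCP ports through a firewall and compares the observed behavior with what the policy says should happen. The helper names and the `expected` mapping are illustrative assumptions; a thorough validation would use a crafted-packet tool and exercise every rule, not just TCP connects.

```python
import socket

def probe_tcp_port(host, port, timeout=3.0):
    """Attempt a TCP connection and classify the result.

    Returns "open" if the handshake completes, "closed" if the target
    actively refuses (RST), and "filtered" if the attempt times out,
    which usually means a firewall silently dropped the packet.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(timeout)
    try:
        sock.connect((host, port))
        return "open"
    except ConnectionRefusedError:
        return "closed"
    except socket.timeout:
        return "filtered"
    finally:
        sock.close()

def validate(host, expected):
    """Compare observed firewall behavior against the expected policy.

    `expected` maps port -> "open"/"closed"/"filtered"; any mismatch is
    a deviation between the rulebase on paper and what the firewall
    actually does to packets.
    """
    mismatches = {}
    for port, want in expected.items():
        got = probe_tcp_port(host, port)
        if got != want:
            mismatches[port] = (want, got)
    return mismatches
```

Run from outside the firewall against a host behind it, a non-empty result pinpoints exactly which ports behave differently from the written policy.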
Validated firewalls need to be constantly monitored for health and stability. Proper change management procedures and policies around the firewall rulebase should be observed at all times. Every time a new rule is made, the firewall should be validated again as a whole, not just for the particular rule that was added or changed.
Abnormal traffic patterns should be investigated immediately. If servers that normally receive a low volume of traffic are suddenly responsible for a significant portion of traffic passing through the firewall (either in total connections or bytes passed), then this might be a situation worthy of further investigation. While sudden peaks and spikes are to be expected in some situations (such as a Web server during a period of unusual interest), sudden peaks and spikes are also often signs of misconfigured systems or maybe even attacks in progress.
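To make that heuristic concrete, here is a minimal sketch that flags servers whose share of current firewall traffic is far above their historical baseline. The record format, the baseline mapping, and the threshold multiplier are assumptions for illustration, not prescriptions from the book.

```python
from collections import defaultdict

def flag_traffic_spikes(connections, baseline_share, threshold=5.0):
    """Flag servers whose share of firewall traffic jumped well above baseline.

    `connections` is an iterable of (server, bytes_transferred) records and
    `baseline_share` maps server -> its historical fraction of total bytes.
    A server is flagged when its current share exceeds `threshold` times
    its baseline, which may indicate misconfiguration or an attack.
    """
    totals = defaultdict(int)
    for server, nbytes in connections:
        totals[server] += nbytes
    grand_total = sum(totals.values()) or 1  # avoid division by zero
    flagged = []
    for server, nbytes in totals.items():
        share = nbytes / grand_total
        if share > threshold * baseline_share.get(server, 0.01):
            flagged.append((server, share))
    return flagged
```

The same comparison could be made on connection counts instead of bytes; both views are worth keeping, since a scan and a data exfiltration look different in each.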
Rule violations should be treated as incidents. Looking at traffic denied by your firewall may lead to interesting discoveries, but it is unlikely that even the smallest of organizations could review every log entry (assuming logging is working at all). This is especially true for traffic that originates from inside your network. The most common cause of this activity is a misconfigured system or a user who is unaware of traffic restrictions, but analysis of rule violations may also uncover attempts to pass malicious traffic through the device.
Detection of probes originating from inside the trusted network should be performed periodically. These are extremely interesting, as they most likely represent either a compromised internal system scanning Internet hosts or an internal user running a scanning tool; both scenarios merit attention.
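A periodic check along these lines is easy to automate against the deny log. The sketch below flags internal sources that appear against an unusually large number of distinct targets; the tuple layout, the network range, and the threshold are illustrative assumptions to be adapted to the environment.

```python
import ipaddress
from collections import defaultdict

def find_internal_scanners(denied_events, internal_net="10.0.0.0/8",
                           min_targets=20):
    """Spot internal hosts probing many distinct destinations.

    `denied_events` is an iterable of (src_ip, dst_ip, dst_port) tuples
    taken from the firewall's deny log. A source inside `internal_net`
    that hits an unusually large number of distinct (host, port) targets
    is likely a compromised machine or a user running a scanning tool.
    """
    net = ipaddress.ip_network(internal_net)
    targets = defaultdict(set)
    for src, dst, port in denied_events:
        if ipaddress.ip_address(src) in net:
            targets[src].add((dst, port))
    return {src: len(t) for src, t in targets.items() if len(t) >= min_targets}
```

Counting distinct targets rather than raw deny events keeps a single chatty-but-legitimate host from drowning out a slow, wide scan.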
Apart from those previously mentioned, firewall log files should be regularly monitored to check for significant events. These fall into three broad categories: critical system issues (such as hardware failures or performance bottlenecks), significant authorized administrative events (ruleset changes, administrator account changes), and network connection logs.
- Host operating system log messages: For the purposes of this document, we will capture this data at the minimum severity (maximum verbosity) required to record system reboots, which will record other time-critical OS issues, too.
- Changes to network interfaces: We need to test whether the default OS logging captures this information or whether the firewall software records it somewhere (for example, when the UNIX ifconfig command or its equivalent is invoked).
- Changes to firewall policy
- Adds/deletes/changes of administrative accounts
- System compromises
- Network connection logs: The information in these logs includes dropped and rejected connections; the time, protocol, IP addresses, and usernames for allowed connections; and the amount of data transferred.
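Connection-log formats differ from product to product, but turning a raw line into the fields listed above is straightforward. The fragment below assumes a hypothetical space-delimited layout purely for illustration; real firewalls (iptables, PF, Check Point, and so on) each use their own format, so the split must be adapted per product.

```python
def parse_connection_log(line):
    """Parse one connection-log line into structured fields.

    Assumes a hypothetical space-delimited format:
      <timestamp> <action> <protocol> <src_ip> <dst_ip> <user> <bytes>
    """
    ts, action, proto, src, dst, user, nbytes = line.split()
    return {
        "time": ts,
        "action": action,      # e.g. ALLOW, DROP, REJECT
        "protocol": proto,
        "src": src,
        "dst": dst,
        "user": user,
        "bytes": int(nbytes),  # amount of data transferred
    }
```

Once the log is structured, the spike detection and internal-probe checks described earlier can run directly over the parsed records.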
There are several tools that can automate firewall log monitoring, with features such as real-time alerts and notifications and customized reports.
Configuration reviews may be mandatory for firewalls that process regulated data. In fact, the Payment Card Industry Data Security Standard (PCI-DSS) requires quarterly firewall reviews for systems involved in payment card processing.
A manual validation of the rulebase is most effective when done as a team exercise by the security manager, firewall administrator, network architect, and everyone else who has a direct involvement in the administration and management of the organization's network security.
First and foremost, the rulebase should conform to the organization's security policy, hence the recommendation that security managers and administrators be present in the rulebase review.
Prior to the validation, the rulebase should be backed up to ensure that, if anything goes wrong after implementing changes to the firewall, the previous rulebase can be reinstalled and troubleshooting can proceed from there.
In validating the rulebase, unneeded rules should be eliminated. Keeping the rulebase as short and simple as possible conforms to best practices. If there is a rule that everyone is unsure of, it should be removed. The same applies to redundant rules. Some rules can also be grouped together.
While on the topic of best practices, it is recommended that any changes be documented for future reference. Any exceptions to the rules should also be documented, along with an explanation for why these exceptions exist. (This could be a good place to create a baseline for future audits).
Lastly, the rules should be validated for correct order. Rule order is critical. Most firewalls (such as SunScreen EFS, Cisco IOS, and FW-1) inspect packets sequentially. When a packet is received, it is compared against the first rule, then the second, then the third, and so on. When a matching rule is found, checking stops and that rule is applied. If the packet passes through every rule without finding a match, it is denied (or at least it should be: the last rule on any firewall should be a default drop or reject, though in practice this is not always the case).
It is critical to understand that the first rule that matches is applied to the packet, not the rule that best matches. Based on this, it is recommended that the more specific rules be first, and the more general rules be last. This arrangement of rules prevents a general rule being matched before hitting a more specific rule, helping to protect the firewall from misconfiguration.
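First-match semantics are easy to demonstrate in code. The toy model below (not any vendor's engine) shows how placing a general allow before a specific deny shadows the deny, while the recommended specific-first ordering behaves correctly and falls through to a default deny when nothing matches.

```python
def first_match(rules, packet):
    """Evaluate a packet against an ordered rulebase; the first match wins.

    Each rule is a (predicate, action) pair. If no rule matches, the
    implicit default is "deny", mirroring the recommended final
    drop/reject rule.
    """
    for predicate, action in rules:
        if predicate(packet):
            return action
    return "deny"

# Specific rule first: block one bad host, then allow the rest of the subnet.
good_order = [
    (lambda p: p["src"] == "10.0.0.66", "deny"),
    (lambda p: p["src"].startswith("10.0.0."), "allow"),
]

# General rule first: the broad allow shadows the specific deny,
# so the bad host slips through -- a classic ordering misconfiguration.
bad_order = [
    (lambda p: p["src"].startswith("10.0.0."), "allow"),
    (lambda p: p["src"] == "10.0.0.66", "deny"),
]
```

With `good_order`, a packet from 10.0.0.66 is denied; with `bad_order`, the same packet is allowed because the general rule matches first and checking stops.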
Automated Rulebase Validation
There are readily-available tools that perform an analysis of the rulebase by matching it against a standard or benchmark, such as the Router Audit Tool (RAT) and Nipper. (See the router and network devices chapter for separate how-to manuals for RAT and Nipper.) These tools run every rule in the rulebase against known weaknesses and vulnerabilities, and then provide a report at the end, with recommendations on how best to rectify the discovered errors.
Using automated tools is much faster than manual validation and, often as an added feature, can detect whether the latest firewall patches/updates have been installed. However, automated tools do have their limitations. One limitation is that they cannot guarantee that the rulebase is in line with the security policy. In this case, manual validation has an advantage.
About the book
The IT Regulatory and Standards Compliance Handbook: How to Survive Information Systems Audit and Assessments provides detailed methodologies for several technically based and professional IT audit skills that lead to compliance. Purchase the book from Syngress Publishing.
Printed with permission from Syngress, a division of Elsevier. Copyright 2008. "The IT Regulatory and Standards Compliance Handbook: How to Survive Information Systems Audit and Assessments" by Craig S. Wright. For more information about this title and other similar books, please visit www.elsevierdirect.com.