Logging can be incredibly useful but massively annoying to administer. When something goes wrong, logs are crucial to troubleshooting, especially in security-related incidents. But if an attacker compromises a host, the logs on that host can no longer be trusted; you need to send them somewhere central. It's important to protect your logs, and a central logging server makes it easier to manage, analyze and query them. In this tip, I'm going to show you how to collect your syslog messages from your hosts and centralize them on a single syslog server in Linux.
First, any centralized syslog server should be built as a secure and hardened host -- there is no point in protecting and centralizing your logs on a host that an attacker can compromise. Next, how do you get your logs from your hosts to a central box?
Let's start by setting up the central syslog server. I am going to demonstrate this using rsyslog, the de facto standard syslog daemon on Linux. It ships with both the Ubuntu and Red Hat distributions and is configured via the /etc/rsyslog.conf file. This file contains a number of rules that specify where particular syslog events should go: some to the console, some to files, some even to other hosts.
First, we need to load the appropriate TCP and UDP plug-ins to support the receipt of syslog events. Add the following to the top of the rsyslog.conf file:
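The snippet itself doesn't appear in the text; based on the description that follows (two modules, TCP on port 10514 and UDP on port 514), it would look like this using rsyslog's legacy directive syntax:

```
# Load the TCP input module and listen on port 10514
$ModLoad imtcp
$InputTCPServerRun 10514

# Load the UDP input module and listen on the standard syslog port 514
$ModLoad imudp
$UDPServerRun 514
```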
These lines load two modules that support TCP and UDP reception, and specify the ports on which to receive events: in this case, 10514 for TCP and 514 for UDP. You will need to ensure your local firewall (and any intervening firewalls between your hosts and the central syslog server) has these ports open and allows traffic through.
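For example, on an Ubuntu host running ufw (adjust accordingly for iptables, firewalld or whatever firewall you use), opening these ports might look like:

```
sudo ufw allow 10514/tcp    # rsyslog TCP reception
sudo ufw allow 514/udp      # rsyslog UDP reception
```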
Next, we need to specify some rules telling rsyslog where to put the incoming events. If you don't add any rules, incoming events will be processed by the local rules and intermingled with the local host's own events. These rules should go right after the stanza we added above and before the local syslog processing rules. An example rule:
if $fromhost-ip isequal '192.168.0.2' then /var/log/192.168.0.2.log
&~
Here we're saying that any syslog entries from the IP address 192.168.0.2 should be stored in the file /var/log/192.168.0.2.log. The &~ is important because it tells rsyslog to stop processing the message. If you leave it out, the message will proceed to the next rule and continue to be processed. There are other variations on this rule. For example:
if $fromhost-ip startswith '192.168.' then /var/log/192.168.log
&~
Here we're placing everything from IP addresses starting with 192.168. into a single file called /var/log/192.168.log. Other filter options are covered in the rsyslog documentation.
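As an illustration (not from the original text), filters can also match on properties other than the source address. For instance, assuming rsyslog's $syslogseverity-text property, you could route error-level messages to their own file:

```
if $syslogseverity-text isequal 'err' then /var/log/errors.log
&~
```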
You will then need to restart the rsyslog service to activate the new configuration:
$ sudo service rsyslog restart
Now, on each sending host, we also need to make a change to the rsyslog.conf file. At the top of the file, add the line:
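The line itself is missing from the text; based on the explanation that follows (all events to 192.168.0.1 on port 10514 over TCP), it would be:

```
*.* @@192.168.0.1:10514
```

The double @@ tells rsyslog to forward over TCP; a single @ would forward over UDP instead.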
This sends all events, from all facilities and at all severity levels (the *.*), to the IP address 192.168.0.1 on port 10514 via TCP. Replace the IP address (and/or the port) with the appropriate values for your environment. To enable this configuration, you will also need to restart rsyslog on the host.
You can even take this a step further and send your syslog entries over SSL/TLS. This is a good idea if you're transmitting syslog across the Internet or other untrusted networks. Instructions for setting this up are in the rsyslog documentation.
Now, if you add this configuration to your configuration management system (if you don't use one, you should look at a tool like Puppet or CFEngine), you can deploy each host with the appropriate syslog configuration, ensuring your logs are sent to the central syslog server, protected and available for analysis.
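As a hypothetical sketch of what that might look like in Puppet (the module path and file names here are assumptions, not from the original text), you could ship the client-side rsyslog.conf from the Puppet fileserver and restart rsyslog whenever it changes:

```puppet
# Hypothetical sketch: distribute the client rsyslog.conf and
# restart the rsyslog service when the file changes.
file { '/etc/rsyslog.conf':
  ensure => file,
  source => 'puppet:///modules/rsyslog/rsyslog.conf',
  notify => Service['rsyslog'],
}

service { 'rsyslog':
  ensure => running,
  enable => true,
}
```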