
Manage storage with log forwarding via rsyslog
Follow this comprehensive step-by-step guide to effectively plan and implement an automated log centralization process using rsyslog's flexible features.
Log files are a critical part of any Linux administrator's job. They enable sysadmins to detect failed hardware, misconfigured services and other indicators of a problem.
Today's network environments are more distributed than ever, so it's essential to find an efficient way to store and analyze log files. Accessing log files for compliance and service level agreements is also crucial.
The rsyslog service facilitates these capabilities by enabling centralized storage, sorting and retention of essential OS and application log files. It requires few resources and is reasonably easy to configure, making it a great option for managing remote server log files while being conscious of storage costs.
This article provides a scenario consisting of a central headquarters (HQ) server acting as a log file repository and multiple remote servers that forward log files to it. The configuration options and commands are straightforward to adapt to your own environment.
The sample scenario consists of the following:
- A centralized log repository server running Linux at the HQ location.
- Remote branch office servers that will forward some or all log files to the server at HQ. In the scenario, the example remote server is named remote_server1.
General rsyslog requirements
Consider your central log storage server hardware components carefully. While rsyslog itself is a lightweight service, it can still stress the network and storage subsystems if many remote servers write to it simultaneously.
Start with the following minimum hardware specifications:
- Two or more multi-core CPUs.
- 4 GB of RAM.
- 1 Gbps network connectivity.
Storage is the key subsystem to consider. NVMe SSDs are a good standard choice for their speed and reliability.
Disk configurations are also essential on this server. Begin by placing the Linux /var/log directory on a separate partition from the OS. Isolating the two to different physical storage devices reduces I/O competition. Finally, enable disk encryption to retain the security and privacy of the log file entries. You might need this encryption to satisfy compliance requirements.
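As a minimal sketch, assuming a dedicated NVMe device at /dev/nvme1n1 (a hypothetical device name; adjust it for your hardware), you could create a LUKS-encrypted filesystem for /var/log along these lines:
cryptsetup luksFormat /dev/nvme1n1
cryptsetup open /dev/nvme1n1 varlog
mkfs.xfs /dev/mapper/varlog
echo "varlog /dev/nvme1n1 none" >> /etc/crypttab
echo "/dev/mapper/varlog /var/log xfs defaults 0 0" >> /etc/fstab
mount /var/log
Copy any existing contents of /var/log into the new filesystem before restarting services, and note that this layout prompts for the passphrase at boot unless you configure a key file.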
Some organizations could easily use a Raspberry Pi or similar single-board computer with high-speed storage attached to it for this role.
Evaluate the central rsyslog server
Most enterprise-class Linux distributions include rsyslog by default. Use your distribution's preferred package manager to install or update rsyslog before modifying its configuration file.
In this scenario, you're using a centralized HQ server. Branch offices forward their logs to this server, and most of your configuration will occur on this device.
On Red Hat Enterprise Linux, Rocky, AlmaLinux and similar distributions, type one of these commands:
dnf install rsyslog
dnf update rsyslog
On Debian, Ubuntu Server or similar distributions, type one of these commands:
apt install rsyslog
apt install --only-upgrade rsyslog
Start the service and enable it to start when the system boots with these commands:
systemctl start rsyslog
systemctl enable rsyslog
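Confirm that the service is active before continuing:
systemctl status rsyslog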
Configure the central rsyslog server
Configure the central server to receive inbound log files from the remote servers. Start by backing up the default configuration file:
cp /etc/rsyslog.conf /etc/rsyslog.conf.orig
Next, open the /etc/rsyslog.conf file using a text editor, such as Vim or Nano.
vim /etc/rsyslog.conf
Determine whether you want the log transfers to occur using TCP or UDP. UDP is often acceptable, but TCP adds error handling and reliability. It also copes with network congestion more effectively, which might be important in high traffic scenarios.
Uncomment or add the /etc/rsyslog.conf lines that match the following text:
module(load="imtcp")
input(type="imtcp" port="514")
These lines specify TCP using the standard rsyslog port 514. However, you could configure custom port numbers on a per-remote-server basis to better organize incoming data. For example, if you allocate port 10514 to remote_server1, you can configure the central server to recognize data from that device by its port number. Modify the entry to:
input(type="imtcp" port="10514")
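If you decided on UDP instead, the equivalent lines load the imudp module:
module(load="imudp")
input(type="imudp" port="514")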
You will almost certainly need to maintain separate log file storage for each remote server. Use rsyslog rulesets to achieve this. Because the file name includes properties such as the hostname, define a template and reference it with the dynaFile parameter. This example separates logs from remote_server1:
template(name="remote_server1_logs" type="string"
    string="/var/log/remote/server1/%HOSTNAME%/%PROGRAMNAME%.log")
ruleset(name="remote_server1") {
    action(type="omfile" dynaFile="remote_server1_logs")
}
input(type="imtcp" port="10514" ruleset="remote_server1")
Repeat this entry, modifying the server's name as appropriate for your environment. You can also customize the rulesets to direct service-specific logs to particular files.
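As a sketch, a ruleset for a hypothetical second server, remote_server2 on port 20514, that also routes SSH messages to their own file might look like this:
template(name="remote_server2_logs" type="string"
    string="/var/log/remote/server2/%HOSTNAME%/%PROGRAMNAME%.log")
template(name="remote_server2_ssh" type="string"
    string="/var/log/remote/server2/%HOSTNAME%/sshd.log")
ruleset(name="remote_server2") {
    if ($programname == "sshd") then {
        action(type="omfile" dynaFile="remote_server2_ssh")
        stop
    }
    action(type="omfile" dynaFile="remote_server2_logs")
}
input(type="imtcp" port="20514" ruleset="remote_server2")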
You must also configure the central server's firewall to accept inbound network connections on TCP port 514. Use the following commands if your server runs firewalld:
firewall-cmd --permanent --zone=public --add-port=514/tcp
firewall-cmd --reload
Modify these commands to match your desired zone and port requirements. Be sure to add the custom port numbers you might have chosen for each remote server.
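For the port 10514 example above, that command becomes:
firewall-cmd --permanent --zone=public --add-port=10514/tcp
firewall-cmd --reload
After editing /etc/rsyslog.conf, validate the syntax and restart the service on the central server:
rsyslogd -N1
systemctl restart rsyslog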
Finally, implement a management tool such as logrotate to archive the logs on the central server. Archiving is crucial to effectively managing log storage.
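As a starting point, a minimal logrotate policy for the /var/log/remote tree used above might look like the following, saved to a file such as /etc/logrotate.d/remote-logs; the path and retention values are assumptions to adjust for your needs:
/var/log/remote/*/*/*.log {
    weekly
    rotate 8
    compress
    delaycompress
    missingok
    notifempty
    postrotate
        # signal rsyslog to reopen its log files after rotation
        systemctl kill -s HUP rsyslog.service
    endscript
}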
Configure the branch office rsyslog clients
Setup is considerably easier on the remote servers at branch offices. If necessary, install rsyslog on each device. Then edit the /etc/rsyslog.conf configuration file to forward either all logs or only selected log files.
To forward all logs using TCP, add the following line to the Rules section of the configuration file:
*.* @@central_server:514
If you configured the central server for UDP connections, use a single @ character instead. If you're using custom port numbers to identify servers, set them instead of the default 514.
Use the following settings to forward logs for specific services, such as FTP log entries:
ftp.* @@central_server:514
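For more control, rsyslog's newer action() syntax expresses the same forwarding rule and can add a disk-assisted queue so messages survive network outages. A sketch, assuming the central server's hostname is central_server:
*.* action(type="omfwd" target="central_server" port="514" protocol="tcp"
    queue.type="LinkedList" queue.filename="fwd_central"
    queue.saveOnShutdown="on" action.resumeRetryCount="-1")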
Be sure to restart the rsyslog service after editing the configuration file by using the following command:
systemctl restart rsyslog
You can also filter the severity levels rsyslog forwards or writes to the log files. Here are the possible severities:
emerg = 0
alert = 1
crit = 2
err = 3
warning = 4
notice = 5
info = 6
debug = 7
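Severity selections are cumulative: specifying a level forwards messages at that level and anything more severe. For example, to send only warnings and above to the central server:
*.warning @@central_server:514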
Review your configuration any time you make changes to these servers. Certain services add log file configurations to rsyslog. For example, the ubiquitous Apache web server can be configured to send its logs through syslog, and rsyslog can then forward them using this configuration.
You might sometimes find it helpful to centralize Linux workstation logs in addition to server entries. They'll use a similar configuration. macOS can also forward syslog messages to a remote host, so you can integrate those systems into your logging architecture. Many network devices support syslog forwarding as well, so consider adding them to this design.
Test the configuration
Use the logger command to generate test messages from each remote server. Confirm that the messages arrived on the central server. Verify configuration file entries and firewall settings if you run into any issues.
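For example, on remote_server1 you could send a tagged test message:
logger -t rsyslogtest -p user.info "forwarding test from remote_server1"
Then, on the central server, look for it in the expected file; the exact path depends on the hostname and the ruleset template you configured:
tail /var/log/remote/server1/remote_server1/rsyslogtest.log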
Best practices
Rsyslog is a reliable log forwarding mechanism. Use the following best practices to get the most from it:
- Use fast and reliable storage devices.
- Consider placing current logs on hot storage.
- Consider placing archived logs on cold storage.
- Separate server log files into specific directories.
- Use the TCP protocol for the most reliability.
- Secure your log files with Linux permissions, SELinux and disk encryption.
- Rotate and archive your log files using tools like logrotate.
Ensure your log files satisfy service level agreements and compliance requirements. Most importantly, read your logs. Whether you use automation or a manual review process, examine your log files to look for anomalies, suspicious activity and misconfigurations. Review the rsyslog configuration periodically to ensure you're logging the information you need from services, applications and the OS.
Damon Garn owns Cogspinner Coaction and provides freelance IT writing and editing services. He has written multiple CompTIA study guides, including the Linux+, Cloud Essentials+ and Server+ guides, and contributes extensively to Informa TechTarget, The New Stack and CompTIA Blogs.