Central Loghost Mini-HOWTO

This page is simply a collection of open source tools you can use to glue together your own centralized (syslog) loghost. Included are example configuration settings so that you can configure your loghost in a manner similar to mine.

There is very little that you need to read and understand in order to use these tools. Also, these tools are widely used and therefore easy to get help with on internet mailing lists.

I established a centralized location for syslog collection in order to facilitate:

  1. Log reporting
    • real time alerting
    • periodic (several times per day) summary reporting
  2. Log storage
    • long term archival for possible later analysis
Tools used:
  • UNIX hosts (Linux and Solaris)
  • swatch, though I'm slowly moving to SEC; this page will be updated once I've completely switched
  • Splunk for a GUI interface

Newlogcheck

My central loghost machine uses a modified version of logcheck.sh that I wrote, named (imaginatively) newlogcheck.sh. The modified version calls another script I wrote that sorts the "logtail" output by individual host into separate portions of the final report. The Perl script avoids duplicating log messages by printing each message only once, along with a count of how many times the event was reported.

This approach dramatically reduces the size of your logcheck reports, and sorting them by host makes them easy to read.
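The core idea is easy to approximate with standard tools; here is a rough illustration (this is not the actual sort_logs.pl, and the timestamp-stripping sed assumes the standard 15-character syslog timestamp):

   # collapse duplicate messages into "count  message" lines
   $ sed 's/^.\{15\} //' /var/log/messages | sort | uniq -c | sort -rn | head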

Download the "newlogcheck" scripts

You need to configure your logcheck settings yourself: read the README that comes with logcheck, then the one included with my scripts. Copy my newlogcheck.sh and sort_logs.pl into your logcheck directory, and run newlogcheck.sh instead of logcheck.sh for reports.
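Logcheck is normally run from cron, and newlogcheck.sh slots into the same place; a sample /etc/crontab-style entry (the install path and the schedule here are assumptions):

   # mail a newlogcheck report every two hours
   0 */2 * * * root /usr/local/logcheck/newlogcheck.sh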

If you're ready to go ahead, grab the tarred/gzipped file.

Warning: Sean Finney points out that there are potential race conditions in newlogcheck's use of predictable file names in the TMPDIR directory. These scripts are meant to be used in a protected directory accessible only to the user running them (mode 700), with the parent directories all the way up to / writable only by that user and/or root.
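For example, assuming the scripts live in /usr/local/logcheck (a hypothetical path):

   # lock the directory down to the user that runs the reports
   $ chown -R root:root /usr/local/logcheck
   $ chmod 700 /usr/local/logcheck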

Syslog-NG

Archive all logs automatically

I've set up syslog-ng to archive logs to host-specific directories under /var/log/HOSTS on my central loghost. That way, standard UNIX tools like find and grep can be used to parse the logs either by time or by host. To grep the log files for all hosts on November 8th, 2001, I can do this:

 $  grep hacker /var/log/HOSTS/*/2001/11/08/*

...and I can traverse all the logs for a single host using find or 'grep -r', since all logs for one host live under a single directory.
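For example, to look at everything one particular host has logged (the host name here is made up):

   # everything ever logged by host "webserver1"
   $ grep -r hacker /var/log/HOSTS/webserver1/
   # or just that host's log files written in the last day
   $ find /var/log/HOSTS/webserver1 -type f -mtime -1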

Here are the syslog-ng.conf directives to archive the way I've done it:

  destination hosts { 
   file("/var/log/HOSTS/$HOST/$YEAR/$MONTH/$DAY/$FACILITY$YEAR$MONTH$DAY"
   owner(root) group(root) perm(0600) dir_perm(0700) create_dirs(yes)); 
  };
  
  log {
	source(src);
	destination(hosts);
  };
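The examples on this page all reference a source called "src", which isn't shown. A minimal sketch of what it might look like on a Linux loghost (the exact drivers and options are assumptions; adjust for your syslog-ng version and platform):

  source src { 
   unix-stream("/dev/log");      # local syslog socket
   internal();                   # messages from syslog-ng itself
   udp(ip(0.0.0.0) port(514));   # logs arriving from remote hosts
  };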
 

Email certain logs

With this you can match on message content (in this case the string "attackalert") and mail the matching messages. In syslog-ng.conf:
  destination mail-alert-perl { program("/usr/local/bin/syslog-mail-perl"); };
  
  filter f_attack_alert {
		match("attackalert"); 
  };
  
  # find messages with "attackalert" in them, and send to the mail-alert script
  log {
	source(src);
	filter(f_attack_alert);
	destination(mail-alert-perl);
   };
Use the syslog-mail-perl script from the destination above to strip off the message priority (which I've found to be useless and just clutters up the message) and mail the result.
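A minimal shell stand-in for the same idea might look like this (the priority-stripping pattern and the recipient address are assumptions, not the actual syslog-mail-perl script):

  #!/bin/sh
  # read log lines from syslog-ng on stdin, drop a leading
  # "facility.priority" tag if one is present, and mail each line
  RCPT=alerts@example.dom
  while read line; do
      echo "$line" | sed 's/^[a-z0-9]*\.[a-z]*: *//' | \
          mail -s "attackalert from syslog" "$RCPT"
  done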

Before you put automatic email alerts in place, ask yourself whether it's possible to generate hundreds or even thousands of those log messages. What would happen to your mail server? Would you even see any other alerts while you're deleting hundreds or thousands of copies of one message?

Put some throttling in place before you set up auto-emailing (you can use the "bash-mail-alert" script from the Swatch section below). This is a great area for a DoS, so watch yourself.


Input all logs to MySQL database

I started putting the data into a database for more flexible querying, using SQLSyslogd, written by Przemyslaw Frasunek, available at: http://www.frasunek.com/sources/security/sqlsyslogd/.

I like having SQL at my disposal when I need very specific information from my logs.

You can pipe to sqlsyslogd with a line like this:

destination sqlsyslogd { 
  program("/usr/local/sbin/sqlsyslogd -u sqlsyslogd -t logs sqlsyslogd -p");
};

log {
	source(src);
	destination(sqlsyslogd);
};
"src" in this case is all the incoming messages, there's no filtering of messages. You still need to setup your database according to the instructions for sqlsyslogd. Read the docs that come with it.

Swatch

Filter all logs through swatch

Here's a way to get swatch to read from standard input, so it gets every message that goes through your syslog server. In syslog-ng.conf:
  # way to get swatch to read from stdin
  destination swatch {
	program("/usr/bin/swatch --read-pipe=\"cat /dev/fd/0\"");
  };

  # send all logs to swatch
  log {
	source(src);
	destination(swatch);
   };

Andreas Östling reports:

It also seems to work to run swatch as non-root, like:

  destination swatch {
	program("su syslog -c '/usr/bin/swatch --read-pipe=\"cat /dev/fd/0\"'");
  };

('syslog' is my user running syslog-ng)

Warning:

I've found that the swatch "throttle" feature breaks when you force it to read from standard input like this (actually, it appears to be broken in general). If you don't use that feature, then there's no problem.
If you do, then use the bash-mail-alert shell script to throttle instead of swatch (it works on Red Hat Linux, but should work on any UNIX with a recent bash installed and some paths fixed):

Use it like this in swatchrc:
watchfor   /no swap space/
        echo
        exec echo $0 | bash-mail-alert swap_space pager@example.dom

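I haven't included bash-mail-alert itself, but the throttling idea is simple; here is a rough sketch in the same spirit (the stamp directory, the one-hour window, and the exact interface are assumptions):

  #!/bin/bash
  # temp-file throttle: mail the piped-in message at most once per hour per tag
  # usage: ... | bash-mail-alert <tag> <recipient>
  TAG=$1; RCPT=$2
  STAMPDIR=/var/spool/alerts    # should be writable only by the alerting user
  STAMP=$STAMPDIR/$TAG
  MSG=$(cat)

  # send only if we haven't mailed this tag within the last 60 minutes
  if [ ! -f "$STAMP" ] || [ -n "$(find "$STAMP" -mmin +60)" ]; then
      echo "$MSG" | mail -s "log alert: $TAG" "$RCPT"
      touch "$STAMP"
  fi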

SEC

I'm running Swatch and SEC in parallel. I have no reason to keep swatch around, except that the way I'm doing throttling with swatch (temp files) has allowed me to easily implement disabling of log alerts via syslog (using the logger command). Of course these aren't security-related alerts; I'd never allow an attacker to turn off alerts that way!

I'll have to build something similar with SEC, using temp files or some other mechanism that gives me the same functionality. I need it because nightly scripted restarts of certain crappy content management systems generate logs that, outside those maintenance windows, I'd want alerts from (to my pager).

Using SEC from syslog-ng is like using swatch, except that no /dev/fd workaround is needed since SEC directly supports reading from STDIN:

######################################################################

destination d_sec { 

        #
        # the redirection here makes syslog-ng invoke sec using a shell instead of directly,
        # which leaves sec still running after syslog-ng restarts; DON'T do the redirect
        # unless you really know what you're doing!
        #
        #program("/usr/local/sbin/sec.pl -input=\"-\" -conf=/usr/local/etc/sec.conf >/var/log/sec.err 2>&1"); 

	# use this one
        program("/usr/local/sbin/sec.pl -input=\"-\" -conf=/usr/local/etc/sec.conf"); 
};

# send all logs to sec
log { 
        source(src);
        destination(d_sec); 
};

##################################################
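For completeness, here is a minimal sec.conf rule in the same spirit as the swatch examples above; the pattern, the address, and the one-hour window are assumptions (see the sec man page for the full rule syntax):

  # mail "attackalert" messages, but at most one mail per hour
  type=SingleWithSuppress
  ptype=SubStr
  pattern=attackalert
  desc=attackalert seen in syslog
  action=pipe '$0' /bin/mail -s 'attackalert on loghost' pager@example.dom
  window=3600

SEC's suppression window also gives you a built-in form of the throttling discussed above.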


Splunk

Lately I've been using Splunk to view my logs. I don't really have time to write it up, but it's incredibly easy to have it slurp up a lot of logs and then query it for specific information or just browse around.

There is a live demo on their site where you can quickly get an idea of what it's about.

Follow Splunk's setup instructions to use it as an interface to your syslog logs.

Stunnel

Stunnel -- Universal SSL Wrapper

Stunnel is a program that allows you to encrypt arbitrary TCP connections inside SSL (Secure Sockets Layer); it is available on both Unix and Windows. Stunnel allows you to secure non-SSL-aware daemons and protocols (like POP, IMAP, and LDAP) by having Stunnel provide the encryption, requiring no changes to the daemon's code.
- quoted from the Stunnel website

I have syslog collection hosts in two datacenters that forward using TCP to a central loghost. The TCP connections from each collection host are port forwarded over stunnel.

The collection hosts in each datacenter collect the logs via UDP, so no special software is necessary on any hosts other than the collection hosts.

I wanted the reliability of TCP, but I know how easy it can be to hijack or otherwise disrupt a TCP stream between two hosts (if you're in the right place). Stunnel solves this for me by making the stream tamper-proof: sure, an attacker can block the packets, but they can't easily modify them en route.

I set up my tunnel like this on the satellite servers:

stunnel -c -d 5140 -r loghost:5140
Then I have syslog-ng write to the stunnel port on localhost:
destination loghost {
	tcp("127.0.0.1" port(5140));
};
log {
	source(src);
	destination(loghost);
};
The central loghost listens on port 5140 and redirects that connection to port 514, where syslog-ng is listening:
stunnel -p /etc/stunnel/stunnel.pem -d 5140 -r 127.0.0.1:514
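On the loghost side, syslog-ng then just needs a TCP listener on the loopback address; a sketch (fold it into your existing source statement or keep it separate, here routed to the archive destination defined earlier):

source stunnel_in {
	tcp(ip(127.0.0.1) port(514));
};

log {
	source(stunnel_in);
	destination(hosts);
};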