Category: System Operations

2016-04-26 12:43:39

   Background: our Logstash originally only wrote logs to Redis. A new business requirement now calls for storing a copy in S3 at the same time. The configuration looks roughly like this:

input {
        tcp {
                add_field => ["type", "syslog"]
                host      => "0.0.0.0"
                port      => "5514"
        }
}

output {
        redis {
                ......
        }
        # upload to s3
        s3 {
                .................
        }
}

Not long after adding the S3 part, Redis monitoring showed a sharp drop in network traffic. My guess was that adding the S3 output was the cause.

# The source for the s3 plugin (the original post says "input-s3", but what it describes is the output side) is on GitHub, written in JRuby. The rough idea: events are written to a local temporary file; when the file reaches the configured size, or the configured time interval elapses, an upload task is put on a queue, and the upload side calls aws-sdk from threads that pull tasks off that queue.
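The write-locally-then-upload pattern described above can be sketched in a few lines. This is a toy illustration, not the plugin's actual code: the names, the threshold, and the single worker thread are all my assumptions.

```ruby
require 'tempfile'

SIZE_THRESHOLD = 1024   # bytes; the real plugin's size/time limits are configurable

upload_queue = Queue.new
uploaded     = []

uploader = Thread.new do
  while (file = upload_queue.pop)       # nil signals shutdown
    # the real plugin would call aws-sdk here to PUT the file to S3
    uploaded << file.size
    file.close!                         # close and delete the finished chunk
  end
end

buffer = Tempfile.new('s3-buffer')
2000.times do |i|
  buffer.write("event #{i}\n")
  if buffer.size >= SIZE_THRESHOLD      # Tempfile#size flushes before stat
    upload_queue.push(buffer)           # hand the full chunk to the worker
    buffer = Tempfile.new('s3-buffer')  # start a fresh chunk
  end
end
buffer.close!                           # discard the partial last chunk
upload_queue.push(nil)                  # tell the worker to stop
uploader.join

puts "uploaded #{uploaded.length} chunks, #{uploaded.sum} bytes"
```

The key point for what follows: every event pays the cost of a local-disk write before it is even eligible for upload, and on a standard EC2 disk that write is the slow step.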

My hypothesis: since the Logstash nodes are all AWS EC2 instances, writing to local disk is far slower than writing to Redis. Say Redis used to receive 10 MB of writes per minute; now, after each batch's Redis write finishes, the S3 part is still not done, so everything waits for it, and Redis write throughput over the same interval drops.

The real explanation is here, quoted from the Logstash documentation:

Logstash internals (Queues and Threading)

The logstash agent is 3 parts: inputs -> filters -> outputs.

Each '->' is an internal messaging system. It is implemented with a 'SizedQueue' in Ruby. SizedQueue allows a bounded maximum of items in the queue such that any writes to the queue will block if the queue is full at maximum capacity.

Logstash sets the queue size to 20. This means only 20 events can be pending into the next phase - this helps reduce any data loss and in general avoids logstash trying to act as a data storage system. These internal queues are not for storing messages long-term.
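The blocking behavior of Ruby's SizedQueue is easy to see directly. This is a standalone demo (not Logstash code): fill a queue of 20 slots, and the 21st push blocks until a consumer pops.

```ruby
queue = SizedQueue.new(20)   # same bound Logstash uses internally

# Fill the queue to capacity; the next push will block.
20.times { |i| queue.push(i) }

blocked = true
producer = Thread.new do
  queue.push(20)             # blocks here until a slot frees up
  blocked = false
end

sleep 0.2
puts "producer blocked: #{blocked}"   # prints "producer blocked: true"

queue.pop                    # consumer frees one slot
producer.join
puts "producer blocked: #{blocked}"   # prints "producer blocked: false"
```

This is exactly the mechanism behind the backpressure chain described next: a full queue silently turns every writer into a waiter.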

In reverse, here's what happens when a queue fills.

If an output is failing, the output thread will wait until this output is healthy again and able to successfully send the message before moving on. Therefore, the output queue (there is only one) will stop being read from and will eventually fill up with events and cause write blocks.

A full output queue means filters will block trying to write to the output queue. Because filters will be stuck, blocked writing to the output queue, they will stop reading from the filter queue which will eventually cause the filter queue (input -> filter) to fill up.

A full filter queue will cause inputs to block when writing to the filters. This will cause each input to block, causing each input to stop processing new data from wherever that input is getting new events.

In ideal circumstances, this will behave similarly to when the tcp window closes to 0, no new data is sent because the receiver hasn't finished processing the current queue of data.

Thread Model

The thread model in logstash is currently:

N input threads | M filter threads | 1 output thread 

Filters are optional, so you will have this model if you have no filters defined:

N input threads | 1 output thread 

Each input runs in a thread by itself. This allows busier inputs to not be blocked by slower ones, etc. It also allows for easier containment of scope because each input has a thread.

The filter thread model is a 'worker' one, where each worker receives an event and applies all filters, in order, before emitting that to the output queue. This allows scalability across CPUs because many filters are CPU intensive (permitting that we have thread safety). Currently logstash forces the number of filter worker threads to be 1, but this will be tunable in the future.

The output thread model is a single thread. It operates like the worker model above where one event is received and all outputs process it in order and serially.
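That single serial output thread is what explains the Redis slowdown: every event must finish the slow S3 path before the next event can reach Redis. A toy sketch, with invented timings standing in for the real outputs:

```ruby
# Stand-ins for the two outputs; the timings are illustrative, not measured.
redis_output = ->(event) { }             # near-instant network write
s3_output    = ->(event) { sleep 0.01 }  # slow local-disk buffering

events = SizedQueue.new(20)              # Logstash's bounded output queue

output_thread = Thread.new do
  while (event = events.pop) != :stop
    # one output thread: all outputs run in order, serially
    redis_output.call(event)
    s3_output.call(event)                # every Redis write now waits on this
  end
end

start = Time.now
50.times { |i| events.push(i) }          # blocks whenever the queue is full
events.push(:stop)
output_thread.join
printf("50 events in %.2fs; throughput capped by the slowest output\n",
       Time.now - start)
```

With a 10 ms S3 step, 50 events take at least half a second regardless of how fast the Redis output is; the slowest output sets the ceiling for all of them.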

Workaround: replace the EC2 instances' standard disks with SSD, or run a separate Logstash instance dedicated to the S3 part.
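The split could look roughly like this. This is a hypothetical layout; using Redis as the transport between the two instances is an assumption, chosen because the data is already going there:

```
# logstash-redis.conf: the original instance keeps only the fast output
input {
        tcp {
                add_field => ["type", "syslog"]
                host      => "0.0.0.0"
                port      => "5514"
        }
}
output {
        redis { ...... }
}

# logstash-s3.conf: a second instance reads the stream and owns the slow s3 upload
input {
        redis { ...... }
}
output {
        s3 { ................. }
}
```

With this layout, the second instance's output thread can block on S3 all it likes; the first instance's Redis throughput is no longer coupled to it.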




                 