LEK Cluster in Docker Containers
The structure of the cluster is as follows:
We can use this LEK cluster, running in Docker containers, to collect, search, analyze, and visualize our different kinds of OpenStack logs.
How to build this cluster in Docker
Now, we want our LEK cluster to run in Docker containers.
1. Pull the Docker images from Docker Hub:
$ sudo docker pull logstash
$ sudo docker pull redis
$ sudo docker pull elasticsearch
$ sudo docker pull kibana
$ sudo docker pull nginx
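After pulling, you can verify that all five images are available locally (assuming a working Docker installation on the host):

```shell
# List local images and filter for the five we just pulled;
# each should appear once in the output
$ sudo docker images | grep -E 'logstash|redis|elasticsearch|kibana|nginx'
```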
2. We use docker-compose to run and link our containers.
On 10.32.105.107, under ~/lek-docker/, there is a configuration file named docker-compose.yml. In this file, we define how to run and link the different containers.
-----------------------------------------------------------------------------------------
shipper:
  image: logstash
  ports:
    - "10514:10514"
    - "11514:11514"
    - "12514:12514"
    - "13514:13514"
    - "14514:14514"
    - "1514:1514"
  volumes:
    - ./shipper.conf:/shipper.conf
    - ./logstash_patterns:/logstash_patterns
  links:
    - broker01
  command: logstash -f /shipper.conf
In this snippet, there are several fields:
A. image: which Docker image to run in the container
B. ports: maps a host port to a container port, like
$ sudo docker run -p …
C. volumes: like "mount"; a host file or directory is mounted into the container, like
$ sudo docker run -v …
D. links: lets this container communicate with another container, like
$ sudo docker run --link …
E. command: the command that is run inside the container
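As a sketch of how these fields map to plain Docker flags, the `shipper` service above is roughly equivalent to the following single `docker run` invocation (this assumes the `broker01` container is already running, and that the command is issued from ~/lek-docker/ so the relative paths resolve):

```shell
# One-off equivalent of the "shipper" compose service:
# -p publishes each host port, -v mounts each config file,
# --link wires the container to broker01 by name
$ sudo docker run \
    -p 10514:10514 -p 11514:11514 -p 12514:12514 \
    -p 13514:13514 -p 14514:14514 -p 1514:1514 \
    -v $(pwd)/shipper.conf:/shipper.conf \
    -v $(pwd)/logstash_patterns:/logstash_patterns \
    --link broker01:broker01 \
    logstash logstash -f /shipper.conf
```

docker-compose simply runs all such commands for you and keeps the links consistent.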
Because rsyslog uses TCP to transport the log data, the shipper.conf file looks like this:
input {
  tcp {
    port => 10514
    mode => "server"
    tags => ['nova', 'oslofmt']
    type => "nova"
  }
}
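Once the shipper is up, you can send a hand-written test line to the TCP input to confirm it is listening (assuming `nc` is installed on the sending host; the event should then flow through the pipeline tagged `nova`):

```shell
# Push one test event over TCP into the shipper's nova input
$ echo "test log line for the nova input" | nc 10.32.105.107 10514
```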
-----------------------------------------------------------------------------------------
broker01:
  image: redis
  expose:
    - "6379"
-----------------------------------------------------------------------------------------
indexer:
  image: logstash
  volumes:
    - ./indexer.conf:/indexer.conf
  links:
    - broker01
    - es01
  command: logstash -f /indexer.conf
-----------------------------------------------------------------------------------------
es01:
  image: elasticsearch
  expose:
    - "9300"
  ports:
    - "9200:9200"
  volumes:
    - ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    - ./es01data:/usr/share/elasticsearch/data
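Because port 9200 is published to the host, a quick health check against Elasticsearch (once the stack is running) might look like:

```shell
# Query cluster health on the published port;
# expect "status" of green or yellow for a single-node cluster
$ curl http://localhost:9200/_cluster/health?pretty
```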
-----------------------------------------------------------------------------------------
kibana:
  image: kibana
  volumes:
    - ./kibana.yml:/opt/kibana/config/kibana.yml
  expose:
    - "5601"
  links:
    - es01
-----------------------------------------------------------------------------------------
proxy:
  image: nginx
  volumes:
    - ./nginx/conf:/etc/nginx
    - ./nginx/logs:/logs
  ports:
    - "8080:8080"
  links:
    - kibana
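With docker-compose.yml in place, the whole stack can be started and inspected from ~/lek-docker/:

```shell
$ cd ~/lek-docker/
$ sudo docker-compose up -d          # start all six containers in the background
$ sudo docker-compose ps             # check that every service is Up
$ sudo docker-compose logs shipper   # inspect one service's logs if something fails
```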
How the nodes' rsyslog connects to the Docker containers
On 10.32.170.11, under /etc/rsyslog.d/, there are different kinds of configuration files which are used to connect to the Docker containers.
For example, /etc/rsyslog.d/nova.conf:
-------------------------------------------------------------------------------------------------------
# load the file-input module (required for input(type="imfile"))
module(load="imfile")

input(type="imfile"
      File="/var/log/nova/nova-compute.log"
      Tag="nova-compute:"
      Severity="error"
      Facility="local0"
      Statefile="stat-nova-compute")

local0.* @@10.32.105.107:10514
input: the source log file that will be read into rsyslog
The line "local0.* @@10.32.105.107:10514" means that everything logged to facility local0 is forwarded to 10.32.105.107 on port 10514; the double @@ selects TCP (a single @ would use UDP).
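You can test this forwarding rule from the node without waiting for a real Nova error, e.g. with `logger` (assuming the rule above is active and the shipper container on 10.32.105.107 is reachable):

```shell
# Emit one test message to facility local0;
# rsyslog matches "local0.*" and forwards it over TCP to 10.32.105.107:10514
$ logger -p local0.info "test message for the LEK pipeline"
```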
This picture derives from Fuyuan Chu.
I think this cluster is not very hard to build: just configure every file carefully, and you can run your cluster smoothly.