ELK
is a complete solution for centralized log processing.
E: Elasticsearch (ES), a distributed, index-oriented, non-relational database. It stores the logs that Logstash outputs, serves as a full-text search engine, and saves data in JSON format.
L: Logstash, a data-collection engine developed in Java. It collects logs and can filter, parse, and aggregate the data before emitting it in a standard format.
K: Kibana, the visualization tool for ES. It visualizes, analyzes, and searches the data stored in ES.
Architecture
Centralized log management
1. Elasticsearch master and data node modes
node.master: true
    Whether the node is master-eligible in the ES cluster (true / false).
node.data: true
    Whether the node is a data node, i.e. whether it receives and stores the data that Logstash sends.
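The two settings combine into four node roles. A sketch of the combinations (annotated in elasticsearch.yml comment form; the role names are the conventional ES terms, not settings from this document):

```
# node.master: true,  node.data: true   -> master-eligible data node (the default)
# node.master: true,  node.data: false  -> dedicated master node
# node.master: false, node.data: true   -> data-only node
# node.master: false, node.data: false  -> coordinating-only node (routes requests)
```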
2. Creating, modifying, and deleting data in ES (data management)
All operations are done over HTTP.
Create data with PUT:
curl -X PUT 'localhost:9200/index-demo/test/1?pretty&pretty' -H 'Content-Type: application/json' -d '{"user":"zhangsan","mesg":"hello world"}'
# localhost: address (IP + port) of the local ES instance
# index-demo: name of the index to create
# test: document type
# 1: document ID
# ?pretty&pretty: pretty-print the JSON response
# -d: the document body
Modify data with POST:
curl -X POST 'localhost:9200/index-demo/test/1/_update?pretty' -H 'Content-Type: application/json' -d '{ "doc": { "user": "zhangsan", "mesg": "hello1 world1" } }'
Delete with DELETE (no request body is needed; only the document URL):
curl -X DELETE 'localhost:9200/index-demo/test/1?pretty'
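A stray quote in the `-d` payload makes curl or ES reject the request, so it is worth sanity-checking the JSON body before sending it. A small shell sketch (assumes python3 is available on the host):

```shell
# valid_json: return success only if the argument parses as JSON.
valid_json() {
    echo "$1" | python3 -m json.tool > /dev/null 2>&1
}

# The same document body used in the curl commands above.
body='{"user":"zhangsan","mesg":"hello world"}'
valid_json "$body" && echo "payload OK"
```

Run this before pasting the body into `curl -d '...'`; a missing brace or quote will make `valid_json` fail instead of producing a confusing Elasticsearch error.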
ELK + F + K
F
Filebeat: a lightweight, open-source log-file data shipper. Logstash is heavyweight and consumes a lot of system resources; Filebeat saves resources, and combining Filebeat with Logstash enables remote data collection.
Filebeat cannot standardize its output and cannot emit data in the format ES expects, so Logstash is needed to normalize the data that Filebeat forwards.
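A minimal sketch of the Filebeat side of that chain (Filebeat 6.x syntax; the log path, the Logstash host 192.168.65.43, and the conventional Beats port 5044 are assumptions for illustration), shipping to Logstash rather than writing to ES directly:

```
filebeat.inputs:
  - type: log
    paths:
      - /var/log/messages
output.logstash:
  hosts: ["192.168.65.43:5044"]
```

Logstash would then listen with a matching `beats { port => 5044 }` input and apply its filters before forwarding to ES.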
K
Kafka: a message queue.
Disable the firewall and SELinux
[root@test45 ~]# systemctl stop firewalld
[root@test45 ~]# setenforce 0
Install NTP for time synchronization
[root@test45 ~]# yum -y install ntp
[root@test45 ~]# date
2024年 08月 01日 星期四 10:01:45 CST
Install Java
[root@test45 ~]# yum -y install java
Go into /opt/, upload the package, and install it
[root@test45 opt]# rz -E
rz waiting to receive.
[root@test45 opt]# rpm -ivh elasticsearch-6.7.2.rpm
Edit the configuration file
[root@test45 opt]# vim /etc/elasticsearch/elasticsearch.yml
[root@test45 opt]# grep -v "^#" /etc/elasticsearch/elasticsearch.yml
cluster.name: elk-cluster
node.name: node2
node.master: false
node.data: true
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
http.port: 9200
transport.tcp.port: 9300
discovery.zen.ping.unicast.hosts: ["192.168.65.44:9300", "192.168.65.45:9300"]
[root@test44 opt]# grep -v "^#" /etc/elasticsearch/elasticsearch.yml
cluster.name: elk-cluster
node.name: node1
node.master: true
node.data: true
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
http.port: 9200
transport.tcp.port: 9300
discovery.zen.ping.unicast.hosts: ["192.168.65.44:9300", "192.168.65.45:9300"]
Restart the service and check the port (it does not come up immediately; expect a short delay)
[root@test44 opt]# systemctl restart elasticsearch.service
[root@test44 opt]# netstat -antp | grep 9200
[root@test44 opt]# netstat -antp | grep 9200
tcp6       0      0 :::9200      :::*       LISTEN      11389/java
Visit 192.168.65.44:9200 in a browser.
The four packages
[root@test44 opt]# ls
elasticsearch-6.7.2.rpm        node-v8.2.1.tar.gz
elasticsearch-head-master.zip  phantomjs-2.1.1-linux-x86_64.tar.bz2
Install Node.js
Install the build tools:
[root@test44 opt]# yum -y install gcc gcc-c++ make
Extract:
[root@test44 opt]# tar -xf node-v8.2.1.tar.gz
[root@test44 opt]# ls
date                           node-v8.2.1
disk.sh                        node-v8.2.1.tar.gz
elasticsearch-6.7.2.rpm        phantomjs-2.1.1-linux-x86_64.tar.bz2
elasticsearch-head-master.zip
[root@test44 opt]# cd node-v8.2.1/
[root@test44 node-v8.2.1]# ./configure
creating ./icu_config.gypi
* Using ICU in deps/icu-small
creating ./icu_config.gypi
{ 'target_defaults': { 'cflags': [],
                       'default_configuration': 'Release',
                       'defines': [],
                       'include_dirs': [],
                       'libraries': []},
[root@test44 node-v8.2.1]# make -j 2 && make install    # build and install Node.js
[root@test44 node-v8.2.1]# cd ..
[root@test44 opt]# tar -xf phantomjs-2.1.1-linux-x86_64.tar.bz2
[root@test44 opt]# cd phantomjs-2.1.1-linux-x86_64/
[root@test44 phantomjs-2.1.1-linux-x86_64]# ls
bin  ChangeLog  examples  LICENSE.BSD  README.md  third-party.txt
[root@test44 phantomjs-2.1.1-linux-x86_64]# cd bin
[root@test44 bin]# ls
phantomjs
[root@test44 bin]# cp phantomjs /usr/local/bin/
[root@test44 bin]# cd /opt
[root@test44 opt]# unzip elasticsearch-head-master.zip
[root@test44 opt]# cd elasticsearch-head-master/
[root@test44 elasticsearch-head-master]# ls
Dockerfile         grunt_fileSets.js  plugin-descriptor.properties  src
Dockerfile-alpine  index.html         proxy                         test
Gruntfile.js       LICENCE            README.textile                _site
elasticsearch-head.sublime-project    package.json
[root@test44 elasticsearch-head-master]# npm config set registry http://registry.npm.taobao.org/    # use the Taobao npm mirror
[root@test44 elasticsearch-head-master]# npm install
[root@test44 elasticsearch-head-master]# vim /etc/elasticsearch/elasticsearch.yml
http.cors.enabled: true          # enable cross-origin access
http.cors.allow-origin: "*"      # allow cross-origin requests from any address
[root@test44 elasticsearch-head-master]# systemctl restart elasticsearch.service
[root@test44 elasticsearch-head-master]# netstat -antp | grep 9200
tcp6       0      0 :::9200      :::*       LISTEN      58481/java
[root@test44 elasticsearch-head-master]# npm run start &
[1] 58612
> elasticsearch-head@0.0.0 start /opt/elasticsearch-head-master
> grunt server
Running "connect:server" (connect) task
Waiting forever...
Started connect web server on http://localhost:9100    # the head UI listens on port 9100
Index a test document and inspect the response:
[root@test44 elasticsearch-head-master]# curl -X PUT 'localhost:9200/index-demo/test/1?pretty&pretty' -H 'Content-Type: application/json' -d '{"user":"zhangsan","mesg":"hello world"}'
{
  "_index" : "index-demo",
  "_type" : "test",
  "_id" : "1",
  "_version" : 2,
  "result" : "updated",
  "_shards" : { "total" : 2, "successful" : 1, "failed" : 0 },
  "_seq_no" : 1,
  "_primary_term" : 1
}
The Logstash host
Install httpd:
[root@test43 opt]# yum -y install httpd
[root@test43 opt]# rz -E
rz waiting to receive.
[root@test43 opt]# ls
date  disk.sh  logstash-6.7.2.rpm
[root@test43 opt]# yum -y install java                               # Logstash depends on Java
[root@test43 opt]# rpm -ivh logstash-6.7.2.rpm
[root@test43 opt]# systemctl restart logstash.service                # restart the service
[root@test43 opt]# ln -s /usr/share/logstash/bin/logstash /usr/local/bin/    # create a symlink
Logstash configuration
[root@test43 opt]# cd /etc/logstash/
[root@test43 logstash]# ls
conf.d  jvm.options  log4j2.properties  logstash-sample.conf  logstash.yml  pipelines.yml  startup.options
[root@test43 logstash]# vim logstash.yml
[root@test43 conf.d]# logstash -e 'input { stdin{} } output { stdout{} }'    # started successfully
Successfully started Logstash API endpoint {:port=>9601}
www.baidu.com          # this is the standard input
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/awesome_print-1.7.0/lib/awesome_print/formatters/base_formatter.rb:31: warning: constant ::Fixnum is deprecated
{                      # and this is the standard output
    "@timestamp" => 2024-08-01T04:56:31.118Z,
       "message" => "www.baidu.com",
          "host" => "test43",
      "@version" => "1"
}
www.sina.com.cn
{
    "@timestamp" => 2024-08-01T04:57:57.468Z,
       "message" => "www.sina.com.cn",
          "host" => "test43",
      "@version" => "1"
}
# Use Logstash to write data into Elasticsearch
[root@test43 conf.d]# logstash -e 'input { stdin{} } output { elasticsearch { hosts=>["192.168.65.44:9200","192.168.65.45:9200"] } }'
[INFO ] 2024-08-01 13:00:56.964 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}    # started correctly, no errors
www.baidu.com          # typed on standard input; nothing is echoed back
www.sina.com.cn        # results are not printed to standard output but sent to Elasticsearch; browse
                       # http://192.168.65.44:9100/ or http://192.168.65.45:9100/ to view the index and the data
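The same stdin-to-Elasticsearch pipeline can be kept as a file under /etc/logstash/conf.d/ instead of being passed inline with -e. A sketch (same hosts as the command above; the filename is an arbitrary choice):

```
input {
    stdin {}
}
output {
    elasticsearch {
        hosts => ["192.168.65.44:9200", "192.168.65.45:9200"]
    }
}
```

Saved as, say, stdin-es.conf, it would be run with `logstash -f stdin-es.conf`, which keeps the pipeline under version control rather than in shell history.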
Refresh the page and check.
Under the page's Data Browse tab you can see the input and output.
The typed content is now visible (shown on both node1 and node2).
The visualization tool (Kibana)
[root@test43 conf.d]# vim system.conf
input {
    file {
        path => "/var/log/messages"
        type => "system"
        start_position => "beginning"
    }
}
output {
    elasticsearch {
        hosts => ["192.168.65.44:9200", "192.168.65.45:9200"]
        index => "system-%{+YYYY.MM.dd}"
    }
}
Grant permissions:
[root@test43 conf.d]# chmod 777 /var/log/messages
Create the index:
[root@test43 conf.d]# logstash -f system.conf --path.data /opt/test2 &
Upload and install the Kibana package:
[root@test43 opt]# rz -E
rz waiting to receive.
[root@test43 opt]# rpm -ivh kibana-6.7.2-x86_64.rpm
[root@test43 opt]# vim /etc/kibana/kibana.yml
2   server.port: 5601                    # listening port
7   server.host: "0.0.0.0"               # listen address
28  elasticsearch.hosts: ["http://192.168.65.44:9200","http://192.168.65.45:9200"]    # the ES instances
37  kibana.index: ".kibana"              # name of the index Kibana saves its data in
97  logging.dest: /var/log/kibana.log
114 i18n.locale: "zh-CN"                 # display in Chinese
[root@test43 opt]# touch /var/log/kibana.log    # must be created once logging.dest is enabled; not needed otherwise
[root@test43 opt]# chown kibana:kibana /var/log/kibana.log    # grant ownership
[root@test43 opt]# systemctl restart kibana.service
[root@test43 opt]# systemctl enable kibana.service            # start on boot
Created symlink from /etc/systemd/system/multi-user.target.wants/kibana.service to /etc/systemd/system/kibana.service.
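The index => "system-%{+YYYY.MM.dd}" option makes Logstash write to one index per day, expanding the pattern from each event's timestamp. A shell sketch of the equivalent name for today's date:

```shell
# Logstash's %{+YYYY.MM.dd} corresponds to date's %Y.%m.%d format.
index_name="system-$(date +%Y.%m.%d)"
echo "$index_name"    # e.g. system-2024.08.01 on the day of this transcript
```

This is why the head UI at port 9100 shows a new system-YYYY.MM.dd index appearing each day the pipeline runs.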
After refreshing, the .kibana index appears.
Visit 192.168.65.43:5601 in a browser: that is Kibana.
192.168.65.44 es1
192.168.65.45 es2
192.168.65.43 kibana