Thursday, March 17, 2022

Kibana


Logstash - it collects logs and transforms them.
Logstash takes a whole log file as input, creates an index, and passes the output to Elasticsearch.
Logstash works on both files and databases, e.g. system logs, app logs, or any particular analysis.

Elasticsearch -
it analyzes the logs sent by Logstash and does the indexing.
Elasticsearch handles searching and indexing;
it stores the index in memory, in key-value form,
so whenever you type a key (keyword), it will show its value.

Kibana -
it visualizes the logs provided by Elasticsearch.

In Logstash we have many plugins that take input and produce output for sources like system logs and app logs.

The same way we can read a database too. E.g. we have a banking application and we want to see which clients logged in today, and what and how many transfers they made. The process is the same - collect the data with Logstash, dump it into Elasticsearch, visualize it in Kibana.

Say we have 10 servers and an issue on 1 server.
From the system logs and application logs we can identify and narrow down where the problem came from, and we can resolve it.

If any session or page is taking too long to load, we can identify it after putting the logs on a dashboard, and see which response is taking so much time.

In case we get any exception or error in the system, we can take that exception or error as a string and create an alert from it; it should trigger when the error occurs, with an action such as emailing the system admin.
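As a sketch of how such an alert could look using Elasticsearch's Watcher feature (a licensed X-Pack feature that also needs an email account configured in Elasticsearch; the index pattern and address below are hypothetical):

```
PUT _watcher/watch/error_alert
{
  "trigger": { "schedule": { "interval": "5m" } },
  "input": {
    "search": {
      "request": {
        "indices": ["app-logs-*"],
        "body": { "query": { "match": { "message": "ERROR" } } }
      }
    }
  },
  "condition": { "compare": { "ctx.payload.hits.total": { "gt": 0 } } },
  "actions": {
    "notify_admin": {
      "email": {
        "to": "sysadmin@example.com",
        "subject": "Errors found in application logs"
      }
    }
  }
}
```

Every 5 minutes the watch searches the logs for the error string, and when any hits are found it fires the email action.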

For CPU/memory usage, or any probability of a server going down, we can set a threshold and trigger an alert.

All 3 - Logstash, Elasticsearch, Kibana - can also work individually.

 


logstash plugin

For Logstash, you need plugins for different kinds of data/log input,
e.g. logstash-input-jdbc, logstash-input-s3,
and outputs like logstash-output-s3, logstash-output-http.

How to install a Logstash plugin:
logstash-plugin install logstash-input-file

How to configure Logstash?
Go to the config directory and open the config file.
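For example, a minimal (hypothetical) pipeline file such as config/sample.conf that just echoes stdin to stdout:

```
input { stdin { } }

output { stdout { codec => rubydebug } }
```

Logstash is then started against this file with: bin/logstash -f config/sample.conf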

Where does Elasticsearch put the data?
Under the Elasticsearch data directory, per node.

Taking input from Beats and sending output to Elasticsearch:
input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
}
--------------------------------------------------------------------

Below is the file input plugin, with output to Elasticsearch.
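A minimal sketch of such a pipeline (the log path and index name are assumptions):

```
input {
  file {
    path => "/var/log/syslog"
    start_position => "beginning"
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "syslog-%{+YYYY.MM.dd}"
  }
}
```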

 -----------------------------------------------------------------------------------

jdbc input 
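A sketch of a jdbc input pipeline for the banking example above (the database name, table, and credentials are hypothetical, and the MySQL JDBC driver jar must be available to Logstash):

```
input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://localhost:3306/bankdb"
    jdbc_user => "reader"
    jdbc_password => "secret"
    jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
    statement => "SELECT * FROM transfers WHERE created_at > :sql_last_value"
    schedule => "* * * * *"
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "transfers"
  }
}
```

The schedule runs the query every minute, and :sql_last_value lets Logstash fetch only rows newer than the last run.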


 ------------------------------------------------------------------------------------------

Latest version: 7.12

Kibana is not a datastore.
Kibana should be configured to run against an Elasticsearch node of the same version. This is the officially supported configuration.
Best suited for log analysis.


Interface
Web-based interface
runs on port 5601 by default

 

Dashboard & Visualization
Best suited to analyze logs
Supports text-based analysis
Offers different visualization capabilities


Supported Datasources
Supports only Elasticsearch
LDAP Integration

Querying
Uses the Elasticsearch Query DSL to query data
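For example, a match query against a hypothetical customers index (can be run from Kibana Dev Tools or via curl):

```
GET /customers/_search
{
  "query": {
    "match": { "name": "manjeet" }
  }
}
```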


Alerting
Graphite, Prometheus, InfluxDB, MySQL, PostgreSQL, and Elasticsearch

Community
Has Excellent Community
https://www.elastic.co/community/


Elasticsearch

Search Engine - like Google: when we enter a keyword, it gives results.
Free & Open Source
Full-Text Search - e.g. if "manjeet" is written in any column, it can be searched,
but in an RDBMS such as MySQL this is difficult.

Scalable
 Horizontal - create parallel servers.
 Vertical - increase the configuration of a single system.
Inverted Index - like the index at the end of a book (e.g. a McGraw Hill book): a word with page info.
Schema Free - it creates the schema according to the data, unlike an RDBMS.
JSON - ELK is written in Java, but we can use different APIs for communication, e.g. for data.
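The inverted index idea above can be sketched in a few lines of Python (a toy illustration, not how Lucene actually stores it): each word maps to the list of documents containing it, like words mapping to page numbers at the back of a book.

```python
# Toy inverted index: word -> sorted list of doc ids containing that word.
from collections import defaultdict

def build_inverted_index(docs):
    """docs: dict of doc_id -> text. Returns word -> sorted doc id list."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for word in text.lower().split():
            index[word].add(doc_id)
    return {word: sorted(ids) for word, ids in index.items()}

docs = {
    1: "elasticsearch keeps an inverted index",
    2: "an index maps each word to documents",
}
index = build_inverted_index(docs)
print(index["index"])  # -> [1, 2] : both documents contain "index"
```

Searching a keyword is then just a dictionary lookup, which is why full-text search is fast here compared to scanning every row in an RDBMS.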

ELK is a full-fledged application stack.
ELK is based on Lucene (a library for search).

Apache Lucene
Search Library(IR)
Free
Open Source
Java
Inverted Index

Difference between ELK and Lucene: Lucene has some limitations, e.g. if you need to use it across many nodes, it's difficult.

Terminology
Cluster - multiple machines combined are called a cluster
Node - a single instance
Index
Document - like a row in MySQL
Field - a column in MySQL
Mapping - the schema in MySQL

When you have a distributed setup:
Shard - we have replica copies of the data on multiple servers; one is in write mode and the others are replicating.
Primary Shard - the server where writes happen is the primary.
Replica Shard - the others, which replicate from the primary.
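A sketch of how shard and replica counts are set when creating an index (the index name is hypothetical):

```
PUT /my-index
{
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 1
  }
}
```

With these settings the data is split across 3 primary shards, and each primary gets 1 replica copy on another node.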

Elastic Vs Mysql
Index = Table
Mapping = Schema
Field = Column
Document = Row
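To illustrate the mapping with one (hypothetical) request: customers is the index (table), the JSON body is the document (row), and name and city are fields (columns).

```
PUT /customers/_doc/1
{
  "name": "manjeet",
  "city": "delhi"
}
```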

Use case - why use ELK

If you have 100s of servers and you need to find one app's logs, it's difficult.

Also, if people are restricted from logging in to the application server where the logs live, you need to provide some kind of decentralized way to give the logs to other people.

ELK Installation
Go to the ELK website and download it.

 

Elasticsearch installation
Go to the ELK website and download it.
After downloading,
go to the bin folder of Elasticsearch
and run ./elasticsearch
The default port for Elasticsearch is 9200.

Same for Kibana:
go to the bin folder of Kibana
and run ./kibana
The default port for Kibana is 5601.

 https://www.youtube.com/watch?v=nsJar753ROc&list=PLTgwj-KL1pO2I0EQu8lDbhoH1CpLIHg9d

ELK
E - Elasticsearch
L - Logstash - limitation - supports ~100 queries
K - Kibana

fluentd (EFK): F - Fluentd
supports ~1000 queries
time-range queries, e.g. 4-6 pm
microservice query data

ELK / EFK stack

EFK is currently popular.

If you are working with containers, you must know about Filebeat/Metricbeat,
and you can say you bring the log data in using Filebeat/Metricbeat and view it using Kibana.
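A minimal sketch of a filebeat.yml that ships logs straight to Elasticsearch (the log path is an assumption):

```
filebeat.inputs:
  - type: log
    paths:
      - /var/log/myapp/*.log

output.elasticsearch:
  hosts: ["http://localhost:9200"]
```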

How do we bring data into EFK?

Fluentd is a kind of agent; you can install it on any client machine.

Part 1 ELK

https://www.youtube.com/watch?v=JrqdVGzSe8U&t=3s

Part 2 ELK

https://www.youtube.com/watch?v=TAgPoAJsv8Q


