In this post we will give you step-by-step guidance to install the ELK Stack on CentOS 7. You can refer to 'ELK Stack - Elasticsearch, Logstash and Kibana' for more information about the ELK Stack.
The installation process contains six main steps:
- Java Installation - Elasticsearch and Logstash require Java, and it should be a recent version of Oracle Java 8 because that is what Elasticsearch recommends.
- Elasticsearch Installation - works as the database of ELK and needs Java to run.
- Kibana Installation - the visualizer of ELK.
- Nginx Installation - Kibana listens only on localhost, so we need to set up a reverse proxy to allow external access to it. We will use Nginx for this purpose.
- Logstash Installation - the data processor of ELK.
- Filebeat Installation - a lightweight shipper for logs.
1. Java Installation
Elasticsearch and Logstash require Java, and it should be a recent version of Oracle Java 8 because that is what Elasticsearch recommends.
Go to your home directory and download the Oracle Java 8 (Update 73, the latest at the time of this writing) JDK RPM with these commands:
cd /home/{your_directory}
wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u73-b02/jdk-8u73-linux-x64.rpm"
Then install the RPM with this yum command (if you downloaded a different release, substitute the filename here):
sudo yum -y localinstall jdk-8u73-linux-x64.rpm
Now Java should be installed at /usr/java/jdk1.8.0_73/jre/bin/java, and linked from /usr/bin/java.
You may delete the archive file that you downloaded earlier:
rm ~/jdk-8u*-linux-x64.rpm
Now that Java 8 is installed, check the installed Java version with the following command:
java -version
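If you want to confirm which binary /usr/bin/java actually points to, you can follow the symlink chain (on CentOS 7 package installs this typically resolves through /etc/alternatives):
readlink -f /usr/bin/java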
2. Elasticsearch Installation
Elasticsearch can be installed with a package manager by adding Elastic’s package repository.
Run the following command to import the Elasticsearch public GPG key into rpm:
sudo rpm --import http://packages.elastic.co/GPG-KEY-elasticsearch
Create a new yum repository file for Elasticsearch. Note that this is a single command:
echo '[elasticsearch-2.x]
name=Elasticsearch repository for 2.x packages
baseurl=http://packages.elastic.co/elasticsearch/2.x/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
' | sudo tee /etc/yum.repos.d/elasticsearch.repo
Install Elasticsearch with this command:
sudo yum -y install elasticsearch
Elasticsearch is now installed. Next, edit the configuration file (elasticsearch.yml) in /etc/elasticsearch/:
sudo vim /etc/elasticsearch/elasticsearch.yml
Enable memory locking for Elasticsearch by uncommenting the following line (around line 40 of the file). This prevents Elasticsearch memory from being swapped out:
bootstrap.memory_lock: true
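Depending on your Elasticsearch version, the memory lock may only take effect if the systemd service is also allowed to lock unlimited memory. A minimal sketch using a systemd drop-in override (this extra step is an assumption about your setup, not something every installation needs):
sudo mkdir -p /etc/systemd/system/elasticsearch.service.d
sudo tee /etc/systemd/system/elasticsearch.service.d/override.conf <<'EOF'
[Service]
LimitMEMLOCK=infinity
EOF
sudo systemctl daemon-reload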
You need to restrict outside access to the Elasticsearch instance (port 9200) so outsiders cannot read data or shut down the Elasticsearch cluster through the HTTP API. Find the line that specifies network.host, uncomment it, and replace its value with "localhost".
elasticsearch.yml =>
network.host: localhost
Save and exit elasticsearch.yml.
Now start Elasticsearch:
sudo systemctl start elasticsearch
Then run the following command to start Elasticsearch automatically on boot up:
sudo systemctl enable elasticsearch
Now that Elasticsearch is up and running, check the status of the Elasticsearch service with the following command:
sudo systemctl status elasticsearch
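Because Elasticsearch is now bound to localhost, a quick local request should return basic node information (a minimal check; the exact JSON fields depend on your Elasticsearch version):
curl -XGET 'http://localhost:9200/?pretty'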
3. Kibana Installation
Install Kibana with this command:
sudo yum -y install kibana
Open the Kibana configuration file for editing:
sudo vi /opt/kibana/config/kibana.yml
In the Kibana configuration file, find the line that specifies server.host, and replace the IP address (“0.0.0.0” by default) with “localhost”:
server.host: "localhost"
Then find the line that specifies elasticsearch.hosts, uncomment it, and replace its value with ["http://localhost:9200"]:
elasticsearch.hosts: ["http://localhost:9200"]
Save and exit. This setting makes it so Kibana will only be accessible to the localhost. This is fine because we will install a Nginx reverse proxy, on the same server, to allow external access.
Now start the Kibana service, and enable it:
sudo systemctl enable kibana
sudo systemctl start kibana
Check the status of the Kibana service with the following command:
sudo systemctl status kibana
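Once the service is running, you can also confirm that Kibana is listening only on localhost port 5601 (it may take a few seconds after startup to appear):
sudo ss -tlnp | grep 5601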
4. Nginx Installation
Because Kibana is configured to listen only on localhost, we need to set up a reverse proxy to allow external access to it. We will use Nginx for this purpose.
Add the EPEL repository to yum:
sudo yum -y install epel-release
Now use yum to install Nginx and httpd-tools:
sudo yum -y install nginx httpd-tools
Use htpasswd to create an admin user, called “kibanaadmin” (you should use another name), that can access the Kibana web interface:
sudo htpasswd -c /etc/nginx/htpasswd.users kibanaadmin
Enter a password at the prompt. Remember this login, as you will need it to access the Kibana web interface.
Now we will create a Nginx server block in a new file:
sudo vi /etc/nginx/conf.d/lkdomain.lk.conf
Paste the following code block into the file. Be sure to update the server_name to match your server's name:
/etc/nginx/conf.d/lkdomain.lk.conf
server {
    listen 80;
    server_name example.com;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
Save and exit. This configures Nginx to direct your server's HTTP traffic to the Kibana application, which is listening on localhost:5601. Also, Nginx will use the htpasswd.users file that we created earlier and require basic authentication.
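Before starting Nginx, it is worth checking the configuration syntax with Nginx's built-in test option:
sudo nginx -t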
Now start and enable Nginx to put our changes into effect:
sudo systemctl start nginx
sudo systemctl enable nginx
Now that the Nginx server is up and running, check the status of the Nginx service with the following command:
sudo systemctl status nginx
Note: This tutorial assumes that SELinux is disabled. If this is not the case, you may need to run the following command for Kibana to work properly: sudo setsebool -P httpd_can_network_connect 1
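If firewalld is active on your ELK server (an assumption about your environment, not covered by this tutorial), you may also need to open port 80 for the Kibana proxy and port 5044 for incoming Filebeat traffic:
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --permanent --add-port=5044/tcp
sudo firewall-cmd --reload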
Kibana is now accessible via your FQDN or the public IP address of your ELK server, i.e. http://elk_server_public_ip/. If you go there in a web browser, after entering the “kibanaadmin” credentials, you should see a Kibana welcome page which will ask you to configure an index pattern. Let’s get back to that later, after we install all of the other components.
Go to a web browser and type http://{Server_IP}/status in the address bar to check that the Kibana dashboard is working correctly; the status should be green.
http://{Your_ELK_IP}/status
5. Logstash Installation
Install Logstash with this command:
sudo yum -y install logstash
Logstash is installed but it is not configured yet.
Logstash configuration files are written in a JSON-like format and reside in /etc/logstash/conf.d. The configuration consists of three sections: inputs, filters, and outputs.
Let’s create a configuration file called logstash.conf and set up our “Filebeat” input:
sudo vim /etc/logstash/conf.d/logstash.conf
Insert the following configuration:
#read input from filebeat by listening to port 5044 on which filebeat will send the data
input {
  beats {
    type => "test"
    port => "5044"
    #client_inactivity_timeout => 3600
  }
}
filter {
  #If a log line contains a tab character followed by 'at', tag that entry as a stacktrace
  if [message] =~ "\tat" {
    grok {
      match => ["message", "^(\tat)"]
      add_tag => ["stacktrace"]
    }
  }
}
output {
  stdout {
    codec => rubydebug
  }
  # Send properly parsed log events to Elasticsearch
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
Then save and exit.
Verify the Logstash configuration file:
sudo service logstash configtest
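On newer Logstash releases the configtest subcommand may not be available; as an alternative you can test the configuration directly with the Logstash binary (paths assume a standard package install):
sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash -f /etc/logstash/conf.d/logstash.conf --config.test_and_exit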
Finally, enable Logstash to start at boot time and start the service:
sudo systemctl enable logstash
sudo systemctl start logstash
Check the status of the Logstash service with the following command:
sudo systemctl status logstash
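Once Logstash has finished starting its pipeline (this can take a minute), you can confirm that the Beats input is listening on port 5044:
sudo ss -tlnp | grep 5044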
6. Filebeat Installation
On the client server, run the following command to import the Elasticsearch public GPG key into rpm:
sudo rpm --import http://packages.elastic.co/GPG-KEY-elasticsearch
Create and edit a new yum repository file for Filebeat:
sudo vim /etc/yum.repos.d/elastic-beats.repo
Add the following repository configuration:
[elastic-7.x]
name=Elastic repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
Save and exit.
Install Filebeat with this command:
sudo yum -y install filebeat
Filebeat is installed but it is not configured yet.
Now we will configure Filebeat to connect to Logstash on our ELK Server. This section will step you through modifying the example configuration file that comes with Filebeat.
On the client server, open the Filebeat configuration file for editing:
sudo vi /etc/filebeat/filebeat.yml
Filebeat uses Elasticsearch as the output target by default. We need to change it to Logstash. Disable the Elasticsearch output by commenting out the following section in filebeat.yml:
#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
# Array of hosts to connect to.
# hosts: ["localhost:9200"]
Now uncomment the following lines to enable Logstash as the default output of Filebeat:
#----------------------------- Logstash output ------------------------------
output.logstash:
# The Logstash hosts
hosts: ["192.168.10.120:5044"]
** You should comment out “output.elasticsearch:” and uncomment “output.logstash:” to get output from Filebeat. Otherwise it will not send output to the Logstash service.
You can include your log file path in the filebeat.inputs section. Filebeat needs the log file location in order to read the error log and send the data to the Logstash pipeline. Add your error log path to the filebeat.inputs section of filebeat.yml in /etc/filebeat:
paths:
- /{your_log_file_path}/log/*
** You need to be careful about YAML indentation when you edit .yml files.
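For reference, a complete input entry in filebeat.yml might look like the sketch below (the placeholder path is the same one used above; indentation is significant):
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /{your_log_file_path}/log/*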
To read system and service error logs using Filebeat modules:
Filebeat modules simplify the collection, parsing, and visualization of common log formats.
Filebeat module list:
filebeat modules list
Enable the module:
filebeat modules enable {module_name}
Eg.: filebeat modules enable mysql
Now configure the path to the module's error log in {module}.yml in /etc/filebeat/modules.d:
cd /etc/filebeat/modules.d
vim {module}.yml
Eg.: vim mysql.yml
Now add the error log path as follows to the mysql.yml file:
var.paths: ["/path/to/log/mysql/error.log*"]
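A sketch of how the relevant part of mysql.yml might look after the change (the log path is a placeholder, and the exact keys depend on your Filebeat version):
- module: mysql
  # Error logs
  error:
    enabled: true
    var.paths: ["/path/to/log/mysql/error.log*"]
  # Slow logs (left disabled in this example)
  slowlog:
    enabled: false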
The ‘setup’ command loads the recommended index template for writing to Elasticsearch and deploys the sample dashboards (if available) for visualizing the data in Kibana. This is a one-time setup step.
filebeat setup -e
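Before restarting Filebeat, you can optionally verify the configuration file and the connection to the configured Logstash output with Filebeat's built-in test commands:
sudo filebeat test config
sudo filebeat test output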
Now restart the Filebeat service:
sudo systemctl restart filebeat
Go to the Kibana server from your browser; your error log data will be there.