Bitnami ELK for Huawei Enterprise
Cloud
Description
The ELK stack is a log management platform consisting of Elasticsearch (deep
search and data analytics), Logstash (centralized logging, log enrichment and
parsing) and Kibana (powerful and beautiful data visualizations).
First steps with the Bitnami ELK Stack
Welcome to your new Bitnami application running on Huawei Enterprise Cloud!
Here are a few questions (and answers!) you might need when first starting
with your application.
What is the administrator username set for me to log in to the
application for the first time?
Username: user
What is the administrator password?
To obtain the administrator password, click the "Remote Login" menu option
next to the server name in the Huawei Cloud Server Console. This will launch
a new browser window with an encrypted login session. The application
password will be displayed on the login welcome screen.
What SSH username should I use for secure shell access to
my application?
SSH username: root
Getting started with Bitnami ELK Stack
To get started with Bitnami ELK Stack, we suggest the following example to
read the Apache access_log and check the requests per minute to the ELK
server:
Step 1. Configure Logstash.
Stop the Logstash service:
sudo /opt/bitnami/ctlscript.sh stop logstash
Create the file /opt/bitnami/logstash/conf/access-log.conf as below:
input {
    file {
        path => "/opt/bitnami/apache2/logs/access_log"
        start_position => "beginning"
    }
}
filter {
    grok {
        match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
    date {
        match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
    }
}
output {
    elasticsearch {
        hosts => [ "127.0.0.1:9200" ]
    }
}
Check that the configuration is valid. You should see an output message like
the one below:
/opt/bitnami/use_elk
/opt/bitnami/logstash/bin/logstash -f /opt/bitnami/logstash/conf/ --config.test_and_exit
Configuration OK
Start the Logstash service:
sudo /opt/bitnami/ctlscript.sh start logstash
Step 2. Check Elasticsearch.
Access your server via browser in order to generate data
(http://SERVER-IP/).
Check Elasticsearch is receiving data. You should see an index called
logstash-DATE:
curl 'localhost:9200/_cat/indices?v'
health status index               pri rep docs.count docs.deleted store.size pri.store.size
green  open   .kibana               1   0          1            0      3.1kb          3.1kb
yellow open   logstash-2017.02.21   5   1          1            2     11.2kb         11.2kb
Step 3. Configure Kibana pattern.
Access the Kibana app via browser (http://SERVER-IP/app/kibana),
and use your user/password to pass the basic HTTP authentication.
Click the green "Create" button.
On the left bar, click the "Discover" menu item. You should see
something like the screenshot below:
Step 4. Create a Kibana dashboard.
On the left bar, click "Visualize" menu item.
Select the "Vertical bar chart -> From a new search" menu options.
Select "logstash-*" index.
Follow the "X-Axis -> Aggregation -> Date Histogram" button sequence.
Select "Minute" in the "Interval" field, and click "Apply changes" button.
Save the visualization.
On the left bar, click "Dashboard" menu item.
Click the "Add" button, select the previous visualization and save the
dashboard.
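If you want to sanity-check the same requests-per-minute aggregation outside Kibana, you can do it directly on the shell. The sketch below is illustrative: it builds a throwaway sample file in place of /opt/bitnami/apache2/logs/access_log and counts log entries per minute with awk.

```shell
# Count requests per minute from an Apache combined-format log.
# A temporary sample file stands in for /opt/bitnami/apache2/logs/access_log.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
1.2.3.4 - - [21/Feb/2017:10:15:01 +0000] "GET / HTTP/1.1" 200 512 "-" "curl"
1.2.3.4 - - [21/Feb/2017:10:15:30 +0000] "GET /app HTTP/1.1" 200 512 "-" "curl"
1.2.3.4 - - [21/Feb/2017:10:16:05 +0000] "GET / HTTP/1.1" 200 512 "-" "curl"
EOF
# Field 4 holds the timestamp, e.g. [21/Feb/2017:10:15:01 -- keep it up to the minute.
awk '{ ts = substr($4, 2, 17); count[ts]++ }
     END { for (m in count) print m, count[m] }' "$LOG" | sort
rm -f "$LOG"
```

With the sample data above, this prints one line per minute with its request count, the same shape as the Kibana date-histogram bars.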
How to connect remotely?
How to connect remotely to Elasticsearch?
IMPORTANT: Making this application's network ports public is a significant
security risk. You are strongly advised to only allow access to those ports from
non-routable IP addresses. If access is required from outside of a trusted
network, do not allow access to those ports via a public IP address. Instead,
use a secure channel such as a VPN or an SSH tunnel. Follow these
instructions to remotely connect safely and reliably.
To access the ELK server from another computer or application, make the
following changes to the node's
/opt/bitnami/elasticsearch/config/elasticsearch.yml file:
network.host: Specify the hostname or IP address where the server will
be accessible. Set it to 0.0.0.0 to listen on every interface.
network.publish_host: Specify the host name that the node publishes to
other nodes for communication.
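As a sketch, the relevant fragment of elasticsearch.yml might look like the following; the publish address 10.0.0.5 is a placeholder for your node's real private IP:

```yaml
# /opt/bitnami/elasticsearch/config/elasticsearch.yml (fragment, illustrative values)
network.host: 0.0.0.0            # listen on every interface
network.publish_host: 10.0.0.5   # address other nodes use to reach this node
```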
NOTE: Remember to configure the firewall on your server to allow traffic on the
ports used by ELK. Refer to the FAQ for more information.
How to connect remotely to Logstash using SSL certificates?
It is strongly recommended to create an SSL certificate and key pair in order to
verify the identity of the ELK server. In this example, we are going to use Filebeat
to ship logs from our client servers to our ELK server:
Add the ELK Server's private IP address to the subjectAltName (SAN)
field of the SSL certificate on the ELK server. To do so, open the
OpenSSL configuration file (/opt/bitnami/common/openssl/openssl.cnf),
find the [ v3_ca ] section in the file, and add this line under it (substitute
in the ELK server's private IP address for the IP_ADDRESS
placeholder):
subjectAltName = IP:IP_ADDRESS
Generate the SSL certificate and private key in the appropriate
locations (e.g. /opt/bitnami/logstash/ssl/), with the following commands:
/opt/bitnami/use_elk
cd /opt/bitnami/logstash/ssl/
openssl req -config /opt/bitnami/common/openssl/openssl.cnf -x509
-days 3650 -batch -nodes -newkey rsa:2048 -keyout logstash-remote.
key -out logstash-remote.crt
Configure Logstash (/opt/bitnami/logstash/conf/) to add SSL certificates
for the input protocol. The code below will add SSL certificates for the
Beats plugin:
input {
    beats {
        port => 5044
        ssl => true
        ssl_certificate => "/opt/bitnami/logstash/ssl/logstash-remote.crt"
        ssl_key => "/opt/bitnami/logstash/ssl/logstash-remote.key"
    }
}
Restart Logstash:
sudo /opt/bitnami/ctlscript.sh restart logstash
Open port 5044 in the ELK server firewall.
The logstash-remote.crt file should be copied to all the client instances
that send logs to Logstash.
Install Filebeat in the client machine. For example, the commands
below will install Filebeat on Ubuntu:
echo "deb https://packages.elastic.co/beats/apt stable main" | sudo tee -a /etc/apt/sources.list.d/beats.list
wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
sudo apt-get update
sudo apt-get install filebeat
Configure Filebeat. In this example, we need to add the lines below in
the filebeat configuration file (by default /etc/filebeat/filebeat.yml) to
send syslog logs:
filebeat:
  prospectors:
    -
      paths:
        - /var/log/auth.log
        - /var/log/syslog
        # - /var/log/*.log
      ...
      document_type: syslog
  ...
output:
  logstash:
    hosts: ["elk_server_private_ip:5044"]
    bulk_max_size: 1024
    ...
    tls:
      certificate_authorities: ["<logstash-remote.crt_path>"]
  ...
Restart Filebeat service:
sudo service filebeat restart
How to start or stop the services?
Each Bitnami stack includes a control script that lets you easily stop, start and
restart services. The script is located at /opt/bitnami/ctlscript.sh. Call it without
any service name arguments to start all services:
sudo /opt/bitnami/ctlscript.sh start
Or use it to restart a single service, such as Apache only, by passing the
service name as argument:
sudo /opt/bitnami/ctlscript.sh restart apache
Use this script to stop all services:
sudo /opt/bitnami/ctlscript.sh stop
Restart the services by running the script without any arguments:
sudo /opt/bitnami/ctlscript.sh restart
Obtain a list of available services and operations by running the script without
any arguments:
sudo /opt/bitnami/ctlscript.sh
How to install a plugin?
How to install a plugin on Elasticsearch?
Install plugins with the plugin tool provided by Elasticsearch. For example, the
command below will install the ICU analysis plugin (analysis-icu).
/opt/bitnami/use_elk
cd /opt/bitnami/elasticsearch
bin/elasticsearch-plugin install analysis-icu
How to install a plugin on Logstash?
Logstash supports input, filter, codec and output plugins. These are available
as self-contained gems (RubyGems.org). You can install, uninstall and upgrade
plugins using the Command Line Interface (CLI) invocations described below:
Install a plugin:
/opt/bitnami/use_elk
cd /opt/bitnami/logstash
bin/logstash-plugin install PLUGIN
Update a plugin:
bin/logstash-plugin update PLUGIN
List all installed plugins:
bin/logstash-plugin list
Uninstall a plugin (for Logstash <= 2.4 versions):
bin/logstash-plugin uninstall PLUGIN
How to install a plugin on Kibana?
Add-on functionality for Kibana is implemented with plug-in modules.
Install a plugin:
/opt/bitnami/use_elk
cd /opt/bitnami/kibana
bin/kibana-plugin install ORG/PLUGIN/VERSION
List all installed plugins:
bin/kibana-plugin list
Remove a plugin:
bin/kibana-plugin remove PLUGIN
You can also install a plugin manually by moving the plugin file to the plugins
directory and unpacking the plugin files into a new directory.
Updating the IP address or hostname
ELK requires an update to its configured IP address/domain name whenever the
machine's IP address or domain name changes. The bnconfig tool has an option
which updates the IP address, called --machine_hostname (use --help to check if
that option is available for your application). Note that this tool changes the URL
to http://NEW_DOMAIN/elk.
sudo /opt/bitnami/apps/elk/bnconfig --machine_hostname NEW_DOMAIN
If you have configured your machine to use a static domain name or IP
address, you should rename or remove the /opt/bitnami/apps/elk/bnconfig file.
sudo mv /opt/bitnami/apps/elk/bnconfig /opt/bitnami/apps/elk/bnconfig.disabled
NOTE: Be sure that your domain is propagated. Otherwise, this will not work.
You can verify the new DNS record by using the Global DNS Propagation
Checker and entering your domain name into the search field.
You can also change your hostname by modifying it in your hosts file. Enter
the new hostname using your preferred editor.
sudo nano /etc/hosts
Add a new line with the IP address and the new hostname. Here's an
example. Remember to replace the IP-ADDRESS and DOMAIN
placeholders with the correct IP address and domain name.
IP-ADDRESS DOMAIN
How to create a full backup of Elasticsearch
data?
Backup
Elasticsearch provides a snapshot function that you can use to back up your
data. Follow these steps:
Register a repository where the snapshot will be stored. This may be a
local directory or cloud storage (which requires additional plugins). In
this example, we will use a local repository. First, create a directory to
hold the snapshots and give the elasticsearch user access to it:
cd /home/bitnami
mkdir backups
sudo chown elasticsearch:bitnami /home/bitnami/backups/
sudo chmod u+rwx /home/bitnami/backups/
Update the /opt/bitnami/elasticsearch/config/elasticsearch.yml file and
add the path.repo variable to it as shown below, pointing to the above
repository location:
path.repo: ["/home/bitnami/backups"]
Initialize the repository via the Elasticsearch REST API with the
following commands:
curl -XPUT 'http://localhost:9200/_snapshot/my_backup' -d '{
"type":"fs",
"settings":{
"location":"/home/bitnami/backups/my_backup",
"compress":true
}
}'
The location property has to be set to the absolute path to the backup
files. In this example, my_backup is the name of the backup repository.
See registered repositories with this command:
curl -XGET 'http://localhost:9200/_snapshot?pretty'
Once the repository is registered, launch the backup with the following
command:
curl -XPUT 'localhost:9200/_snapshot/my_backup/snapshot_1?wait_for_completion=true&pretty'
In this example, my_backup is the name of the repository created
previously and snapshot_1 is the name for the backup. The
wait_for_completion option will block the command line until the
snapshot is complete. To create the snapshot in the background, simply
omit this option, as shown below:
curl -XPUT 'localhost:9200/_snapshot/my_backup/snapshot_1'
Restore
To restore a backup over existing data, follow these steps:
Close the specific indices that will be overwritten with this command:
curl -XPOST 'localhost:9200/my_index/_close'
Optionally, close all indices:
curl -XPOST 'localhost:9200/_all/_close'
Restore the backup with the following command. This command will
also reopen the indices that were previously closed:
curl -XPOST 'localhost:9200/_snapshot/my_backup/snapshot_1/_restore'
For more information, refer to the official documentation.
How to upload files to the server with SFTP?
Although you can use any SFTP/SCP client to transfer files to your server, the
link below explains how to configure FileZilla (Windows, Linux and Mac OS X),
WinSCP (Windows) and Cyberduck (Mac OS X). It is required to use your
server's private SSH key to configure the SFTP client properly. Choose your
preferred application and follow the steps in the link below to connect to the
server through SFTP.
How to upload files to the server
How to enable HTTPS support with SSL
certificates?
NOTE: The steps below assume that you are using a custom domain name
and that you have already configured the custom domain name to point to your
cloud server.
Bitnami images come with SSL support already pre-configured and with a
dummy certificate in place. Although this dummy certificate is fine for testing
and development purposes, you will usually want to use a valid SSL certificate
for production use. You can either generate this on your own (explained here)
or you can purchase one from a commercial certificate authority.
Once you obtain the certificate and certificate key files, you will need to update
your server to use them. Follow these steps to activate SSL support:
Use the table below to identify the correct locations for your certificate
and configuration files.
Variable                                 Value
Current application URL                  https://[custom-domain]/
                                         Example: https://my-domain.com/ or
                                         https://my-domain.com/appname
Apache configuration file                /opt/bitnami/apache2/conf/bitnami/bitnami.conf
Certificate file                         /opt/bitnami/apache2/conf/server.crt
Certificate key file                     /opt/bitnami/apache2/conf/server.key
CA certificate bundle file (if present)  /opt/bitnami/apache2/conf/server-ca.crt
Copy your SSL certificate and certificate key file to the specified
locations.
NOTE: If you use different names for your certificate and key files, you should
reconfigure the SSLCertificateFile and SSLCertificateKeyFile directives in the
corresponding Apache configuration file to reflect the correct file names.
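Before restarting Apache it is worth confirming that the certificate and key actually belong together: their RSA moduli must be identical. The sketch below demonstrates the check with a throwaway self-signed pair; on the server you would point the two openssl commands at /opt/bitnami/apache2/conf/server.crt and server.key instead.

```shell
# Generate a throwaway self-signed pair purely to demonstrate the check.
# example.com and the temp files are placeholders.
KEY=$(mktemp) CRT=$(mktemp)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=example.com" -keyout "$KEY" -out "$CRT" 2>/dev/null
# A certificate and key match when their RSA moduli are identical.
cert_mod=$(openssl x509 -noout -modulus -in "$CRT")
key_mod=$(openssl rsa -noout -modulus -in "$KEY")
[ "$cert_mod" = "$key_mod" ] && echo "certificate and key match"
rm -f "$KEY" "$CRT"
```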
If your certificate authority has also provided you with a PEM-encoded
Certificate Authority (CA) bundle, you must copy it to the correct
location in the previous table. Then, modify the Apache configuration
file to include the following line below the SSLCertificateKeyFile
directive. Choose the correct directive based on your scenario and
Apache version:
Variable                                Value
Apache configuration file               /opt/bitnami/apache2/conf/bitnami/bitnami.conf
Directive to include (Apache v2.4.8+)   SSLCACertificateFile "/opt/bitnami/apache2/conf/server-ca.crt"
Directive to include (Apache < v2.4.8)  SSLCertificateChainFile "/opt/bitnami/apache2/conf/server-ca.crt"
NOTE: If you use a different name for your CA certificate bundle, you should
reconfigure the SSLCertificateChainFile or SSLCACertificateFile directives in
the corresponding Apache configuration file to reflect the correct file name.
Once you have copied all the server certificate files, you may make
them readable by the root user only with the following commands:
sudo chown root:root /opt/bitnami/apache2/conf/server*
sudo chmod 600 /opt/bitnami/apache2/conf/server*
Open port 443 in the server firewall. Refer to the FAQ for more
information.
Restart the Apache server.
You should now be able to access your application using an HTTPS URL.
How to create an SSL certificate?
You can create your own SSL certificate with the OpenSSL binary. A certificate
request can then be sent to a certificate authority (CA) to get it signed into a
certificate, or if you have your own certificate authority, you may sign it yourself,
or you can use a self-signed certificate (because you just want a test certificate
or because you are setting up your own CA).
Create your private key (if you haven't created it already):
sudo openssl genrsa -out /opt/bitnami/apache2/conf/server.key 2048
Create a certificate:
sudo openssl req -new -key /opt/bitnami/apache2/conf/server.key -out /opt/bitnami/apache2/conf/cert.csr
IMPORTANT: Enter the server domain name when the above
command asks for the "Common Name".
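If you prefer to avoid the interactive prompts, the subject (including the Common Name) can be passed with -subj. The sketch below uses temporary files and the placeholder domain example.com rather than the real /opt/bitnami/apache2/conf/ paths:

```shell
# Non-interactive key + CSR generation; example.com is a placeholder domain
# and the temp files stand in for the Apache conf paths above.
KEY=$(mktemp) CSR=$(mktemp)
openssl genrsa -out "$KEY" 2048 2>/dev/null
openssl req -new -key "$KEY" -out "$CSR" -subj "/CN=example.com"
# Inspect the request to confirm the Common Name was embedded.
openssl req -in "$CSR" -noout -subject
rm -f "$KEY" "$CSR"
```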
Send cert.csr to the certificate authority. Once the certificate authority
completes its checks (and receives payment, where applicable), it will
issue your new certificate.
Until the certificate is received, create a temporary self-signed
certificate:
sudo openssl x509 -in /opt/bitnami/apache2/conf/cert.csr -out /opt/bitnami/apache2/conf/server.crt -req -signkey /opt/bitnami/apache2/conf/server.key -days 365
Back up your private key in a safe location after generating a
password-protected version as follows:
sudo openssl rsa -des3 -in /opt/bitnami/apache2/conf/server.key -out privkey.pem
Note that if you use this encrypted key in the Apache configuration file, it
will be necessary to enter the password manually every time Apache
starts. Regenerate the key without password protection from this file as
follows:
sudo openssl rsa -in privkey.pem -out /opt/bitnami/apache2/conf/server.key
Find more information about certificates at http://www.openssl.org.
How to force HTTPS redirection?
Add the following to the top of the /opt/bitnami/apps/elk/conf/httpd-prefix.conf
file:
RewriteEngine On
RewriteCond %{HTTPS} !=on
RewriteRule ^/(.*) https://%{SERVER_NAME}/$1 [R,L]
After modifying the Apache configuration files, restart Apache to apply the
changes.
How to debug Apache errors?
Once Apache starts, it will create two log files at
/opt/bitnami/apache2/logs/access_log and /opt/bitnami/apache2/logs/error_log
respectively.
The access_log file is used to track client requests. When a client
requests a document from the server, Apache records several
parameters associated with the request in this file, such as: the IP
address of the client, the document requested, the HTTP status code,
and the current time.
The error_log file is used to record important events. This file includes
error messages, startup messages, and any other significant events in
the life cycle of the server. This is the first place to look when you run
into a problem when using Apache.
You can also check the Apache configuration syntax with the apachectl
configtest command. If no error is found, you will see a message similar to:
Syntax OK
How to add nodes to an Elasticsearch
cluster?
To add additional nodes to a cluster, update the following configuration
parameters in the node's /opt/bitnami/elasticsearch/config/elasticsearch.yml
file:
cluster.name: All the nodes should have the same cluster name to work
properly.
node.name: The name of each node should be unique. Set meaningful
names to your nodes according to their functions so it will be easier to
identify them.
network.publish_host: The host name that a node publishes to other
nodes for communication. This host should be accessible at least from
the master node.
discovery.zen.ping.unicast.hosts: When nodes are in the same
sub-network, they will auto-configure themselves into a cluster. In other
cases, specify a list with your nodes in this parameter.
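As an illustration, a second node joining the cluster might use an elasticsearch.yml fragment like the following (all names and addresses are placeholders):

```yaml
# elasticsearch.yml on a node joining an existing cluster (illustrative values)
cluster.name: my-cluster                              # same on every node
node.name: node-2                                     # unique per node
network.publish_host: 10.0.0.6                        # reachable from the master
discovery.zen.ping.unicast.hosts: ["10.0.0.5", "10.0.0.6"]
```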
Refer to the official documentation for more information.
How to debug ELK errors?
The Elasticsearch log files are created at /opt/bitnami/elasticsearch/logs/.
The Logstash log files are created at /opt/bitnami/logstash/logs/.
The Kibana log file is created at /opt/bitnami/kibana/logs/kibana.log.
How to install elasticsearch-head?
Elasticsearch-head is a Web front-end for an Elasticsearch cluster. For
Elasticsearch 5.x, site plugins are not supported, so it needs to run as a
standalone server. Follow these steps:
Install Node.js and npm. For example, the commands below will install
them on Ubuntu:
sudo apt install nodejs-legacy npm
Download the elasticsearch-head ZIP file and decompress it:
wget https://github.com/mobz/elasticsearch-head/archive/master.zip
unzip master.zip
Install the modules and run the service:
cd elasticsearch-head-master
npm install
./node_modules/grunt/bin/grunt server &
Update the /opt/bitnami/elasticsearch/config/elasticsearch.yml file and
enable CORS by setting http.cors.enabled to true:
http.cors.enabled: true
In the same file, set the http.cors.allow-origin variable to the domains
that are allowed to send cross-origin requests. If you prepend and
append a "/" to the value, this will be treated as a regular expression.
For example:
http.cors.allow-origin: /https?:\/\/localhost(:[0-9]+)?/
NOTE: You can set the value of http.cors.allow-origin to "*" to allow CORS
requests from anywhere if you wish. However, this is not recommended as it is
a security risk.
Add Apache configuration for elasticsearch-head to
/opt/bitnami/elasticsearch/apache-conf/elasticsearch.conf:
ProxyPass /elasticsearch-head http://127.0.0.1:9100
ProxyPassReverse /elasticsearch-head http://127.0.0.1:9100
Restart the services:
sudo /opt/bitnami/ctlscript.sh restart apache
Browse to
http://SERVER-IP/elasticsearch-head/?base_uri=http://SERVER-IP/elasticsearch
and insert your Elasticsearch credentials. You should see something like the
screenshot below:
Which components are installed with the
Bitnami ELK Stack?
The Bitnami ELK Stack ships the components listed below. If you want to know
which specific version of each component is bundled in the stack you are
downloading, check the README.txt file on the download page or in the stack
installation directory. You can also find more information about each
component using the links below.
Main components
Elasticsearch
Logstash
Kibana
Apache Web server
What is the default configuration?
The main configuration file for Elasticsearch is
/opt/bitnami/elasticsearch/config/elasticsearch.yml.
By default, Elasticsearch will use port 9200 for requests and port 9300 for
communication between nodes within the cluster. If these ports are in use
when the server starts, it will attempt to use the next available port, such as
9201 or 9301.
Set custom ports using the configuration file, together with details such as the
cluster name (elasticsearch by default), node name, address binding and
discovery settings. All these settings are needed to add more nodes to your
Elasticsearch cluster.
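For illustration, a minimal elasticsearch.yml fragment covering these settings might look like this (values are placeholders; option names as used by the Elasticsearch 5.x series):

```yaml
# /opt/bitnami/elasticsearch/config/elasticsearch.yml (fragment, illustrative values)
cluster.name: my-cluster        # "elasticsearch" by default
node.name: node-1
http.port: 9200                 # REST API port
transport.tcp.port: 9300        # inter-node communication port
```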
The main configuration file for Logstash is
/opt/bitnami/logstash/conf/logstash.conf.
The Bitnami ELK Stack provides a basic example of this file which can be
edited for your specific purposes. This file has a separate section for each type
of plugin that can be added to the event processing pipeline.
By default, Logstash will use port 9600. If this port is in use when the server
starts, it will attempt to use the next available port, such as 9601.
The main configuration file for Kibana is /opt/bitnami/kibana/config/kibana.yml.
By default, Kibana will use port 5601. If this port is in use when the server
starts, it will attempt to use the next available port, such as 5602.
You can set a custom port using the configuration file, together with details
such as the Elasticsearch URL (http://127.0.0.1:9200 by default), Kibana index,
default application to load or verbosity level.
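A minimal sketch of such a kibana.yml might look like the following (the values shown are the defaults mentioned above; option names as used by the Kibana 5.x series):

```yaml
# /opt/bitnami/kibana/config/kibana.yml (fragment, illustrative values)
server.port: 5601
elasticsearch.url: "http://127.0.0.1:9200"   # where Kibana reads its data from
kibana.index: ".kibana"
logging.verbose: false
```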
How to upgrade ELK?
NOTE: It's highly recommended to perform a backup before any upgrade.
Upgrade Elasticsearch
Since version 0.90.7, Elasticsearch supports rolling upgrades. As a result, it's
not necessary to stop the entire cluster during the upgrade process. Instead, it
is possible to upgrade one node at a time and keep the rest of the cluster
operating normally.
To upgrade a node, follow the steps below:
Disable shard reallocation using the command below:
curl -XPUT localhost:9200/_cluster/settings -d '{
"transient" : {
"cluster.routing.allocation.enable" : "none"
}
}'
Stop non-essential indexing and perform a synced flush (optional):
curl -XPOST 'http://localhost:9200/_flush/synced'
Stop the node:
curl -XPOST 'http://localhost:9200/_cluster/nodes/_local/_shutdown'
sudo /opt/bitnami/ctlscript.sh stop elasticsearch
Download the latest version.
Extract to a new directory (not overwriting the current installation) - for
example, /tmp/new_elasticsearch.
Rename old files:
cd /opt/bitnami
sudo mv elasticsearch/bin elasticsearch/old_bin
sudo mv elasticsearch/lib elasticsearch/old_lib
sudo mv elasticsearch/modules elasticsearch/old_modules
Copy files from new installation directory:
sudo cp -r /tmp/new_elasticsearch/bin elasticsearch/bin
sudo cp -r /tmp/new_elasticsearch/lib elasticsearch/lib
sudo cp -r /tmp/new_elasticsearch/modules elasticsearch/modules
Start the node again:
sudo /opt/bitnami/ctlscript.sh start elasticsearch
Remove the replicas:
curl -XPUT 127.0.0.1:9200/_settings -d '{"number_of_replicas":0}'
Confirm that the node joins the cluster:
curl -XGET 'http://localhost:9200/_cat/nodes'
Re-enable shard reallocation:
curl -XPUT localhost:9200/_cluster/settings -d '{
"transient" : {
"cluster.routing.allocation.enable" : "all"
}
}'
Wait for the node to recover:
curl -XGET 'http://localhost:9200/_cat/health'
Repeat the process for all remaining nodes of your cluster.
Upgrade Logstash
To upgrade Logstash, follow the steps below:
Stop the service:
sudo /opt/bitnami/ctlscript.sh stop logstash
Download the latest version.
Extract to a new directory (not overwriting the current installation) - for
example, /tmp/new_logstash.
Backup old files:
cd /opt/bitnami
sudo cp -r logstash old_logstash
Copy files from new installation directory:
sudo cp -r /tmp/new_logstash/* logstash/
Test your configuration file:
logstash -t -f /opt/bitnami/logstash/conf/logstash.conf
Start the service again:
sudo /opt/bitnami/ctlscript.sh start logstash
Upgrade Kibana
To upgrade Kibana, follow these steps:
Create a snapshot of the existing .kibana index
Stop the service:
sudo /opt/bitnami/ctlscript.sh stop kibana
Download the latest version.
Extract to a new directory (not overwriting the current installation) - for
example, /tmp/new_kibana.
Take note of the Kibana plugins that are already installed:
kibana/bin/kibana-plugin list
Backup old files:
cd /opt/bitnami
sudo cp -r kibana old_kibana
Copy files from new installation directory:
sudo cp -r /tmp/new_kibana/* kibana/
Recover the kibana.yml file:
sudo cp old_kibana/config/kibana.yml kibana/config/kibana.yml
Start the service again:
sudo /opt/bitnami/ctlscript.sh start kibana