Monitoring Redis Droplets Using Redis Exporter Service
Jan 06, 2025, 10:19 AM

Method 1: Manual Configuration
Let’s proceed with the manual configuration method in this section.
Create Prometheus System User and Group
Create a system user and group named “prometheus” to manage the exporter service.
sudo groupadd --system prometheus
sudo useradd -s /sbin/nologin --system -g prometheus prometheus
Download and Install Redis Exporter
Download the latest release of Redis Exporter from GitHub, extract the downloaded files, and move the binary to the /usr/local/bin/ directory.
curl -s https://api.github.com/repos/oliver006/redis_exporter/releases/latest | grep browser_download_url | grep linux-amd64 | cut -d '"' -f 4 | wget -qi -
tar xvf redis_exporter-*.linux-amd64.tar.gz
sudo mv redis_exporter-*.linux-amd64/redis_exporter /usr/local/bin/
Verify Redis Exporter Installation
redis_exporter --version
The command prints the installed exporter version and build details, confirming the installation.
Configure systemd Service for Redis Exporter
Create a systemd service unit file to manage the Redis Exporter service.
sudo vim /etc/systemd/system/redis_exporter.service
Add the following content to the file:
[Unit]
Description=Prometheus Redis Exporter
Documentation=https://github.com/oliver006/redis_exporter
Wants=network-online.target
After=network-online.target

[Service]
Type=simple
User=prometheus
Group=prometheus
ExecReload=/bin/kill -HUP $MAINPID
ExecStart=/usr/local/bin/redis_exporter \
  --log-format=txt \
  --namespace=redis \
  --web.listen-address=:9121 \
  --web.telemetry-path=/metrics
SyslogIdentifier=redis_exporter
Restart=always

[Install]
WantedBy=multi-user.target
Reload systemd and Start Redis Exporter Service
sudo systemctl daemon-reload
sudo systemctl enable redis_exporter
sudo systemctl start redis_exporter
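Once the service is running, it exposes plain-text Prometheus metrics on port 9121. The snippet below sketches what a scrape returns and how to read the `redis_up` gauge; the sample lines are illustrative, not taken from a live server:

```shell
# On a live droplet you would fetch the metrics with:
#   curl -s http://localhost:9121/metrics
# Illustrative sample of the exporter's output:
cat <<'EOF' > /tmp/redis_metrics_sample.txt
redis_up 1
redis_connected_clients 3
redis_memory_used_bytes 2097152
EOF

# redis_up is 1 when the exporter can reach Redis, 0 otherwise:
awk '$1 == "redis_up" {print $2}' /tmp/redis_metrics_sample.txt
```

If `redis_up` reports 0, check that Redis itself is running and reachable from the exporter before debugging Prometheus.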
Configuring the Prometheus Droplet (Manual Method)
Let’s configure the Prometheus Droplet for the manual method.
Take a backup of the prometheus.yml file
cp /etc/prometheus/prometheus.yml /etc/prometheus/prometheus.yml-$(date +'%d%b%Y-%H:%M')
Add the Redis Exporter endpoints to be scraped
Log in to your Prometheus server and add the Redis Exporter endpoints to be scraped.
Replace the IP addresses and ports with your Redis Exporter endpoints (9121 is the default port for Redis Exporter Service).
vi /etc/prometheus/prometheus.yml
scrape_configs:
  - job_name: server1_db
    static_configs:
      - targets: ['10.10.1.10:9121']
        labels:
          alias: db1
  - job_name: server2_db
    static_configs:
      - targets: ['10.10.1.11:9121']
        labels:
          alias: db2
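Prometheus only reads its configuration at startup (or on an explicit reload), so apply the change after editing. A sketch, assuming Prometheus runs under systemd and its companion `promtool` binary is installed alongside it:

```shell
# Validate the edited configuration before applying it:
promtool check config /etc/prometheus/prometheus.yml

# Restart Prometheus so it picks up the new scrape targets:
sudo systemctl restart prometheus
```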
This is the end of the manual configuration. Now, let’s proceed with the script-based configuration.
Method 2: Configuring Using Scripts
You can also achieve this by running two scripts - one for the target droplets and the other for the Prometheus droplet.
Let’s start by configuring the Target Droplets.
SSH into the Target Droplet.
Download the Target Configuration script by using the following command:
wget?https://solutions-files.ams3.digitaloceanspaces.com/Redis-Monitoring/DO_Redis_Target_Config.sh
Once the script is downloaded, ensure it has executable permissions by running:
chmod +x DO_Redis_Target_Config.sh
Execute the script by running:
./DO_Redis_Target_Config.sh
The configuration is complete.
Note: If the redis_exporter.service file already exists, the script will not run.
Configuring the Prometheus Droplet (Script Method)
SSH into the Prometheus Droplet and download the script by using the following command:
wget?https://solutions-files.ams3.digitaloceanspaces.com/Redis-Monitoring/DO_Redis_Prometheus_Config.sh
Once the script is downloaded, ensure it has executable permissions by running:
chmod +x DO_Redis_Prometheus_Config.sh
Execute the script by running:
./DO_Redis_Prometheus_Config.sh
Enter the number of Droplets to add to monitoring.
Enter the hostnames and IP addresses.
The configuration is complete.
Once added, check whether the targets are updated by visiting `http://<prometheus-hostname>:9090/targets` in a browser.
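The same target list is available from the Prometheus HTTP API, which is handy on a headless server. A sketch, assuming Prometheus listens on the default port 9090; the JSON shown is an abridged illustration of the response shape, not real output:

```shell
# On the Prometheus droplet you would run:
#   curl -s http://localhost:9090/api/v1/targets
# Abridged illustrative response:
cat <<'EOF' > /tmp/targets_sample.json
{"status":"success","data":{"activeTargets":[
  {"labels":{"alias":"db1","job":"server1_db"},"health":"up"},
  {"labels":{"alias":"db2","job":"server2_db"},"health":"up"}
]}}
EOF

# Count the targets reporting healthy:
grep -o '"health":"up"' /tmp/targets_sample.json | wc -l
```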
Note: If you enter an IP address that has already been added to monitoring, you will be asked to enter the details again. If you have no more servers to add, enter 0 to exit the script.
Configuring Grafana
Log in to the Grafana dashboard by visiting `http://<Grafana-IP>:3000` in a browser.
Go to Configuration > Data Sources.
Click on Add data source.
Search and Select Prometheus.
Enter Prometheus as the Name, set the URL to `http://<prometheus-hostname>:9090`, and click “Save & Test”. If you see “Data source is working”, the data source has been added successfully. Once done, go to Create > Import.
You can configure the dashboard manually or import one by uploading a JSON file. A JSON template for Redis monitoring is available at the link below:
https://solutions-files.ams3.digitaloceanspaces.com/Redis-Monitoring/DO_Grafana-Redis_Monitoring.json
Fill in the fields and Import.
The Grafana dashboard is now ready. Select a host and check that the metrics are visible. Feel free to modify the dashboard as needed.
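If you prefer to build panels manually instead of importing the JSON template, a few illustrative PromQL queries against the exporter’s metrics may help (metric names assume the default `--namespace=redis` flag used in the service unit above):

```promql
# Is each Redis instance reachable? (1 = up, 0 = down)
redis_up

# Command throughput per second, averaged over 5 minutes
rate(redis_commands_processed_total[5m])

# Memory in use, per instance
redis_memory_used_bytes
```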