Introduction
redisctl is a unified command-line tool for managing Redis Cloud and Redis Enterprise deployments through their REST APIs.
Why redisctl?
- Single Tool - One CLI for both Cloud and Enterprise deployments
- Explicit Commands - Clear separation between Cloud and Enterprise operations
- Multiple Interfaces - Raw API access, human-friendly commands, and orchestrated workflows
- Flexible Output - JSON, YAML, or formatted tables with JMESPath filtering
Command Layers
The CLI provides three layers of interaction:
- Raw API Access - Direct REST calls to any endpoint
- Human-Friendly Commands - Typed wrappers around common operations
- Workflows - Multi-step orchestrated operations (coming soon)
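As an illustration, here is the same goal approached at each layer. The commands are drawn from later chapters of this book; the stub function at the top is only there so the sketch runs without a live cluster, and should be deleted for real use:

```shell
#!/bin/sh
# Offline stub so this sketch runs anywhere; delete this function to
# execute against a real Redis Enterprise deployment.
redisctl() { echo "redisctl $*"; }

# Layer 1: raw API access -- you name the exact REST endpoint
redisctl api enterprise get /v1/bdbs

# Layer 2: human-friendly command -- a typed wrapper over the same data
redisctl enterprise database list -o table

# Layer 3: workflow -- multi-step orchestration (see the init-cluster
# workflow in the Quick Start chapter)
redisctl enterprise workflow init-cluster \
  --username "admin@cluster.local" --password "YourSecurePassword"
```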
Installation
Get started quickly with Homebrew:
brew install joshrotenberg/brew/redisctl
Or see Installation for other methods.
Quick Example
# Configure your profile
export REDIS_CLOUD_API_KEY="your-key"
export REDIS_CLOUD_API_SECRET="your-secret"
# List all databases
redisctl cloud database list
# Get specific database details
redisctl cloud database get 12345
# Direct API call
redisctl api cloud get /subscriptions
Next Steps
- Installation - Get redisctl installed
- Configuration - Set up your profiles
- Quick Start - Your first commands
Installation
Homebrew (macOS/Linux)
The easiest way to install on macOS or Linux:
# Install directly (automatically taps the repository)
brew install joshrotenberg/brew/redisctl
# Or tap first, then install
brew tap joshrotenberg/brew
brew install redisctl
This will:
- Install the latest stable version
- Set up the binary in your PATH
- Enable automatic updates via brew upgrade
To upgrade to the latest version:
brew upgrade redisctl
Binary Releases
Download the latest release for your platform from the GitHub releases page.
Linux/macOS
# Download the binary (replace VERSION and PLATFORM)
curl -L https://github.com/joshrotenberg/redisctl/releases/download/vVERSION/redisctl-PLATFORM.tar.gz | tar xz
# Make executable
chmod +x redisctl
# Move into your PATH
sudo mv redisctl /usr/local/bin/
Windows
Download the .zip file from the releases page and extract it to a directory in your PATH.
From Cargo
If you have Rust installed:
# Basic installation
cargo install redisctl
# With secure credential storage support (recommended)
cargo install redisctl --features secure-storage
Feature Flags
Feature | Description |
---|---|
secure-storage | Enables OS keyring support for secure credential storage (recommended) |
cloud-only | Builds only Cloud functionality (smaller binary) |
enterprise-only | Builds only Enterprise functionality (smaller binary) |
From Source
git clone https://github.com/joshrotenberg/redisctl.git
cd redisctl
# Basic installation
cargo install --path crates/redisctl
# With secure storage support (recommended)
cargo install --path crates/redisctl --features secure-storage
# Development build with all features
cargo build --release --all-features
Docker
# Pull the image
docker pull joshrotenberg/redisctl:latest
# Run commands
docker run --rm joshrotenberg/redisctl --help
# With environment variables
docker run --rm \
-e REDIS_CLOUD_API_KEY="your-key" \
-e REDIS_CLOUD_API_SECRET="your-secret" \
joshrotenberg/redisctl cloud database list
Shell Completions
redisctl can generate shell completions for a better command-line experience.
Bash
# Generate completion
redisctl completions bash > ~/.local/share/bash-completion/completions/redisctl
# Or system-wide (requires sudo)
redisctl completions bash | sudo tee /usr/share/bash-completion/completions/redisctl
# Reload your shell or source the completion
source ~/.local/share/bash-completion/completions/redisctl
Zsh
# Generate completion into a directory on your fpath (usually set in ~/.zshrc)
mkdir -p ~/.zsh/completions
redisctl completions zsh > ~/.zsh/completions/_redisctl
# Or use oh-my-zsh custom completions
redisctl completions zsh > ~/.oh-my-zsh/custom/completions/_redisctl
# Reload shell
exec zsh
Fish
# Generate completion
redisctl completions fish > ~/.config/fish/completions/redisctl.fish
# Completions are loaded automatically
PowerShell
# Load completions in the current session
redisctl completions powershell | Out-String | Invoke-Expression
# To make permanent, add to your PowerShell profile
redisctl completions powershell >> $PROFILE
Elvish
# Generate completion
redisctl completions elvish > ~/.config/elvish/lib/redisctl.elv
# Add to rc.elv
echo "use redisctl" >> ~/.config/elvish/rc.elv
Verify Installation
redisctl --version
Platform-Specific Binaries
For specific deployment scenarios, you can build platform-specific binaries:
# Cloud-only binary (smaller size)
cargo build --release --features cloud-only --bin redis-cloud
# Enterprise-only binary
cargo build --release --features enterprise-only --bin redis-enterprise
Next Steps
- Configuration - Set up your credentials
- Quick Start - Run your first commands
Docker Development Environment
redisctl includes a comprehensive Docker setup for development and testing. This environment provides a real Redis Enterprise cluster without requiring manual setup.
Overview
Our Docker environment includes:
- Redis Enterprise cluster for local development
- Automated cluster initialization using our CLI workflows
- Multiple service profiles for different testing scenarios
- Development tooling with live code mounting
- Performance testing and debugging capabilities
Quick Start
# Start Redis Enterprise cluster
docker compose up -d
# Access the cluster
export REDIS_ENTERPRISE_URL="https://localhost:9443"
export REDIS_ENTERPRISE_INSECURE="true"
redisctl enterprise cluster info
# Clean up
docker compose down -v
Using Docker Compose
The Docker Compose setup provides a Redis Enterprise cluster with automatic initialization:
# Start Redis Enterprise with auto-initialization
docker compose up -d
# Check cluster status
export REDIS_ENTERPRISE_URL="https://localhost:9443"
export REDIS_ENTERPRISE_USER="admin@redis.local"
export REDIS_ENTERPRISE_PASSWORD="Redis123!"
export REDIS_ENTERPRISE_INSECURE="true"
redisctl enterprise cluster info
# View databases
redisctl enterprise database list
# Clean up when done
docker compose down -v
Includes:
- Redis Enterprise server
- Automatic cluster initialization via workflow
- Ready-to-use configuration
Testing with Docker
Running Tests Against the Cluster
Once your cluster is running, you can test various commands:
# Test cluster commands
redisctl enterprise cluster info
redisctl enterprise node list
redisctl enterprise database list
# Create a test database
redisctl enterprise database create --data '{
"name": "test-db",
"memory_size": 1073741824,
"port": 12000
}'
# Test with different output formats
redisctl enterprise database list -o yaml
redisctl enterprise database list -o table
# Use verbose logging for debugging
RUST_LOG=debug redisctl enterprise cluster info
Interactive Testing
For interactive testing, you can use a temporary container:
# Run interactive shell with redisctl
docker run --rm -it \
--network redisctl_redisctl-network \
-e REDIS_ENTERPRISE_URL="https://redis-enterprise:9443" \
-e REDIS_ENTERPRISE_INSECURE="true" \
-e REDIS_ENTERPRISE_USER="admin@redis.local" \
-e REDIS_ENTERPRISE_PASSWORD="Redis123!" \
joshrotenberg/redisctl:latest \
/bin/sh
# Inside the container, run commands
redisctl enterprise cluster info
redisctl enterprise database list
Environment Variables
Configure the Docker environment via a .env file (if needed):
# Copy example environment file (optional)
cp .env.example .env
# Edit .env to customize:
# - REDIS_ENTERPRISE_IMAGE: Docker image to use
# - REDIS_ENTERPRISE_PLATFORM: Platform architecture
Control logging and behavior:
# Set log level
RUST_LOG=debug docker compose up
# Component-specific logging
RUST_LOG="redis_enterprise=trace,redisctl=debug" docker compose up
Development Workflow
Typical Development Session
# 1. Start development environment
docker compose up -d
# 2. Build and test your changes locally
cargo build --release
./target/release/redisctl enterprise cluster info
# 3. Test with Docker image
docker build -t redisctl:dev .
docker run --rm \
--network redisctl_redisctl-network \
-e REDIS_ENTERPRISE_URL="https://redis-enterprise:9443" \
-e REDIS_ENTERPRISE_INSECURE="true" \
-e REDIS_ENTERPRISE_USER="admin@redis.local" \
-e REDIS_ENTERPRISE_PASSWORD="Redis123!" \
redisctl:dev enterprise cluster info
# 4. Clean up
docker compose down -v
Testing New Features
# Start basic environment
docker compose up -d
# Test your new command locally
export REDIS_ENTERPRISE_URL="https://localhost:9443"
export REDIS_ENTERPRISE_USER="admin@redis.local"
export REDIS_ENTERPRISE_PASSWORD="Redis123!"
export REDIS_ENTERPRISE_INSECURE="true"
# Run your new command
cargo run -- enterprise your-new-command
# Or test with the release build
cargo build --release
./target/release/redisctl enterprise your-new-command
Debugging Connection Issues
# Check Redis Enterprise health
docker compose ps
docker compose logs redis-enterprise
# Test connectivity directly
curl -k https://localhost:9443/v1/bootstrap
# Test with verbose logging
RUST_LOG=debug redisctl enterprise cluster info
# Check network connectivity from container
docker run --rm \
--network redisctl_redisctl-network \
alpine/curl \
curl -k https://redis-enterprise:9443/v1/bootstrap
Service Architecture
Main Services
- redis-enterprise: Redis Enterprise server
- redis-enterprise-init: Automatic cluster initialization using the workflow command
Networking
All services use the redisctl-network bridge network:
- Redis Enterprise API: https://redis-enterprise:9443 (external: https://localhost:9443)
- Web UI: https://redis-enterprise:8443 (external: https://localhost:8443)
- Database ports: 12000-12010
Volumes
- enterprise-data: Persistent Redis Enterprise data
- Source mounting: Development containers access project files
Troubleshooting
Common Issues
Port Conflicts:
# Check if ports are in use
lsof -i :9443
lsof -i :8443
# Stop conflicting services
docker compose down
Platform Compatibility Issues:
# If you encounter platform issues, check Docker settings
docker version
# Ensure Docker Desktop is configured for your platform
# Try pulling the Redis Enterprise image manually
docker pull redislabs/redis:latest
Permission Issues:
# Reset Docker volumes
docker compose down -v
docker compose up -d
Build Issues:
# Force rebuild
docker compose build --no-cache
docker compose up --force-recreate
Debugging Commands
# Check service status
docker compose ps
# View logs
docker compose logs -f redis-enterprise
docker compose logs -f redis-enterprise-init
# Execute commands in the running container
docker compose exec redis-enterprise bash
# Check connectivity from inside the container
docker compose exec redis-enterprise curl -k https://localhost:9443/v1/bootstrap
Best Practices
Development
- Use docker compose up -d for complete environment setup
- Build locally with cargo build --release for development
- Use verbose logging (RUST_LOG=debug) for debugging
- Always clean up with docker compose down -v
Testing
- Always test against real Redis Enterprise
- Test all output formats (JSON, YAML, table)
- Clean up test data between runs
- Verify error handling with invalid inputs
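"Clean up test data between runs" can be scripted. The sketch below filters databases by a test- name prefix using JMESPath's standard starts_with function via -q; the exact shape of a database delete subcommand is an assumption, so it is left commented out — verify with redisctl enterprise database --help first. The stub function only makes the sketch runnable without a cluster:

```shell
#!/bin/sh
# Offline stub standing in for a live cluster: it returns two sample
# database uids. Delete this function to run for real.
redisctl() { printf '1\n2\n'; }

# Collect uids of databases whose name starts with "test-"
# (starts_with is a standard JMESPath function, applied with -q)
for uid in $(redisctl enterprise database list -o json \
    -q "[?starts_with(name, 'test-')].uid"); do
  echo "would delete test database $uid"
  # redisctl enterprise database delete "$uid"  # assumed subcommand; verify with --help
done
```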
Performance
- Use performance profile to validate changes
- Monitor resource usage during development
- Test with realistic data sizes
- Validate API response times
Configuration
redisctl can be configured using either a configuration file or environment variables.
Configuration File
Create ~/.config/redisctl/config.toml:
# Set default profile
default_profile = "cloud-prod"
# Redis Cloud Profile
[profiles.cloud-prod]
deployment_type = "cloud"
api_key = "your-account-key"
api_secret = "your-secret-key"
api_url = "https://api.redislabs.com/v1" # Optional, this is the default
# Redis Enterprise Profile
[profiles.enterprise-dev]
deployment_type = "enterprise"
url = "https://localhost:9443"
username = "admin@cluster.local"
password = "your-password"
insecure = true # Allow self-signed certificates
# You can have multiple profiles
[profiles.cloud-staging]
deployment_type = "cloud"
api_key = "staging-key"
api_secret = "staging-secret"
File Locations
- Linux/macOS: ~/.config/redisctl/config.toml
- Windows: %APPDATA%\redis\redisctl\config.toml
Security
Set appropriate file permissions:
chmod 600 ~/.config/redisctl/config.toml
Environment Variables
Alternatively, use environment variables:
Cloud Variables
export REDIS_CLOUD_API_KEY="your-account-key"
export REDIS_CLOUD_API_SECRET="your-secret-key"
export REDIS_CLOUD_API_URL="https://api.redislabs.com/v1" # Optional
Enterprise Variables
export REDIS_ENTERPRISE_URL="https://cluster:9443"
export REDIS_ENTERPRISE_USER="admin@cluster.local"
export REDIS_ENTERPRISE_PASSWORD="your-password"
export REDIS_ENTERPRISE_INSECURE="true" # For self-signed certs
Getting Your Credentials
Redis Cloud
- Log in to Redis Cloud Console
- Go to Account Settings → API Keys
- Click Add API Key
- Copy the Account Key and Secret
Redis Enterprise
Get credentials from your cluster administrator or use the default:
- Username: admin@cluster.local
- Password: Set during cluster setup
Testing Your Configuration
# Test Cloud connection
redisctl api cloud get /
# Test Enterprise connection
redisctl api enterprise get /v1/cluster
Using Profiles
# Use default profile
redisctl cloud database list
# Use specific profile
redisctl cloud database list --profile cloud-staging
# List all profiles
redisctl profile list
Next Steps
- Quick Start - Start using redisctl
Authentication
redisctl supports authentication for both Redis Cloud and Redis Enterprise deployments.
Redis Cloud
Redis Cloud uses API key authentication:
- API Key - Your account key (public identifier)
- API Secret - Your secret key (keep this private!)
Getting Your API Keys
- Log in to app.redislabs.com
- Click your name → Account Settings → API Keys
- Click "Add API Key" and give it a name
- Copy both the Account key and Secret (you won't see the secret again!)
Setting Up Authentication
Option 1: Secure OS Keyring (Recommended)
When compiled with the secure-storage feature, store credentials securely in your OS keyring:
# Install with secure storage support
cargo install redisctl --features secure-storage
# Create secure profile
redisctl profile set cloud \
--deployment cloud \
--api-key "your-account-key" \
--api-secret "your-secret-key" \
--use-keyring # Stores in OS keyring
# Test it works
redisctl --profile cloud api cloud get /
Your config will contain secure references:
[profiles.cloud]
deployment_type = "cloud"
api_key = "keyring:cloud-api-key" # Actual value in OS keyring
api_secret = "keyring:cloud-api-secret" # Actual value in OS keyring
Option 2: Environment Variables
Use environment variables (good for CI/CD):
export REDIS_CLOUD_API_KEY="your-account-key"
export REDIS_CLOUD_API_SECRET="your-secret-key"
# Test it works
redisctl api cloud get /
Option 3: Configuration File (Development Only)
For development only, you can use a plaintext config at ~/.config/redisctl/config.toml:
[profiles.cloud]
deployment_type = "cloud"
api_key = "your-account-key"
api_secret = "your-secret-key"
⚠️ Warning: This stores credentials in plaintext. Use keyring or environment variables for production!
Redis Enterprise
Redis Enterprise uses basic authentication with username/password.
Default Credentials
- Username: admin@cluster.local (default)
- Password: Set during cluster setup
Setting Up Authentication
Option 1: Secure OS Keyring (Recommended)
Store credentials securely in your OS keyring:
# Create secure profile
redisctl profile set enterprise \
--deployment enterprise \
--url "https://cluster.example.com:9443" \
--username "admin@cluster.local" \
--password "your-password" \
--use-keyring # Stores in OS keyring
# For self-signed certificates
redisctl profile set enterprise --insecure true
# Test it works
redisctl --profile enterprise api enterprise get /v1/cluster
Your config will contain secure references:
[profiles.enterprise]
deployment_type = "enterprise"
url = "https://cluster.example.com:9443"
username = "keyring:enterprise-username" # Actual value in OS keyring
password = "keyring:enterprise-password" # Actual value in OS keyring
insecure = false
Option 2: Environment Variables
Use environment variables:
export REDIS_ENTERPRISE_URL="https://cluster.example.com:9443"
export REDIS_ENTERPRISE_USER="admin@cluster.local"
export REDIS_ENTERPRISE_PASSWORD="your-password"
# For self-signed certificates
export REDIS_ENTERPRISE_INSECURE="true"
# Test it works
redisctl api enterprise get /v1/cluster
Option 3: Configuration File (Development Only)
For development only, add to ~/.config/redisctl/config.toml:
[profiles.enterprise]
deployment_type = "enterprise"
url = "https://cluster.example.com:9443"
username = "admin@cluster.local"
password = "your-password"
insecure = true # For self-signed certs
⚠️ Warning: This stores credentials in plaintext. Use keyring or environment variables for production!
Security Tips
- Never commit credentials - Use environment variables or secure vaults
- Use read-only API keys when possible for Cloud
- Rotate credentials regularly
- Set file permissions: chmod 600 ~/.config/redisctl/config.toml
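The file-permissions tip can be automated with a small check. This is a sketch: stat -c '%a' is GNU coreutils, so macOS users would swap in stat -f '%Lp':

```shell
#!/bin/sh
# Warn if a config file is readable by group or others.
check_config_perms() {
  cfg="$1"
  if [ -f "$cfg" ] && [ "$(stat -c '%a' "$cfg" 2>/dev/null)" != "600" ]; then
    echo "warning: run 'chmod 600 $cfg' to protect your credentials" >&2
  fi
}

check_config_perms "$HOME/.config/redisctl/config.toml"
```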
Troubleshooting
Authentication Failed
Check your credentials:
# Enable debug logging to see what's happening
RUST_LOG=debug redisctl api cloud get /
Connection Refused
Verify the URL and port are correct:
curl -k https://your-cluster:9443/v1/cluster
Certificate Errors
For development/testing with self-signed certificates:
export REDIS_ENTERPRISE_INSECURE=true
See Also
- Configuration - Profile management
- Environment Variables - All supported variables
Examples
This section showcases common tasks and powerful features to get you started with redisctl.
Quick Setup
First, set up your authentication profile:
# For Redis Cloud
redisctl profile set cloud --api-key YOUR_KEY --api-secret YOUR_SECRET
# For Redis Enterprise (local Docker)
redisctl profile set enterprise --url https://localhost:9443 \
--username admin@redis.local --password Redis123! --insecure
Redis Cloud Examples
List All Resources
# See all your subscriptions
redisctl cloud subscription list -o table
# Get detailed info about databases
redisctl cloud database list -o json | jq '.[] | {name, port, status}'
# Output (example):
# {
# "name": "cache-prod",
# "port": 12000,
# "status": "active"
# }
Create and Manage Databases
# Create a new database with wait for completion
redisctl cloud database create --subscription-id 123456 \
--data '{"name": "my-cache", "memoryLimitInGb": 1}' --wait
# Monitor async operation progress
redisctl cloud task get 456789
# Update database configuration
redisctl cloud database update 123456:789 \
--data '{"memoryLimitInGb": 2}' --wait
Network Security
# Set up VPC peering
redisctl cloud connectivity vpc-peering create --subscription-id 123456 \
--data @vpc-config.json --wait
# Configure ACL rules
redisctl cloud acl rule create --subscription-id 123456 \
--data '{"name": "allow-app", "sourceIps": ["10.0.0.0/24"]}'
Redis Enterprise Examples
Cluster Management
# Get cluster health overview
redisctl enterprise cluster get -o json | jq '.name, .license_expired, .nodes | length'
# Output:
# "prod-cluster"
# false
# 3
# View all nodes status
redisctl enterprise node list -o table
Database Operations
# Create a database with replication
redisctl enterprise database create \
--data '{"name": "session-store", "memory_size": 1073741824, "replication": true}' \
--wait
# Get database metrics
redisctl enterprise database stats 1 -o json | \
jq '.intervals[0] | {ops_sec, used_memory, connected_clients}'
# Trigger backup
redisctl enterprise database backup 1
Support and Diagnostics
# Generate support package for troubleshooting
redisctl enterprise support-package create
# Check license status
redisctl enterprise license get -o json | jq '.license_expired, .expired_date'
# View recent cluster logs
redisctl enterprise logs list --limit 50
Power User Features
Raw API Access
# Direct API calls when you need something not yet wrapped
redisctl api cloud get /subscriptions/123456/databases \
-q "[?status=='active'].{name:name, port:port}"
# POST with custom payload
redisctl api enterprise post /v1/bdbs --data @database-config.json
Async Operations with Custom Timeouts
# Long-running operations with progress updates
redisctl cloud database create --subscription-id 123456 \
--data @large-db.json \
--wait --wait-timeout 1200 --wait-interval 30
# The command will:
# - Poll every 30 seconds
# - Show progress spinner
# - Timeout after 20 minutes
# - Return full operation result
JMESPath Filtering
# Complex queries on JSON output
redisctl cloud subscription list -o json \
-q "[?paymentMethodId=='12345'] | [0:3].{id:id, name:name, databases:databases[].name}"
# Find databases by port range
redisctl enterprise database list -o json \
-q "[?port >= `12000` && port <= `13000`].{name:name, port:port}"
Secure Credential Storage
# Store credentials in OS keyring (macOS Keychain, Windows Credential Store, etc.)
redisctl profile set cloud-prod \
--api-key YOUR_KEY \
--api-secret YOUR_SECRET \
--use-keyring
# Credentials are now encrypted in your OS keyring
# No plaintext secrets in config files!
Scripting and Automation
CI/CD Pipeline Example
#!/bin/bash
# deploy-database.sh
# Exit on error
set -e
# Create database
DB_RESULT=$(redisctl cloud database create \
--subscription-id $SUBSCRIPTION_ID \
--data @config.json \
--wait \
-o json)
# Extract database ID and endpoint
DB_ID=$(echo "$DB_RESULT" | jq -r '.databaseId')
ENDPOINT=$(echo "$DB_RESULT" | jq -r '.endpoint')
# Update application configuration
echo "REDIS_URL=redis://$ENDPOINT" >> .env
# Verify connectivity
redisctl cloud database get $DB_ID -o json | jq '.status'
Batch Operations
# Update multiple databases
for db_id in $(redisctl enterprise database list -o json | jq -r '.[].uid'); do
echo "Updating database $db_id"
redisctl enterprise database update $db_id \
--data '{"backup_interval": 3600}' \
--wait
done
Next Steps
- Check out deployment-specific commands in Cloud or Enterprise sections
- Learn about Output Formats for better data manipulation
- Set up Secure Storage for your credentials
- Explore Async Operations for long-running tasks
Shell Completions
redisctl supports tab completion for all major shells. This guide shows how to install and configure completions for your shell.
Generating Completions
First, generate the completion script for your shell:
# Bash
redisctl completions bash > redisctl.bash
# Zsh
redisctl completions zsh > _redisctl
# Fish
redisctl completions fish > redisctl.fish
# PowerShell
redisctl completions powershell > redisctl.ps1
# Elvish
redisctl completions elvish > redisctl.elv
Installing Completions
Bash
# Linux - User-specific
redisctl completions bash > ~/.local/share/bash-completion/completions/redisctl
# Linux - System-wide (requires sudo)
redisctl completions bash | sudo tee /usr/share/bash-completion/completions/redisctl > /dev/null
# macOS with Homebrew
redisctl completions bash > $(brew --prefix)/etc/bash_completion.d/redisctl
# Reload your shell
source ~/.bashrc
# or start a new terminal
Zsh
# Add to your fpath (usually in ~/.zshrc)
echo 'fpath=(~/.zsh/completions $fpath)' >> ~/.zshrc
# Create directory if needed
mkdir -p ~/.zsh/completions
# Generate completion file
redisctl completions zsh > ~/.zsh/completions/_redisctl
# Reload your shell
source ~/.zshrc
# or start a new terminal
Fish
# Generate completion file
redisctl completions fish > ~/.config/fish/completions/redisctl.fish
# Completions are loaded automatically in new shells
# or reload current shell:
source ~/.config/fish/config.fish
PowerShell
# Add to your PowerShell profile
redisctl completions powershell >> $PROFILE
# Or save to a file and source it
redisctl completions powershell > redisctl.ps1
Add-Content $PROFILE ". $PWD\redisctl.ps1"
# Reload profile
. $PROFILE
Elvish
# Generate completion file
redisctl completions elvish > ~/.elvish/lib/redisctl.elv
# Add to rc.elv
echo "use redisctl" >> ~/.elvish/rc.elv
# Reload shell
exec elvish
Testing Completions
After installation, test that completions work:
# Type and press Tab
redisctl <Tab>
# Should show: api, auth, cloud, enterprise, profile, etc.
# Try sub-commands
redisctl cloud <Tab>
# Should show: database, subscription, user, etc.
# Try options
redisctl --<Tab>
# Should show: --help, --version, --profile, --output, etc.
Troubleshooting
Completions Not Working
- Check shell configuration:
# Bash - verify completion is enabled
echo $BASH_COMPLETION_COMPAT_DIR
# Zsh - check fpath
echo $fpath
# Fish - check completion directory
ls ~/.config/fish/completions/
- Reload your shell:
# Option 1: Source config file
source ~/.bashrc # or ~/.zshrc, etc.
# Option 2: Start new shell
exec $SHELL
# Option 3: Open new terminal
- Verify file permissions:
# Check completion file exists and is readable
ls -la ~/.local/share/bash-completion/completions/redisctl
# or your shell's completion directory
Updating Completions
When updating redisctl, regenerate completions to get new commands:
# Example for Bash
redisctl completions bash > ~/.local/share/bash-completion/completions/redisctl
source ~/.bashrc
Custom Completion Directories
If using non-standard directories:
# Bash - add to .bashrc
source /path/to/redisctl.bash
# Zsh - add to .zshrc
fpath=(/path/to/completions $fpath)
autoload -U compinit && compinit
# Fish - add to config.fish
source /path/to/redisctl.fish
Tips
- Auto-update completions: Add completion generation to your dotfiles setup
- Multiple shells: Generate completions for all shells you use
- Container usage: Mount completion files when using Docker:
docker run -v ~/.local/share/bash-completion:/etc/bash_completion.d:ro ...
- CI/CD: Include completion generation in your deployment scripts
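The auto-update tip might look like this in a dotfile. This is a sketch for ~/.bashrc; the target path matches the user-level bash directory used earlier, and the command -v guard keeps the snippet harmless on machines without redisctl:

```shell
# Regenerate bash completions at shell startup when redisctl is
# installed; skip silently when it is not on the PATH.
if command -v redisctl >/dev/null 2>&1; then
  mkdir -p ~/.local/share/bash-completion/completions
  redisctl completions bash \
    > ~/.local/share/bash-completion/completions/redisctl
fi
```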
See Also
- Installation Guide - Installing redisctl
- Configuration - Setting up profiles
- Quick Start - Getting started with redisctl
Quick Start
This guide will get you running your first commands in 5 minutes.
Step 1: Configure Credentials
Choose one method:
Option A: Environment Variables (Quickest)
# For Redis Cloud
export REDIS_CLOUD_API_KEY="your-key"
export REDIS_CLOUD_API_SECRET="your-secret"
# For Redis Enterprise
export REDIS_ENTERPRISE_URL="https://cluster:9443"
export REDIS_ENTERPRISE_USER="admin@cluster.local"
export REDIS_ENTERPRISE_PASSWORD="your-password"
Option B: Configuration File
Create ~/.config/redisctl/config.toml:
[profiles.default]
deployment_type = "cloud" # or "enterprise"
api_key = "your-key"
api_secret = "your-secret"
Step 2: Test Connection
# For Cloud
redisctl api cloud get /
# For Enterprise
redisctl api enterprise get /v1/cluster
Step 3: Common Commands
List Resources
# List all databases
redisctl cloud database list
# List in table format
redisctl cloud database list -o table
# Filter active databases only
redisctl cloud database list -q "[?status=='active']"
Get Details
# Get database details
redisctl cloud database get 12345
# Get as YAML
redisctl cloud database get 12345 -o yaml
Direct API Access
# Any Cloud API endpoint
redisctl api cloud get /subscriptions
redisctl api cloud get /subscriptions/12345/databases
# Any Enterprise API endpoint
redisctl api enterprise get /v1/bdbs
redisctl api enterprise get /v1/nodes
Step 4: Using Workflows
Initialize Enterprise Cluster
For new Redis Enterprise installations, use the init-cluster workflow:
# Complete cluster setup with one command
redisctl enterprise workflow init-cluster \
--username "admin@cluster.local" \
--password "YourSecurePassword"
# This workflow will:
# 1. Bootstrap the cluster
# 2. Set up authentication
# 3. Create a default database
# 4. Verify connectivity
Step 5: Explore More
Cloud Operations
# Cloud-specific commands
redisctl cloud subscription list
redisctl cloud database list --subscription-id 12345
Enterprise Operations
# Enterprise-specific commands
redisctl enterprise cluster info
redisctl enterprise database list
redisctl enterprise node list
Output Options
# JSON (default)
redisctl enterprise database list
# Table format
redisctl enterprise database list -o table
# YAML
redisctl enterprise database list -o yaml
# Filter with JMESPath
redisctl enterprise database list -q "[].{name:name,memory:memory_size}"
What's Next?
- Redis Cloud Guide - Cloud-specific operations
- Redis Enterprise Guide - Enterprise-specific operations
- Cloud Workflows - Cloud automation
- Examples - More detailed examples
Cookbook
Task-oriented recipes for common redisctl operations. Each recipe provides complete, copy-paste ready examples to accomplish specific tasks.
Quick Start
New to redisctl? Start here:
Redis Cloud Recipes
Getting Started
- Create Your First Database - 5 minutes
Networking
- Setup VPC Peering - 15-20 minutes
- Configure Private Service Connect (coming soon)
- Configure Transit Gateway (coming soon)
Security
- Configure ACL Security - 10-15 minutes
- Manage SSL/TLS Certificates (coming soon)
Operations
- Backup and Restore Workflow - 10-15 minutes
- Database Migration - 20-30 minutes
- Active-Active Setup - 30-45 minutes
Redis Enterprise Recipes
Getting Started
- Create a Database - 5 minutes
Operations
- Generate and Upload Support Package - 10 minutes
- Configure Database Replication - 10-15 minutes
- Configure Redis ACLs - 10 minutes
Cluster Management
- Cluster Health Check - 5 minutes
- Node Management - 10-15 minutes
How to Use These Recipes
Each recipe includes:
- Time estimate - How long it takes
- Prerequisites - What you need before starting
- Quick command - One-liner when possible
- Step-by-step - Detailed walkthrough
- Expected output - What success looks like
- Next steps - Related recipes
- Troubleshooting - Common errors and fixes
Contributing Recipes
Have a recipe to share? See our contribution guide.
Need More Detail?
These recipes are designed for quick wins. For comprehensive command documentation, see:
Create Your First Redis Cloud Database
⏱️ Time: 5-10 minutes
📋 Prerequisites:
- Redis Cloud account (sign up)
- redisctl installed (installation guide)
- Profile configured with Cloud credentials (authentication guide)
Quick Command
If you already have a subscription, create a database with one command:
redisctl cloud database create \
--subscription-id YOUR_SUBSCRIPTION_ID \
--data '{"name": "my-first-db", "memoryLimitInGb": 1}' \
--wait
Step-by-Step Guide
1. Verify Your Setup
First, check that redisctl can connect to Redis Cloud:
redisctl cloud subscription list -o table
What you should see:
┌────┬─────────────────┬────────┬──────────┐
│ ID │ Name            │ Status │ Provider │
├────┼─────────────────┼────────┼──────────┤
│ 42 │ my-subscription │ active │ AWS      │
└────┴─────────────────┴────────┴──────────┘
Troubleshooting:
- ❌ "401 Unauthorized" → Check your API credentials with redisctl profile get
- ❌ Empty table → Create a subscription first (see subscription guide)
2. Choose Your Database Configuration
Decide on your database specifications. Here's a minimal configuration:
{
"name": "my-first-db",
"memoryLimitInGb": 1,
"protocol": "redis"
}
Common options:
- memoryLimitInGb: Memory size (1-100+ GB)
- protocol: redis or memcached
- dataPersistence: none, aof-every-1-second, or snapshot-every-1-hour
- replication: true for high availability
3. Create the Database
Use the subscription ID from step 1:
redisctl cloud database create \
--subscription-id 42 \
--data '{
"name": "my-first-db",
"memoryLimitInGb": 1,
"protocol": "redis",
"dataPersistence": "aof-every-1-second",
"replication": true
}' \
--wait \
--wait-timeout 300
What's happening:
- --wait: Waits for the database to become active
- --wait-timeout 300: Waits up to 5 minutes
- Without --wait: Returns immediately with a task ID
What you should see:
{
"taskId": "abc123...",
"status": "processing"
}
...
Database creation completed successfully!
{
"database_id": 12345,
"name": "my-first-db",
"status": "active",
"public_endpoint": "redis-12345.c123.us-east-1-1.ec2.cloud.redislabs.com:12345"
}
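If you skip --wait, the create call returns only a task descriptor and you can poll completion yourself with redisctl cloud task get (shown earlier in the Cloud examples). Below is a runnable sketch: the stub function and the abc123 task id are illustrative stand-ins, and the "processing" status string follows the example output above:

```shell
#!/bin/sh
# Offline stub simulating the API: `database create` answers with a
# task id, `task get` reports the task done. Delete this function to
# run against Redis Cloud for real.
redisctl() {
  case "$2" in
    database) echo '{"taskId": "abc123", "status": "processing"}';;
    task)     echo '{"taskId": "abc123", "status": "processed"}';;
  esac
}

# Kick off creation without --wait; the response carries a task id
redisctl cloud database create \
  --subscription-id 42 \
  --data '{"name": "my-first-db", "memoryLimitInGb": 1}'

# Poll until the task is no longer processing
until redisctl cloud task get abc123 | grep -vq '"processing"'; do
  sleep 10
done
echo "task finished"
```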
4. Get Your Connection Details
Retrieve your database credentials:
redisctl cloud database get \
--subscription-id 42 \
--database-id 12345 \
-o json \
-q '{endpoint: public_endpoint, password: password}'
Output:
{
"endpoint": "redis-12345.c123.us-east-1-1.ec2.cloud.redislabs.com:12345",
"password": "your-password-here"
}
5. Test Your Connection
Using redis-cli:
redis-cli -h redis-12345.c123.us-east-1-1.ec2.cloud.redislabs.com \
-p 12345 \
-a your-password-here \
PING
Expected response: PONG
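If you scripted step 4, you can split the `endpoint` value into the host and port that `redis-cli` expects (a small convenience sketch; the endpoint value is illustrative):

```shell
# Split a "host:port" endpoint string for use with redis-cli
ENDPOINT="redis-12345.c123.us-east-1-1.ec2.cloud.redislabs.com:12345"
REDIS_HOST="${ENDPOINT%:*}"   # everything before the last colon
REDIS_PORT="${ENDPOINT##*:}"  # everything after the last colon
echo "host=$REDIS_HOST port=$REDIS_PORT"
# then: redis-cli -h "$REDIS_HOST" -p "$REDIS_PORT" -a "$REDIS_PASSWORD" PING
```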
Advanced Options
Using a JSON File
For complex configurations, use a file:
# Create database-config.json
cat > database-config.json << 'EOF'
{
"name": "production-db",
"memoryLimitInGb": 10,
"protocol": "redis",
"dataPersistence": "aof-every-1-second",
"replication": true,
"throughputMeasurement": {
"by": "operations-per-second",
"value": 25000
},
"dataEvictionPolicy": "volatile-lru",
"modules": [
{"name": "RedisJSON"}
]
}
EOF
# Create database
redisctl cloud database create \
--subscription-id 42 \
--data @database-config.json \
--wait
JSON Output for Automation
Use `-o json` for scripts:
DB_INFO=$(redisctl cloud database create \
--subscription-id 42 \
--data '{"name": "api-cache", "memoryLimitInGb": 2}' \
--wait \
-o json)
DB_ID=$(echo "$DB_INFO" | jq -r '.database_id')
echo "Created database: $DB_ID"
Common Issues
Database Creation Times Out
Error: Database creation timed out after 300 seconds
Solution: Some regions take longer. Increase timeout:
redisctl cloud database create ... --wait --wait-timeout 600
Insufficient Subscription Capacity
Error: Subscription has insufficient capacity
Solution: Either:
- Delete unused databases:
redisctl cloud database delete ...
- Upgrade subscription: Contact Redis support or use the web console
Invalid Configuration
Error: 400 Bad Request - Invalid memory limit
Solution: Check subscription limits:
redisctl cloud subscription get --subscription-id 42 -q 'pricing'
Next Steps
Now that you have a database:
- 🔒 Configure ACL Security - Secure your database with access controls
- 🌐 Set Up VPC Peering - Connect to your private network
- 💾 Configure Backups - Protect your data
- 📊 Monitor Performance - Track your database metrics
See Also
- Cloud Database Command Reference - Complete command documentation
- Database Configuration Guide - All configuration options
- Redis Cloud Pricing - Understand costs
Setup VPC Peering
Time: 15-20 minutes
Prerequisites:
- Redis Cloud subscription with database
- AWS/GCP/Azure VPC to peer with
- Network admin access to your cloud provider
- redisctl configured with Cloud credentials
What is VPC Peering?
VPC Peering creates a private network connection between your Redis Cloud subscription and your application's VPC, eliminating public internet exposure and reducing latency.
Quick Command
If you already have your VPC details:
redisctl cloud connectivity vpc-peering create \
--subscription-id YOUR_SUB_ID \
--data '{
"provider_name": "AWS",
"aws_account_id": "123456789012",
"vpc_id": "vpc-abc123",
"vpc_cidr": "10.0.0.0/16",
"region": "us-east-1"
}' \
--wait
Step-by-Step Guide
1. Get Your Subscription Details
First, identify which subscription to peer:
redisctl cloud subscription list -o table -q '[].{id: id, name: name, region: deployment.regions[0].region}'
Example output:
┌────┬──────────────┬───────────┐
│ id │ name │ region │
├────┼──────────────┼───────────┤
│ 42 │ production │ us-east-1 │
└────┴──────────────┴───────────┘
2. Gather Your VPC Information
You'll need these details from your cloud provider:
For AWS:
- AWS Account ID (12-digit number)
- VPC ID (starts with `vpc-`)
- VPC CIDR block (e.g., `10.0.0.0/16`)
- Region (must match the Redis Cloud region)
For GCP:
- GCP Project ID
- Network name
- Region
For Azure:
- Subscription ID (Azure subscription, not Redis)
- Resource group
- VNet name
- Region
3. Create VPC Peering Request
AWS Example
redisctl cloud connectivity vpc-peering create \
--subscription-id 42 \
--data '{
"provider_name": "AWS",
"aws_account_id": "123456789012",
"vpc_id": "vpc-abc123def",
"vpc_cidr": "10.0.0.0/16",
"region": "us-east-1"
}' \
--wait \
--wait-timeout 600
GCP Example
redisctl cloud connectivity vpc-peering create \
--subscription-id 42 \
--data '{
"provider_name": "GCP",
"gcp_project_id": "my-project-123",
"network_name": "my-vpc-network",
"gcp_redis_project_id": "redis-project-456",
"gcp_redis_network_name": "redis-network",
"region": "us-central1"
}' \
--wait
Azure Example
redisctl cloud connectivity vpc-peering create \
--subscription-id 42 \
--data '{
"provider_name": "Azure",
"azure_subscription_id": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
"resource_group": "my-resource-group",
"vnet_name": "my-vnet",
"region": "East US"
}' \
--wait
What you should see:
{
"taskId": "xyz789...",
"status": "processing"
}
...
VPC Peering created successfully!
{
"vpc_peering_id": 123,
"status": "pending-acceptance",
"provider_name": "AWS",
"aws_peering_id": "pcx-abc123def"
}
4. Accept the Peering Connection (AWS Only)
For AWS, you must accept the peering request in your AWS console:
# Get the AWS peering connection ID
redisctl cloud connectivity vpc-peering get \
--subscription-id 42 \
--peering-id 123 \
-q 'aws_peering_id'
# Output: "pcx-abc123def"
In AWS Console:
- Go to VPC Dashboard
- Click "Peering Connections"
- Find connection `pcx-abc123def`
- Click "Actions" > "Accept Request"
Via AWS CLI:
aws ec2 accept-vpc-peering-connection \
--vpc-peering-connection-id pcx-abc123def \
--region us-east-1
5. Update Route Tables
Add routes to your VPC route tables to direct Redis traffic through the peering connection:
# Get Redis Cloud CIDR
redisctl cloud subscription get \
--subscription-id 42 \
-q 'deployment.regions[0].networking.cidr'
# Output: "172.31.0.0/24"
AWS Example:
aws ec2 create-route \
--route-table-id rtb-abc123 \
--destination-cidr-block 172.31.0.0/24 \
--vpc-peering-connection-id pcx-abc123def
6. Update Security Groups
Allow inbound Redis traffic (port 6379 or your database port):
aws ec2 authorize-security-group-ingress \
--group-id sg-abc123 \
--protocol tcp \
--port 6379 \
--cidr 172.31.0.0/24
7. Verify the Connection
Check peering status:
redisctl cloud connectivity vpc-peering get \
--subscription-id 42 \
--peering-id 123 \
-o json -q '{status: status, aws_peering_id: aws_peering_id}'
Expected status: active
8. Test Connectivity
Get your database endpoint and test from an instance in your VPC:
# Get database endpoint
redisctl cloud database get \
--subscription-id 42 \
--database-id 12345 \
-q 'private_endpoint'
# From an EC2 instance in your VPC:
redis-cli -h redis-12345.internal.cloud.redislabs.com -p 12345 PING
Advanced: Active-Active VPC Peering
For Active-Active (CRDB) databases, peer with each region:
# List Active-Active regions
redisctl cloud subscription get --subscription-id 42 \
-q 'deployment.regions[].{region: region, cidr: networking.cidr}'
# Create peering for each region
redisctl cloud connectivity vpc-peering create-aa \
--subscription-id 42 \
--region-id 1 \
--data '{
"provider_name": "AWS",
"aws_account_id": "123456789012",
"vpc_id": "vpc-east-123",
"vpc_cidr": "10.0.0.0/16",
"region": "us-east-1"
}' \
--wait
Using Configuration Files
For complex setups, use a JSON file:
cat > vpc-peering.json << 'EOF'
{
"provider_name": "AWS",
"aws_account_id": "123456789012",
"vpc_id": "vpc-abc123def",
"vpc_cidr": "10.0.0.0/16",
"region": "us-east-1",
"vpc_peering_name": "production-redis-peer"
}
EOF
redisctl cloud connectivity vpc-peering create \
--subscription-id 42 \
--data @vpc-peering.json \
--wait
Common Issues
Peering Request Times Out
Error: VPC peering creation timed out
Solution: Check async operation status manually:
redisctl cloud action get --task-id xyz789...
CIDR Overlap
Error: VPC CIDR blocks overlap
Solution: Redis Cloud and your VPC cannot have overlapping CIDR blocks. Either:
- Choose a different CIDR for new subscription
- Use a different VPC with non-overlapping CIDR
Peering Stuck in "pending-acceptance"
Solution: For AWS, you must manually accept the peering request (see Step 4)
Cannot Connect After Peering
Troubleshooting checklist:
- Verify peering status is `active`
- Check that route tables have the correct routes
- Verify security groups allow the Redis port
- Ensure the database has a private endpoint enabled
- Test from an instance actually in the peered VPC
Monitoring VPC Peering
List all peerings for a subscription:
redisctl cloud connectivity vpc-peering list \
--subscription-id 42 \
-o table \
-q '[].{id: id, status: status, provider: provider_name, region: region}'
Deleting VPC Peering
redisctl cloud connectivity vpc-peering delete \
--subscription-id 42 \
--peering-id 123 \
--wait
This also removes the peering from your cloud provider.
Next Steps
- Configure ACL Security - Secure your private database
- Setup Private Service Connect - Alternative private connectivity for GCP
- Configure Transit Gateway - Multi-VPC connectivity for AWS
- Monitor Performance - Track latency improvements
See Also
- VPC Peering Command Reference - Complete command documentation
- Redis Cloud Networking Guide - Official docs
- AWS VPC Peering - AWS documentation
Configure ACL Security
Time: 10-15 minutes
Prerequisites:
- Redis Cloud database already created
- redisctl configured with Cloud credentials
- Basic understanding of Redis ACL commands
What are ACLs?
Access Control Lists (ACLs) allow you to create users with specific permissions, limiting which commands they can run and which keys they can access. This is essential for:
- Multi-tenant applications
- Restricting administrative access
- Compliance requirements
- Defense in depth security
Quick Command
Create a read-only user for your application:
# Create Redis rule
redisctl cloud acl create-redis-rule \
--subscription-id YOUR_SUB_ID \
--data '{"name": "readonly-rule", "rule": "+@read ~*"}' \
--wait
# Create role with the rule
redisctl cloud acl create-role \
--subscription-id YOUR_SUB_ID \
--data '{"name": "readonly-role", "redis_rules": [{"rule_name": "readonly-rule"}]}' \
--wait
# Create user with the role
redisctl cloud acl create-acl-user \
--subscription-id YOUR_SUB_ID \
--data '{"name": "app-reader", "role": "readonly-role", "password": "SecurePass123!"}' \
--wait
Step-by-Step Guide
Understanding the ACL Hierarchy
Redis Cloud uses a three-level ACL system:
- Redis Rules - Define command and key access patterns (Redis ACL syntax)
- Roles - Group multiple Redis rules together
- Users - Assigned one role and a password
1. List Existing ACL Components
# View current Redis rules
redisctl cloud acl list-redis-rules --subscription-id 42 -o table
# View current roles
redisctl cloud acl list-roles --subscription-id 42 -o table
# View current users
redisctl cloud acl list-acl-users --subscription-id 42 -o table
2. Create Redis ACL Rules
Redis rules use standard Redis ACL syntax.
Common Rule Patterns
Read-only access:
redisctl cloud acl create-redis-rule \
--subscription-id 42 \
--data '{
"name": "readonly",
"rule": "+@read ~*"
}' \
--wait
Write-only to specific keys:
redisctl cloud acl create-redis-rule \
--subscription-id 42 \
--data '{
"name": "write-metrics",
"rule": "+set +del ~metrics:*"
}' \
--wait
Full access except dangerous commands:
redisctl cloud acl create-redis-rule \
--subscription-id 42 \
--data '{
"name": "safe-admin",
"rule": "+@all -@dangerous ~*"
}' \
--wait
Access to specific key prefix:
redisctl cloud acl create-redis-rule \
--subscription-id 42 \
--data '{
"name": "user-sessions",
"rule": "+@all ~session:*"
}' \
--wait
3. Create ACL Roles
Roles combine one or more Redis rules:
# Simple role with one rule
redisctl cloud acl create-role \
--subscription-id 42 \
--data '{
"name": "readonly-role",
"redis_rules": [
{"rule_name": "readonly"}
]
}' \
--wait
# Complex role with multiple rules
redisctl cloud acl create-role \
--subscription-id 42 \
--data '{
"name": "app-worker",
"redis_rules": [
{"rule_name": "readonly"},
{"rule_name": "write-metrics"}
]
}' \
--wait
4. Create ACL Users
Users are assigned a role and password:
redisctl cloud acl create-acl-user \
--subscription-id 42 \
--data '{
"name": "app-reader",
"role": "readonly-role",
"password": "SecureReadOnlyPass123!"
}' \
--wait
What you should see:
{
"taskId": "abc123...",
"status": "processing"
}
...
ACL user created successfully!
{
"id": 456,
"name": "app-reader",
"role": "readonly-role",
"status": "active"
}
5. Assign Users to Databases
After creating users, assign them to specific databases:
# Get database ID
redisctl cloud database list \
--subscription-id 42 \
-q '[].{id: database_id, name: name}'
# Update database with ACL users
redisctl cloud database update \
--subscription-id 42 \
--database-id 12345 \
--data '{
"security": {
"users": ["app-reader", "app-writer"]
}
}' \
--wait
6. Test ACL User
Connect to your database with the new user:
# Get database endpoint
redisctl cloud database get \
--subscription-id 42 \
--database-id 12345 \
-q '{endpoint: public_endpoint, port: port}'
# Test connection
redis-cli -h redis-12345.cloud.redislabs.com \
-p 12345 \
--user app-reader \
--pass SecureReadOnlyPass123! \
PING
# Test permissions (should succeed)
redis-cli --user app-reader --pass SecureReadOnlyPass123! \
-h redis-12345.cloud.redislabs.com -p 12345 \
GET mykey
# Test restricted command (should fail)
redis-cli --user app-reader --pass SecureReadOnlyPass123! \
-h redis-12345.cloud.redislabs.com -p 12345 \
SET mykey value
# Error: NOPERM this user has no permissions to run the 'set' command
Common ACL Patterns
Application Access Pattern
Separate users for read, write, and admin operations:
# Read-only for queries
redisctl cloud acl create-redis-rule --subscription-id 42 \
--data '{"name": "app-read", "rule": "+@read +@connection ~*"}' --wait
# Write access for updates
redisctl cloud acl create-redis-rule --subscription-id 42 \
--data '{"name": "app-write", "rule": "+@write +@read +@connection ~*"}' --wait
# Admin for maintenance
redisctl cloud acl create-redis-rule --subscription-id 42 \
--data '{"name": "app-admin", "rule": "+@all ~*"}' --wait
# Create roles and users
redisctl cloud acl create-role --subscription-id 42 \
--data '{"name": "reader", "redis_rules": [{"rule_name": "app-read"}]}' --wait
redisctl cloud acl create-role --subscription-id 42 \
--data '{"name": "writer", "redis_rules": [{"rule_name": "app-write"}]}' --wait
redisctl cloud acl create-role --subscription-id 42 \
--data '{"name": "admin", "redis_rules": [{"rule_name": "app-admin"}]}' --wait
Multi-Tenant Pattern
Isolate tenants by key prefix:
# Tenant A access
redisctl cloud acl create-redis-rule --subscription-id 42 \
--data '{"name": "tenant-a", "rule": "+@all ~tenant:a:*"}' --wait
# Tenant B access
redisctl cloud acl create-redis-rule --subscription-id 42 \
--data '{"name": "tenant-b", "rule": "+@all ~tenant:b:*"}' --wait
# Create roles and users
redisctl cloud acl create-role --subscription-id 42 \
--data '{"name": "tenant-a-role", "redis_rules": [{"rule_name": "tenant-a"}]}' --wait
redisctl cloud acl create-acl-user --subscription-id 42 \
--data '{"name": "tenant-a-user", "role": "tenant-a-role", "password": "TenantAPass123!"}' --wait
Using Configuration Files
For complex ACL setups:
cat > acl-setup.json << 'EOF'
{
"rules": [
{
"name": "readonly",
"rule": "+@read ~*"
},
{
"name": "write-cache",
"rule": "+set +get +del +expire ~cache:*"
}
],
"roles": [
{
"name": "cache-worker",
"redis_rules": [
{"rule_name": "readonly"},
{"rule_name": "write-cache"}
]
}
],
"users": [
{
"name": "worker-1",
"role": "cache-worker",
"password": "Worker1Pass!"
}
]
}
EOF
# Create rules
jq -r '.rules[] | @json' acl-setup.json | while read rule; do
redisctl cloud acl create-redis-rule \
--subscription-id 42 \
--data "$rule" \
--wait
done
# Create roles
jq -r '.roles[] | @json' acl-setup.json | while read role; do
redisctl cloud acl create-role \
--subscription-id 42 \
--data "$role" \
--wait
done
# Create users
jq -r '.users[] | @json' acl-setup.json | while read user; do
redisctl cloud acl create-acl-user \
--subscription-id 42 \
--data "$user" \
--wait
done
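When generating these payloads dynamically (e.g., rule names coming from variables), building the JSON with jq avoids shell-quoting mistakes around characters like `~` and `*` (a sketch; the rule name and pattern are illustrative):

```shell
# Build an ACL rule payload with jq rather than hand-quoting JSON
RULE_NAME="cache-reader"
RULE="+@read ~cache:*"
PAYLOAD=$(jq -n --arg name "$RULE_NAME" --arg rule "$RULE" '{name: $name, rule: $rule}')
echo "$PAYLOAD"
# then: redisctl cloud acl create-redis-rule --subscription-id 42 --data "$PAYLOAD" --wait
```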
Redis ACL Syntax Reference
Common patterns in Redis ACL rules:
Command categories:
- `+@read` - All read commands
- `+@write` - All write commands
- `+@admin` - Administrative commands
- `+@dangerous` - Dangerous commands (FLUSHDB, KEYS, etc.)
- `+@all` - All commands
- `-@dangerous` - Deny dangerous commands
Specific commands:
- `+get` - Allow GET command
- `+set` - Allow SET command
- `-flushdb` - Deny FLUSHDB
Key patterns:
- `~*` - All keys
- `~cache:*` - Keys starting with "cache:"
- `~user:*` - Keys starting with "user:"

Note that Redis ACLs do not support excluding key patterns, so there is no way to grant "all keys except `secret:*`"; instead, grant access only to the prefixes a user needs.
Managing ACLs
View ACL Details
# Get specific user details
redisctl cloud acl get-acl-user \
--subscription-id 42 \
--user-id 456 \
-o json
# List all users with their roles
redisctl cloud acl list-acl-users \
--subscription-id 42 \
-o json \
-q '[].{name: name, role: role, id: id}'
Update ACL Rules
# Update existing rule
redisctl cloud acl update-redis-rule \
--subscription-id 42 \
--rule-id 789 \
--data '{
"name": "readonly",
"rule": "+@read +@connection ~*"
}' \
--wait
Update User Password
redisctl cloud acl update-acl-user \
--subscription-id 42 \
--user-id 456 \
--data '{
"password": "NewSecurePass456!"
}' \
--wait
Delete ACL Components
# Delete user
redisctl cloud acl delete-acl-user \
--subscription-id 42 \
--user-id 456 \
--wait
# Delete role
redisctl cloud acl delete-role \
--subscription-id 42 \
--role-id 321 \
--wait
# Delete Redis rule
redisctl cloud acl delete-redis-rule \
--subscription-id 42 \
--rule-id 789 \
--wait
Common Issues
Cannot Create User with Reserved Name
Error: User name 'default' is reserved
Solution: Avoid reserved names such as `default` and `admin`. Use descriptive, application-specific names.
ACL Rule Syntax Error
Error: Invalid ACL rule syntax
Solution: Test your ACL rule locally first:
redis-cli ACL SETUSER testuser on '+@read' '~*'
redis-cli ACL GETUSER testuser
redis-cli ACL DELUSER testuser
User Cannot Connect
Troubleshooting:
- Verify user is assigned to the database
- Check password is correct
- Ensure user status is "active"
- Test with default user first to isolate ACL vs. network issues
Permission Denied
Error: NOPERM this user has no permissions to run the 'set' command
Solution: Review and update the user's role and rules:
# Check user's role
redisctl cloud acl get-acl-user --subscription-id 42 --user-id 456 -q 'role'
# Check role's rules
redisctl cloud acl list-roles --subscription-id 42 -q '[?name==`readonly-role`]'
Best Practices
- Principle of Least Privilege: Give users only the permissions they need
- Use Key Prefixes: Design your key naming to support ACLs (e.g., `user:123:profile`)
- Separate Credentials: Different users for read vs. write operations
- Rotate Passwords: Regularly update user passwords
- Test Before Production: Verify ACL rules in a test database first
- Document Rules: Keep track of what each rule and role does
Next Steps
- Setup VPC Peering - Private network connectivity
- Configure TLS/SSL - Encryption in transit
- Backup and Restore - Protect your data
- Monitor Performance - Track database metrics
See Also
- ACL Command Reference - Complete command documentation
- Redis ACL Documentation - Redis ACL syntax
- Redis Cloud Security - Security best practices
Backup and Restore Workflow
Time: 10-15 minutes
Prerequisites:
- Redis Cloud database with data persistence enabled
- redisctl configured with Cloud credentials
- Storage location configured (done automatically for Cloud)
What are Backups?
Redis Cloud provides automated backups and on-demand manual backups to protect your data. Backups can be:
- Automated - Scheduled periodic backups (hourly, daily, weekly)
- Manual - On-demand backups triggered when needed
- Stored - In Redis Cloud storage or your own cloud storage (AWS S3, GCP GCS, Azure Blob)
Quick Commands
# Trigger manual backup
redisctl cloud database backup \
--database-id 42:12345 \
--wait
# Check backup status
redisctl cloud database backup-status \
--database-id 42:12345 \
-o json
Step-by-Step Guide
1. Check Current Backup Configuration
View your database's backup settings:
redisctl cloud database get \
--subscription-id 42 \
--database-id 12345 \
-o json \
-q '{
data_persistence: data_persistence,
backup_interval: backup_interval,
backup_path: backup_path
}'
Example output:
{
"data_persistence": "aof-every-1-second",
"backup_interval": "every-24-hours",
"backup_path": "redis-cloud-storage"
}
2. Configure Backup Settings
If backups aren't configured, enable them:
redisctl cloud database update \
--subscription-id 42 \
--database-id 12345 \
--data '{
"data_persistence": "aof-every-1-second",
"backup_interval": "every-24-hours"
}' \
--wait
Backup interval options:
- `every-12-hours` - Twice daily
- `every-24-hours` - Daily (recommended for most)
- `every-week` - Weekly
3. Trigger Manual Backup
Create an on-demand backup before major changes:
redisctl cloud database backup \
--database-id 42:12345 \
--wait \
--wait-timeout 600
What you should see:
{
"taskId": "backup-abc123",
"status": "processing"
}
...
Backup completed successfully!
{
"backup_id": "bkp-20251007-143022",
"status": "completed",
"size_bytes": 10485760,
"timestamp": "2025-10-07T14:30:22Z"
}
4. Monitor Backup Status
Check backup progress and history:
redisctl cloud database backup-status \
--database-id 42:12345 \
-o json
Example output:
{
"last_backup": {
"backup_id": "bkp-20251007-143022",
"status": "completed",
"timestamp": "2025-10-07T14:30:22Z",
"size_bytes": 10485760,
"type": "manual"
},
"next_scheduled": "2025-10-08T14:00:00Z",
"backup_progress": null
}
5. List Available Backups
View all backups for a database:
# Get subscription backup info
redisctl cloud subscription get \
--subscription-id 42 \
-o json \
-q 'databases[?database_id==`12345`].backup_status'
Restore Scenarios
Scenario 1: Restore from Recent Backup
If you need to restore to a previous state:
# Create new database from backup
redisctl cloud database create \
--subscription-id 42 \
--data '{
"name": "restored-db",
"memory_limit_in_gb": 1,
"restore_from_backup": {
"backup_id": "bkp-20251007-143022"
}
}' \
--wait
Note: Redis Cloud doesn't support in-place restore. You create a new database from a backup, verify it, then switch your application.
Scenario 2: Point-in-Time Recovery
For databases with AOF persistence:
# Create database with specific backup
redisctl cloud database create \
--subscription-id 42 \
--data '{
"name": "pit-restore",
"memory_limit_in_gb": 2,
"restore_from_backup": {
"backup_id": "bkp-20251007-120000",
"timestamp": "2025-10-07T14:00:00Z"
}
}' \
--wait
Scenario 3: Clone Production to Staging
Use backups to create staging environments:
# Get latest production backup
BACKUP_ID=$(redisctl cloud database backup-status \
--database-id 42:12345 \
-o json \
-q 'last_backup.backup_id' \
| jq -r '.')
# Create staging database from production backup
redisctl cloud database create \
--subscription-id 42 \
--data '{
"name": "staging-db",
"memory_limit_in_gb": 1,
"restore_from_backup": {
"backup_id": "'$BACKUP_ID'"
}
}' \
--wait
Advanced: Custom Backup Storage
Configure S3 Backup Storage
Store backups in your own AWS S3 bucket:
redisctl cloud database update \
--subscription-id 42 \
--database-id 12345 \
--data '{
"backup_path": "s3://my-backup-bucket/redis-backups",
"backup_s3_access_key_id": "AKIAIOSFODNN7EXAMPLE",
"backup_s3_secret_access_key": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
}' \
--wait
Configure GCS Backup Storage
Store backups in Google Cloud Storage:
# Build the payload with jq so the service-account key is embedded correctly
# (command substitution does not expand inside single-quoted --data strings)
jq -n --arg creds "$(jq -c . gcs-key.json)" '{
    backup_path: "gs://my-backup-bucket/redis-backups",
    backup_gcs_credentials: $creds
  }' > gcs-backup.json
redisctl cloud database update \
  --subscription-id 42 \
  --database-id 12345 \
  --data @gcs-backup.json \
  --wait
Configure Azure Blob Storage
Store backups in Azure:
redisctl cloud database update \
--subscription-id 42 \
--database-id 12345 \
--data '{
"backup_path": "abs://my-storage-account/redis-backups",
"backup_abs_account_name": "mystorageaccount",
"backup_abs_account_key": "your-account-key"
}' \
--wait
Backup Automation Strategy
Daily Backups with Retention
#!/bin/bash
# backup-daily.sh - Daily backup script
SUBSCRIPTION_ID=42
DATABASE_ID=12345
DATE=$(date +%Y%m%d)
echo "Starting daily backup for database $DATABASE_ID..."
# Trigger backup
redisctl cloud database backup \
--database-id ${SUBSCRIPTION_ID}:${DATABASE_ID} \
--wait \
--wait-timeout 900 \
-o json | tee backup-${DATE}.log
# Check backup status (size math done with jq; JMESPath has no arithmetic)
redisctl cloud database backup-status \
  --database-id ${SUBSCRIPTION_ID}:${DATABASE_ID} \
  -o json | jq '.last_backup | {id: .backup_id, status: .status, size_mb: (.size_bytes / 1048576 | floor)}'
echo "Backup completed: $DATE"
Schedule with cron:
# Daily at 2 AM
0 2 * * * /path/to/backup-daily.sh >> /var/log/redis-backup.log 2>&1
Pre-Deployment Backup
#!/bin/bash
# pre-deploy-backup.sh - Backup before deployments
SUBSCRIPTION_ID=42
DATABASE_ID=12345
DEPLOYMENT_ID=$(git rev-parse --short HEAD)
echo "Creating pre-deployment backup for $DEPLOYMENT_ID..."
# Trigger backup
BACKUP_RESULT=$(redisctl cloud database backup \
--database-id ${SUBSCRIPTION_ID}:${DATABASE_ID} \
--wait \
-o json)
BACKUP_ID=$(echo "$BACKUP_RESULT" | jq -r '.backup_id')
echo "Backup created: $BACKUP_ID"
echo "Safe to proceed with deployment $DEPLOYMENT_ID"
# Save backup ID for potential rollback
echo "$BACKUP_ID" > .last-backup-id
Backup Verification
Verify Backup Integrity
# Create test database from backup
redisctl cloud database create \
--subscription-id 42 \
--data '{
"name": "backup-verify",
"memory_limit_in_gb": 1,
"restore_from_backup": {
"backup_id": "bkp-20251007-143022"
}
}' \
--wait
# Test data integrity (example with known key)
redis-cli -h backup-verify-endpoint -p 12346 GET test-key
# Clean up test database
redisctl cloud database delete \
--subscription-id 42 \
--database-id 67890 \
--wait
Monitoring Backup Health
Check Backup Metrics
# Get backup statistics (age and size math done with jq; JMESPath has no arithmetic)
redisctl cloud database backup-status \
  --database-id 42:12345 \
  -o json | jq '{
    last_backup_age_days: ((now - (.last_backup.timestamp | fromdateiso8601)) / 86400 | floor),
    backup_size_mb: (.last_backup.size_bytes / 1048576 | floor),
    next_backup: .next_scheduled,
    status: .last_backup.status
  }'
Alert on Backup Failures
#!/bin/bash
# check-backup-health.sh
SUBSCRIPTION_ID=42
DATABASE_ID=12345
MAX_AGE_HOURS=36
BACKUP_JSON=$(redisctl cloud database backup-status \
  --database-id ${SUBSCRIPTION_ID}:${DATABASE_ID} \
  -o json)
LAST_BACKUP_TIME=$(echo "$BACKUP_JSON" | jq -r '.last_backup.timestamp')
BACKUP_STATUS=$(echo "$BACKUP_JSON" | jq -r '.last_backup.status')
# Calculate age in hours
CURRENT_TIME=$(date +%s)
BACKUP_TIME=$(date -d "$LAST_BACKUP_TIME" +%s)
AGE_HOURS=$(( ($CURRENT_TIME - $BACKUP_TIME) / 3600 ))
if [ "$BACKUP_STATUS" != "completed" ] || [ $AGE_HOURS -gt $MAX_AGE_HOURS ]; then
echo "ALERT: Backup health check failed!"
echo "Status: $BACKUP_STATUS"
echo "Age: $AGE_HOURS hours"
# Send alert (email, Slack, PagerDuty, etc.)
exit 1
fi
echo "Backup health OK - Last backup: $AGE_HOURS hours ago"
Disaster Recovery Plan
1. Document Current State
# Save current database configuration
redisctl cloud database get \
--subscription-id 42 \
--database-id 12345 \
-o json > database-config-$(date +%Y%m%d).json
# Record backup details
redisctl cloud database backup-status \
--database-id 42:12345 \
-o json > backup-status-$(date +%Y%m%d).json
2. Test Recovery Procedure
Regularly test your restore process:
# Quarterly DR test
./scripts/dr-test.sh production-db test-restore-db
3. Recovery Time Objective (RTO)
Estimate restore time based on database size:
- Small (< 1GB): 5-10 minutes
- Medium (1-10GB): 15-30 minutes
- Large (> 10GB): 30-60+ minutes
Common Issues
Backup Takes Too Long
Error: Backup timed out after 300 seconds
Solution: Increase timeout for large databases:
redisctl cloud database backup \
--database-id 42:12345 \
--wait \
--wait-timeout 1800 # 30 minutes
Restore Fails with "Backup Not Found"
Error: Backup ID not found
Solution: List available backups and verify ID:
redisctl cloud database backup-status \
--database-id 42:12345 \
-q 'last_backup.backup_id'
Insufficient Storage for Backup
Error: Insufficient storage space
Solution:
- Review backup retention policy
- Clean up old backups
- Upgrade storage capacity
- Use custom storage (S3/GCS/Azure)
Restored Database Has Missing Data
Troubleshooting:
- Check backup timestamp vs. expected data
- Verify AOF persistence was enabled
- Check if backup completed successfully
- Consider point-in-time recovery if available
Best Practices
- Enable Persistence: Always use AOF or snapshot persistence
- Multiple Backup Windows: Daily automated + manual before changes
- Test Restores: Regularly verify backups can be restored
- Off-Site Backups: Use custom storage in a different region
- Monitor Backup Age: Alert if backups are too old
- Document Procedures: Maintain runbooks for recovery
- Verify Backup Size: Sudden size changes may indicate issues
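The last point can be automated: compare the newest backup size against the previous one and flag large swings (a sketch with hard-coded sample values; in practice the sizes would come from `backup-status` output):

```shell
# Flag backups whose size changed by more than 50% versus the previous run
PREV_SIZE=10485760   # bytes, from the previous backup-status check
CURR_SIZE=4194304    # bytes, from the latest backup-status check
DELTA_PCT=$(( (CURR_SIZE - PREV_SIZE) * 100 / PREV_SIZE ))
ABS_DELTA=${DELTA_PCT#-}   # strip the sign for the threshold check
if [ "$ABS_DELTA" -gt 50 ]; then
  echo "WARNING: backup size changed by ${DELTA_PCT}% - investigate"
fi
```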
Next Steps
- Database Migration - Migrate data between databases
- Monitor Performance - Track database health
- Configure ACLs - Secure your restored database
- Setup High Availability - Add redundancy
See Also
- Database Backup Reference - Complete command documentation
- Redis Cloud Backup Guide - Official backup documentation
- Data Persistence Options - Understanding AOF vs. RDB
Database Migration
Time: 20-30 minutes
Prerequisites:
- Source and destination databases (Redis Cloud or external Redis)
- redisctl configured with Cloud credentials
- Network connectivity between source and destination
Migration Strategies
Three common migration approaches:
- Import from backup - Best for one-time migrations
- Online replication - For minimal downtime
- RIOT (Redis Input/Output Tool) - For complex transformations
Quick Migration (from backup)
# Create backup from source
redisctl cloud database backup \
--database-id 42:12345 \
--wait
# Create new database from backup
redisctl cloud database create \
--subscription-id 42 \
--data '{
"name": "migrated-db",
"memory_limit_in_gb": 2,
"restore_from_backup": {
"backup_id": "bkp-20251007-143022"
}
}' \
--wait
Method 1: Import from RDB File
1. Export from Source Database
# If source is Redis Cloud, create backup and download
redisctl cloud database backup \
--database-id 42:12345 \
--wait
# Get backup URL
redisctl cloud database backup-status \
--database-id 42:12345 \
-q 'last_backup.download_url'
# Download backup
curl -o source-backup.rdb "https://backup-url..."
2. Upload to Cloud Storage
# Upload to S3
aws s3 cp source-backup.rdb s3://my-bucket/redis-migration/
# Get presigned URL (valid for import)
aws s3 presign s3://my-bucket/redis-migration/source-backup.rdb --expires-in 3600
3. Import to Destination Database
# Import data
redisctl cloud database import \
--database-id 43:67890 \
--data '{
"source_type": "http-url",
"import_from_uri": "https://presigned-url..."
}' \
--wait \
--wait-timeout 1800
4. Monitor Import Progress
# Check import status
redisctl cloud database import-status \
--database-id 43:67890 \
-o json -q '{
status: status,
progress: progress_percentage,
imported_keys: keys_imported
}'
Method 2: Online Replication
For minimal downtime, use Redis replication:
1. Setup Destination as Replica
# Create destination database with replication source
redisctl cloud database create \
--subscription-id 43 \
--data '{
"name": "replica-db",
"memory_limit_in_gb": 2,
"replication": true,
"replica_of": ["redis-12345.cloud.redislabs.com:12345"]
}' \
--wait
2. Monitor Replication Lag
# Check replication status
redis-cli -h replica-endpoint -p 67890 INFO replication
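To decide when it is safe to cut over, compare the master and replica offsets from that INFO output; lag is zero when they match. A parsing sketch using a captured sample (real INFO output may contain carriage returns, stripped here with `tr`):

```shell
# Compute replication lag in bytes from `INFO replication` output
INFO="role:slave
master_link_status:up
slave_repl_offset:1000
master_repl_offset:1000"
MASTER_OFF=$(printf '%s\n' "$INFO" | tr -d '\r' | awk -F: '/^master_repl_offset/{print $2}')
SLAVE_OFF=$(printf '%s\n' "$INFO" | tr -d '\r' | awk -F: '/^slave_repl_offset/{print $2}')
LAG=$((MASTER_OFF - SLAVE_OFF))
echo "replication lag: $LAG bytes"
```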
3. Cutover to New Database
# Stop writes to source
# Wait for replication to catch up (lag = 0)
# Promote replica to master
redisctl cloud database update \
--subscription-id 43 \
--database-id 67890 \
--data '{"replica_of": []}' \
--wait
# Update application to use new endpoint
Method 3: Cross-Region Migration
Migrate between different Redis Cloud regions:
# 1. Create backup in source region
redisctl cloud database backup \
--database-id 42:12345 \
--wait
# 2. Export backup to S3 in target region
# (This happens automatically with cross-region backup storage)
# 3. Create database in target region from backup
redisctl cloud database create \
--subscription-id 55 \
--data '{
"name": "us-west-db",
"memory_limit_in_gb": 2,
"region": "us-west-2",
"restore_from_backup": {
"backup_id": "bkp-20251007-143022",
"source_subscription_id": 42
}
}' \
--wait
Migration from External Redis
From Self-Hosted Redis
# 1. Create RDB backup on source
redis-cli --rdb /tmp/redis-backup.rdb
# 2. Upload to cloud storage
aws s3 cp /tmp/redis-backup.rdb s3://my-bucket/migration/
aws s3 presign s3://my-bucket/migration/redis-backup.rdb --expires-in 3600
# 3. Import to Redis Cloud
redisctl cloud database import \
--database-id 42:12345 \
--data '{
"source_type": "http-url",
"import_from_uri": "https://presigned-url..."
}' \
--wait
From AWS ElastiCache
# 1. Create ElastiCache backup
aws elasticache create-snapshot \
--replication-group-id my-redis \
--snapshot-name migration-snapshot
# 2. Export to S3
aws elasticache copy-snapshot \
--source-snapshot-name migration-snapshot \
--target-snapshot-name migration-export \
--target-bucket my-bucket
# 3. Import to Redis Cloud (same as above)
Data Validation
Verify Migration Success
#!/bin/bash
# validate-migration.sh
SOURCE_HOST="source-redis"
SOURCE_PORT=6379
DEST_HOST="dest-redis"
DEST_PORT=12345
echo "Validating migration..."
# Compare key counts
SOURCE_KEYS=$(redis-cli -h $SOURCE_HOST -p $SOURCE_PORT DBSIZE)
DEST_KEYS=$(redis-cli -h $DEST_HOST -p $DEST_PORT DBSIZE)
echo "Source keys: $SOURCE_KEYS"
echo "Destination keys: $DEST_KEYS"
if [ "$SOURCE_KEYS" -eq "$DEST_KEYS" ]; then
echo "Key count matches!"
else
echo "WARNING: Key count mismatch!"
exit 1
fi
# Sample key validation (GET only works for string keys)
redis-cli -h $SOURCE_HOST -p $SOURCE_PORT --scan --pattern "*" | \
head -100 | \
while read -r key; do
SOURCE_VAL=$(redis-cli -h $SOURCE_HOST -p $SOURCE_PORT GET "$key")
DEST_VAL=$(redis-cli -h $DEST_HOST -p $DEST_PORT GET "$key")
if [ "$SOURCE_VAL" != "$DEST_VAL" ]; then
echo "Mismatch for key: $key"
exit 1
fi
done || exit 1  # the loop runs in a subshell, so propagate its failure
echo "Validation successful!"
Zero-Downtime Migration Pattern
#!/bin/bash
# zero-downtime-migration.sh
# 1. Setup replication
echo "Setting up replication..."
redisctl cloud database update \
--subscription-id 43 \
--database-id 67890 \
--data '{"replica_of": ["source-redis:6379"]}' \
--wait
# 2. Monitor lag until synced
echo "Waiting for initial sync..."
while true; do
INFO=$(redis-cli -h new-redis -p 67890 INFO replication)
MASTER_OFF=$(echo "$INFO" | grep master_repl_offset | cut -d: -f2 | tr -d '\r')
SLAVE_OFF=$(echo "$INFO" | grep slave_repl_offset | cut -d: -f2 | tr -d '\r')
# lag = bytes of the replication stream the replica has not yet applied
LAG=$(( MASTER_OFF - SLAVE_OFF ))
if [ "$LAG" -lt 100 ]; then
break
fi
sleep 5
done
echo "Replication synced. Ready for cutover."
echo "Press ENTER to proceed with cutover..."
read
# 3. Stop writes to source (application-specific)
echo "Stop writes to source now!"
echo "Press ENTER when source is read-only..."
read
# 4. Wait for final sync
sleep 10
# 5. Promote replica
echo "Promoting replica to master..."
redisctl cloud database update \
--subscription-id 43 \
--database-id 67890 \
--data '{"replica_of": []}' \
--wait
echo "Migration complete! Update application to new endpoint."
Handling Large Databases
For databases > 10GB:
# 1. Use parallel import (if supported)
redisctl cloud database import \
--database-id 42:12345 \
--data '{
"source_type": "http-url",
"import_from_uri": "https://backup-url...",
"parallel_streams": 4
}' \
--wait \
--wait-timeout 7200 # 2 hours
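For planning, a rough `--wait-timeout` can be derived from the database size and an assumed transfer rate. The ~10 MB/s default below is only an assumption — measure your own import throughput and substitute it:

```shell
# Rough --wait-timeout sizing for large imports.
# Assumes ~10 MB/s throughput by default; pass a measured rate instead.
import_timeout() {
  local gb=$1 mb_per_sec=${2:-10}
  # transfer time in seconds, doubled as a safety margin
  echo $(( gb * 1024 / mb_per_sec * 2 ))
}
import_timeout 10   # 10 GB at ~10 MB/s -> 2048 seconds
import_timeout 50   # 50 GB at ~10 MB/s -> 10240 seconds
```

Round the result up to the nearest sensible value when passing it to `--wait-timeout`.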
Common Issues
Import Times Out
# Increase timeout for large databases
redisctl cloud database import \
--database-id 42:12345 \
--data '{"source_type": "http-url", "import_from_uri": "..."}' \
--wait \
--wait-timeout 3600 # 1 hour
RDB Version Mismatch
Error: Unsupported RDB version
Solution: Ensure source Redis version is compatible. Redis Cloud supports RDB versions from Redis 2.6+
Network Timeout During Import
Error: Failed to download from URI
Solution:
- Verify URL is accessible
- Check presigned URL hasn't expired
- Ensure no firewall blocks
- Use cloud storage in same region
Partial Import
Warning: Import completed but key count mismatch
Solution:
- Check for keys with TTL that expired
- Verify no writes during migration
- Check for maxmemory-policy evictions
- Review logs for specific errors
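To triage a key-count gap, a small sketch like this compares the gap against the number of volatile (TTL-bearing) keys on the source; if TTL expiry can account for the difference, the import is likely fine. The counts below are illustrative:

```shell
# Triage a post-import key-count mismatch: a gap no larger than the number
# of volatile keys on the source is plausibly explained by TTL expiry
# during the migration window.
key_gap() {
  local src_keys=$1 dst_keys=$2 volatile_keys=$3
  local gap=$(( src_keys - dst_keys ))
  if [ "$gap" -le "$volatile_keys" ]; then
    echo "ok: gap of $gap keys is within the volatile-key budget"
  else
    echo "investigate: gap of $gap keys exceeds $volatile_keys volatile keys"
  fi
}
key_gap 10000 9950 120
```

Get the volatile-key count from the source with `redis-cli INFO keyspace` (the `expires` field per database).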
Best Practices
- Test First - Always test migration on staging
- Backup Source - Create backup before migration
- Plan Downtime - Communicate maintenance window
- Validate Data - Compare key counts and sample data
- Monitor Performance - Watch latency during cutover
- Keep Source - Don't delete source immediately
- Update DNS - Use DNS for easy rollback
Migration Checklist
- Source database backed up
- Destination database created and configured
- Network connectivity verified
- Import method selected
- Dry run completed successfully
- Monitoring in place
- Rollback plan documented
- Application updated with new endpoint
- Data validation successful
- Source database retained for N days
Next Steps
- Backup and Restore - Protect migrated data
- Configure ACLs - Secure new database
- Monitor Performance - Track after migration
- Setup High Availability - Add redundancy
See Also
- Database Import Reference
- Redis Migration Guide
- RIOT Tool - Advanced migration tool
Active-Active (CRDB) Setup
Time: 30-45 minutes
Prerequisites:
- Redis Cloud account with Active-Active subscription
- redisctl configured with Cloud credentials
- Understanding of multi-region deployments
What is Active-Active?
Active-Active (Conflict-free Replicated Database, CRDB) provides:
- Multiple writable regions simultaneously
- Automatic conflict resolution
- Local read/write latency in each region
- Geographic redundancy and disaster recovery
Quick Setup
# Create Active-Active subscription
redisctl cloud subscription create \
--data '{
"name": "global-aa",
"deployment_type": "active-active",
"regions": [
{"region": "us-east-1", "networking": {"cidr": "10.0.1.0/24"}},
{"region": "eu-west-1", "networking": {"cidr": "10.0.2.0/24"}},
{"region": "ap-southeast-1", "networking": {"cidr": "10.0.3.0/24"}}
]
}' \
--wait
# Create Active-Active database
redisctl cloud database create \
--subscription-id 42 \
--data '{
"name": "global-cache",
"memory_limit_in_gb": 2,
"support_oss_cluster_api": true,
"data_persistence": "aof-every-1-second",
"replication": true
}' \
--wait
Step-by-Step Setup
1. Plan Your Regions
Choose regions close to your users:
# List available regions
redisctl cloud region list -o json -q '[].{
region: region,
provider: provider,
availability_zones: availability_zones
}'
Common patterns:
- US + EU: us-east-1, eu-west-1
- Global: us-east-1, eu-west-1, ap-southeast-1
- US Multi-Region: us-east-1, us-west-2
2. Create Active-Active Subscription
redisctl cloud subscription create \
--data '{
"name": "production-aa",
"deployment_type": "active-active",
"payment_method_id": 12345,
"cloud_provider": "AWS",
"regions": [
{
"region": "us-east-1",
"networking": {
"cidr": "10.1.0.0/24"
},
"preferred_availability_zones": ["use1-az1", "use1-az2"]
},
{
"region": "eu-west-1",
"networking": {
"cidr": "10.2.0.0/24"
},
"preferred_availability_zones": ["euw1-az1", "euw1-az2"]
}
]
}' \
--wait \
--wait-timeout 900
Important: Each region needs a unique CIDR block.
3. Create Active-Active Database
redisctl cloud database create \
--subscription-id 42 \
--data '{
"name": "global-sessions",
"memory_limit_in_gb": 5,
"protocol": "redis",
"support_oss_cluster_api": true,
"data_persistence": "aof-every-1-second",
"replication": true,
"throughput_measurement": {
"by": "operations-per-second",
"value": 50000
},
"data_eviction_policy": "volatile-lru",
"modules": [
{"name": "RedisJSON"}
]
}' \
--wait
4. Get Regional Endpoints
# Get all regional endpoints
redisctl cloud database get \
--subscription-id 42 \
--database-id 12345 \
-o json \
-q '{
name: name,
endpoints: regions[].{
region: region,
public_endpoint: public_endpoint,
private_endpoint: private_endpoint
}
}'
Example output:
{
"name": "global-sessions",
"endpoints": [
{
"region": "us-east-1",
"public_endpoint": "redis-12345-us-east-1.cloud.redislabs.com:12345",
"private_endpoint": "redis-12345-us-east-1.internal.cloud.redislabs.com:12345"
},
{
"region": "eu-west-1",
"public_endpoint": "redis-12345-eu-west-1.cloud.redislabs.com:12346",
"private_endpoint": "redis-12345-eu-west-1.internal.cloud.redislabs.com:12346"
}
]
}
5. Configure Applications
Connect each application to its nearest region:
US Application:
import redis
r = redis.Redis(
host='redis-12345-us-east-1.cloud.redislabs.com',
port=12345,
password='your-password',
decode_responses=True
)
EU Application:
r = redis.Redis(
host='redis-12345-eu-west-1.cloud.redislabs.com',
port=12346,
password='your-password',
decode_responses=True
)
Network Connectivity
Setup VPC Peering for Each Region
# US East peering
redisctl cloud connectivity vpc-peering create-aa \
--subscription-id 42 \
--region-id 1 \
--data '{
"provider_name": "AWS",
"aws_account_id": "123456789012",
"vpc_id": "vpc-us-east-abc",
"vpc_cidr": "172.31.0.0/16",
"region": "us-east-1"
}' \
--wait
# EU West peering
redisctl cloud connectivity vpc-peering create-aa \
--subscription-id 42 \
--region-id 2 \
--data '{
"provider_name": "AWS",
"aws_account_id": "123456789012",
"vpc_id": "vpc-eu-west-xyz",
"vpc_cidr": "172.32.0.0/16",
"region": "eu-west-1"
}' \
--wait
Conflict Resolution
Active-Active resolves conflicts automatically using CRDTs (conflict-free replicated data types); plain string values fall back to LWW (Last-Write-Wins):
Understanding Conflicts
# Example: Counter increment in both regions simultaneously
# US: INCR counter (value becomes 1)
# EU: INCR counter (value becomes 1)
# After sync: counter = 2 (both increments applied)
Conflict-Free Data Types
Use Redis data types that resolve conflicts automatically:
- Counters - INCR/DECR (additive)
- Sets - SADD/SREM (union)
- Sorted Sets - ZADD (merge by score)
- Hashes - HSET (field-level LWW)
Best Practices
# Good: Using counters
redis.incr('page:views')
# Good: Using sets
redis.sadd('user:tags', 'premium')
# Caution: Simple strings (LWW conflicts)
redis.set('user:status', 'active') # May conflict with other region
Monitoring Active-Active
Check Replication Lag
# Get replication status for all regions
redisctl cloud database get \
--subscription-id 42 \
--database-id 12345 \
-o json \
-q 'regions[].{
region: region,
replication_lag: replication_lag_ms,
status: status
}'
Monitor Sync Traffic
# Check inter-region bandwidth usage
redisctl cloud subscription get \
--subscription-id 42 \
-q 'deployment.regions[].{
region: region,
sync_traffic_gb: sync_traffic_gb_per_month
}'
Scaling Active-Active
Add Region to Existing Database
# Add new region to subscription
redisctl cloud subscription update \
--subscription-id 42 \
--data '{
"add_regions": [
{
"region": "ap-southeast-1",
"networking": {
"cidr": "10.3.0.0/24"
}
}
]
}' \
--wait
# Database automatically extends to new region
Remove Region
redisctl cloud subscription update \
--subscription-id 42 \
--data '{
"remove_regions": ["ap-southeast-1"]
}' \
--wait
Disaster Recovery
Regional Failover
If a region becomes unavailable:
- Applications automatically retry to local endpoint
- Update application config to use different region
- Data remains consistent across remaining regions
# Check region health
redisctl cloud database get \
--subscription-id 42 \
--database-id 12345 \
-q 'regions[].{region: region, status: status}'
# Update application to use healthy region
# No data loss - all writes in healthy regions preserved
Cost Optimization
Monitor Inter-Region Traffic
# Check sync costs
redisctl cloud subscription get \
--subscription-id 42 \
-o json \
-q '{
monthly_sync_gb: (deployment.regions | map(&sync_traffic_gb_per_month, @) | sum(@)),
monthly_cost_estimate: monthly_cost
}'
Optimize for Read-Heavy Workloads
# Use read replicas in regions with heavy reads
redisctl cloud database update \
--subscription-id 42 \
--database-id 12345 \
--data '{
"replication": true,
"replica_count": 2
}' \
--wait
Common Patterns
Session Store
# Store sessions in nearest region
def store_session(session_id, data):
redis.hset(f'session:{session_id}', mapping=data)
redis.expire(f'session:{session_id}', 86400) # 24 hours
# Read from any region
def get_session(session_id):
return redis.hgetall(f'session:{session_id}')
Global Rate Limiting
# Distributed rate limit across regions
def check_rate_limit(user_id, limit=100):
key = f'rate:limit:{user_id}:{int(time.time() / 60)}'
count = redis.incr(key)
redis.expire(key, 120)
return count <= limit
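When debugging which window a request landed in, the fixed one-minute bucket key used by `check_rate_limit` can be reproduced in shell; the user ID and timestamp below are examples:

```shell
# Fixed one-minute window key, matching the Python helper above:
# the bucket index is the Unix timestamp divided by 60
minute_window() { echo "rate:limit:$1:$(( $2 / 60 ))"; }
minute_window user42 1700000100   # rate:limit:user42:28333335
```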
Leaderboards
# Global leaderboard
def update_score(user_id, score):
redis.zadd('leaderboard:global', {user_id: score})
def get_top_players(n=10):
return redis.zrevrange('leaderboard:global', 0, n-1, withscores=True)
Common Issues
High Replication Lag
# Check network connectivity between regions
# Increase bandwidth allocation
redisctl cloud subscription update \
--subscription-id 42 \
--data '{"bandwidth_gb_per_month": 500}' \
--wait
Conflict Resolution Issues
Solution: Design data model for conflict-free types:
- Use INCR instead of SET for counters
- Use SADD instead of SET for collections
- Use HSET for field-level updates instead of full object replacement
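A quick way to see why additive types are safe: each region's increments survive the merge, whereas a plain SET would lose one side to last-write-wins. A minimal arithmetic sketch:

```shell
# Concurrent counter increments in two regions: the CRDT merge is the sum,
# so no write is lost. With SET-based counters, LWW would keep only one side.
us_incr=3   # INCRs applied in us-east-1
eu_incr=2   # INCRs applied in eu-west-1
merged=$(( us_incr + eu_incr ))
echo "merged counter: $merged"   # both regions' increments are preserved
```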
Region Addition Takes Long Time
Solution: Adding regions requires data sync. For large databases:
- Expect 1-2 hours for initial sync
- Monitor with --wait-timeout 7200
Best Practices
- Design for Conflicts - Use conflict-free data types
- Local Writes - Always write to nearest region
- Monitor Lag - Alert on high replication lag
- Test Failover - Regularly test regional failures
- Plan CIDRs - Use non-overlapping CIDR blocks
- Optimize Bandwidth - Monitor inter-region traffic costs
Next Steps
- Setup VPC Peering - Private connectivity per region
- Configure ACLs - Secure all regional endpoints
- Monitor Performance - Track per-region metrics
- Backup and Restore - Multi-region backup strategy
Create Your First Redis Enterprise Database
⏱️ Time: 5 minutes
📋 Prerequisites:
- Redis Enterprise cluster running (see cluster setup)
- redisctl installed (installation guide)
- Profile configured with Enterprise credentials (authentication guide)
Quick Command
Create a basic database with one command:
redisctl enterprise database create \
--data '{"name": "my-first-db", "memory_size": 1073741824}' \
--wait
Step-by-Step Guide
1. Verify Cluster Connection
Check that redisctl can connect to your cluster:
redisctl enterprise cluster get -o json -q 'name'
What you should see:
"cluster1.local"
Troubleshooting:
- ❌ "Connection refused" → Check REDIS_ENTERPRISE_URL or profile settings
- ❌ "401 Unauthorized" → Verify credentials with redisctl profile get
- ❌ "SSL error" → Add the --insecure flag or set REDIS_ENTERPRISE_INSECURE=true
2. Check Available Resources
See what resources are available:
redisctl enterprise cluster get -o json -q '{
shards_limit: shards_limit,
shards_used: shards_used,
memory_size: memory_size
}'
Example output:
{
"shards_limit": 100,
"shards_used": 5,
"memory_size": 107374182400
}
3. Create the Database
Minimum configuration (1GB database):
redisctl enterprise database create \
--data '{
"name": "my-first-db",
"memory_size": 1073741824,
"type": "redis",
"port": 12000
}' \
--wait
Common options:
- memory_size: Bytes (1073741824 = 1GB, 10737418240 = 10GB)
- type: redis or memcached
- port: Must be unique on cluster (12000-19999 typical range)
- replication: true for high availability
- sharding: true for clustering across shards
What you should see:
{
"uid": 1,
"name": "my-first-db",
"status": "active",
"port": 12000,
"memory_size": 1073741824,
"endpoint": "redis-12000.cluster1.local"
}
4. Get Connection Details
Retrieve your database endpoint and authentication:
redisctl enterprise database get --database-id 1 -o json -q '{
endpoint: dns_address_master,
port: port,
password: authentication_redis_pass
}'
Output:
{
"endpoint": "redis-12000.cluster1.local",
"port": 12000,
"password": "your-password-here"
}
5. Test Connection
Using redis-cli:
redis-cli -h redis-12000.cluster1.local \
-p 12000 \
-a your-password-here \
PING
Expected response: PONG
Advanced Configuration
High Availability Database
Create a replicated database with automatic failover:
redisctl enterprise database create \
--data '{
"name": "ha-database",
"memory_size": 10737418240,
"type": "redis",
"port": 12001,
"replication": true,
"data_persistence": "aof",
"aof_policy": "appendfsync-every-sec"
}' \
--wait
Clustered Database
Create a sharded database for scaling:
redisctl enterprise database create \
--data '{
"name": "clustered-db",
"memory_size": 53687091200,
"type": "redis",
"port": 12002,
"sharding": true,
"shards_count": 5,
"oss_cluster": true
}' \
--wait
Using a Configuration File
For complex setups:
# Create database-config.json
cat > database-config.json << 'EOF'
{
"name": "production-db",
"memory_size": 21474836480,
"type": "redis",
"port": 12003,
"replication": true,
"sharding": true,
"shards_count": 3,
"data_persistence": "aof",
"aof_policy": "appendfsync-every-sec",
"eviction_policy": "volatile-lru",
"oss_cluster": true,
"authentication_redis_pass": "my-secure-password"
}
EOF
redisctl enterprise database create \
--data @database-config.json \
--wait
Common Issues
Port Already in Use
Error: Port 12000 is already allocated
Solution: Use a different port or check existing databases:
redisctl enterprise database list -o json -q '[].port'
Insufficient Cluster Resources
Error: Not enough memory available
Solution: Check cluster capacity:
redisctl enterprise cluster get -q '{available_memory: (memory_size - memory_used)}'
Database Stuck in "pending"
Status: pending
Solution: Check cluster node status:
redisctl enterprise node list -o table
All nodes should show online status. If not, investigate node issues first.
Memory Size Reference
Quick conversion table:
| Description | Bytes | Human |
|---|---|---|
| 100 MB | 104857600 | 0.1 GB |
| 500 MB | 524288000 | 0.5 GB |
| 1 GB | 1073741824 | 1 GB |
| 5 GB | 5368709120 | 5 GB |
| 10 GB | 10737418240 | 10 GB |
| 50 GB | 53687091200 | 50 GB |
| 100 GB | 107374182400 | 100 GB |
Or use: echo "$((1 * 1024 * 1024 * 1024))" for 1 GB
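A small helper avoids the lookup table entirely:

```shell
# Convert gigabytes to the byte values expected by memory_size
to_bytes() { echo $(( $1 * 1024 * 1024 * 1024 )); }
to_bytes 1    # 1073741824
to_bytes 10   # 10737418240
```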
Next Steps
Now that you have a database:
- 🔒 Configure Redis ACLs - Secure your database with access controls
- 💾 Generate Support Package - Troubleshooting and diagnostics
- 🔄 Configure Replication - Set up replica databases
- 📊 Monitor Database Health - Track performance metrics
See Also
- Enterprise Database Command Reference - Complete command documentation
- Database Configuration Options - All configuration parameters
- Redis Enterprise Documentation - Official Redis Enterprise docs
Generate and Upload a Support Package
⏱️ Time: 10-15 minutes
📋 Prerequisites:
- Redis Enterprise cluster running
- redisctl installed and configured
- (Optional) Files.com account for upload (sign up)
Quick Command
Generate support package for entire cluster:
redisctl enterprise support-package cluster \
--file /tmp/support-package.tar.gz
What is a Support Package?
A support package is a comprehensive diagnostic bundle containing:
- Cluster configuration and logs
- Database configurations and statistics
- Node health and metrics
- Network configuration
- Redis server logs
Used for troubleshooting with Redis support or internal diagnostics.
Step-by-Step Guide
1. Generate Basic Support Package
Create a support package for the entire cluster:
redisctl enterprise support-package cluster \
--file /tmp/cluster-support-$(date +%Y%m%d).tar.gz
What you should see:
Generating support package...
Support package saved to: /tmp/cluster-support-20251007.tar.gz
Size: 45.2 MB
2. Generate for Specific Database
Create a package for just one database (smaller, faster):
redisctl enterprise support-package database \
--database-id 1 \
--file /tmp/db1-support.tar.gz
3. Optimize Before Upload
Reduce package size for faster upload:
redisctl enterprise support-package database \
--database-id 1 \
--optimize \
--file /tmp/db1-optimized.tar.gz
What --optimize does:
- Compresses logs more aggressively
- Excludes large binary dumps
- Typically 50-70% smaller
- Still contains all diagnostic info
4. Upload to Files.com
One-Time Setup
Set up your Files.com API key:
# Store securely in keyring (recommended)
redisctl files-key set --use-keyring
# Or set as environment variable
export FILES_API_KEY="your-api-key"
Generate and Upload
Create package and upload in one command:
redisctl enterprise support-package database \
--database-id 1 \
--optimize \
--upload \
--no-save
Flags explained:
- --upload: Upload to Files.com after generation
- --no-save: Don't save locally (only upload)
- --optimize: Reduce size before upload
What you should see:
Generating support package...
Optimizing package...
Uploading to Files.com...
✓ Uploaded: /support-packages/db1-20251007-abc123.tar.gz
URL: https://yourcompany.files.com/file/support-packages/db1-20251007-abc123.tar.gz
Advanced Usage
Generate with Custom Filters
Exclude certain log types:
redisctl enterprise support-package database \
--database-id 1 \
--file /tmp/filtered-support.tar.gz
Automated Uploads
Schedule regular support package uploads:
#!/bin/bash
# upload-support-package.sh
DATE=$(date +%Y%m%d-%H%M%S)
DB_ID=$1
redisctl enterprise support-package database \
--database-id "$DB_ID" \
--optimize \
--upload \
--no-save \
-o json | tee /var/log/support-upload-$DATE.log
Run via cron:
# Daily at 2 AM for database 1
0 2 * * * /usr/local/bin/upload-support-package.sh 1
Share with Redis Support
Generate and get sharable link:
RESULT=$(redisctl enterprise support-package cluster \
--optimize \
--upload \
--no-save \
-o json)
URL=$(echo "$RESULT" | jq -r '.upload_url')
echo "Share this URL with Redis Support:"
echo "$URL"
Common Issues
Package Generation Times Out
Error: Support package generation timed out
Solution: Use the --optimize flag to reduce generation time:
redisctl enterprise support-package cluster \
--optimize \
--file /tmp/support.tar.gz
Upload Fails
Error: Failed to upload to Files.com: 401 Unauthorized
Solution: Verify API key:
# Check current configuration
redisctl files-key get
# Re-enter API key
redisctl files-key set --use-keyring
Insufficient Disk Space
Error: Not enough disk space
Solution: Use --optimize or clean up old packages:
# Find old packages
find /tmp -name "*support*.tar.gz" -mtime +7
# Use optimization
redisctl enterprise support-package cluster \
--optimize \
--file /tmp/support.tar.gz
Database Not Found
Error: Database with ID 999 not found
Solution: List available databases:
redisctl enterprise database list -o table -q '[].{id: uid, name: name}'
Package Size Reference
Typical sizes (uncompressed / compressed):
| Scope | Uncompressed | Compressed | Optimized |
|---|---|---|---|
| Single small DB | 100-200 MB | 40-80 MB | 15-30 MB |
| Single large DB | 500 MB-2 GB | 200-800 MB | 50-200 MB |
| Entire cluster | 1-10 GB | 500 MB-3 GB | 200 MB-1 GB |
What's Inside?
A support package typically contains:
support-package/
├── cluster/
│ ├── cluster-config.json
│ ├── cluster-logs/
│ └── cluster-stats.json
├── databases/
│ ├── db-1/
│ │ ├── config.json
│ │ ├── stats.json
│ │ └── redis-logs/
│ └── db-2/...
├── nodes/
│ ├── node-1/
│ │ ├── system-info.json
│ │ ├── network-config.json
│ │ └── logs/
│ └── node-2/...
└── metadata.json
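To inspect a package without unpacking everything, standard tar flags are enough. The snippet builds a tiny stand-in archive so the commands are runnable as written — point tar at your real package file (e.g. /tmp/support-package.tar.gz) instead:

```shell
# Build a stand-in archive mirroring the layout above (demo only)
mkdir -p /tmp/sp-demo/support-package
echo '{"generated": "demo"}' > /tmp/sp-demo/support-package/metadata.json
tar -czf /tmp/sp-demo/pkg.tar.gz -C /tmp/sp-demo support-package

# List contents without extracting
tar -tzf /tmp/sp-demo/pkg.tar.gz

# Stream a single file to stdout
tar -xzf /tmp/sp-demo/pkg.tar.gz -O support-package/metadata.json
```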
Next Steps
- 📊 Monitor Cluster Health - Proactive monitoring
- 🔍 Troubleshooting Guide - Common issues and solutions
- 🛠️ Node Management - Manage cluster nodes
See Also
- Support Package Command Reference - Complete command documentation
- Files.com Integration Guide - API key management
- Redis Enterprise Support - Contact Redis support
Cluster Health Check
Time: 5 minutes
Prerequisites:
- Redis Enterprise cluster running
- redisctl configured with Enterprise credentials
Quick Health Check
# Get cluster overview
redisctl enterprise cluster get -o json -q '{
name: name,
nodes: nodes_count,
shards: shards_count,
databases: databases_count,
status: cluster_state
}'
# Check all nodes
redisctl enterprise node list -o table -q '[].{
id: uid,
addr: addr,
role: role,
status: status,
cores: cores,
memory_available: (total_memory - provisional_memory - used_memory)
}'
Detailed Health Checks
1. Cluster Status
redisctl enterprise cluster get -o json -q '{
state: cluster_state,
license_state: license_state,
quorum: quorum_only,
shards: {used: shards_count, limit: shards_limit},
memory: {used: memory_size, available: ephemeral_storage_size}
}'
2. Node Health
# Check each node status
redisctl enterprise node list -o json -q '[].{
node: uid,
status: status,
uptime: uptime,
cpu: cpu_idle,
memory_used: (used_memory / total_memory * 100),
disk_used: (ephemeral_storage_used / ephemeral_storage_size * 100)
}'
3. Database Health
# List all databases with key metrics
redisctl enterprise database list -o json -q '[].{
db: uid,
name: name,
status: status,
memory: memory_size,
shards: shards_count,
ops_sec: total_req
}'
4. Alert Status
# Check cluster alerts
redisctl enterprise cluster alerts -o json -q '{
enabled: alerts_settings.enabled,
active_alerts: alerts[?state==`active`].name
}'
# Check node alerts
redisctl enterprise node alerts -o table
Automated Health Monitoring
#!/bin/bash
# cluster-health-check.sh
echo "Redis Enterprise Cluster Health Check"
echo "======================================"
# Cluster state
echo "Cluster Status:"
redisctl enterprise cluster get -q 'cluster_state'
# Node count and status
NODES=$(redisctl enterprise node list -o json -q 'length([])')
HEALTHY_NODES=$(redisctl enterprise node list -o json -q '[?status==`active`] | length([])')
echo "Nodes: $HEALTHY_NODES/$NODES healthy"
# Database status
DBS=$(redisctl enterprise database list -o json -q 'length([])')
ACTIVE_DBS=$(redisctl enterprise database list -o json -q '[?status==`active`] | length([])')
echo "Databases: $ACTIVE_DBS/$DBS active"
# Resource usage
SHARD_USAGE=$(redisctl enterprise cluster get -o json -q '((shards_count / shards_limit * 100) | floor)')
MEMORY_USAGE=$(redisctl enterprise cluster get -o json -q '((memory_size / ephemeral_storage_size * 100) | floor)')
echo "Resource Usage: Shards $SHARD_USAGE%, Memory $MEMORY_USAGE%"
# Exit code based on health
if [ "$HEALTHY_NODES" -eq "$NODES" ] && [ "$ACTIVE_DBS" -eq "$DBS" ]; then
echo "Status: HEALTHY"
exit 0
else
echo "Status: DEGRADED"
exit 1
fi
Next Steps
- Node Management - Manage cluster nodes
- Database Monitoring - Track database metrics
- Generate Support Package - Troubleshooting tools
Node Management
Time: 10-15 minutes
Prerequisites:
- Redis Enterprise cluster with multiple nodes
- redisctl configured with Enterprise credentials
- Admin access to cluster
Quick Commands
# List all nodes
redisctl enterprise node list -o table
# Get specific node details
redisctl enterprise node get --node-id 1 -o json
# Check node status
redisctl enterprise node list -q '[].{id: uid, status: status, role: role}'
Node Operations
1. View Node Details
# Get comprehensive node info
redisctl enterprise node get --node-id 1 -o json -q '{
uid: uid,
addr: addr,
status: status,
role: role,
cores: cores,
memory: {
total: total_memory,
used: used_memory,
available: (total_memory - used_memory)
},
storage: {
total: ephemeral_storage_size,
used: ephemeral_storage_used,
available: ephemeral_storage_avail
},
uptime: uptime,
version: software_version
}'
2. Add Node to Cluster
# Prepare new node (run on new node)
curl -k https://localhost:9443/v1/bootstrap/join_cluster \
-H "Content-Type: application/json" \
-d '{
"action": "join_cluster",
"cluster": {
"nodes": ["10.0.1.10:9443"],
"username": "admin@cluster.local",
"password": "admin-password"
}
}'
# Verify node joined
redisctl enterprise node list -o table
3. Remove Node from Cluster
# First, ensure no databases are on this node
redisctl enterprise database list -o json -q '[?node_uid==`3`]'
# Remove node
redisctl enterprise node delete --node-id 3 --wait
4. Update Node Configuration
redisctl enterprise node update \
--node-id 1 \
--data '{
"max_listeners": 100,
"max_redis_servers": 50
}'
Node Maintenance
Enable Maintenance Mode
# Put node in maintenance mode (no new shards)
redisctl enterprise node update \
--node-id 2 \
--data '{"accept_servers": false}'
# Verify
redisctl enterprise node get --node-id 2 -q 'accept_servers'
Drain Node
Move all shards off a node before maintenance:
#!/bin/bash
NODE_ID=2
# Get all shards on this node
SHARDS=$(redisctl enterprise shard list \
--node $NODE_ID \
-o json \
-q '[].uid')
# Migrate each shard to another node
for shard in $SHARDS; do
echo "Migrating shard $shard..."
redisctl enterprise shard migrate \
--uid $shard \
--target-node 1 \
--force
done
echo "Node $NODE_ID drained"
Check Node Resources
redisctl enterprise node get --node-id 1 -o json -q '{
cpu_idle: cpu_idle,
memory_free_pct: ((total_memory - used_memory) / total_memory * 100 | floor),
disk_free_pct: (ephemeral_storage_avail / ephemeral_storage_size * 100 | floor),
connections: conns,
shards: shard_count
}'
Monitoring Nodes
Node Health Script
#!/bin/bash
# node-health.sh
echo "Node Health Report"
echo "=================="
redisctl enterprise node list -o json | jq -r '
.[] |
"Node \(.uid): \(.status) - CPU: \(.cpu_idle)% idle, " +
"Memory: \((.used_memory / .total_memory * 100 | floor))% used, " +
"Shards: \(.shard_count)"
'
Resource Alerts
# Check for nodes with high resource usage
redisctl enterprise node list -o json -q '
[?
(used_memory / total_memory * 100) > 80 ||
(ephemeral_storage_used / ephemeral_storage_size * 100) > 85 ||
cpu_idle < 20
].{
node: uid,
memory_pct: (used_memory / total_memory * 100 | floor),
disk_pct: (ephemeral_storage_used / ephemeral_storage_size * 100 | floor),
cpu_idle: cpu_idle
}
'
Node Failover
Check Quorum
# Ensure cluster has quorum before operations
redisctl enterprise cluster get -q '{
quorum: quorum_only,
nodes: nodes_count,
required: ((nodes_count / 2 | floor) + 1)
}'
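The majority rule in the query above is easy to spell out as a helper:

```shell
# Majority quorum: more than half the nodes must be reachable
quorum() { echo $(( $1 / 2 + 1 )); }
quorum 3   # 2
quorum 5   # 3
quorum 4   # 3 — a 4-node cluster tolerates one failure, same as 3 nodes,
           # which is why odd node counts are recommended
```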
Handle Failed Node
# Identify failed node
redisctl enterprise node list -q '[?status!=`active`].{id: uid, status: status}'
# Check affected databases
redisctl enterprise database list -o json -q '[?node_uid==`3`].{db: uid, name: name}'
# Trigger failover for affected databases
redisctl enterprise database update \
--database-id 1 \
--data '{"action": "failover"}'
Common Issues
Node Not Responding
# Check node connectivity
curl -k https://node-ip:9443/v1/cluster
# Check from another node
redisctl enterprise node get --node-id 2 -q 'status'
High Memory Usage
# Find databases using most memory on node
redisctl enterprise database list -o json -q '
[?node_uid==`1`] |
sort_by(@, &memory_size) |
reverse(@) |
[].{db: uid, name: name, memory_gb: (memory_size / 1073741824)}
'
Best Practices
- Always maintain quorum - Keep odd number of nodes
- Monitor resources - Set up alerts for CPU, memory, disk
- Regular health checks - Automated monitoring
- Graceful operations - Drain nodes before maintenance
- Plan capacity - Add nodes before reaching limits
Next Steps
- Cluster Health Check - Monitor overall cluster health
- Generate Support Package - Troubleshooting tools
- Database Management - Manage databases
Configure Database Replication
Time: 10-15 minutes
Prerequisites:
- Redis Enterprise cluster with multiple nodes
- redisctl configured with Enterprise credentials
- Database already created
What is Replication?
Replication provides:
- High availability - automatic failover if master fails
- Read scalability - distribute reads across replicas
- Data durability - multiple copies of data
Quick Setup
# Enable replication on existing database
redisctl enterprise database update \
--database-id 1 \
--data '{
"replication": true,
"shards_count": 2
}' \
--wait
Step-by-Step Setup
1. Create Database with Replication
redisctl enterprise database create \
--data '{
"name": "replicated-db",
"memory_size": 1073741824,
"type": "redis",
"port": 12000,
"replication": true,
"shards_count": 1,
"sharding": false
}' \
--wait
2. Verify Replication Status
redisctl enterprise database get \
--database-id 1 \
-o json \
-q '{
name: name,
replication: replication,
shards_count: shards_count,
endpoints: endpoints
}'
3. Check Shard Distribution
redisctl enterprise shard list-by-database \
--bdb-uid 1 \
-o json \
-q '[].{
uid: uid,
role: role,
node: node_uid,
status: status
}'
Expected: One master and one replica shard on different nodes.
Replication Topology
Single Master with Replica
# Default configuration
{
"replication": true,
"shards_count": 1,
"sharding": false
}
# Result: 1 master + 1 replica = 2 total shards
Sharded with Replication
# Clustered database with replication
{
"replication": true,
"shards_count": 3,
"sharding": true
}
# Result: 3 master + 3 replica = 6 total shards
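The shard arithmetic in both configurations follows one rule: with replication enabled, every master shard is paired with exactly one replica. A minimal sketch:

```python
def total_shards(shards_count: int, replication: bool) -> int:
    """Each master shard gains one replica when replication is enabled."""
    return shards_count * 2 if replication else shards_count

print(total_shards(1, True))   # single master + replica → 2
print(total_shards(3, True))   # 3 masters + 3 replicas → 6
```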
Advanced Configuration
Set Replica Count
# Multiple replicas per master
redisctl enterprise database update \
--database-id 1 \
--data '{
"replication": true,
"replica_sources": [
{"replica_source_name": "replica1", "replica_source_type": "replica"},
{"replica_source_name": "replica2", "replica_source_type": "replica"}
]
}' \
--wait
Rack Awareness
Ensure master and replicas are on different racks/zones:
redisctl enterprise database update \
--database-id 1 \
--data '{
"rack_aware": true
}' \
--wait
Monitoring Replication
Check Replication Lag
# Get replication lag for database
redis-cli -h localhost -p 12000 INFO replication
# Or via REST API
redisctl enterprise database get \
--database-id 1 \
-q 'replica_sync[].{
replica: replica_uid,
lag: lag,
status: status
}'
Monitor Sync Status
# Check if replicas are in sync
redisctl enterprise shard list-by-database \
--bdb-uid 1 \
-o json \
-q '[?role==`replica`].{
shard: uid,
status: status,
sync_status: sync_status
}'
Failover Operations
Manual Failover
# Failover specific shard
redisctl enterprise shard failover \
--uid 1:1 \
--force
# Verify new master
redisctl enterprise shard get --uid 1:1 -q 'role'
Automatic Failover
Enabled by default with replication:
# Check failover settings
redisctl enterprise database get \
--database-id 1 \
-q '{
replication: replication,
watchdog_profile: watchdog_profile
}'
Replica Configuration
Read-Only Replicas
# Configure replica as read-only (default)
redisctl enterprise database update \
--database-id 1 \
--data '{
"replica_of": {
"endpoints": ["master-db:12000"],
"readonly": true
}
}' \
--wait
External Replication Source
Replicate from external Redis:
redisctl enterprise database create \
--data '{
"name": "replica-db",
"memory_size": 1073741824,
"type": "redis",
"port": 12001,
"replica_of": {
"endpoints": ["external-redis.example.com:6379"],
"authentication_redis_pass": "source-password"
}
}' \
--wait
Replication Performance
Optimize Replication Speed
# Increase replication buffer to 100MB
redisctl enterprise database update \
--database-id 1 \
--data '{
"repl_backlog_size": 104857600 # 100MB
}' \
--wait
Monitor Replication Traffic
redisctl enterprise database get \
--database-id 1 \
-o json \
-q '{
replication_traffic: repl_traffic,
backlog_size: repl_backlog_size
}'
Common Patterns
High Availability Setup
# Production-ready HA configuration
redisctl enterprise database create \
--data '{
"name": "ha-database",
"memory_size": 10737418240,
"type": "redis",
"port": 12000,
"replication": true,
"shards_count": 3,
"sharding": true,
"rack_aware": true,
"data_persistence": "aof",
"aof_policy": "appendfsync-every-sec"
}' \
--wait
Read Scaling with Replicas
# Application pattern: writes to master, reads from replicas
from redis import Redis
# Connect to master for writes
master = Redis(host='master-endpoint', port=12000)
master.set('key', 'value')
# Connect to replica for reads
replica = Redis(host='replica-endpoint', port=12001)
value = replica.get('key')
Disaster Recovery
Backup Replication Status
# Save replication configuration
redisctl enterprise database get \
--database-id 1 \
-o json > db-replication-config.json
Restore After Failure
# Recreate database with same configuration
redisctl enterprise database create \
--data @db-replication-config.json \
--wait
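The saved GET output mixes settings with server-assigned fields that cannot be sent back on create. A sketch of stripping them before re-submitting; the exact read-only field set is an assumption and depends on the API version:

```python
# Saved GET output (abridged); server-assigned fields mixed with settings
saved = {"name": "replicated-db", "uid": 1, "status": "active", "replication": True}

# Fields assumed server-assigned; verify the set against your API version
READ_ONLY_FIELDS = {"uid", "status", "endpoints", "created_time"}

payload = {k: v for k, v in saved.items() if k not in READ_ONLY_FIELDS}
print(payload)  # → {'name': 'replicated-db', 'replication': True}
```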
Common Issues
Replication Lag Increasing
# Check network between nodes
redisctl enterprise node list -o table
# Check shard placement
redisctl enterprise shard list-by-database --bdb-uid 1 -o table
# Consider adding more replicas or increasing bandwidth
Replica Out of Sync
# Force resync
redisctl enterprise shard failover --uid 1:2 --force
# Check sync status
redisctl enterprise shard get --uid 1:2 -q 'sync_status'
Split Brain Scenario
Prevention:
- Always use an odd number of cluster nodes
- Enable rack awareness
- Monitor node connectivity
Recovery:
# Identify correct master
redisctl enterprise shard list-by-database --bdb-uid 1 \
-q '[?role==`master`]'
# Force failover if needed
redisctl enterprise database update \
--database-id 1 \
--data '{"action": "recover"}' \
--wait
Best Practices
- Always Enable for Production - Replication is critical for HA
- Use Rack Awareness - Distribute across failure domains
- Monitor Replication Lag - Alert on high lag
- Test Failover - Regularly test automatic failover
- Plan Capacity - Replicas consume same resources as master
- Persist Configuration - Backup replication settings
Next Steps
- Cluster Health Check - Monitor replication health
- Node Management - Manage replica placement
- Generate Support Package - Troubleshooting tools
- Create Database - Database configuration basics
Configure Redis ACLs
Time: 10 minutes
Prerequisites:
- Redis Enterprise cluster (v6.0+)
- redisctl configured with Enterprise credentials
- Understanding of Redis ACL syntax
Quick Setup
# Create ACL with read-only access
redisctl enterprise redis-acl create \
--data '{
"name": "readonly",
"acl": "+@read ~*"
}' \
--wait
# Apply to database
redisctl enterprise database update \
--database-id 1 \
--data '{
"redis_acls": [{"name": "readonly"}]
}' \
--wait
Redis ACL Syntax
Command Permissions
+@read # Allow all read commands
+@write # Allow all write commands
+@admin # Allow admin commands
-@dangerous # Deny dangerous commands
+get +set # Allow specific commands
-flushdb # Deny specific command
Key Patterns
~* # All keys
~cache:* # Keys starting with "cache:"
~user:123:* # Specific user keys
~cache:* ~session:* # Union of multiple patterns (exclusions are not supported)
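Key patterns in ACL rules are glob-style. As a rough illustration (not the exact matcher Redis uses internally), Python's fnmatch can preview which keys a pattern would cover:

```python
from fnmatch import fnmatchcase

def pattern_covers(pattern: str, key: str) -> bool:
    # Strip the leading '~' that marks a key pattern in ACL syntax
    return fnmatchcase(key, pattern.lstrip("~"))

print(pattern_covers("~cache:*", "cache:user:42"))   # True
print(pattern_covers("~user:123:*", "user:456:x"))   # False
```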
Creating ACL Rules
Basic ACL Rules
# Read-only access
redisctl enterprise redis-acl create \
--data '{
"name": "readonly",
"acl": "+@read ~*"
}'
# Write to specific keys
redisctl enterprise redis-acl create \
--data '{
"name": "cache-writer",
"acl": "+@write +@read ~cache:*"
}'
# Admin without dangerous commands
redisctl enterprise redis-acl create \
--data '{
"name": "safe-admin",
"acl": "+@all -@dangerous ~*"
}'
Apply ACLs to Database
redisctl enterprise database update \
--database-id 1 \
--data '{
"redis_acls": [
{"name": "readonly", "password": "ReadPass123!"},
{"name": "cache-writer", "password": "WritePass456!"}
]
}' \
--wait
Testing ACLs
# Test readonly user
redis-cli -h localhost -p 12000 \
--user readonly \
--pass ReadPass123! \
GET mykey # Works
redis-cli --user readonly --pass ReadPass123! \
SET mykey value # Fails with NOPERM
# Test cache-writer user
redis-cli --user cache-writer --pass WritePass456! \
SET cache:item value # Works
redis-cli --user cache-writer --pass WritePass456! \
SET other:item value # Fails
Common ACL Patterns
Application Access Tiers
# Level 1: Read-only
redisctl enterprise redis-acl create \
--data '{"name": "app-read", "acl": "+@read +ping ~*"}'
# Level 2: Read + Write cache
redisctl enterprise redis-acl create \
--data '{"name": "app-cache", "acl": "+@read +@write ~cache:* ~session:*"}'
# Level 3: Full access
redisctl enterprise redis-acl create \
--data '{"name": "app-admin", "acl": "+@all -flushdb -flushall ~*"}'
Multi-Tenant Isolation
# Tenant A
redisctl enterprise redis-acl create \
--data '{"name": "tenant-a", "acl": "+@all ~tenant:a:*"}'
# Tenant B
redisctl enterprise redis-acl create \
--data '{"name": "tenant-b", "acl": "+@all ~tenant:b:*"}'
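The isolation above only holds if every key an application writes carries its tenant prefix. A small helper (hypothetical naming) keeps that invariant in one place:

```python
def tenant_key(tenant_id: str, key: str) -> str:
    """Build a key inside the namespace granted by the tenant's ACL (~tenant:<id>:*)."""
    return f"tenant:{tenant_id}:{key}"

print(tenant_key("a", "cart:42"))  # → tenant:a:cart:42
```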
Managing ACLs
List ACLs
redisctl enterprise redis-acl list -o table
Update ACL
redisctl enterprise redis-acl update \
--acl-id 123 \
--data '{
"name": "readonly",
"acl": "+@read +@connection ~*"
}'
Delete ACL
redisctl enterprise redis-acl delete --acl-id 123
Best Practices
- Principle of Least Privilege - Grant minimum required access
- Use Key Prefixes - Design schema for ACL isolation
- Deny Dangerous Commands - Always exclude FLUSHDB, KEYS, etc.
- Strong Passwords - Use secure passwords for each ACL
- Test Thoroughly - Verify ACLs before production use
- Document ACLs - Maintain clear documentation of each rule
Next Steps
- Create Database - Database setup
- Configure Replication - High availability
- Cluster Health Check - Monitoring
Redis Cloud Overview
Redis Cloud is a fully managed database-as-a-service offering. redisctl
provides comprehensive access to the Redis Cloud REST API.
Authentication
Redis Cloud uses API key authentication:
# Set credentials
export REDIS_CLOUD_API_KEY="your-account-key"
export REDIS_CLOUD_API_SECRET="your-secret-key"
# Test connection
redisctl api cloud get /
Command Structure
Redis Cloud commands follow this pattern:
redisctl cloud <resource> <action> [options]
Resources include:
- subscription - Manage subscriptions
- database - Manage databases
- account - Account information
- user - User management
- acl - Access control lists
- backup - Backup operations
Common Operations
# List all subscriptions
redisctl cloud subscription list
# Get subscription details
redisctl cloud subscription get 12345
# List databases in a subscription
redisctl cloud database list --subscription-id 12345
# Get database details
redisctl cloud database get --subscription-id 12345 --database-id 67890
Next Steps
- Human-Friendly Commands - High-level command reference
- Raw API Access - Direct API endpoint access
- Examples - Real-world usage examples
Subscriptions
Manage Redis Cloud subscriptions - the containers for your databases and configuration.
Commands
List Subscriptions
List all subscriptions in your account.
redisctl cloud subscription list [OPTIONS]
Options:
- -o, --output <FORMAT> - Output format: json, yaml, or table (default: auto)
- -q, --query <JMESPATH> - JMESPath query to filter output
Examples:
# List all subscriptions
redisctl cloud subscription list
# Table format with specific fields
redisctl cloud subscription list -o table
# Get only subscription IDs and names
redisctl cloud subscription list -q "[].{id: id, name: name}"
# Filter by status
redisctl cloud subscription list -q "[?status=='active']"
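For reference, the `[?status=='active']` JMESPath filter above is equivalent to a simple list comprehension over the JSON the API returns (sample data for illustration):

```python
subscriptions = [
    {"id": 1, "name": "prod", "status": "active"},
    {"id": 2, "name": "staging", "status": "pending"},
]

# Same selection the JMESPath filter performs
active = [s for s in subscriptions if s["status"] == "active"]
print([s["name"] for s in active])  # → ['prod']
```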
Get Subscription
Get details of a specific subscription.
redisctl cloud subscription get <ID> [OPTIONS]
Arguments:
- <ID> - Subscription ID
Options:
- -o, --output <FORMAT> - Output format: json, yaml, or table
- -q, --query <JMESPATH> - JMESPath query to filter output
Examples:
# Get subscription details
redisctl cloud subscription get 123456
# Get specific fields in YAML
redisctl cloud subscription get 123456 -o yaml -q "{name: name, status: status, databases: numberOfDatabases}"
Create Subscription
Create a new subscription.
redisctl cloud subscription create --data <JSON> [OPTIONS]
Options:
- --data <JSON> - JSON payload (inline or @file.json)
- --wait - Wait for operation to complete
- --wait-timeout <SECONDS> - Maximum time to wait (default: 600)
- --wait-interval <SECONDS> - Polling interval (default: 10)
Example Payload:
{
"name": "Production Subscription",
"cloudProvider": {
"provider": "AWS",
"regions": [
{
"region": "us-east-1",
"multipleAvailabilityZones": true,
"networking": {
"deploymentCIDR": "10.0.0.0/24"
}
}
]
},
"databases": [
{
"name": "cache-db",
"memoryLimitInGb": 1,
"throughputMeasurement": {
"by": "operations-per-second",
"value": 10000
}
}
]
}
Examples:
# Create subscription from file
redisctl cloud subscription create --data @subscription.json
# Create and wait for completion
redisctl cloud subscription create --data @subscription.json --wait
# Create with inline JSON
redisctl cloud subscription create --data '{
"name": "Test Subscription",
"cloudProvider": {"provider": "AWS", "regions": [{"region": "us-east-1"}]}
}'
Update Subscription
Update an existing subscription.
redisctl cloud subscription update <ID> --data <JSON> [OPTIONS]
Arguments:
- <ID> - Subscription ID
Options:
- --data <JSON> - JSON payload with updates
- --wait - Wait for operation to complete
- --wait-timeout <SECONDS> - Maximum time to wait
- --wait-interval <SECONDS> - Polling interval
Examples:
# Update subscription name
redisctl cloud subscription update 123456 --data '{"name": "New Name"}'
# Update payment method
redisctl cloud subscription update 123456 --data '{"paymentMethodId": 8840}' --wait
Delete Subscription
Delete a subscription (requires all databases to be deleted first).
redisctl cloud subscription delete <ID> [OPTIONS]
Arguments:
- <ID> - Subscription ID
Options:
- --wait - Wait for deletion to complete
- --wait-timeout <SECONDS> - Maximum time to wait
- --wait-interval <SECONDS> - Polling interval
Examples:
# Delete subscription
redisctl cloud subscription delete 123456
# Delete and wait for completion
redisctl cloud subscription delete 123456 --wait
Fixed Subscriptions
Fixed subscriptions offer reserved capacity with predictable pricing.
List Fixed Subscriptions
redisctl cloud fixed-subscription list
Get Fixed Subscription
redisctl cloud fixed-subscription get <ID>
Create Fixed Subscription
redisctl cloud fixed-subscription create --data @fixed-subscription.json --wait
Example Payload:
{
"name": "Fixed Production",
"plan": {
"provider": "AWS",
"region": "us-east-1",
"size": "r5.xlarge"
},
"quantity": 2
}
Related Commands
- Databases - Manage databases within subscriptions
- Network Connectivity - Configure VPC peering and private endpoints
- Provider Accounts - Manage cloud provider integrations
Common Patterns
List All Databases Across Subscriptions
# Get all subscription IDs
SUBS=$(redisctl cloud subscription list -q "[].id" | jq -r '.[]')
# List databases for each subscription
for sub in $SUBS; do
echo "Subscription $sub:"
redisctl cloud database list --subscription-id $sub
done
Monitor Subscription Usage
# Get memory usage across all databases
redisctl cloud subscription get 123456 -q "databases[].{name: name, memory: memoryLimitInGb}" | \
jq -r '.[] | "\(.name): \(.memory)GB"'
Troubleshooting
Common Issues
"Subscription not found"
- Verify the subscription ID is correct
- Check that your API key has access to the subscription
"Cannot delete subscription with active databases"
- Delete all databases first: redisctl cloud database list --subscription-id <ID>
- Then delete each database before deleting the subscription
"Operation timeout"
- Increase the timeout: --wait-timeout 1200
- Check operation status: redisctl cloud task get <TASK_ID>
API Reference
These commands use the following REST endpoints:
- GET /v1/subscriptions - List subscriptions
- GET /v1/subscriptions/{id} - Get subscription
- POST /v1/subscriptions - Create subscription
- PUT /v1/subscriptions/{id} - Update subscription
- DELETE /v1/subscriptions/{id} - Delete subscription
For direct API access, use: redisctl api cloud get /subscriptions
Databases
Manage Redis Cloud databases within subscriptions.
Commands
List Databases
List all databases in a subscription.
redisctl cloud database list --subscription-id <ID> [OPTIONS]
Required Options:
- --subscription-id <ID> - The subscription ID
Options:
- -o, --output <FORMAT> - Output format: json, yaml, or table
- -q, --query <JMESPATH> - JMESPath query to filter output
Examples:
# List all databases in subscription
redisctl cloud database list --subscription-id 123456
# Show specific fields in table format
redisctl cloud database list --subscription-id 123456 -o table
# Filter active databases only
redisctl cloud database list --subscription-id 123456 -q "[?status=='active']"
# Get database names and endpoints
redisctl cloud database list --subscription-id 123456 \
-q "[].{name: name, endpoint: publicEndpoint}"
Get Database
Get details of a specific database.
redisctl cloud database get --subscription-id <SUB_ID> --database-id <DB_ID> [OPTIONS]
Required Options:
- --subscription-id <SUB_ID> - The subscription ID
- --database-id <DB_ID> - The database ID
Examples:
# Get database details
redisctl cloud database get --subscription-id 123456 --database-id 789
# Get connection details
redisctl cloud database get --subscription-id 123456 --database-id 789 \
-q "{endpoint: publicEndpoint, port: port, password: password}"
Create Database
Create a new database in a subscription.
redisctl cloud database create --subscription-id <ID> --data <JSON> [OPTIONS]
Required Options:
- --subscription-id <ID> - The subscription ID
- --data <JSON> - Database configuration (inline or @file.json)
Async Options:
- --wait - Wait for database creation to complete
- --wait-timeout <SECONDS> - Maximum time to wait (default: 600)
- --wait-interval <SECONDS> - Polling interval (default: 10)
Example Payload:
{
"name": "production-cache",
"memoryLimitInGb": 4,
"protocol": "redis",
"port": 10000,
"throughputMeasurement": {
"by": "operations-per-second",
"value": 25000
},
"replication": true,
"dataPersistence": "aof-every-write",
"dataEvictionPolicy": "allkeys-lru",
"modules": [
{
"name": "RedisJSON"
},
{
"name": "RediSearch"
}
],
"alerts": [
{
"name": "dataset-size",
"value": 80
}
]
}
Examples:
# Create database from file
redisctl cloud database create --subscription-id 123456 --data @database.json
# Create and wait for completion
redisctl cloud database create --subscription-id 123456 --data @database.json --wait
# Create minimal database
redisctl cloud database create --subscription-id 123456 --data '{
"name": "test-db",
"memoryLimitInGb": 1
}'
Update Database
Update database configuration.
redisctl cloud database update --subscription-id <SUB_ID> --database-id <DB_ID> --data <JSON> [OPTIONS]
Required Options:
- --subscription-id <SUB_ID> - The subscription ID
- --database-id <DB_ID> - The database ID
- --data <JSON> - Updates to apply
Async Options:
- --wait - Wait for update to complete
- --wait-timeout <SECONDS> - Maximum time to wait
- --wait-interval <SECONDS> - Polling interval
Examples:
# Increase memory limit
redisctl cloud database update \
--subscription-id 123456 \
--database-id 789 \
--data '{"memoryLimitInGb": 8}' \
--wait
# Update eviction policy
redisctl cloud database update \
--subscription-id 123456 \
--database-id 789 \
--data '{"dataEvictionPolicy": "volatile-lru"}'
# Add modules
redisctl cloud database update \
--subscription-id 123456 \
--database-id 789 \
--data '{"modules": [{"name": "RedisTimeSeries"}]}'
Delete Database
Delete a database.
redisctl cloud database delete --subscription-id <SUB_ID> --database-id <DB_ID> [OPTIONS]
Required Options:
- --subscription-id <SUB_ID> - The subscription ID
- --database-id <DB_ID> - The database ID
Async Options:
- --wait - Wait for deletion to complete
Examples:
# Delete database
redisctl cloud database delete --subscription-id 123456 --database-id 789
# Delete and wait for completion
redisctl cloud database delete --subscription-id 123456 --database-id 789 --wait
Database Operations
Backup Database
Create a manual backup.
redisctl cloud database backup --subscription-id <SUB_ID> --database-id <DB_ID> [OPTIONS]
Examples:
# Create backup
redisctl cloud database backup --subscription-id 123456 --database-id 789
# Create and wait
redisctl cloud database backup --subscription-id 123456 --database-id 789 --wait
Import Data
Import data from a backup.
redisctl cloud database import --subscription-id <SUB_ID> --database-id <DB_ID> --data <JSON> [OPTIONS]
Example Payload:
{
"sourceType": "s3",
"importFromUri": ["s3://bucket/backup.rdb"],
"s3Credentials": {
"accessKey": "AWS_ACCESS_KEY",
"secretKey": "AWS_SECRET_KEY"
}
}
Export Data
Export database data.
redisctl cloud database export --subscription-id <SUB_ID> --database-id <DB_ID> --data <JSON> [OPTIONS]
Fixed Databases
Fixed databases run on reserved infrastructure.
List Fixed Databases
redisctl cloud fixed-database list --subscription-id <ID>
Create Fixed Database
redisctl cloud fixed-database create --subscription-id <ID> --data @fixed-db.json --wait
Active-Active Databases
Multi-region Active-Active (CRDB) databases.
Create Active-Active Database
redisctl cloud database create-active-active --subscription-id <ID> --data @crdb.json --wait
Example Payload:
{
"name": "global-cache",
"memoryLimitInGb": 10,
"regions": [
{
"region": "us-east-1",
"localThroughputMeasurement": {
"by": "operations-per-second",
"value": 10000
}
},
{
"region": "eu-west-1",
"localThroughputMeasurement": {
"by": "operations-per-second",
"value": 10000
}
}
]
}
Common Patterns
Get Database Connection String
# Get Redis URI
DB=$(redisctl cloud database get --subscription-id 123456 --database-id 789)
echo "redis://:$(echo $DB | jq -r .password)@$(echo $DB | jq -r .publicEndpoint)"
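The shell pipeline above splices the password and endpoint fields into a redis:// URI. The same assembly in Python, with field names assumed from the example output:

```python
import json

# Abridged database GET payload; field names assumed from the API response
payload = json.loads('{"password": "s3cret", "publicEndpoint": "redis-123.example.com:12000"}')

uri = f"redis://:{payload['password']}@{payload['publicEndpoint']}"
print(uri)  # → redis://:s3cret@redis-123.example.com:12000
```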
Monitor Database Metrics
# Check memory usage
redisctl cloud database get --subscription-id 123456 --database-id 789 \
-q "{used: usedMemoryInMB, limit: memoryLimitInGB}" | \
jq -r '"Memory: \(.used)MB / \(.limit)GB"'
Bulk Operations
# Update all databases in subscription
for db in $(redisctl cloud database list --subscription-id 123456 -q "[].databaseId" | jq -r '.[]'); do
echo "Updating database $db"
redisctl cloud database update \
--subscription-id 123456 \
--database-id $db \
--data '{"alerts": [{"name": "dataset-size", "value": 90}]}'
done
Troubleshooting
Common Issues
"Database creation failed"
- Check subscription has available resources
- Verify region supports requested features
- Check module compatibility
"Cannot connect to database"
- Verify security group/firewall rules
- Check if database is active: status == 'active'
- Ensure correct endpoint and port
"Module not available"
- Some modules require specific Redis versions
- Check module compatibility in subscription settings
Related Commands
- Subscriptions - Manage parent subscriptions
- ACL - Configure access control
- Connectivity - Set up VPC peering
API Reference
These commands use the following REST endpoints:
- GET /v1/subscriptions/{subId}/databases - List databases
- GET /v1/subscriptions/{subId}/databases/{dbId} - Get database
- POST /v1/subscriptions/{subId}/databases - Create database
- PUT /v1/subscriptions/{subId}/databases/{dbId} - Update database
- DELETE /v1/subscriptions/{subId}/databases/{dbId} - Delete database
For direct API access: redisctl api cloud get /subscriptions/123456/databases
Network Connectivity
Configure VPC peering, Private Service Connect, and Transit Gateway connections for secure network access.
VPC Peering
List VPC Peerings
redisctl cloud connectivity list-vpc --subscription-id <ID>
Create VPC Peering
redisctl cloud connectivity create-vpc --subscription-id <ID> --data @vpc.json --wait
Example Payload:
{
"region": "us-east-1",
"awsAccountId": "123456789012",
"vpcId": "vpc-0abc123def456",
"vpcCidr": "10.0.0.0/16"
}
Delete VPC Peering
redisctl cloud connectivity delete-vpc --subscription-id <ID> --peering-id <PEERING_ID> --wait
Private Service Connect (GCP)
Create PSC
redisctl cloud connectivity create-psc --subscription-id <ID> --data @psc.json --wait
Transit Gateway (AWS)
Create Transit Gateway Attachment
redisctl cloud connectivity create-tgw --subscription-id <ID> --data @tgw.json --wait
Common Patterns
Verify Connectivity
# List all connections
redisctl cloud connectivity list-vpc --subscription-id 123456
# Check connection status
redisctl cloud connectivity get-vpc --subscription-id 123456 --peering-id abc123 \
-q "status"
Troubleshooting
- Ensure CIDR blocks don't overlap
- Verify cloud provider permissions
- Check security group rules allow Redis ports
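The first check, non-overlapping CIDR blocks, can be done up front with Python's stdlib ipaddress module before requesting a peering:

```python
import ipaddress

def cidrs_overlap(a: str, b: str) -> bool:
    """True if the two CIDR blocks share any addresses."""
    return ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b))

print(cidrs_overlap("10.0.0.0/16", "10.0.1.0/24"))  # True: the /24 sits inside the /16
print(cidrs_overlap("10.0.0.0/24", "10.1.0.0/24"))  # False: disjoint ranges
```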
Cloud Workflows
Workflows orchestrate multiple API calls to accomplish common tasks. This guide covers the available Cloud workflows.
Subscription Setup Workflow
The subscription-setup workflow creates a complete Redis Cloud subscription with optional database configuration.
Basic Usage
# Create a subscription with default settings
redisctl cloud workflow subscription-setup \
--name "my-subscription" \
--wait
# Create with custom configuration
redisctl cloud workflow subscription-setup \
--name "production" \
--provider AWS \
--region us-west-2 \
--database-name "main-db" \
--database-memory-gb 2 \
--wait
Options
- --name: Subscription name (default: redisctl-test)
- --provider: Cloud provider - AWS, GCP, or Azure (default: AWS)
- --region: Cloud region (default: us-east-1)
- --payment-method-id: Payment method ID (auto-detected if not specified)
- --database-name: Database name (default: default-db)
- --database-memory-gb: Database memory in GB (default: 1)
- --database-throughput: Operations per second (default: 1000)
- --modules: Comma-separated list of modules (e.g., "RedisJSON,RediSearch")
- --high-availability: Enable HA replication
- --data-persistence: Enable data persistence (default: true)
- --skip-database: Only create subscription without database
- --wait: Wait for operations to complete (default: true)
- --wait-timeout: Maximum wait time in seconds (default: 600)
- --wait-interval: Polling interval in seconds (default: 10)
- --dry-run: Preview what would be created without executing
What It Does
- Validates payment method: Looks up your account's payment method
- Creates subscription: Provisions infrastructure in the specified cloud/region
- Creates database: Sets up a Redis database with your configuration
- Waits for completion: Monitors async operations until resources are ready
- Returns connection details: Provides endpoints and credentials
Output Formats
# Human-readable output (default)
redisctl cloud workflow subscription-setup --name "test"
# JSON output for automation
redisctl cloud workflow subscription-setup --name "test" --output json
# YAML output
redisctl cloud workflow subscription-setup --name "test" --output yaml
Example JSON Output
{
"success": true,
"message": "Subscription setup completed successfully",
"outputs": {
"subscription_id": 12345,
"subscription_name": "test",
"database_id": 67890,
"database_name": "default-db",
"connection_string": "redis://redis-12345.c1.us-east-1.ec2.cloud.redislabs.com:12345",
"provider": "AWS",
"region": "us-east-1",
"status": "active"
}
}
Use Cases
Quick Development Environment
redisctl cloud workflow subscription-setup \
--name "dev-env" \
--database-memory-gb 0.1 \
--wait
Production Setup with Modules
redisctl cloud workflow subscription-setup \
--name "production" \
--database-memory-gb 10 \
--modules "RedisJSON,RediSearch,RedisTimeSeries" \
--high-availability \
--wait
Multi-Region Preparation (subscription only)
redisctl cloud workflow subscription-setup \
--name "global-app" \
--region eu-west-1 \
--skip-database \
--wait
Future Workflows
Additional workflows are planned:
- active-active-setup: Multi-region Active-Active configuration
- database-migration: Migrate databases between subscriptions
- acl-setup: Configure comprehensive ACL security
See issue tracker for workflow development status.
Redis Cloud Examples
Real-world examples of managing Redis Cloud resources.
Database Lifecycle
Create a Production Database
# 1. List available subscriptions
redisctl cloud subscription list -o table
# 2. Create the database
redisctl api cloud post /subscriptions/12345/databases \
--data '{
"name": "production-cache",
"memoryLimitInGb": 5,
"throughputMeasurement": {
"by": "operations-per-second",
"value": 10000
},
"modules": ["RedisJSON", "RediSearch"],
"replication": true,
"dataPersistence": "aof-every-1-second",
"alerts": [
{
"name": "high-memory",
"value": 80
}
]
}'
# 3. Check creation status
redisctl api cloud get /tasks/<task-id>
# 4. Get connection details
redisctl cloud database get \
--subscription-id 12345 \
--database-id 67890 \
-q '{endpoint: publicEndpoint, password: password}'
Backup and Restore
# Create manual backup
redisctl cloud backup create \
--subscription-id 12345 \
--database-id 67890
# List available backups
redisctl cloud backup list \
--subscription-id 12345 \
--database-id 67890 \
-o table
# Restore from backup
redisctl cloud backup restore \
--subscription-id 12345 \
--database-id 67890 \
--backup-id <backup-id>
User Management
Set Up Team Access
# Create team members
redisctl cloud user create \
--email "dev@company.com" \
--role "db-member"
redisctl cloud user create \
--email "ops@company.com" \
--role "db-viewer"
# List all users
redisctl cloud user list -q "[].{email:email,role:role}" -o table
# Update user role
redisctl cloud user update <user-id> --role "admin"
Monitoring and Metrics
Get Database Metrics
# Get current stats
redisctl api cloud get /subscriptions/12345/databases/67890/metrics \
-q '{
cpu: cpuUsagePercentage,
memory: memoryUsagePercentage,
connections: connectionsCount,
ops: operationsPerSecond
}'
# Monitor database status
watch -n 5 'redisctl cloud database get \
--subscription-id 12345 \
--database-id 67890 \
-q "{status:status,memory:memoryUsagePercentage}"'
Migration Scenarios
Export Database List
# Export all databases to JSON
redisctl cloud subscription list -q "[].id" | \
while read sub_id; do
redisctl cloud database list --subscription-id $sub_id
done > all-databases.json
# Create summary report
redisctl cloud subscription list | \
jq -r '.[] |
"\(.name): \(.numberOfDatabases) databases, \(.status)"'
Bulk Operations
# Scale all databases in subscription
for db_id in $(redisctl cloud database list --subscription-id 12345 -q "[].id" | jq -r '.[]'); do
redisctl cloud database update \
--subscription-id 12345 \
--database-id $db_id \
--memory-limit 2048
done
# Add module to multiple databases
redisctl cloud database list --subscription-id 12345 -q "[].id" | \
while read db_id; do
redisctl api cloud patch /subscriptions/12345/databases/$db_id \
--data '{"modules": ["RedisJSON", "RediSearch", "RedisTimeSeries"]}'
done
Cost Management
Analyze Costs
# Get subscription costs
redisctl api cloud get /subscriptions/12345/pricing \
-q '{
total: totalPrice,
databases: databases[].{
name: name,
cost: price
}
}' -o yaml
# Find most expensive databases
redisctl cloud database list --subscription-id 12345 \
-q "reverse(sort_by([].{name:name,memory:memoryLimitInGb}, &memory))[:5]" \
-o table
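The JMESPath sort above mirrors this Python pattern: sort descending on the memory field and keep the first five entries (sample data for illustration):

```python
databases = [
    {"name": "cache", "memory": 1},
    {"name": "search", "memory": 8},
    {"name": "sessions", "memory": 4},
]

# Descending sort on memory, then truncate to the top five
top5 = sorted(databases, key=lambda d: d["memory"], reverse=True)[:5]
print([d["name"] for d in top5])  # → ['search', 'sessions', 'cache']
```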
Security
Set Up ACLs
# Create read-only ACL
redisctl cloud acl create \
--subscription-id 12345 \
--name "readonly-access" \
--rule "+get +mget +exists +scan +xread -flushdb -flushall -keys"
# Create write-limited ACL
redisctl cloud acl create \
--subscription-id 12345 \
--name "app-access" \
--rule "+@all -@dangerous -flushdb -flushall -keys -config"
# Apply ACL to database
redisctl api cloud patch /subscriptions/12345/databases/67890 \
--data '{"redisAclId": "<acl-id>"}'
Automation Scripts
Health Check Script
#!/bin/bash
# Check all database health
redisctl cloud subscription list -q "[].id" | while read sub_id; do
echo "Checking subscription $sub_id..."
redisctl cloud database list --subscription-id $sub_id \
-q "[?status!='active'].{name:name,status:status}" | \
jq -r '.[] | " WARNING: \(.name) is \(.status)"'
done
Daily Report
#!/bin/bash
# Generate daily report
echo "Redis Cloud Daily Report - $(date)"
echo "========================"
# Account summary
echo -e "\nAccount:"
redisctl cloud account info -q '{owner:owner,id:id}'
# Subscription summary
echo -e "\nSubscriptions:"
redisctl cloud subscription list \
-q "[].{name:name,databases:numberOfDatabases,status:status}" \
-o table
# Database summary
echo -e "\nActive Databases:"
for sub_id in $(redisctl cloud subscription list -q "[].id" | jq -r '.[]'); do
redisctl cloud database list --subscription-id $sub_id \
-q "[?status=='active'] | length(@)"
done | awk '{sum+=$1} END {print sum}'
# Alert summary
echo -e "\nRecent Alerts:"
redisctl api cloud get "/logs?type=alert&limit=10" \
-q "[:5].{time:timestamp,message:message}" \
-o table
Human-Friendly Commands
These commands provide a typed, user-friendly interface to common Redis Cloud operations.
Subscription Management
# List all subscriptions
redisctl cloud subscription list
# Get subscription details
redisctl cloud subscription get <subscription-id>
# Create new subscription
redisctl cloud subscription create \
--name "Production" \
--payment-method-id <id> \
--cloud-provider "AWS" \
--region "us-east-1"
Database Management
# List databases
redisctl cloud database list --subscription-id <id>
# Get database details
redisctl cloud database get \
--subscription-id <id> \
--database-id <id>
# Create database
redisctl cloud database create \
--subscription-id <id> \
--name "cache-db" \
--memory-limit 1024 \
--modules "search,json"
# Update database
redisctl cloud database update \
--subscription-id <id> \
--database-id <id> \
--memory-limit 2048
User Management
# List users
redisctl cloud user list
# Create user
redisctl cloud user create \
--email "user@example.com" \
--role "viewer"
# Update user role
redisctl cloud user update <user-id> \
--role "admin"
ACL Management
# List ACL rules
redisctl cloud acl list --subscription-id <id>
# Create ACL rule
redisctl cloud acl create \
--subscription-id <id> \
--name "readonly" \
--rule "+get +mget -flushdb"
Backup Operations
# List backups
redisctl cloud backup list \
--subscription-id <id> \
--database-id <id>
# Create backup
redisctl cloud backup create \
--subscription-id <id> \
--database-id <id>
# Restore from backup
redisctl cloud backup restore \
--subscription-id <id> \
--database-id <id> \
--backup-id <id>
Account Information
# Get account details
redisctl cloud account info
# Get payment methods
redisctl cloud account payment-methods
# Get cloud accounts
redisctl cloud cloud-account list
Output Formatting
All commands support output formatting:
# Table format
redisctl cloud subscription list -o table
# JSON (default)
redisctl cloud subscription list -o json
# YAML
redisctl cloud subscription list -o yaml
# Filtered with JMESPath
redisctl cloud database list \
--subscription-id <id> \
-q "[?status=='active'].{name:name,memory:memoryLimitInGb}"
Redis Cloud Commands
Redis Cloud commands are organized into three layers:
1. Human-Friendly Commands
High-level commands with typed parameters and structured output.
redisctl cloud <resource> <action> [options]
See Human-Friendly Commands for the complete reference.
2. Raw API Access
Direct access to any REST endpoint when you need full control.
redisctl api cloud <method> <path> [options]
See Raw API Access for details.
3. Workflows (Coming Soon)
Multi-step orchestrated operations for complex tasks:
- Database migration workflows
- Backup and restore procedures
- Cluster setup automation
Quick Reference
Most Common Commands
# Subscriptions
redisctl cloud subscription list
redisctl cloud subscription get <id>
# Databases
redisctl cloud database list --subscription-id <id>
redisctl cloud database get --subscription-id <id> --database-id <id>
# Direct API
redisctl api cloud get /subscriptions
redisctl api cloud post /subscriptions/<id>/databases --data @database.json
Redis Enterprise Overview
Redis Enterprise is a self-managed database platform that can be deployed on-premises or in your cloud account. redisctl
provides comprehensive access to the Redis Enterprise REST API.
Key Features
🎯 Support Package Management
Generate and download support packages for troubleshooting with Redis support:
# Download support package for entire cluster
redisctl enterprise support-package get
# Download for specific database
redisctl enterprise support-package get --database-uid 1
# Download for specific node
redisctl enterprise support-package get --node-uid 2
# Download and extract locally
redisctl enterprise support-package get --extract
See Support Package documentation for detailed usage.
Authentication
Redis Enterprise uses basic authentication:
# Set credentials
export REDIS_ENTERPRISE_URL="https://cluster.example.com:9443"
export REDIS_ENTERPRISE_USER="admin@cluster.local"
export REDIS_ENTERPRISE_PASSWORD="your-password"
export REDIS_ENTERPRISE_INSECURE="true" # For self-signed certificates
# Test connection
redisctl api enterprise get /v1/cluster
Command Structure
Redis Enterprise commands follow this pattern:
redisctl enterprise <resource> <action> [options]
Resources include:
- cluster - Cluster management
- database - Database operations
- node - Node management
- user - User management
- role - Role-based access control
- alert - Alert configuration
- workflow - Multi-step automated operations
Common Operations
# Get cluster information
redisctl enterprise cluster info
# List all databases
redisctl enterprise database list
# Get database details
redisctl enterprise database get 1
# List nodes
redisctl enterprise node list
# Initialize a new cluster (workflow)
redisctl enterprise workflow init-cluster \
--username "admin@cluster.local" \
--password "SecurePassword"
Next Steps
- Human-Friendly Commands - High-level command reference
- Workflows - Automated multi-step operations
- Raw API Access - Direct API endpoint access
- Examples - Real-world usage examples
Cluster
Manage Redis Enterprise cluster configuration and operations.
Commands
Get Cluster Info
Get current cluster configuration and status.
redisctl enterprise cluster info [OPTIONS]
Options:
- -o, --output <FORMAT> - Output format: json, yaml, or table
- -q, --query <JMESPATH> - JMESPath query to filter output
Examples:
# Get full cluster information
redisctl enterprise cluster info
# Get specific fields in table format
redisctl enterprise cluster info -o table
# Get cluster name and version
redisctl enterprise cluster info -q "{name: name, version: version}"
# Check cluster health
redisctl enterprise cluster info -q "alert_settings"
Update Cluster
Update cluster configuration.
redisctl enterprise cluster update --data <JSON> [OPTIONS]
Options:
- --data <JSON> - Configuration updates (inline or @file.json)
Examples:
# Update cluster name
redisctl enterprise cluster update --data '{"name": "production-cluster"}'
# Update alert settings
redisctl enterprise cluster update --data '{
"alert_settings": {
"cluster_certs_about_to_expire": {"enabled": true, "threshold": 30}
}
}'
# Update from file
redisctl enterprise cluster update --data @cluster-config.json
Get Cluster Policy
Get cluster-wide policies.
redisctl enterprise cluster get-policy [OPTIONS]
Examples:
# Get all policies
redisctl enterprise cluster get-policy
# Get specific policy in YAML
redisctl enterprise cluster get-policy -o yaml -q "rack_aware"
Update Cluster Policy
Update cluster policies.
redisctl enterprise cluster update-policy --data <JSON> [OPTIONS]
Examples:
# Enable rack awareness
redisctl enterprise cluster update-policy --data '{"rack_aware": true}'
# Update multiple policies
redisctl enterprise cluster update-policy --data '{
"rack_aware": true,
"default_non_sharded_proxy_policy": "all-master-shards"
}'
Certificate Management
List Certificates
List cluster certificates.
redisctl enterprise cluster list-certificates [OPTIONS]
Examples:
# List all certificates
redisctl enterprise cluster list-certificates
# Check certificate expiration
redisctl enterprise cluster list-certificates -q "[].{name: name, expires: expiry_date}"
Update Certificate
Update cluster certificate.
redisctl enterprise cluster update-certificate --data <JSON> [OPTIONS]
Example Payload:
{
"name": "api-cert",
"key": "-----BEGIN RSA PRIVATE KEY-----\n...",
"certificate": "-----BEGIN CERTIFICATE-----\n..."
}
Examples:
# Update API certificate
redisctl enterprise cluster update-certificate --data @new-cert.json
# Update proxy certificate
redisctl enterprise cluster update-certificate --data '{
"name": "proxy-cert",
"key": "...",
"certificate": "..."
}'
Rotate Certificates
Rotate cluster certificates.
redisctl enterprise cluster rotate-certificates [OPTIONS]
Examples:
# Rotate all certificates
redisctl enterprise cluster rotate-certificates
# Rotate with custom validity period
redisctl enterprise cluster rotate-certificates --days 365
Cluster Operations
Check Cluster Status
Get detailed cluster status.
redisctl enterprise cluster status [OPTIONS]
Examples:
# Full status check
redisctl enterprise cluster status
# Check specific components
redisctl enterprise cluster status -q "services"
Get Cluster Stats
Get cluster statistics.
redisctl enterprise cluster stats [OPTIONS]
Options:
- --interval <INTERVAL> - Stats interval (1sec, 1min, 5min, 15min, 1hour, 1day)
Examples:
# Get current stats
redisctl enterprise cluster stats
# Get hourly stats
redisctl enterprise cluster stats --interval 1hour
# Get memory usage
redisctl enterprise cluster stats -q "{used: used_memory, total: total_memory}"
License Management
Get License
redisctl enterprise cluster get-license
Update License
redisctl enterprise cluster update-license --data <JSON>
Example:
# Update license
redisctl enterprise cluster update-license --data '{
"license": "-----BEGIN LICENSE-----\n...\n-----END LICENSE-----"
}'
Module Management
List Modules
List available Redis modules.
redisctl enterprise module list
Upload Module
Upload a new module.
redisctl enterprise module upload --file <PATH>
Examples:
# Upload module
redisctl enterprise module upload --file redisgraph.zip
# Upload and get module ID
MODULE_ID=$(redisctl enterprise module upload --file module.zip -q "uid")
Common Patterns
Health Check Script
#!/bin/bash
# Check cluster health
STATUS=$(redisctl enterprise cluster info -q "status")
if [ "$STATUS" != "active" ]; then
echo "Cluster not healthy: $STATUS"
exit 1
fi
# Check certificate expiration
DAYS_LEFT=$(redisctl enterprise cluster list-certificates \
-q "[0].days_until_expiry")
if [ "$DAYS_LEFT" -lt 30 ]; then
echo "Certificate expiring soon: $DAYS_LEFT days"
fi
Monitor Cluster Resources
# Get resource utilization
redisctl enterprise cluster stats -q "{
cpu: cpu_usage_percent,
memory: memory_usage_percent,
disk: persistent_storage_usage_percent
}" | jq
Backup Cluster Configuration
# Export cluster config
redisctl enterprise cluster info > cluster-backup-$(date +%Y%m%d).json
# Export policies
redisctl enterprise cluster get-policy > policies-backup-$(date +%Y%m%d).json
Troubleshooting
Common Issues
"Cluster not responding"
- Check network connectivity to cluster endpoint
- Verify credentials are correct
- Check if API is enabled on cluster
"Certificate expired"
- Rotate certificates:
redisctl enterprise cluster rotate-certificates
- Or update manually with new certificate
"License expired"
- Update license:
redisctl enterprise cluster update-license --data @license.json
- Contact Redis support for new license
"Policy update failed"
- Some policies require cluster restart
- Check policy compatibility with cluster version
Related Commands
API Reference
These commands use the following REST endpoints:
- GET /v1/cluster - Get cluster info
- PUT /v1/cluster - Update cluster
- GET /v1/cluster/policy - Get policies
- PUT /v1/cluster/policy - Update policies
- GET /v1/cluster/certificates - List certificates
- PUT /v1/cluster/update_cert - Update certificate
- POST /v1/cluster/certificates/rotate - Rotate certificates
For direct API access: redisctl api enterprise get /v1/cluster
Shard Management
Shards are the fundamental units of data storage and processing in Redis Enterprise. Each database is composed of one or more shards distributed across cluster nodes. The shard commands provide tools for monitoring, managing, and optimizing shard placement and performance.
Overview
Shards in Redis Enterprise:
- Primary building blocks of databases
- Distributed across nodes for high availability
- Replicated for data redundancy
- Can be migrated between nodes for load balancing
- Support failover for high availability
Available Commands
List Shards
List all shards in the cluster with optional filtering:
# List all shards
redisctl enterprise shard list
# Filter by node
redisctl enterprise shard list --node 1
# Filter by database
redisctl enterprise shard list --database 1
# Filter by role (master/slave)
redisctl enterprise shard list --role master
# Combine filters
redisctl enterprise shard list --node 1 --role slave
# Output as table
redisctl enterprise shard list -o table
Get Shard Details
Get detailed information about a specific shard:
# Get shard details
redisctl enterprise shard get <shard_uid>
# Get specific fields
redisctl enterprise shard get <shard_uid> -q "role"
redisctl enterprise shard get <shard_uid> -q "{uid: uid, node: node, role: role, status: status}"
List Database Shards
List all shards for a specific database:
# List shards for database
redisctl enterprise shard list-by-database <bdb_uid>
# Get shard distribution
redisctl enterprise shard list-by-database <bdb_uid> -q "[].{shard: uid, node: node, role: role}"
Shard Failover
Perform manual failover of a shard to its replica:
# Failover with confirmation
redisctl enterprise shard failover <shard_uid>
# Failover without confirmation
redisctl enterprise shard failover <shard_uid> --force
Shard Migration
Migrate a shard to a different node:
# Migrate shard to target node
redisctl enterprise shard migrate <shard_uid> --target-node <node_uid>
# Migrate without confirmation
redisctl enterprise shard migrate <shard_uid> --target-node <node_uid> --force
Bulk Operations
Perform failover or migration on multiple shards:
# Bulk failover from JSON file
redisctl enterprise shard bulk-failover --data @failover-list.json
# Bulk migration from stdin
echo '{"shards": [{"uid": 1, "target_node": 2}]}' | \
redisctl enterprise shard bulk-migrate --data -
# Force bulk operations
redisctl enterprise shard bulk-failover --data @failover.json --force
Shard Statistics
Get performance statistics for shards:
# Get stats for specific shard
redisctl enterprise shard stats <shard_uid>
# Get stats for all shards
redisctl enterprise shard stats
# Specify time interval
redisctl enterprise shard stats --interval 1hour
redisctl enterprise shard stats --interval 1day
# Specify time range
redisctl enterprise shard stats \
--stime "2024-01-01T00:00:00Z" \
--etime "2024-01-02T00:00:00Z"
# Get latest stats
redisctl enterprise shard stats-last
# Get latest stats for specific shard
redisctl enterprise shard stats-last <shard_uid> --interval 1sec
Shard Health & Configuration
Check shard health and configuration:
# Check shard health
redisctl enterprise shard health <shard_uid>
# Get shard configuration
redisctl enterprise shard config <shard_uid>
Shard Structure
A typical shard object contains:
{
"uid": 1,
"bdb_uid": 1,
"node": 1,
"role": "master",
"status": "active",
"loading": false,
"memory_usage": 1048576,
"cpu_usage": 0.5,
"connections": 10,
"shard_key_regex": ".*",
"backup": true,
"replication": {
"status": "in-sync",
"lag": 0
},
"persistence": {
"type": "aof",
"status": "active"
}
}
Use Cases
Load Balancing
Redistribute shards across nodes for better resource utilization:
#!/bin/bash
# Balance shards across nodes
# Get shard distribution
for node in 1 2 3; do
COUNT=$(redisctl enterprise shard list --node $node -q "length(@)")
echo "Node $node: $COUNT shards"
done
# Migrate shards from overloaded node
redisctl enterprise shard list --node 1 --role master -q "[].uid" | \
head -2 | while read shard; do
echo "Migrating shard $shard to node 2"
redisctl enterprise shard migrate $shard --target-node 2
done
Failover Management
Handle node maintenance with controlled failovers:
#!/bin/bash
# Failover all master shards on a node before maintenance
NODE_ID=1
# Get all master shards on the node
SHARDS=$(redisctl enterprise shard list --node $NODE_ID --role master -q "[].uid")
# Failover each shard
for shard in $SHARDS; do
echo "Failing over shard $shard"
redisctl enterprise shard failover $shard --force
sleep 5
done
echo "All master shards failed over from node $NODE_ID"
Performance Monitoring
Monitor shard performance metrics:
#!/bin/bash
# Monitor shard performance
# Get top memory-consuming shards
redisctl enterprise shard list -q "[] | sort_by(@, &memory_usage) | reverse(@) | [:5]"
# Check for lagging replicas
redisctl enterprise shard list --role slave -q \
"[?replication.lag > \`100\`].{shard: uid, lag: replication.lag, node: node}"
# Monitor shard connections
while true; do
clear
echo "=== Shard Connection Count ==="
redisctl enterprise shard list -q \
"[].{shard: uid, connections: connections}" -o table
sleep 10
done
Shard Health Check
Comprehensive health check script:
#!/bin/bash
# Check shard health across cluster
echo "=== Shard Health Report ==="
# Check for inactive shards
INACTIVE=$(redisctl enterprise shard list -q "[?status != 'active'].uid")
if [ -n "$INACTIVE" ]; then
echo "WARNING: Inactive shards found: $INACTIVE"
fi
# Check for loading shards
LOADING=$(redisctl enterprise shard list -q "[?loading == \`true\`].uid")
if [ -n "$LOADING" ]; then
echo "INFO: Shards currently loading: $LOADING"
fi
# Check replication lag
HIGH_LAG=$(redisctl enterprise shard list --role slave -q \
"[?replication.lag > \`1000\`].uid")
if [ -n "$HIGH_LAG" ]; then
echo "WARNING: High replication lag on shards: $HIGH_LAG"
fi
# Check memory usage
for shard in $(redisctl enterprise shard list -q "[].uid"); do
MEMORY=$(redisctl enterprise shard get $shard -q "memory_usage")
if [ "$MEMORY" -gt 1073741824 ]; then # 1GB
echo "INFO: Shard $shard using $(($MEMORY / 1048576))MB"
fi
done
Bulk Operation Examples
Bulk Failover Configuration
{
"shards": [1, 2, 3, 4]
}
Bulk Migration Configuration
{
"migrations": [
{
"shard_uid": 1,
"target_node": 2
},
{
"shard_uid": 3,
"target_node": 3
}
]
}
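For larger clusters, the migration payload above can be assembled in shell rather than maintained by hand. A minimal sketch (the `build_migration_payload` helper is illustrative, not part of redisctl; it only produces JSON in the shape shown above):

```shell
# Build a bulk-migration payload matching the shape above from
# "shard:target_node" pairs (helper name is illustrative)
build_migration_payload() {
  local entries="" pair shard target
  for pair in "$@"; do
    shard=${pair%%:*}     # text before the colon: shard UID
    target=${pair##*:}    # text after the colon: target node UID
    entries="${entries}{\"shard_uid\": $shard, \"target_node\": $target},"
  done
  # Strip the trailing comma and wrap in the "migrations" envelope
  printf '{"migrations": [%s]}' "${entries%,}"
}

# Usage:
#   build_migration_payload 1:2 3:3 | redisctl enterprise shard bulk-migrate --data -
```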
Best Practices
- Monitor shard distribution - Ensure even distribution across nodes
- Check replication lag - High lag indicates performance issues
- Plan migrations carefully - Migrations consume resources
- Use controlled failovers - For planned maintenance
- Monitor memory usage - Prevent out-of-memory situations
- Regular health checks - Detect issues early
Troubleshooting
Shard Not Responding
# Check shard status
redisctl enterprise shard get <shard_uid> -q "status"
# Check node status
NODE=$(redisctl enterprise shard get <shard_uid> -q "node")
redisctl enterprise node get $NODE -q "status"
# Force failover if needed
redisctl enterprise shard failover <shard_uid> --force
Migration Stuck
# Check migration status
redisctl enterprise action list --type shard_migration --status running
# Cancel if needed
redisctl enterprise action cancel <action_uid>
# Retry migration
redisctl enterprise shard migrate <shard_uid> --target-node <node_uid>
High Memory Usage
# Identify high-memory shards
redisctl enterprise shard list -q \
"[] | sort_by(@, &memory_usage) | reverse(@) | [:10]"
# Check database configuration
BDB=$(redisctl enterprise shard get <shard_uid> -q "bdb_uid")
redisctl enterprise database get $BDB -q "memory_size"
# Consider adding shards to database
redisctl enterprise database update $BDB --data '{"shards_count": 4}'
Replication Issues
# Check replication status
redisctl enterprise shard list --role slave -q \
"[].{shard: uid, status: replication.status, lag: replication.lag}"
# Force re-sync if needed
redisctl enterprise shard get <shard_uid> -q "replication"
Integration with Other Commands
Shard commands work with:
# Get database shard count
redisctl enterprise database get 1 -q "shards_count"
# Check node shard capacity
redisctl enterprise node get 1 -q "max_shards"
# Monitor shard-related actions
redisctl enterprise action list --type shard_migration
Performance Considerations
- Migration impact: Shard migrations consume network and CPU resources
- Failover time: Typically completes in seconds but depends on data size
- Replication overhead: More replicas mean more network traffic
- Memory overhead: Each shard has memory overhead for metadata
Related Commands
- enterprise database - Database configuration affects shards
- enterprise node - Node capacity and shard placement
- enterprise action - Monitor shard operations
- enterprise stats - Detailed performance metrics
Database Groups
Database groups (BDB groups) allow you to organize and manage related databases as a single unit in Redis Enterprise. This is useful for grouping databases that belong to the same application, tenant, or environment.
Overview
Database groups provide a way to:
- Organize databases logically by application or purpose
- Apply bulk operations to related databases
- Simplify management of multi-database deployments
- Track and monitor groups of databases together
Available Commands
List Database Groups
List all database groups in the cluster:
# List all groups
redisctl enterprise bdb-group list
# List groups with specific fields
redisctl enterprise bdb-group list -q "[].{uid: uid, databases: bdbs}"
# Output as table
redisctl enterprise bdb-group list -o table
Get Group Details
Get detailed information about a specific database group:
# Get group by UID
redisctl enterprise bdb-group get <uid>
# Get specific fields
redisctl enterprise bdb-group get <uid> -q "bdbs"
Create Database Group
Create a new database group:
# Create from JSON file
redisctl enterprise bdb-group create --data @group.json
# Create from stdin
echo '{"name": "web-app-group"}' | redisctl enterprise bdb-group create --data -
# Create with initial databases
redisctl enterprise bdb-group create --data '{"name": "api-group", "bdbs": [1, 2, 3]}'
Update Database Group
Update an existing database group:
# Update from JSON file
redisctl enterprise bdb-group update <uid> --data @updates.json
# Update inline
redisctl enterprise bdb-group update <uid> --data '{"name": "new-name"}'
Delete Database Group
Delete a database group:
# Delete with confirmation
redisctl enterprise bdb-group delete <uid>
# Delete without confirmation
redisctl enterprise bdb-group delete <uid> --force
Manage Group Membership
Add or remove databases from a group:
# Add database to group
redisctl enterprise bdb-group add-database <group_uid> --database <bdb_uid>
# Remove database from group
redisctl enterprise bdb-group remove-database <group_uid> --database <bdb_uid>
# List databases in group
redisctl enterprise bdb-group list-databases <group_uid>
Group Configuration
Basic Group Structure
{
"uid": 1,
"name": "production-group",
"bdbs": [1, 2, 3, 4],
"description": "Production application databases"
}
Creating Groups
When creating a group, the UID is auto-assigned by the cluster:
{
"name": "staging-group",
"bdbs": [],
"description": "Staging environment databases"
}
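Because the UID comes back in the create response, it can be captured for follow-up commands. A small sketch, assuming the response exposes a uid field as in the group structure above:

```shell
# Create a group and capture the cluster-assigned UID
# (assumes the create response includes a "uid" field)
create_group() {
  redisctl enterprise bdb-group create \
    --data "{\"name\": \"$1\"}" -q "uid"
}

# Usage:
#   GROUP_UID=$(create_group "staging-group")
#   redisctl enterprise bdb-group add-database "$GROUP_UID" --database 4
```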
Use Cases
Application Grouping
Group all databases for a specific application:
# Create application group
redisctl enterprise bdb-group create --data '{
"name": "ecommerce-app",
"description": "E-commerce platform databases"
}'
# Add databases to the group
redisctl enterprise bdb-group add-database 1 --database 10 # Session store
redisctl enterprise bdb-group add-database 1 --database 11 # Product cache
redisctl enterprise bdb-group add-database 1 --database 12 # Shopping cart
Environment Separation
Organize databases by environment:
# Create environment groups
redisctl enterprise bdb-group create --data '{"name": "dev-databases"}'
redisctl enterprise bdb-group create --data '{"name": "staging-databases"}'
redisctl enterprise bdb-group create --data '{"name": "production-databases"}'
# Add databases to appropriate groups
for db in 1 2 3; do
redisctl enterprise bdb-group add-database 1 --database $db # Dev
done
for db in 4 5 6; do
redisctl enterprise bdb-group add-database 2 --database $db # Staging
done
Multi-Tenant Organization
Group databases by tenant:
# Create tenant groups
redisctl enterprise bdb-group create --data '{
"name": "tenant-acme",
"description": "ACME Corp databases"
}'
redisctl enterprise bdb-group create --data '{
"name": "tenant-globex",
"description": "Globex Inc databases"
}'
Practical Examples
Bulk Operations Script
Perform operations on all databases in a group:
#!/bin/bash
# Get all databases in a group
GROUP_UID=1
DATABASES=$(redisctl enterprise bdb-group get $GROUP_UID -q "bdbs[]")
# Perform operation on each database
for db in $DATABASES; do
echo "Processing database $db..."
redisctl enterprise database get $db
done
Group Health Check
Monitor all databases in a group:
# Get group databases
GROUP_UID=1
redisctl enterprise bdb-group list-databases $GROUP_UID | \
jq -r '.[]' | \
while read db_uid; do
echo "Checking database $db_uid..."
redisctl enterprise database get $db_uid -q "{uid: uid, status: status}"
done
Migration Helper
Move databases between groups:
# Move database from one group to another
move_database() {
local db_uid=$1
local from_group=$2
local to_group=$3
# Remove from old group
redisctl enterprise bdb-group remove-database $from_group --database $db_uid
# Add to new group
redisctl enterprise bdb-group add-database $to_group --database $db_uid
echo "Moved database $db_uid from group $from_group to $to_group"
}
# Usage
move_database 5 1 2
Group Report
Generate a report of all groups and their databases:
# Generate group report
redisctl enterprise bdb-group list -q '[]' | jq -r '.[] | .uid' | \
while read group_uid; do
group_info=$(redisctl enterprise bdb-group get $group_uid)
name=$(echo "$group_info" | jq -r '.name // "unnamed"')
db_count=$(echo "$group_info" | jq '.bdbs | length')
echo "Group $group_uid: $name ($db_count databases)"
echo "$group_info" | jq -r '.bdbs[]' | \
while read db_uid; do
db_name=$(redisctl enterprise database get $db_uid -q "name")
echo " - Database $db_uid: $db_name"
done
echo
done
Best Practices
- Logical Organization - Group databases by application, environment, or purpose
- Naming Conventions - Use clear, descriptive names for groups
- Documentation - Include descriptions to explain group purpose
- Regular Review - Periodically review group membership
- Avoid Overlaps - Each database should typically belong to one logical group
- Use for Bulk Operations - Leverage groups for maintenance and monitoring
Limitations
- Database groups are a logical organization feature
- They don't affect database performance or configuration
- Groups don't enforce any policies on member databases
- A database can belong to multiple groups
- Deleting a group doesn't delete the databases
Troubleshooting
Group Creation Fails
# Check cluster status
redisctl enterprise cluster info -q "cluster_state"
# Verify required fields
redisctl api enterprise get /v1/jsonschema | jq '.bdb_group'
Database Not Added to Group
# Verify database exists
redisctl enterprise database get <bdb_uid>
# Check current group membership
redisctl enterprise bdb-group get <group_uid> -q "bdbs"
Group Operations Slow
# Check number of databases in group
redisctl enterprise bdb-group get <group_uid> -q "length(bdbs)"
# Consider splitting large groups
Related Commands
- enterprise database - Individual database management
- enterprise cluster - Cluster-wide operations
- enterprise stats - Statistics for grouped databases
LDAP Integration Commands
Configure and manage LDAP authentication for Redis Enterprise.
Overview
LDAP commands enable integration with Active Directory and other LDAP services for centralized authentication and authorization in Redis Enterprise clusters.
LDAP Configuration Commands
Get LDAP Configuration
redisctl enterprise ldap get
Retrieves the current LDAP configuration for the cluster.
Update LDAP Configuration
redisctl enterprise ldap update --data '{
"server_url": "ldaps://ldap.company.com:636",
"bind_dn": "CN=redis,OU=ServiceAccounts,DC=company,DC=com",
"bind_password": "password",
"user_search_base": "OU=Users,DC=company,DC=com",
"user_search_filter": "(sAMAccountName={0})",
"group_search_base": "OU=Groups,DC=company,DC=com",
"group_search_filter": "(member={0})"
}'
Delete LDAP Configuration
redisctl enterprise ldap delete
Removes LDAP configuration, reverting to local authentication only.
Test LDAP Connection
redisctl enterprise ldap test
Tests the LDAP connection and configuration.
LDAP Mappings Commands
LDAP mappings define how LDAP groups map to Redis Enterprise roles.
List LDAP Mappings
redisctl enterprise ldap-mappings list
Shows all configured LDAP group-to-role mappings.
Get Specific Mapping
redisctl enterprise ldap-mappings get <uid>
Create LDAP Mapping
redisctl enterprise ldap-mappings create --data '{
"ldap_group": "CN=Redis-Admins,OU=Groups,DC=company,DC=com",
"role": "admin",
"email": "redis-admins@company.com"
}'
Maps an LDAP group to a Redis Enterprise role.
Update LDAP Mapping
redisctl enterprise ldap-mappings update <uid> --data '{
"role": "db_admin"
}'
Delete LDAP Mapping
redisctl enterprise ldap-mappings delete <uid>
Common Use Cases
Setting Up Active Directory Integration
# 1. Configure LDAP connection
# (double-quoted so ${LDAP_BIND_PASSWORD} expands)
redisctl enterprise ldap update --data "{
\"server_url\": \"ldaps://dc01.company.com:636\",
\"bind_dn\": \"CN=Redis Service,OU=Services,DC=company,DC=com\",
\"bind_password\": \"${LDAP_BIND_PASSWORD}\",
\"user_search_base\": \"OU=Users,DC=company,DC=com\",
\"user_search_filter\": \"(sAMAccountName={0})\",
\"certificate\": \"-----BEGIN CERTIFICATE-----...\"
}"
# 2. Test the connection
redisctl enterprise ldap test
# 3. Create role mappings
redisctl enterprise ldap-mappings create --data '{
"ldap_group": "CN=Redis-Admins,OU=Groups,DC=company,DC=com",
"role": "admin"
}'
redisctl enterprise ldap-mappings create --data '{
"ldap_group": "CN=Redis-Users,OU=Groups,DC=company,DC=com",
"role": "db_viewer"
}'
Troubleshooting LDAP Authentication
# Check current configuration
redisctl enterprise ldap get
# Test with specific user (requires additional test data)
redisctl enterprise ldap test --data '{
"username": "testuser",
"password": "testpass"
}'
# View all mappings
redisctl enterprise ldap-mappings list -o table
Migrating from Local to LDAP Authentication
# 1. Keep local admin account active
redisctl enterprise user update admin@redis.local --data '{
"auth_method": "local"
}'
# 2. Configure LDAP
redisctl enterprise ldap update --data @ldap-config.json
# 3. Create mappings for existing roles
redisctl enterprise role list -q '[].name' | while read role; do
echo "Map LDAP group for role: $role"
done
# 4. Test LDAP authentication before disabling local auth
redisctl enterprise ldap test
Configuration Examples
Basic Active Directory
{
"server_url": "ldaps://ad.company.com:636",
"bind_dn": "redis-service@company.com",
"bind_password": "password",
"user_search_base": "DC=company,DC=com",
"user_search_filter": "(sAMAccountName={0})",
"group_search_base": "DC=company,DC=com",
"group_search_filter": "(member={0})"
}
OpenLDAP
{
"server_url": "ldap://openldap.company.com:389",
"bind_dn": "cn=admin,dc=company,dc=com",
"bind_password": "password",
"user_search_base": "ou=people,dc=company,dc=com",
"user_search_filter": "(uid={0})",
"group_search_base": "ou=groups,dc=company,dc=com",
"group_search_filter": "(memberUid={0})"
}
Security Considerations
- Always use LDAPS (LDAP over SSL) for production
- Store bind passwords in environment variables or secrets management
- Use service accounts with minimal privileges for bind DN
- Regularly rotate bind account passwords
- Test configuration changes in non-production first
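Following the point about keeping bind passwords out of files, one way to assemble the update payload at call time is a heredoc that reads the password from the environment. A sketch (the `build_ldap_config` helper is illustrative; note the password is not JSON-escaped, so it must not contain quotes or backslashes):

```shell
# Emit an LDAP config payload with the bind password taken from the
# environment (assumes LDAP_BIND_PASSWORD contains no JSON special
# characters)
build_ldap_config() {
  cat <<EOF
{
  "server_url": "ldaps://ldap.company.com:636",
  "bind_dn": "CN=redis,OU=ServiceAccounts,DC=company,DC=com",
  "bind_password": "${LDAP_BIND_PASSWORD}"
}
EOF
}

# Usage:
#   export LDAP_BIND_PASSWORD='...'   # or load from a secrets manager
#   redisctl enterprise ldap update --data "$(build_ldap_config)"
```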
Troubleshooting
Connection Issues
# Check network connectivity
nc -zv ldap.company.com 636
# Verify certificate
openssl s_client -connect ldap.company.com:636 -showcerts
# Test with ldapsearch
ldapsearch -H ldaps://ldap.company.com:636 \
-D "CN=redis,OU=Services,DC=company,DC=com" \
-w password \
-b "DC=company,DC=com" \
"(sAMAccountName=testuser)"
Authentication Failures
- Verify bind DN and password
- Check user search base and filter
- Ensure group memberships are correct
- Review Redis Enterprise logs
- Test with the ldap test command
Related Commands
- User Commands - Manage local users
- Role Commands - Configure roles and permissions
- Auth Commands - Authentication settings
CRDB Tasks
CRDB tasks are background operations related to Active-Active (CRDB) databases in Redis Enterprise. These commands allow you to monitor and manage async tasks for CRDB operations like synchronization, migration, and backup.
Overview
CRDB tasks include:
- Database synchronization between participating clusters
- Data migration operations
- Backup and restore operations for Active-Active databases
- Replication and conflict resolution tasks
- Schema changes across participating clusters
Available Commands
List All CRDB Tasks
List all CRDB tasks with optional filtering:
# List all CRDB tasks
redisctl enterprise crdb-task list
# Filter by task status
redisctl enterprise crdb-task list --status running
redisctl enterprise crdb-task list --status completed
redisctl enterprise crdb-task list --status failed
# Filter by task type
redisctl enterprise crdb-task list --type sync
redisctl enterprise crdb-task list --type migration
redisctl enterprise crdb-task list --type backup
# Filter by CRDB UID
redisctl enterprise crdb-task list --crdb-uid 1
# Combine filters
redisctl enterprise crdb-task list --status running --type sync --crdb-uid 1
# Output as table
redisctl enterprise crdb-task list -o table
Get Task Details
Get detailed information about a specific CRDB task:
# Get task by ID
redisctl enterprise crdb-task get <task_id>
# Get specific fields using JMESPath
redisctl enterprise crdb-task get <task_id> -q "status"
redisctl enterprise crdb-task get <task_id> -q "{id: task_id, status: status, type: type}"
Check Task Status
Quick status check for a CRDB task:
# Get just the status
redisctl enterprise crdb-task status <task_id>
Get Task Progress
Monitor task progress information:
# Get progress details
redisctl enterprise crdb-task progress <task_id>
# Get progress percentage only
redisctl enterprise crdb-task progress <task_id> -q "progress_percent"
Get Task Logs
Retrieve logs for a CRDB task:
# Get task logs
redisctl enterprise crdb-task logs <task_id>
List Tasks by CRDB
List all tasks for a specific Active-Active database:
# List all tasks for a CRDB
redisctl enterprise crdb-task list-by-crdb <crdb_uid>
# Filter by status for specific CRDB
redisctl enterprise crdb-task list-by-crdb <crdb_uid> --status running
# Filter by type for specific CRDB
redisctl enterprise crdb-task list-by-crdb <crdb_uid> --type sync
Task Control Operations
Cancel Task
Cancel a running CRDB task:
# Cancel with confirmation
redisctl enterprise crdb-task cancel <task_id>
# Cancel without confirmation
redisctl enterprise crdb-task cancel <task_id> --force
Retry Failed Task
Retry a failed CRDB task:
redisctl enterprise crdb-task retry <task_id>
Note: Retry functionality may not be available for all task types or Redis Enterprise versions.
Pause/Resume Tasks
Pause and resume CRDB tasks:
# Pause a running task
redisctl enterprise crdb-task pause <task_id>
# Resume a paused task
redisctl enterprise crdb-task resume <task_id>
Note: Pause/resume functionality may not be supported for all task types.
Task Types
Common CRDB task types include:
- sync - Data synchronization between clusters
- migration - Data migration operations
- backup - CRDB backup operations
- restore - CRDB restore operations
- rebalance - Shard rebalancing across clusters
- schema_change - Schema modifications across participating clusters
- conflict_resolution - Resolving data conflicts between clusters
Task Statuses
CRDB tasks can have the following statuses:
- pending - Task is queued for execution
- running - Task is currently executing
- completed - Task completed successfully
- failed - Task failed with errors
- canceled - Task was canceled by user
- paused - Task is paused (if supported)
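Wait loops like the ones later on this page only need to know which of these statuses are terminal. A small helper (a sketch; the status strings come from the list above, and the assumption that paused tasks may still change state is mine) keeps that logic in one place:

```shell
#!/bin/bash
# Return success (0) if a CRDB task status is terminal, i.e. the task
# will not change state again. Status names are the ones listed above.
is_terminal_status() {
  case "$1" in
    completed|failed|canceled) return 0 ;;
    pending|running|paused)    return 1 ;;
    *) echo "unknown status: $1" >&2; return 2 ;;
  esac
}
```

A polling loop can then call `redisctl enterprise crdb-task status <task_id>` and stop once `is_terminal_status` succeeds, instead of comparing against "running" alone.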
Examples
Monitor CRDB Synchronization
# List all sync tasks
redisctl enterprise crdb-task list --type sync
# Check status of specific sync task
TASK_ID="task-12345"
redisctl enterprise crdb-task status $TASK_ID
# Monitor progress
watch -n 5 "redisctl enterprise crdb-task progress $TASK_ID"
Handle Failed Migration
# Find failed migration tasks
redisctl enterprise crdb-task list --type migration --status failed
# Get error details
redisctl enterprise crdb-task get <failed_task_id> -q "error"
# Retry the migration
redisctl enterprise crdb-task retry <failed_task_id>
Monitor CRDB Backup
# Start monitoring backup task
CRDB_UID=1
redisctl enterprise crdb-task list-by-crdb $CRDB_UID --type backup --status running
# Get progress updates
BACKUP_TASK="backup-task-123"
while [ "$(redisctl enterprise crdb-task status $BACKUP_TASK)" = "running" ]; do
echo "Progress: $(redisctl enterprise crdb-task progress $BACKUP_TASK -q progress_percent)%"
sleep 10
done
Cancel Long-Running Task
# Find long-running tasks
redisctl enterprise crdb-task list --status running -o table
# Cancel specific task
redisctl enterprise crdb-task cancel <task_id> --force
Practical Scripts
Task Monitoring Script
#!/bin/bash
# Monitor all CRDB tasks for a specific database
CRDB_UID=$1
if [ -z "$CRDB_UID" ]; then
echo "Usage: $0 <crdb_uid>"
exit 1
fi
echo "Monitoring tasks for CRDB $CRDB_UID..."
while true; do
clear
echo "=== CRDB $CRDB_UID Tasks ==="
echo ""
# Get running tasks
echo "Running Tasks:"
redisctl enterprise crdb-task list-by-crdb $CRDB_UID --status running -o table
# Get failed tasks
echo -e "\nFailed Tasks:"
redisctl enterprise crdb-task list-by-crdb $CRDB_UID --status failed -o table
# Get completed tasks (last 5)
echo -e "\nRecent Completed Tasks:"
redisctl enterprise crdb-task list-by-crdb $CRDB_UID --status completed -q "tasks[:5]" -o table
sleep 30
done
Task Health Check
#!/bin/bash
# Check health of all CRDB tasks
echo "CRDB Task Health Report"
echo "======================="
# Check for failed tasks
FAILED_COUNT=$(redisctl enterprise crdb-task list --status failed -q "length(tasks)")
echo "Failed tasks: $FAILED_COUNT"
if [ "$FAILED_COUNT" -gt 0 ]; then
echo "Failed task details:"
redisctl enterprise crdb-task list --status failed -q "tasks[].{id: task_id, type: type, error: error_message}"
fi
# Check for stuck tasks (running > 1 hour)
echo -e "\nLong-running tasks (>1 hour):"
redisctl enterprise crdb-task list --status running -q "tasks[?duration_seconds > \`3600\`]"
# Check task distribution by type
echo -e "\nTask distribution by type:"
for type in sync migration backup restore; do
COUNT=$(redisctl enterprise crdb-task list --type $type -q "length(tasks)")
echo " $type: $COUNT"
done
Automated Task Retry
#!/bin/bash
# Automatically retry failed tasks
# Get all failed tasks
FAILED_TASKS=$(redisctl enterprise crdb-task list --status failed -q "tasks[].task_id" -o json | jq -r '.[]')
for task_id in $FAILED_TASKS; do
echo "Retrying task $task_id..."
# Get task type for logging
TASK_TYPE=$(redisctl enterprise crdb-task get $task_id -q "type")
# Attempt retry
if redisctl enterprise crdb-task retry $task_id; then
echo "Successfully initiated retry for $TASK_TYPE task $task_id"
else
echo "Failed to retry $TASK_TYPE task $task_id - manual intervention required"
fi
sleep 5
done
Integration with CRDB Commands
CRDB task commands work alongside regular CRDB commands:
# Create a CRDB (returns task_id)
TASK_ID=$(redisctl enterprise crdb create --data @crdb.json -q "task_id")
# Monitor the creation task
redisctl enterprise crdb-task progress $TASK_ID
# Wait for completion
while [ "$(redisctl enterprise crdb-task status $TASK_ID)" = "running" ]; do
sleep 10
done
# Check if successful
if [ "$(redisctl enterprise crdb-task status $TASK_ID)" = "completed" ]; then
echo "CRDB created successfully"
else
echo "CRDB creation failed"
redisctl enterprise crdb-task get $TASK_ID -q "error"
fi
Best Practices
- Monitor Critical Tasks - Set up monitoring for backup and migration tasks
- Handle Failures Promptly - Check failed tasks regularly and retry or escalate
- Track Long-Running Tasks - Monitor tasks that run longer than expected
- Use Filtering - Filter by status and type to focus on relevant tasks
- Automate Monitoring - Create scripts to track task health
- Log Task History - Keep records of completed and failed tasks for auditing
Troubleshooting
Tasks Not Listed
# Verify CRDB exists
redisctl enterprise crdb list
# Check if tasks endpoint is available
redisctl enterprise api get /crdb_tasks
Cannot Cancel Task
# Check task status first
redisctl enterprise crdb-task get <task_id> -q "status"
# Only running tasks can be canceled
# Completed or failed tasks cannot be canceled
Retry Not Available
Some task types or Redis Enterprise versions may not support retry:
- Check Redis Enterprise version compatibility
- Consider creating a new task instead of retrying
- Review task configuration for issues
Progress Not Updating
# Check if task supports progress reporting
redisctl enterprise crdb-task get <task_id> -q "supports_progress"
# Some quick tasks may complete before progress is reported
Related Commands
- enterprise crdb - CRDB management operations
- enterprise action - General action/task monitoring
- enterprise database - Regular database operations
- api enterprise - Direct API access for advanced operations
Alert Management Commands
Manage alerts for Redis Enterprise clusters, nodes, and databases.
Overview
The alerts commands provide comprehensive monitoring and management of alerts across your Redis Enterprise deployment. You can:
- List and filter alerts by type and severity
- Get details on specific alerts
- Manage alert settings
- View alerts at cluster, node, and database levels
Commands
List All Alerts
List all alerts across the cluster with optional filtering:
# List all alerts
redisctl enterprise alerts list
# Filter by alert type (cluster, node, bdb)
redisctl enterprise alerts list --filter-type cluster
# Filter by severity (info, warning, error, critical)
redisctl enterprise alerts list --severity error
# Combine filters
redisctl enterprise alerts list --filter-type node --severity warning
Get Specific Alert
Get details for a specific alert by UID:
redisctl enterprise alerts get 123
Cluster Alerts
View alerts at the cluster level:
# Get all cluster alerts
redisctl enterprise alerts cluster
# Get specific cluster alert by name
redisctl enterprise alerts cluster --alert cluster_license_about_to_expire
Node Alerts
View alerts for nodes:
# Get all node alerts
redisctl enterprise alerts node
# Get alerts for specific node
redisctl enterprise alerts node 1
# Get specific alert for a node
redisctl enterprise alerts node 1 --alert node_ephemeral_storage
Database Alerts
View alerts for databases:
# Get all database alerts
redisctl enterprise alerts database
# Get alerts for specific database
redisctl enterprise alerts database 1
# Get specific alert for a database
redisctl enterprise alerts database 1 --alert bdb_backup_failed
Alert Settings
Manage alert configuration settings:
# Get current alert settings
redisctl enterprise alerts settings-get
# Update alert settings
redisctl enterprise alerts settings-update --data '{
"cluster_license_about_to_expire": {
"enabled": true,
"threshold": "30"
}
}'
# Update from file
redisctl enterprise alerts settings-update --data @alert-settings.json
# Update from stdin
echo '{"node_ephemeral_storage": {"enabled": true, "threshold": "80"}}' | \
redisctl enterprise alerts settings-update --data -
Output Formats
All commands support multiple output formats:
# JSON output (default)
redisctl enterprise alerts list -o json
# YAML output
redisctl enterprise alerts list -o yaml
# Table output
redisctl enterprise alerts list -o table
JMESPath Filtering
Use JMESPath queries to filter and transform output:
# Get only alert names
redisctl enterprise alerts list -q '[].name'
# Get alerts with severity error or critical
redisctl enterprise alerts list -q "[?severity=='error' || severity=='critical']"
# Get alert count by type
redisctl enterprise alerts list -q 'length(@)'
# Get specific fields
redisctl enterprise alerts settings-get -q 'node_ephemeral_storage'
Common Use Cases
Monitor Critical Alerts
# List all critical alerts
redisctl enterprise alerts list --severity critical -o table
# Check for license expiration
redisctl enterprise alerts cluster --alert cluster_license_about_to_expire
Alert Monitoring Script
#!/bin/bash
# Monitor for critical alerts
CRITICAL_ALERTS=$(redisctl enterprise alerts list --severity critical -o json)
if [ $(echo "$CRITICAL_ALERTS" | jq 'length') -gt 0 ]; then
echo "Critical alerts found:"
echo "$CRITICAL_ALERTS" | jq -r '.[] | "\(.type): \(.name) - \(.description)"'
exit 1
fi
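The script above treats only critical alerts as fatal. To alert at a configurable severity threshold instead, the four severity names accepted by --severity can be ranked; this is a sketch, and the ordering info < warning < error < critical is an assumption about how you want to prioritize them:

```shell
#!/bin/bash
# Map an alert severity name to a numeric rank for threshold checks.
severity_rank() {
  case "$1" in
    info)     echo 0 ;;
    warning)  echo 1 ;;
    error)    echo 2 ;;
    critical) echo 3 ;;
    *)        echo -1 ;;  # unknown severity
  esac
}

# Succeed when an alert's severity is at or above the threshold.
THRESHOLD=error
alert_exceeds() {
  [ "$(severity_rank "$1")" -ge "$(severity_rank "$THRESHOLD")" ]
}
```

With this in place, a monitoring script can iterate over `alerts list` output and call `alert_exceeds` on each alert's severity field rather than hard-coding a single level.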
Adjust Alert Thresholds
# Set more aggressive storage thresholds
redisctl enterprise alerts settings-update --data '{
"node_ephemeral_storage": {
"enabled": true,
"threshold": "60"
},
"node_persistent_storage": {
"enabled": true,
"threshold": "60"
}
}'
Check Database Health
# Get all database alerts for monitoring
for db_id in $(redisctl enterprise database list -q '[].uid'); do
echo "Checking database $db_id..."
redisctl enterprise alerts database $db_id
done
Alert Types
Cluster Alerts
- cluster_ca_cert_about_to_expire - CA certificate expiration warning
- cluster_certs_about_to_expire - SSL certificate expiration warning
- cluster_license_about_to_expire - License expiration warning
- cluster_node_operation_failed - Node operation failure
- cluster_ocsp_query_failed - OCSP query failure
- cluster_ocsp_status_revoked - Certificate revoked via OCSP
Node Alerts
- node_checks_error - Node health check errors
- node_ephemeral_storage - Ephemeral storage threshold exceeded
- node_free_flash - Flash storage threshold exceeded
- node_internal_certs_about_to_expire - Internal certificate expiration
- node_persistent_storage - Persistent storage threshold exceeded
Database Alerts
- bdb_backup_failed - Database backup failure
- bdb_crdt_sync_error - Active-Active synchronization error
- bdb_high_latency - High latency detected
- bdb_high_memory - Memory usage threshold exceeded
- bdb_replica_sync_error - Replica synchronization error
Notes
- Alert thresholds are configured in the cluster settings
- Some alerts have configurable thresholds (e.g., storage, certificate expiration)
- Critical alerts should be addressed immediately
- Use profiles to manage multiple Redis Enterprise deployments:
redisctl -p production enterprise alerts list --severity critical
Usage Report
The usage report commands provide access to Redis Enterprise cluster usage data, which is useful for license compliance, capacity planning, and resource utilization analysis.
Available Commands
Get Usage Report
Retrieve the current usage report for the cluster:
# Get full usage report
redisctl enterprise usage-report get
# Get usage report as YAML
redisctl enterprise usage-report get -o yaml
# Extract specific metrics
redisctl enterprise usage-report get -q '{cluster: cluster_name, databases: total_databases, memory_gb: total_memory_gb}'
# Get database-specific usage
redisctl enterprise usage-report get -q 'databases[].{name: name, memory: memory_mb, shards: shard_count}'
Export Usage Report
Export usage report to a file for reporting or analysis:
# Export to JSON file
redisctl enterprise usage-report export --output usage-report.json
# Export to CSV for spreadsheet analysis
redisctl enterprise usage-report export --output usage-report.csv --format csv
# Export with specific date in filename
redisctl enterprise usage-report export -o "usage-$(date +%Y%m%d).json"
# Export filtered data
redisctl enterprise usage-report export -o databases.json -q 'databases'
Output Examples
Usage Report Structure
{
"cluster_name": "production-cluster",
"cluster_uid": "cluster-12345",
"report_date": "2024-03-15T10:00:00Z",
"license": {
"type": "enterprise",
"expiry": "2025-01-01T00:00:00Z",
"shards_limit": 100,
"memory_limit_gb": 1024
},
"usage": {
"total_databases": 25,
"total_shards": 75,
"total_memory_gb": 512,
"total_nodes": 5,
"total_cpus": 40
},
"databases": [
{
"uid": 1,
"name": "cache-db",
"memory_mb": 8192,
"shard_count": 4,
"replication": true,
"persistence": "aof",
"modules": ["search", "json"]
},
{
"uid": 2,
"name": "session-store",
"memory_mb": 4096,
"shard_count": 2,
"replication": false,
"persistence": "none",
"modules": []
}
],
"nodes": [
{
"uid": 1,
"address": "node1.cluster.local",
"cpus": 8,
"memory_gb": 128,
"databases": 5,
"shards": 15
}
]
}
Common Use Cases
License Compliance
Monitor usage against license limits:
# Check current usage vs limits (JMESPath has no arithmetic operators,
# so the subtraction is done with jq)
redisctl enterprise usage-report get -o json | jq '{
  shards_used: .usage.total_shards,
  shards_limit: .license.shards_limit,
  shards_available: (.license.shards_limit - .usage.total_shards),
  memory_used_gb: .usage.total_memory_gb,
  memory_limit_gb: .license.memory_limit_gb,
  memory_available_gb: (.license.memory_limit_gb - .usage.total_memory_gb)
}'
# Check license expiry
redisctl enterprise usage-report get -q 'license.expiry'
# Alert if approaching limits (percentages computed with jq)
usage=$(redisctl enterprise usage-report get -o json | jq '{
  shard_pct: (.usage.total_shards / .license.shards_limit * 100),
  memory_pct: (.usage.total_memory_gb / .license.memory_limit_gb * 100)
}')
Capacity Planning
Analyze resource utilization for capacity planning:
# Get growth metrics
redisctl enterprise usage-report export -o usage-$(date +%Y%m).json
# Database memory distribution
redisctl enterprise usage-report get -q 'databases | sort_by(@, &memory_mb) | reverse(@)[:10]' -o table
# Shards per database
redisctl enterprise usage-report get -q 'databases[].{name: name, shards: shard_count}' -o table
# Node utilization
redisctl enterprise usage-report get -q 'nodes[].{node: address, memory_gb: memory_gb, databases: databases, shards: shards}' -o table
Module Usage Analysis
Track module adoption and usage:
# List databases with modules
redisctl enterprise usage-report get -q 'databases[?length(modules) > `0`].{name: name, modules: modules}'
# Count module usage (group_by is a jq function, not JMESPath)
redisctl enterprise usage-report get -q 'databases[].modules[]' -o json | \
  jq 'group_by(.) | map({module: .[0], count: length})'
# Find databases with specific module
redisctl enterprise usage-report get -q 'databases[?contains(modules, `"search"`)].name'
Regular Reporting
Create automated usage reports:
#!/bin/bash
# Monthly usage report script
REPORT_DIR="/var/reports/redis"
DATE=$(date +%Y%m%d)
MONTH=$(date +%B-%Y)
# Create report directory
mkdir -p "$REPORT_DIR"
# Export full report
redisctl enterprise usage-report export -o "$REPORT_DIR/usage-$DATE.json"
# Create summary CSV
redisctl enterprise usage-report get -q '{
date: report_date,
databases: usage.total_databases,
shards: usage.total_shards,
memory_gb: usage.total_memory_gb,
nodes: usage.total_nodes
}' | jq -r '[.date, .databases, .shards, .memory_gb, .nodes] | @csv' >> "$REPORT_DIR/usage-summary.csv"
# Email report
echo "Redis Enterprise Usage Report for $MONTH" | \
mail -s "Redis Usage Report - $MONTH" \
-a "$REPORT_DIR/usage-$DATE.json" \
ops-team@company.com
Chargeback/Showback
Generate department or team usage reports:
# Assuming database names include team identifiers
# e.g., "team-a-cache", "team-b-sessions"
# Group databases by team
for team in team-a team-b team-c; do
echo "Usage for $team:"
redisctl enterprise usage-report get \
-q "databases[?contains(name, '$team')].{name: name, memory_mb: memory_mb, shards: shard_count}" \
-o table
done
# Calculate team memory usage (string splitting and grouping done with jq,
# since JMESPath has no split function)
redisctl enterprise usage-report get -q 'databases' -o json | jq '
  map({team: (.name | split("-")[0]), memory_mb: .memory_mb})
  | group_by(.team)
  | map({
      team: .[0].team,
      total_memory_mb: (map(.memory_mb) | add),
      database_count: length
    })'
Export Formats
JSON Export
Full structured data for programmatic processing:
# Export and process with jq
redisctl enterprise usage-report export -o report.json
cat report.json | jq '.databases | length'
# Export and upload to S3
redisctl enterprise usage-report export -o /tmp/usage.json
aws s3 cp /tmp/usage.json s3://bucket/redis-reports/$(date +%Y/%m)/usage.json
CSV Export
Tabular format for spreadsheet analysis:
# Export to CSV
redisctl enterprise usage-report export -o report.csv -f csv
# Export specific data as CSV
redisctl enterprise usage-report get -q 'databases' | \
jq -r '["name","memory_mb","shards"], (.[] | [.name, .memory_mb, .shard_count]) | @csv' > databases.csv
# Import to Google Sheets
redisctl enterprise usage-report export -o /tmp/usage.csv -f csv
gcloud auth login
gdrive upload /tmp/usage.csv
Integration Examples
Monitoring Systems
Send usage metrics to monitoring systems:
# Prometheus metrics format
redisctl enterprise usage-report get -o json | jq -r '
"redis_cluster_databases \(.usage.total_databases)",
"redis_cluster_shards \(.usage.total_shards)",
"redis_cluster_memory_gb \(.usage.total_memory_gb)",
"redis_cluster_nodes \(.usage.total_nodes)",
"redis_license_shards_limit \(.license.shards_limit)",
"redis_license_memory_limit_gb \(.license.memory_limit_gb)"
' | curl -X POST http://pushgateway:9091/metrics/job/redis-usage --data-binary @-
# Datadog metrics
redisctl enterprise usage-report get -o json | \
python -c "
import json, sys
from datadog import initialize, api
data = json.load(sys.stdin)
api.Metric.send([
{'metric': 'redis.usage.databases', 'points': data['usage']['total_databases']},
{'metric': 'redis.usage.shards', 'points': data['usage']['total_shards']},
{'metric': 'redis.usage.memory_gb', 'points': data['usage']['total_memory_gb']}
])
"
Ticketing Systems
Create tickets for capacity warnings:
#!/bin/bash
# Check usage and create tickets
USAGE=$(redisctl enterprise usage-report get -o json)
SHARD_PCT=$(echo $USAGE | jq '.usage.total_shards / .license.shards_limit * 100')
MEMORY_PCT=$(echo $USAGE | jq '.usage.total_memory_gb / .license.memory_limit_gb * 100')
if (( $(echo "$SHARD_PCT > 80" | bc -l) )); then
echo "High shard usage: ${SHARD_PCT}%" | \
gh issue create --title "Redis Cluster: High Shard Usage Alert" \
--body "Shard usage is at ${SHARD_PCT}% of licensed capacity"
fi
if (( $(echo "$MEMORY_PCT > 80" | bc -l) )); then
echo "High memory usage: ${MEMORY_PCT}%" | \
jira create --project OPS --type Alert \
--summary "Redis Cluster: High Memory Usage" \
--description "Memory usage is at ${MEMORY_PCT}% of licensed capacity"
fi
Best Practices
- Regular Exports: Schedule regular exports for historical tracking
- Automated Monitoring: Set up automated checks for license limits
- Trend Analysis: Compare reports over time to identify growth patterns
- Capacity Alerts: Configure alerts when approaching license limits
- Cost Attribution: Use naming conventions to enable chargeback/showback
- Archive Reports: Keep historical reports for compliance and auditing
Troubleshooting
Report Generation Issues
If usage reports fail to generate:
# Check cluster status
redisctl enterprise cluster get -q 'name'
# Verify authentication
redisctl enterprise auth test
# Check with raw API
redisctl api enterprise get /v1/usage_report
Export Failures
When exports fail:
# Check write permissions
touch test-file.json && rm test-file.json
# Verify disk space
df -h .
# Try different format
redisctl enterprise usage-report export -o report.json
redisctl enterprise usage-report export -o report.csv -f csv
Related Commands
- redisctl enterprise cluster - View cluster information
- redisctl enterprise database list - List all databases
- redisctl enterprise stats - View detailed statistics
- redisctl enterprise node list - View node resources
Support Package Commands (Phase 2)
Enhanced support package generation with improved UX, async operations, and intelligent defaults.
Overview
The support-package command group provides a dedicated, user-friendly interface for generating Redis Enterprise support packages. This is the recommended way to collect diagnostic information for Redis Support tickets.
Why Use Support Package Commands?
While debug-info commands provide the core functionality, support-package commands offer:
- Better UX: Clear progress indicators and helpful output
- Smart defaults: Automatic timestamps and intelligent file naming
- Pre-flight checks: Disk space and permission verification
- Async support: Handle long-running operations gracefully
- Next steps: Clear guidance on uploading to support
Available Commands
Generate Cluster Support Package
# Quick generation with all defaults
redisctl enterprise support-package cluster
# Custom output location
redisctl enterprise support-package cluster -o /tmp/support.tar.gz
# Skip pre-flight checks (not recommended)
redisctl enterprise support-package cluster --skip-checks
# Use new API endpoints (Redis Enterprise 7.4+)
redisctl enterprise support-package cluster --use-new-api
# Optimize package size (reduces by ~20-30%)
redisctl enterprise support-package cluster --optimize
# Show optimization details
redisctl enterprise support-package cluster --optimize --optimize-verbose
# Upload directly to Redis Support (Files.com)
export REDIS_ENTERPRISE_FILES_API_KEY="your-api-key"
redisctl enterprise support-package cluster --upload
# Upload without saving locally
redisctl enterprise support-package cluster --upload --no-save
# Optimize and upload in one command
redisctl enterprise support-package cluster --optimize --upload --no-save
Example Output:
Redis Enterprise Support Package
================================
Cluster: prod-cluster-01
Version: 7.2.4
Nodes: 3
Databases: 5
Output: ./support-package-cluster-20240115T143000.tar.gz
Generating support package...
⠋ Collecting cluster data...
✓ Support package created successfully
File: support-package-cluster-20240115T143000.tar.gz
Size: 487.3 MB
Time: 154s
Next steps:
1. Upload to Redis Support: https://support.redis.com/upload
2. Reference your case number when uploading
3. Delete local file after upload to free space
Generate Database Support Package
# Support package for specific database
redisctl enterprise support-package database 1
# Custom output with database name
redisctl enterprise support-package database 1 \
-o production-db-issue.tar.gz
# For Active-Active database
redisctl enterprise support-package database 5 --use-new-api
Example Output:
Redis Enterprise Support Package
================================
Database: 1
Name: production-cache
Output: ./support-package-database-1-20240115T143000.tar.gz
Generating support package...
⠋ Collecting database 1 data...
✓ Database support package created successfully
File: support-package-database-1-20240115T143000.tar.gz
Size: 125.7 MB
Time: 45s
Next steps:
1. Upload to Redis Support: https://support.redis.com/upload
2. Reference your case number when uploading
3. Delete local file after upload to free space
Generate Node Support Package
# All nodes
redisctl enterprise support-package node
# Specific node
redisctl enterprise support-package node 2
# Custom output for node issue
redisctl enterprise support-package node 2 \
-o node2-memory-issue.tar.gz
Example Output:
Redis Enterprise Support Package
================================
Node: 2
Address: 10.0.1.2
Output: ./support-package-node-2-20240115T143000.tar.gz
Generating support package...
⠋ Collecting node 2 data...
✓ Node support package created successfully
File: support-package-node-2-20240115T143000.tar.gz
Size: 89.3 MB
Time: 32s
Next steps:
1. Upload to Redis Support: https://support.redis.com/upload
2. Reference your case number when uploading
3. Delete local file after upload to free space
Package Optimization
Support packages can be large (500MB-2GB+). The --optimize flag reduces package size by 20-30% through:
- Log truncation: Keeps most recent 1000 lines per log file (configurable)
- Redundant data removal: Removes duplicate or unnecessary files
- Nested archive cleanup: Removes nested .gz files
Basic Optimization
# Optimize with defaults
redisctl enterprise support-package cluster --optimize
# Customize log retention
redisctl enterprise support-package cluster --optimize --log-lines 5000
# Show detailed optimization stats
redisctl enterprise support-package cluster --optimize --optimize-verbose
Optimization Output
Optimization: 487.3 MB → 358.2 MB (26.5% reduction)
Files processed: 847
Files truncated: 142
Files removed: 23
When to Use Optimization
Use optimization when:
- Package size exceeds upload limits
- Network bandwidth is limited
- Storage space is constrained
- Only recent log data is needed
Skip optimization when:
- Full historical logs are needed for issue diagnosis
- Investigating intermittent issues from the past
- Redis Support specifically requests unoptimized packages
Direct Upload to Redis Support
Upload support packages directly to Files.com for Redis Support tickets, eliminating manual upload steps.
Setup Files.com API Key
Get your Files.com API key from Redis Support, then configure it:
# Option 1: Environment variable (recommended for CI/CD)
export REDIS_ENTERPRISE_FILES_API_KEY="your-api-key"
# Option 2: Secure keyring storage (requires secure-storage feature)
redisctl files-key set "$REDIS_ENTERPRISE_FILES_API_KEY" --use-keyring
# Option 3: Global config file (plaintext)
redisctl files-key set "$REDIS_ENTERPRISE_FILES_API_KEY" --global
# Option 4: Per-profile config
redisctl files-key set "$REDIS_ENTERPRISE_FILES_API_KEY" --profile enterprise-prod
Upload Commands
# Generate and upload
redisctl enterprise support-package cluster --upload
# Upload without local copy (saves disk space)
redisctl enterprise support-package cluster --upload --no-save
# Optimize before upload (recommended)
redisctl enterprise support-package cluster --optimize --upload --no-save
# Database-specific package
redisctl enterprise support-package database 1 --optimize --upload
Upload Output
Generating support package...
Uploading to Files.com: /RLEC_Customers/Uploads/support-package-cluster-20240115T143000.tar.gz
Size: 358234567 bytes
✓ Support package created successfully
Uploaded to: RLEC_Customers/Uploads/support-package-cluster-20240115T143000.tar.gz
Size: 341.7 MB
Time: 124s
API Key Priority
The Files.com API key is resolved in this order:
1. REDIS_ENTERPRISE_FILES_API_KEY environment variable
2. Profile-specific files_api_key in config
3. Global files_api_key in config
4. System keyring (if secure-storage feature enabled)
5. REDIS_FILES_API_KEY environment variable (fallback)
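The precedence is a first-match-wins lookup. The sketch below only models the two environment-variable steps; the config-file and keyring lookups are left as a placeholder comment, since redisctl performs those internally:

```shell
#!/bin/bash
# First-match-wins sketch of the Files.com API key resolution order.
# Only the environment-variable steps are real; config and keyring
# lookups (steps 2-4) happen inside redisctl.
resolve_files_key() {
  if [ -n "$REDIS_ENTERPRISE_FILES_API_KEY" ]; then
    echo "$REDIS_ENTERPRISE_FILES_API_KEY"
    return 0
  fi
  # Steps 2-4: profile config, global config, system keyring
  if [ -n "$REDIS_FILES_API_KEY" ]; then
    echo "$REDIS_FILES_API_KEY"
    return 0
  fi
  return 1  # no key available
}
```

In practice this means an exported REDIS_ENTERPRISE_FILES_API_KEY always wins, which is why it is the recommended option for CI/CD.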
Secure API Key Storage
With the secure-storage feature, API keys are stored in your OS keyring:
- macOS: Keychain
- Windows: Credential Manager
- Linux: Secret Service (GNOME Keyring, KWallet)
# Install with secure storage
cargo install redisctl --features secure-storage
# Store key securely
redisctl files-key set "$REDIS_ENTERPRISE_FILES_API_KEY" --use-keyring
# Verify storage
redisctl files-key get
# Output: Key found in keyring: your-ke...key4
# Remove when no longer needed
redisctl files-key remove --keyring
The config file only stores a reference:
files_api_key = "keyring:files-api-key"
Pre-flight Checks
The command automatically performs safety checks before generating packages:
Disk Space Check
Warning: Low disk space detected (< 1GB available)
Continue anyway? (y/N):
File Overwrite Protection
Warning: File support-package.tar.gz already exists
Overwrite? (y/N):
Permission Verification
Error: Cannot write to directory /restricted/path
Please choose a different location or check permissions
To skip all checks (not recommended for production):
redisctl enterprise support-package cluster --skip-checks
Async Operations
For large clusters, support package generation can take several minutes:
With Wait (Default)
# Wait for completion with default timeout (10 minutes)
redisctl enterprise support-package cluster --wait
# Custom timeout (30 minutes for very large clusters)
redisctl enterprise support-package cluster --wait --wait-timeout 1800
Without Wait
# Start generation and return immediately
redisctl enterprise support-package cluster --no-wait
# Output:
# Task ID: abc123-def456-789
# Check status: redisctl enterprise support-package status abc123-def456-789
Check Status
redisctl enterprise support-package status abc123-def456-789
# Output:
# Support Package Generation Status
# =================================
# Task ID: abc123-def456-789
# Status: in_progress
# Progress: 65%
# Message: Collecting node 3 data...
List Available Packages
redisctl enterprise support-package list
Note: Most Redis Enterprise versions don't store generated packages on the server. This command is a placeholder for future functionality.
Smart File Naming
The command uses intelligent defaults for file names:
Type | Pattern | Example |
---|---|---|
Cluster | support-package-cluster-{timestamp}.tar.gz | support-package-cluster-20240115T143000.tar.gz |
Database | support-package-database-{uid}-{timestamp}.tar.gz | support-package-database-1-20240115T143000.tar.gz |
Node | support-package-node-{uid}-{timestamp}.tar.gz | support-package-node-2-20240115T143000.tar.gz |
All Nodes | support-package-nodes-{timestamp}.tar.gz | support-package-nodes-20240115T143000.tar.gz |
Timestamps use ISO format for easy sorting: YYYYMMDDTHHMMSS
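If you need matching names for files you create alongside a package (notes, checksums, logs), the same timestamp shape can be produced with date. The -u (UTC) flag is an assumption; the docs don't state which timezone the tool uses:

```shell
# Produce a YYYYMMDDTHHMMSS timestamp and a package-style file name
# matching the patterns in the table above.
TS=$(date -u +%Y%m%dT%H%M%S)
NAME="support-package-cluster-$TS.tar.gz"
echo "$NAME"
```

Names built this way sort chronologically with a plain `ls`, which is the point of the ISO-style timestamp.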
Best Practices
1. Organized Collection
#!/bin/bash
# Create case-specific directory
CASE_ID="CASE-12345"
mkdir -p "./support-$CASE_ID"
# Collect all relevant packages
redisctl enterprise support-package cluster \
-o "./support-$CASE_ID/cluster.tar.gz"
redisctl enterprise support-package database 1 \
-o "./support-$CASE_ID/database-1.tar.gz"
# Create summary
echo "Case: $CASE_ID" > "./support-$CASE_ID/README.txt"
echo "Issue: Database 1 high latency" >> "./support-$CASE_ID/README.txt"
echo "Collected: $(date)" >> "./support-$CASE_ID/README.txt"
2. Automated Daily Collection
#!/bin/bash
# Daily support package collection for monitoring
OUTPUT_DIR="/backup/support-packages"
RETENTION_DAYS=7
# Generate with date-based naming
redisctl enterprise support-package cluster \
-o "$OUTPUT_DIR/daily-$(date +%Y%m%d).tar.gz"
# Clean up old packages
find "$OUTPUT_DIR" -name "daily-*.tar.gz" \
-mtime +$RETENTION_DAYS -delete
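The `find -mtime` cleanup above can also be done portably in Python; this sketch (file names and retention window are illustrative) removes daily packages older than the retention period:

```python
import os
import tempfile
import time
from pathlib import Path

def prune_old_packages(output_dir, retention_days=7, pattern="daily-*.tar.gz"):
    """Delete matching packages whose mtime is older than retention_days."""
    cutoff = time.time() - retention_days * 86400
    removed = []
    for pkg in Path(output_dir).glob(pattern):
        if pkg.stat().st_mtime < cutoff:
            pkg.unlink()
            removed.append(pkg.name)
    return removed

# Demo against a temporary directory: one stale file, one fresh file.
tmp = tempfile.mkdtemp()
stale = Path(tmp, "daily-20240101.tar.gz")
fresh = Path(tmp, "daily-20240115.tar.gz")
for f in (stale, fresh):
    f.touch()
os.utime(stale, (time.time() - 10 * 86400,) * 2)  # backdate 10 days
print(prune_old_packages(tmp))  # only the stale file is removed
```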
3. Pre-incident Collection
# Collect baseline before maintenance
redisctl enterprise support-package cluster \
-o "baseline-pre-upgrade-$(date +%Y%m%d).tar.gz"
# Perform upgrade...
# Collect post-change package
redisctl enterprise support-package cluster \
-o "post-upgrade-$(date +%Y%m%d).tar.gz"
Integration with Support Workflow
1. Generate Package
redisctl enterprise support-package cluster
2. Verify Package
# Check file size and type
ls -lh support-package-*.tar.gz
file support-package-*.tar.gz
# Quick content verification
tar -tzf support-package-*.tar.gz | head -20
3. Upload to Support
- Navigate to https://support.redis.com/upload
- Select your case number
- Upload the tar.gz file directly
- Add description of the issue
4. Clean Up
# Remove local copy after successful upload
rm support-package-*.tar.gz
Troubleshooting
Package Generation Fails
# Check cluster connectivity
redisctl enterprise cluster get
# Verify credentials
redisctl profile list
# Try with explicit credentials
export REDIS_ENTERPRISE_URL="https://your-cluster:9443"
export REDIS_ENTERPRISE_USER="your-user"
export REDIS_ENTERPRISE_PASSWORD="your-password"
export REDIS_ENTERPRISE_INSECURE="true"
Timeout Issues
# Increase timeout for large clusters
redisctl enterprise support-package cluster \
--wait --wait-timeout 3600 # 1 hour
Permission Denied
# Use a writable directory
redisctl enterprise support-package cluster \
-o /tmp/support.tar.gz
# Or fix permissions
chmod 755 ./output-directory
Comparison with debug-info
Feature | debug-info | support-package |
---|---|---|
Binary download | ✅ | ✅ |
Progress indicators | ✅ | ✅ Enhanced |
Pre-flight checks | ❌ | ✅ |
Smart naming | Basic | Advanced |
Async operations | ❌ | ✅ |
Status checking | ❌ | ✅ |
Clear next steps | ❌ | ✅ |
Cluster info display | ❌ | ✅ |
CI/CD Integration with JSON Output
The support-package commands fully support structured JSON output for automation and CI/CD pipelines.
Basic JSON Output
# Generate package with JSON output
redisctl enterprise support-package cluster -o json
# Output:
{
"success": true,
"package_type": "cluster",
"file_path": "support-package-cluster-20240115T143000.tar.gz",
"file_size": 510234567,
"file_size_display": "487.3 MB",
"elapsed_seconds": 154,
"cluster_name": "prod-cluster-01",
"cluster_version": "7.2.4-92",
"message": "Support package created successfully",
"timestamp": "2024-01-15T14:32:34Z"
}
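A minimal Python consumer of this payload, using only field names taken from the sample above:

```python
import json

sample = """{
  "success": true,
  "file_path": "support-package-cluster-20240115T143000.tar.gz",
  "file_size": 510234567,
  "file_size_display": "487.3 MB",
  "elapsed_seconds": 154,
  "cluster_name": "prod-cluster-01"
}"""

result = json.loads(sample)
if result["success"]:
    print(f"{result['file_path']} ({result['file_size_display']}) "
          f"in {result['elapsed_seconds']}s")
else:
    # a failed run is expected to carry an 'error' field instead
    print("generation failed:", result.get("error", "unknown"))
```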
CI/CD Script Examples
Automated Collection on Failure
#!/bin/bash
# collect-support-on-failure.sh
# Run tests
if ! ./run-tests.sh; then
echo "Tests failed, collecting support package..."
# Generate support package with JSON output
result=$(redisctl enterprise support-package cluster -o json)
# Check if successful
if [ "$(echo "$result" | jq -r '.success')" = "true" ]; then
file_path=$(echo "$result" | jq -r '.file_path')
file_size=$(echo "$result" | jq -r '.file_size_display')
echo "Support package created: $file_path ($file_size)"
# Upload to artifact storage
aws s3 cp "$file_path" "s3://support-packages/$(date +%Y%m%d)/"
# Create support ticket
curl -X POST https://support.redis.com/api/tickets \
-H "Authorization: Bearer $SUPPORT_TOKEN" \
-d @- <<EOF
{
"title": "CI Test Failure - $(date)",
"priority": "high",
"attachment": "$file_path",
"metadata": $(echo "$result" | jq -c .)
}
EOF
# Clean up local file
rm "$file_path"
else
echo "Failed to create support package"
echo "$result" | jq -r '.error'
exit 1
fi
fi
GitHub Actions Integration
name: Support Package Collection
on:
workflow_dispatch:
schedule:
- cron: '0 0 * * 0' # Weekly on Sunday
jobs:
collect-support:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Install redisctl
run: |
curl -L https://github.com/joshrotenberg/redisctl/releases/latest/download/redisctl-linux-amd64.tar.gz | tar xz
sudo mv redisctl /usr/local/bin/
- name: Configure Redis Enterprise credentials
run: |
redisctl profile set enterprise \
--deployment enterprise \
--url ${{ secrets.REDIS_ENTERPRISE_URL }} \
--username ${{ secrets.REDIS_ENTERPRISE_USER }} \
--password ${{ secrets.REDIS_ENTERPRISE_PASSWORD }} \
--insecure
- name: Collect support package
id: support
run: |
# Generate package with JSON output
OUTPUT=$(redisctl enterprise support-package cluster -o json)
echo "$OUTPUT" > support-result.json
# Extract key fields
SUCCESS=$(echo "$OUTPUT" | jq -r '.success')
FILE_PATH=$(echo "$OUTPUT" | jq -r '.file_path')
FILE_SIZE=$(echo "$OUTPUT" | jq -r '.file_size_display')
# Set outputs for next steps
echo "success=$SUCCESS" >> $GITHUB_OUTPUT
echo "file_path=$FILE_PATH" >> $GITHUB_OUTPUT
echo "file_size=$FILE_SIZE" >> $GITHUB_OUTPUT
- name: Upload artifact
if: steps.support.outputs.success == 'true'
uses: actions/upload-artifact@v4
with:
name: support-package-${{ github.run_id }}
path: ${{ steps.support.outputs.file_path }}
retention-days: 30
- name: Create issue on large package
if: steps.support.outputs.success == 'true'
run: |
FILE_SIZE_BYTES=$(jq -r '.file_size' support-result.json)
# If package is over 1GB, create an issue
if [ "$FILE_SIZE_BYTES" -gt 1073741824 ]; then
gh issue create \
--title "Large support package detected" \
--body "Support package size: ${{ steps.support.outputs.file_size }}" \
--label monitoring
fi
Jenkins Pipeline
pipeline {
agent any
stages {
stage('Health Check') {
steps {
script {
def clusterHealth = sh(
script: 'redisctl enterprise cluster get -o json',
returnStdout: true
).trim()
def health = readJSON text: clusterHealth
if (health.data.state != 'active') {
echo "Cluster unhealthy, generating support package..."
def supportResult = sh(
script: 'redisctl enterprise support-package cluster -o json',
returnStdout: true
).trim()
def support = readJSON text: supportResult
if (support.success) {
archiveArtifacts artifacts: support.file_path
// Send notification
emailext (
subject: "Redis Cluster Issue - Support Package Generated",
body: """
Cluster State: ${health.data.state}
Support Package: ${support.file_path}
Size: ${support.file_size_display}
Generated at: ${support.timestamp}
""",
to: 'ops-team@company.com'
)
}
}
}
}
}
}
}
Terraform Integration
# Generate support package before infrastructure changes
resource "null_resource" "pre_change_support" {
provisioner "local-exec" {
command = <<-EOT
# Generate support package and capture output
OUTPUT=$(redisctl enterprise support-package cluster -o json)
# Save to state bucket
if [ "$(echo "$OUTPUT" | jq -r '.success')" = "true" ]; then
FILE=$(echo "$OUTPUT" | jq -r '.file_path')
aws s3 cp "$FILE" "s3://terraform-state/support-packages/pre-${timestamp()}/"
fi
EOT
}
triggers = {
always_run = timestamp()
}
}
Parsing JSON Output in Different Languages
Python
import json
import subprocess
# Generate support package
result = subprocess.run(
['redisctl', 'enterprise', 'support-package', 'cluster', '-o', 'json'],
capture_output=True,
text=True
)
# Parse JSON output
data = json.loads(result.stdout)
if data['success']:
print(f"Package created: {data['file_path']}")
print(f"Size: {data['file_size_display']}")
print(f"Time taken: {data['elapsed_seconds']} seconds")
# Upload to monitoring system
metrics.send('support_package.size', data['file_size'])
metrics.send('support_package.generation_time', data['elapsed_seconds'])
else:
print(f"Error: {data.get('error', 'Unknown error')}")
Node.js
const { exec } = require('child_process');
const fs = require('fs');
// Generate support package
exec('redisctl enterprise support-package cluster -o json', (error, stdout, stderr) => {
if (error) {
console.error(`Error: ${error.message}`);
return;
}
const result = JSON.parse(stdout);
if (result.success) {
console.log(`Package created: ${result.file_path}`);
console.log(`Size: ${result.file_size_display}`);
// Upload to cloud storage
uploadToS3(result.file_path).then(() => {
// Clean up local file
fs.unlinkSync(result.file_path);
});
}
});
Monitoring and Alerting
#!/bin/bash
# monitor-support-package.sh
# Generate package and check size
result=$(redisctl enterprise support-package cluster -o json)
if [ "$(echo "$result" | jq -r '.success')" = "true" ]; then
size_bytes=$(echo "$result" | jq -r '.file_size')
elapsed=$(echo "$result" | jq -r '.elapsed_seconds')
# Send metrics to monitoring system
curl -X POST http://metrics.internal/api/v1/metrics \
-H "Content-Type: application/json" \
-d @- <<EOF
{
"metrics": [
{
"name": "redis.support_package.size_bytes",
"value": $size_bytes,
"timestamp": "$(date -u +%Y-%m-%dT%H:%M:%SZ)"
},
{
"name": "redis.support_package.generation_seconds",
"value": $elapsed,
"timestamp": "$(date -u +%Y-%m-%dT%H:%M:%SZ)"
}
]
}
EOF
# Alert if package is too large
if [ "$size_bytes" -gt 2147483648 ]; then # 2GB
curl -X POST http://alerts.internal/api/v1/alert \
-H "Content-Type: application/json" \
-d "{\"severity\": \"warning\", \"message\": \"Large support package: $(echo "$result" | jq -r '.file_size_display')\"}"
fi
fi
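The byte thresholds in the script above (2 GiB = 2147483648) are easy to get wrong by hand; a small helper with explicit GiB cutoffs makes the alerting logic testable. This is an illustrative sketch, not redisctl functionality:

```python
def alert_severity(size_bytes, warn_gib=1, crit_gib=2):
    """Map a support-package size to an alert level (thresholds in GiB)."""
    gib = 1024 ** 3
    if size_bytes > crit_gib * gib:
        return "critical"
    if size_bytes > warn_gib * gib:
        return "warning"
    return "ok"

print(alert_severity(510234567))    # ~487 MB package -> ok
print(alert_severity(2147483649))   # just over the 2 GiB cutoff -> critical
```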
Related Commands
- Debug Info Commands - Lower-level diagnostic collection
- Logs Commands - View logs without full package
- Cluster Commands - Check cluster health
- Database Commands - Database management
Debug Info Commands
Collect diagnostic information and support packages for troubleshooting Redis Enterprise clusters.
Overview
Debug info commands gather comprehensive diagnostic data from Redis Enterprise clusters, nodes, and databases. As of the Phase 1 improvements, these commands download binary tar.gz support packages that can be uploaded directly to Redis Support.
Available Commands
Collect Cluster Support Package
# Download cluster-wide support package (recommended)
redisctl enterprise debug-info all
# With custom output file
redisctl enterprise debug-info all --file /tmp/cluster-support.tar.gz
# Use new API endpoint (for Redis Enterprise 7.4+)
redisctl enterprise debug-info all --use-new-api
Output: Downloads a tar.gz file containing:
- Complete cluster configuration
- All node information and logs
- Database configurations
- System metrics and diagnostics
- Network configuration
- Performance data
Default filename: support-package-cluster-{timestamp}.tar.gz
Collect Node Support Package
# Download support package for all nodes
redisctl enterprise debug-info node
# Download for specific node
redisctl enterprise debug-info node 1
# With custom output
redisctl enterprise debug-info node 1 --file /tmp/node1-support.tar.gz
Output: Downloads a tar.gz file containing:
- Node configuration and state
- System resources and metrics
- Local log files
- Process information
- Network configuration
Default filename:
- All nodes: support-package-nodes-{timestamp}.tar.gz
- Specific node: support-package-node-{uid}-{timestamp}.tar.gz
Collect Database Support Package
# Download support package for specific database
redisctl enterprise debug-info database 1
# With custom output
redisctl enterprise debug-info database 1 --file /tmp/db1-support.tar.gz
# Use new API endpoint
redisctl enterprise debug-info database 1 --use-new-api
Output: Downloads a tar.gz file containing:
- Database configuration
- Shard distribution and state
- Replication information
- Performance metrics
- Recent operations and logs
Default filename: support-package-db-{uid}-{timestamp}.tar.gz
Binary Download Support (Phase 1)
Starting with v0.5.1, all debug-info commands properly handle binary responses:
# Downloads actual tar.gz file (not JSON)
redisctl enterprise debug-info all
# Verify the downloaded file
file support-package-cluster-*.tar.gz
# Output: gzip compressed data, from Unix
# Extract and view contents
tar -tzf support-package-cluster-*.tar.gz | head
API Endpoint Compatibility
The tool supports both old (deprecated) and new API endpoints:
Command | Old Endpoint (default) | New Endpoint (--use-new-api) |
---|---|---|
all | /v1/debuginfo/all | /v1/cluster/debuginfo |
node | /v1/debuginfo/node | /v1/nodes/{uid}/debuginfo |
database | /v1/debuginfo/all/bdb/{uid} | /v1/bdbs/{uid}/debuginfo |
Note: Old endpoints are deprecated as of Redis Enterprise 7.4. Use --use-new-api for newer clusters.
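The endpoint selection can be sketched as a simple lookup. The paths are copied from the table above; the helper itself is illustrative, not redisctl internals:

```python
OLD_ENDPOINTS = {
    "all": "/v1/debuginfo/all",
    "node": "/v1/debuginfo/node",
    "database": "/v1/debuginfo/all/bdb/{uid}",
}
NEW_ENDPOINTS = {
    "all": "/v1/cluster/debuginfo",
    "node": "/v1/nodes/{uid}/debuginfo",
    "database": "/v1/bdbs/{uid}/debuginfo",
}

def debuginfo_path(command, uid=None, use_new_api=False):
    """Pick the deprecated or 7.4+ endpoint and fill in the uid."""
    table = NEW_ENDPOINTS if use_new_api else OLD_ENDPOINTS
    return table[command].format(uid=uid)

print(debuginfo_path("database", uid=1))                    # old endpoint
print(debuginfo_path("database", uid=1, use_new_api=True))  # new endpoint
```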
Common Use Cases
Quick Support Package for Troubleshooting
# Generate support package with automatic naming
redisctl enterprise debug-info all
# Output shows:
# ✓ Support package created successfully
# File: support-package-cluster-20250916-110539.tar.gz
# Size: 305.7 KB
Preparing for Support Ticket
# 1. Generate cluster support package
redisctl enterprise debug-info all --file support-case-12345.tar.gz
# 2. Verify the file
ls -lh support-case-12345.tar.gz
file support-case-12345.tar.gz
# 3. Upload to Redis Support portal
# Reference your case number: 12345
Database-Specific Issues
# Generate package for problematic database
redisctl enterprise debug-info database 1
# The package includes database-specific logs and metrics
# Upload directly to support ticket
Automated Collection Script
#!/bin/bash
# Collect support packages for all components
TIMESTAMP=$(date +%Y%m%d-%H%M%S)
OUTPUT_DIR="./support-$TIMESTAMP"
mkdir -p "$OUTPUT_DIR"
echo "Collecting cluster support package..."
redisctl enterprise debug-info all \
--file "$OUTPUT_DIR/cluster.tar.gz"
echo "Collecting node support packages..."
for node_id in 1 2 3; do
redisctl enterprise debug-info node $node_id \
--file "$OUTPUT_DIR/node-$node_id.tar.gz"
done
echo "Support packages saved to $OUTPUT_DIR"
Important Notes
Security Considerations
- Support packages contain sensitive information (hostnames, IPs, configurations)
- Review contents before sharing if needed
- Delete local copies after uploading to support
- Use secure channels for transmission
Performance Impact
- Package generation may temporarily impact cluster performance
- Large clusters can generate packages over 1GB
- Run during maintenance windows when possible
- Network bandwidth considerations for remote clusters
File Management
- Files are saved in current directory by default
- Use --file to specify a custom location
- Automatic timestamps prevent overwriting
- Clean up old support packages regularly
Progress Indicators
The tool now shows progress during package generation:
⠋ Generating support package...
✓ Support package created successfully
File: support-package-cluster-20250916-110539.tar.gz
Size: 305.7 KB
Troubleshooting
Authentication Errors
If you get authentication errors, ensure correct credentials:
# Check your profile
redisctl profile list
# Use environment variables for testing
export REDIS_ENTERPRISE_URL="https://localhost:9443"
export REDIS_ENTERPRISE_USER="admin@redis.local"
export REDIS_ENTERPRISE_PASSWORD="your_password"
export REDIS_ENTERPRISE_INSECURE="true"
Large File Sizes
Support packages are already gzip-compressed, so re-compressing them gains little. For very large packages, split the file for upload instead:
# Split large files for upload
split -b 100M support-package.tar.gz support-part-
# Reassemble on the receiving side
cat support-part-* > support-package.tar.gz
Verify Package Contents
# List contents without extracting
tar -tzf support-package-cluster-*.tar.gz
# Extract specific files
tar -xzf support-package-cluster-*.tar.gz logs/
# View package info
gzip -l support-package-cluster-*.tar.gz
Related Commands
- Support Package Commands - Enhanced support package workflow (Phase 2)
- Logs Commands - View cluster logs directly
- Stats Commands - Monitor performance metrics
- Cluster Commands - Check cluster health
Diagnostics
The diagnostics commands provide tools for monitoring and troubleshooting Redis Enterprise cluster health, running diagnostic checks, and generating diagnostic reports.
Overview
Redis Enterprise includes a built-in diagnostics system that performs various health checks on the cluster, nodes, and databases. These checks help identify potential issues before they become critical problems.
Available Commands
Get Diagnostics Configuration
Retrieve the current diagnostics configuration:
# Get full diagnostics config
redisctl enterprise diagnostics get
# Get specific configuration fields
redisctl enterprise diagnostics get -q "enabled_checks"
Update Diagnostics Configuration
Modify diagnostics settings:
# Update from JSON file
redisctl enterprise diagnostics update --data @diagnostics-config.json
# Update from stdin
echo '{"check_interval": 300}' | redisctl enterprise diagnostics update --data -
# Disable specific checks
redisctl enterprise diagnostics update --data '{"disabled_checks": ["memory_check", "disk_check"]}'
Run Diagnostics Checks
Trigger diagnostic checks manually:
# Run all diagnostics
redisctl enterprise diagnostics run
# Run with specific parameters
redisctl enterprise diagnostics run --data '{"checks": ["connectivity", "resources"]}'
List Available Checks
View all available diagnostic checks:
# List all checks
redisctl enterprise diagnostics list-checks
# Output as table
redisctl enterprise diagnostics list-checks -o table
Get Latest Report
Retrieve the most recent diagnostics report:
# Get latest report
redisctl enterprise diagnostics last-report
# Get specific sections
redisctl enterprise diagnostics last-report -q "cluster_health"
Get Specific Report
Retrieve a diagnostics report by ID:
# Get report by ID
redisctl enterprise diagnostics get-report <report_id>
# Get report summary only
redisctl enterprise diagnostics get-report <report_id> -q "summary"
List All Reports
View all available diagnostics reports:
# List all reports
redisctl enterprise diagnostics list-reports
# List recent reports only
redisctl enterprise diagnostics list-reports --data '{"limit": 10}'
# Filter by date range
redisctl enterprise diagnostics list-reports --data '{"start_date": "2024-01-01", "end_date": "2024-01-31"}'
Diagnostic Check Types
Common diagnostic checks include:
- Resource Checks: memory utilization, CPU usage, disk space, network bandwidth
- Cluster Health: node connectivity, replication status, shard distribution, quorum status
- Database Health: endpoint availability, persistence status, backup status, module functionality
- Security Checks: certificate expiration, authentication status, encryption settings, ACL configuration
Configuration Examples
Enable Automatic Diagnostics
{
"enabled": true,
"auto_run": true,
"check_interval": 3600,
"retention_days": 30,
"email_alerts": true,
"alert_recipients": ["ops@example.com"]
}
Configure Check Thresholds
{
"thresholds": {
"memory_usage_percent": 80,
"disk_usage_percent": 85,
"cpu_usage_percent": 75,
"certificate_expiry_days": 30
}
}
Disable Specific Checks
{
"disabled_checks": [
"backup_validation",
"module_check"
],
"check_timeout": 30
}
Practical Examples
Daily Health Check Script
#!/bin/bash
# Run daily diagnostics and email report
# Run diagnostics
redisctl enterprise diagnostics run
# Get latest report
REPORT=$(redisctl enterprise diagnostics last-report)
# Check for critical issues
CRITICAL=$(echo "$REPORT" | jq '.issues | map(select(.severity == "critical")) | length')
if [ "$CRITICAL" -gt 0 ]; then
# Send alert for critical issues
echo "$REPORT" | mail -s "Redis Enterprise: Critical Issues Found" ops@example.com
fi
Monitor Cluster Health
# Continuous health monitoring
watch -n 60 'redisctl enterprise diagnostics last-report -q "summary" -o table'
Generate Monthly Report
# Get all reports for the month
redisctl enterprise diagnostics list-reports \
--data '{"start_date": "2024-01-01", "end_date": "2024-01-31"}' \
-o json > monthly-diagnostics.json
# Extract key metrics
jq '.[] | {date: .timestamp, health_score: .summary.health_score}' monthly-diagnostics.json
Pre-Maintenance Check
# Run comprehensive diagnostics before maintenance
redisctl enterprise diagnostics run --data '{
"comprehensive": true,
"include_logs": true,
"validate_backups": true
}'
# Wait for completion and check results
sleep 30
redisctl enterprise diagnostics last-report -q "ready_for_maintenance"
Report Structure
Diagnostics reports typically include:
{
"report_id": "diag-12345",
"timestamp": "2024-01-15T10:30:00Z",
"cluster_id": "cluster-1",
"summary": {
"health_score": 95,
"total_checks": 50,
"passed": 48,
"warnings": 1,
"failures": 1
},
"cluster_health": {
"nodes": [...],
"databases": [...],
"replication": {...}
},
"resource_usage": {
"memory": {...},
"cpu": {...},
"disk": {...}
},
"issues": [
{
"severity": "warning",
"component": "node-2",
"message": "Disk usage at 82%",
"recommendation": "Consider adding storage"
}
],
"recommendations": [...]
}
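Given a report with this shape, a short Python pass can compute the headline numbers. The structure mirrors the sample above; real field names may vary by version:

```python
report = {
    "summary": {"total_checks": 50, "passed": 48, "warnings": 1, "failures": 1},
    "issues": [
        {"severity": "warning", "component": "node-2",
         "message": "Disk usage at 82%"},
    ],
}

summary = report["summary"]
pass_rate = 100 * summary["passed"] / summary["total_checks"]
critical = [i for i in report["issues"] if i["severity"] == "critical"]

print(f"pass rate: {pass_rate:.0f}%, critical issues: {len(critical)}")
```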
Best Practices
- Schedule Regular Checks - Run diagnostics daily or weekly
- Monitor Trends - Track health scores over time
- Set Up Alerts - Configure email alerts for critical issues
- Archive Reports - Keep historical reports for trend analysis
- Pre-Maintenance Checks - Always run diagnostics before maintenance
- Custom Thresholds - Adjust thresholds based on your environment
Integration with Monitoring
The diagnostics system can be integrated with external monitoring tools:
# Export to Prometheus format
redisctl enterprise diagnostics last-report -q "metrics" | \
prometheus-push-gateway
# Send to logging system
redisctl enterprise diagnostics last-report | \
logger -t redis-diagnostics
# Create JIRA ticket for issues
ISSUES=$(redisctl enterprise diagnostics last-report -q "issues")
if [ -n "$ISSUES" ]; then
create-jira-ticket --project OPS --summary "Redis Diagnostics Issues" --description "$ISSUES"
fi
Troubleshooting
Diagnostics Not Running
# Check if diagnostics are enabled
redisctl enterprise diagnostics get -q "enabled"
# Enable diagnostics
redisctl enterprise diagnostics update --data '{"enabled": true}'
Reports Not Generated
# Check last run time
redisctl enterprise diagnostics get -q "last_run"
# Trigger manual run
redisctl enterprise diagnostics run
Missing Checks
# List disabled checks
redisctl enterprise diagnostics get -q "disabled_checks"
# Re-enable all checks
redisctl enterprise diagnostics update --data '{"disabled_checks": []}'
Related Commands
- enterprise cluster - Cluster management and health
- enterprise stats - Performance statistics
- enterprise logs - System logs and events
- enterprise action - Monitor diagnostic task progress
License Management Commands
Manage Redis Enterprise licenses with comprehensive tools for compliance monitoring, multi-instance management, and automated workflows.
Overview
The license commands provide powerful capabilities for managing Redis Enterprise licenses:
- View and update license information
- Monitor expiration across multiple instances
- Generate compliance reports
- Bulk license updates across deployments
- Automated monitoring and alerting
Core License Commands
Get License Information
# Get full license details
redisctl enterprise license get
# Get specific fields with JMESPath
redisctl enterprise license get -q 'expiration_date'
redisctl enterprise license get -q '{name: cluster_name, expires: expiration_date}'
Update License
# Update with JSON data
redisctl enterprise license update --data '{
"license": "YOUR_LICENSE_KEY_HERE"
}'
# Update from file
redisctl enterprise license update --data @new-license.json
# Update from stdin
echo '{"license": "..."}' | redisctl enterprise license update --data -
Upload License File
# Upload a license file directly
redisctl enterprise license upload --file /path/to/license.txt
# Supports both raw license text and JSON format
redisctl enterprise license upload --file license.json
Validate License
# Validate license before applying
redisctl enterprise license validate --data @license.json
# Validate from stdin
cat license.txt | redisctl enterprise license validate --data -
Check License Expiration
# Get expiration information
redisctl enterprise license expiry
# Check if expiring soon
redisctl enterprise license expiry -q 'warning'
# Get days remaining
redisctl enterprise license expiry -q 'days_remaining'
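If you only have the raw expiration_date, days remaining can be derived locally. ISO-8601 timestamps are assumed here; when the API returns a days_remaining field, prefer that value:

```python
from datetime import datetime, timezone

def days_until(expiration_iso, now=None):
    """Days from `now` until an ISO-8601 expiration timestamp."""
    exp = datetime.fromisoformat(expiration_iso.replace("Z", "+00:00"))
    now = now or datetime.now(timezone.utc)
    return (exp - now).days

# Fixed reference date so the result is deterministic.
ref = datetime(2024, 1, 1, tzinfo=timezone.utc)
print(days_until("2024-03-01T00:00:00Z", now=ref))  # 60 (2024 is a leap year)
```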
View Licensed Features
# List all licensed features
redisctl enterprise license features
# Check specific features
redisctl enterprise license features -q 'flash_enabled'
redisctl enterprise license features -q 'modules'
License Usage Report
# Get current usage vs limits
redisctl enterprise license usage
# Get RAM usage
redisctl enterprise license usage -q 'ram'
# Check shard availability
redisctl enterprise license usage -q 'shards.available'
Multi-Instance License Workflows
License Audit Across All Profiles
# Audit all configured Redis Enterprise instances
redisctl enterprise workflow license audit
# Show only expiring licenses (within 30 days)
redisctl enterprise workflow license audit --expiring
# Show only expired licenses
redisctl enterprise workflow license audit --expired
# Export as JSON for processing
redisctl enterprise workflow license audit -o json > license-audit.json
Bulk License Updates
# Update license across all enterprise profiles
redisctl enterprise workflow license bulk-update \
--profiles all \
--data @new-license.json
# Update specific profiles
redisctl enterprise workflow license bulk-update \
--profiles "prod-east,prod-west,staging" \
--data @new-license.json
# Dry run to see what would be updated
redisctl enterprise workflow license bulk-update \
--profiles all \
--data @new-license.json \
--dry-run
License Compliance Report
# Generate comprehensive compliance report
redisctl enterprise workflow license report
# Export as CSV for spreadsheets
redisctl enterprise workflow license report --format csv > compliance-report.csv
# Generate JSON report for automation
redisctl enterprise workflow license report -o json
License Monitoring
# Monitor all profiles for expiring licenses
redisctl enterprise workflow license monitor
# Custom warning threshold (default 30 days)
redisctl enterprise workflow license monitor --warning-days 60
# Exit with error code if any licenses are expiring (for CI/CD)
redisctl enterprise workflow license monitor --fail-on-warning
Automation Examples
CI/CD License Check
#!/bin/bash
# Check license status in CI/CD pipeline
if ! redisctl enterprise workflow license monitor --warning-days 14 --fail-on-warning; then
echo "ERROR: License issues detected!"
exit 1
fi
License Expiration Script
#!/bin/bash
# Email alert for expiring licenses
AUDIT=$(redisctl enterprise workflow license audit --expiring -o json)
COUNT=$(echo "$AUDIT" | jq 'length')
if [ "$COUNT" -gt 0 ]; then
echo "Warning: $COUNT licenses expiring soon!" | \
mail -s "Redis Enterprise License Alert" admin@company.com
echo "$AUDIT" | jq -r '.[] |
"Profile: \(.profile) - Expires: \(.expiration_date) (\(.days_remaining) days)"'
fi
Monthly Compliance Report
#!/bin/bash
# Generate monthly compliance report
REPORT_DATE=$(date +%Y-%m)
REPORT_FILE="license-compliance-${REPORT_DATE}.csv"
# Generate CSV report
redisctl enterprise workflow license report --format csv > "$REPORT_FILE"
# Email the report
echo "Please find attached the monthly license compliance report." | \
mail -s "Redis License Report - $REPORT_DATE" \
-a "$REPORT_FILE" \
compliance@company.com
Automated License Renewal
#!/bin/bash
# Automatically apply new license when available
LICENSE_FILE="/secure/path/new-license.json"
if [ -f "$LICENSE_FILE" ]; then
# Validate the license first
if redisctl enterprise license validate --data @"$LICENSE_FILE"; then
# Apply to all production instances
redisctl enterprise workflow license bulk-update \
--profiles "prod-east,prod-west" \
--data @"$LICENSE_FILE"
# Archive the applied license
mv "$LICENSE_FILE" "/secure/path/applied/$(date +%Y%m%d)-license.json"
else
echo "ERROR: Invalid license file!"
exit 1
fi
fi
Profile Management for Multi-Instance
Setup Multiple Profiles
# Add production profiles
redisctl profile set prod-east \
--deployment-type enterprise \
--url https://redis-east.company.com:9443 \
--username admin@redis.local \
--password $REDIS_PASS_EAST
redisctl profile set prod-west \
--deployment-type enterprise \
--url https://redis-west.company.com:9443 \
--username admin@redis.local \
--password $REDIS_PASS_WEST
# Add staging profile
redisctl profile set staging \
--deployment-type enterprise \
--url https://redis-staging.company.com:9443 \
--username admin@redis.local \
--password $REDIS_PASS_STAGING
Check License Per Profile
# Check specific profile
redisctl -p prod-east enterprise license expiry
redisctl -p prod-west enterprise license usage
redisctl -p staging enterprise license features
Common Use Cases
Pre-Renewal Planning
# Get usage across all instances for capacity planning
for profile in $(redisctl profile list -q '[].name'); do
echo "=== Profile: $profile ==="
redisctl -p "$profile" enterprise license usage -o yaml
done
License Synchronization
# Ensure all instances have the same license
MASTER_LICENSE=$(redisctl -p prod-east enterprise license get -o json)
echo "$MASTER_LICENSE" | \
redisctl enterprise workflow license bulk-update \
--profiles "prod-west,staging,dev" \
--data -
Compliance Dashboard Data
# Generate JSON data for dashboard
{
echo '{"timestamp": "'$(date -Iseconds)'",'
echo '"instances": '
redisctl enterprise workflow license audit -o json
echo '}'
} > dashboard-data.json
Output Formats
All commands support multiple output formats:
# JSON output (default)
redisctl enterprise license get -o json
# YAML output
redisctl enterprise license get -o yaml
# Table output
redisctl enterprise license get -o table
JMESPath Filtering
Use JMESPath queries to extract specific information:
# Get expiration dates for all profiles
redisctl enterprise workflow license audit -q '[].{profile: profile, expires: expiration_date}'
# Filter only expiring licenses
redisctl enterprise workflow license audit -q '[?expiring_soon==`true`]'  # single quotes keep the JMESPath backticks away from the shell
# Get usage percentages
redisctl enterprise license usage -q '{
ram_used_pct: (ram.used_gb / ram.limit_gb * `100`),
shards_used_pct: (shards.used / shards.limit * `100`)
}'
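Arithmetic support varies across JMESPath implementations, so computing the percentages client-side is a safe fallback. Field names follow the query above; the numbers are illustrative:

```python
usage = {
    "ram": {"used_gb": 48, "limit_gb": 64},
    "shards": {"used": 20, "limit": 25},
}

pcts = {
    "ram_used_pct": 100 * usage["ram"]["used_gb"] / usage["ram"]["limit_gb"],
    "shards_used_pct": 100 * usage["shards"]["used"] / usage["shards"]["limit"],
}
print(pcts)  # {'ram_used_pct': 75.0, 'shards_used_pct': 80.0}
```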
Troubleshooting
Common Issues
1. License validation fails
# Check license format
redisctl enterprise license validate --data @license.json
2. Bulk update fails for some profiles
# Use dry-run to identify issues
redisctl enterprise workflow license bulk-update --profiles all --data @license.json --dry-run
3. Monitoring shows unexpected results
# Verify profile configurations
redisctl profile list
# Test connection to each profile
for p in $(redisctl profile list -q '[].name'); do
  echo "Testing $p..."
  redisctl -p "$p" enterprise cluster get -q 'name' || echo "Failed: $p"
done
Notes
- License files can be in JSON format or raw license text
- Workflow commands operate on all configured enterprise profiles
- Use --dry-run for bulk operations to preview changes
- Monitor commands can integrate with CI/CD pipelines using exit codes
- CSV export format is ideal for spreadsheet analysis and reporting
- All sensitive license data should be handled securely
Database Migration
The migration commands provide tools for database import/export operations and migration status tracking in Redis Enterprise.
Available Commands
Get Migration Status
Check the status of a specific migration operation:
# Get migration status
redisctl enterprise migration get 12345
# Get migration status as YAML
redisctl enterprise migration get 12345 -o yaml
# Extract specific fields
redisctl enterprise migration get 12345 -q '{status: status, progress: progress_percentage}'
# Check if migration is complete
redisctl enterprise migration get 12345 -q 'status == "completed"'
Export Database
Export database data for backup or migration:
# Export database
redisctl enterprise migration export 1
# Export and save task ID
TASK_ID=$(redisctl enterprise migration export 1 -q 'task_id')
# Monitor export progress
redisctl enterprise action get $TASK_ID
# Export with specific options (via database commands)
redisctl enterprise database export 1 --data '{
"export_type": "rdb",
"compression": "gzip"
}'
Import Database
Import data into a database:
# Import from RDB file URL
cat <<EOF | redisctl enterprise migration import 1 --data -
{
"source_type": "url",
"source_url": "https://storage.example.com/backup.rdb.gz",
"import_type": "rdb"
}
EOF
# Import from another database
redisctl enterprise migration import 2 --data '{
"source_type": "database",
"source_database_uid": 1
}'
# Import from file
redisctl enterprise migration import 3 --data @import-config.json
Output Examples
Migration Status
{
"uid": 12345,
"status": "in_progress",
"type": "import",
"database_uid": 1,
"started": "2024-03-15T10:00:00Z",
"progress_percentage": 65,
"estimated_completion": "2024-03-15T10:30:00Z",
"bytes_transferred": 1073741824,
"total_bytes": 1649267441
}
Export Response
{
"task_id": "task-export-67890",
"status": "queued",
"database_uid": 1,
"export_location": "s3://backups/db1-20240315.rdb.gz"
}
Import Response
{
"task_id": "task-import-11111",
"status": "started",
"database_uid": 2,
"source": "https://storage.example.com/backup.rdb.gz"
}
Common Use Cases
Database Backup
Create and manage database backups:
# Export database for backup
redisctl enterprise migration export 1
# Check export status
redisctl enterprise action list -q "[?contains(name, 'export')]"
# Download exported file (if accessible)
EXPORT_URL=$(redisctl enterprise action get <task_id> -q 'result.export_url')
curl -o backup.rdb.gz "$EXPORT_URL"
Database Cloning
Clone a database within the cluster:
# Export source database
EXPORT_TASK=$(redisctl enterprise migration export 1 -q 'task_id')
# Wait for export to complete
redisctl enterprise action wait $EXPORT_TASK
# Get export location
EXPORT_LOC=$(redisctl enterprise action get $EXPORT_TASK -q 'result.location')
# Import to new database
cat <<EOF | redisctl enterprise migration import 2 --data -
{
"source_type": "internal",
"source_location": "$EXPORT_LOC"
}
EOF
Cross-Cluster Migration
Migrate databases between clusters:
# On source cluster: Export database
redisctl enterprise migration export 1
# Note the export location
# Transfer file to destination cluster storage
# (Use appropriate method: S3, FTP, SCP, etc.)
# On destination cluster: Import database
cat <<EOF | redisctl enterprise migration import 1 --data -
{
"source_type": "url",
"source_url": "https://storage.example.com/export.rdb.gz",
"skip_verify_ssl": false
}
EOF
Scheduled Backups
Automate regular database exports:
#!/bin/bash
# backup.sh - Daily backup script
DBS=$(redisctl enterprise database list -q '[].uid' -o json | jq -r '.[]')
for DB in $DBS; do
echo "Backing up database $DB"
TASK=$(redisctl enterprise migration export $DB -q 'task_id')
# Store task IDs for monitoring
echo "$TASK:$DB:$(date +%Y%m%d)" >> backup-tasks.log
done
# Monitor all backup tasks
while read -r line; do
  TASK=$(echo "$line" | cut -d: -f1)
  DB=$(echo "$line" | cut -d: -f2)
STATUS=$(redisctl enterprise action get $TASK -q 'status')
echo "Database $DB backup: $STATUS"
done < backup-tasks.log
Migration Monitoring
Track migration progress and handle issues:
# List all migration-related tasks
redisctl enterprise action list -q "[?contains(name, 'migration') || contains(name, 'import') || contains(name, 'export')]"
# Monitor specific migration
MIGRATION_ID=12345
while true; do
STATUS=$(redisctl enterprise migration get $MIGRATION_ID -q 'status')
PROGRESS=$(redisctl enterprise migration get $MIGRATION_ID -q 'progress_percentage')
echo "Status: $STATUS, Progress: $PROGRESS%"
[ "$STATUS" = "completed" ] && break
sleep 10
done
# Check for errors
redisctl enterprise migration get $MIGRATION_ID -q 'error'
Error Handling
Handle migration failures:
# Check migration error details
redisctl enterprise migration get <uid> -q '{status: status, error: error_message, failed_at: failed_timestamp}'
# List failed migrations
redisctl enterprise action list -q "[?status == 'failed' && contains(name, 'migration')]"
# Retry failed import
FAILED_CONFIG=$(redisctl enterprise migration get <uid> -q 'configuration')
echo "$FAILED_CONFIG" | redisctl enterprise migration import <bdb_uid> --data -
Best Practices
- Pre-Migration Checks: Verify source and target compatibility
- Test Migrations: Always test with non-production data first
- Monitor Progress: Track migration status throughout the process
- Verify Data: Confirm data integrity after migration
- Schedule Wisely: Run large migrations during maintenance windows
- Keep Backups: Maintain backups before starting migrations
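The "Verify Data" step above can start with something as simple as comparing key counts. A minimal sketch using redis-cli (hostnames and ports are placeholders; DBSIZE equality only shows no keys were dropped, not value-level integrity):

```shell
#!/bin/bash
# Coarse post-migration check: compare key counts on source and target.
# Hostnames and ports below are placeholders; adjust for your endpoints.
set -euo pipefail

count_keys() {
  local host="$1" port="$2"
  # Keep only the digits so both raw and "(integer) N" output formats work
  redis-cli -h "$host" -p "$port" DBSIZE | tr -dc '0-9'
}

verify_counts() {
  local src tgt
  src=$(count_keys source.redis.local 16379)
  tgt=$(count_keys target.redis.local 16380)
  if [ "$src" -eq "$tgt" ]; then
    echo "OK: $src keys on both endpoints"
  else
    echo "MISMATCH: source=$src target=$tgt" >&2
    return 1
  fi
}

# verify_counts
```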
Troubleshooting
Import Failures
When imports fail:
# Check database status
redisctl enterprise database get <bdb_uid> -q 'status'
# Verify available memory
redisctl enterprise database get <bdb_uid> -q '{memory_size: memory_size, used_memory: used_memory}'
# Check cluster resources
redisctl enterprise cluster get -q 'resources'
# Review error logs
redisctl enterprise logs get --filter "database=$BDB_UID"
Export Issues
When exports fail:
# Check disk space on nodes
redisctl enterprise node list -q '[].{node: uid, disk_free: disk_free_size}'
# Verify database is accessible
redisctl enterprise database get <bdb_uid> -q 'status'
# Check export permissions
redisctl enterprise database get <bdb_uid> -q 'backup_configuration'
Related Commands
- redisctl enterprise database - Database management including import/export
- redisctl enterprise action - Track migration tasks
- redisctl enterprise cluster - Check cluster resources
- redisctl enterprise logs - View migration-related logs
Actions (Async Tasks)
Actions in Redis Enterprise represent asynchronous operations or tasks that are running or have completed. The action commands allow you to monitor and manage these background operations.
Overview
Many Redis Enterprise operations are asynchronous, returning an action ID that can be used to track progress. Actions include database creation/deletion, backup operations, imports/exports, and cluster maintenance tasks.
Available Commands
List All Actions
List all actions in the cluster with optional filtering:
# List all actions
redisctl enterprise action list
# Filter by status
redisctl enterprise action list --status completed
redisctl enterprise action list --status running
# Filter by type
redisctl enterprise action list --type bdb_backup
# Combine filters
redisctl enterprise action list --status running --type bdb_import
# Output as table
redisctl enterprise action list -o table
Get Action Details
Get detailed information about a specific action:
# Get action by UID
redisctl enterprise action get <action_uid>
# Get action with specific fields using JMESPath
redisctl enterprise action get <action_uid> -q "status"
Check Action Status
Quick status check for an action (returns just the status field):
redisctl enterprise action status <action_uid>
Cancel Running Action
Cancel a running action:
redisctl enterprise action cancel <action_uid>
List Actions for Database
List all actions for a specific database:
redisctl enterprise action list-for-bdb <bdb_uid>
# Filter by status for specific database
redisctl enterprise action list-for-bdb <bdb_uid> --status running
Action Types
Common action types you'll encounter:
- bdb_create - Database creation
- bdb_delete - Database deletion
- bdb_update - Database configuration update
- bdb_backup - Database backup operation
- bdb_import - Database import operation
- bdb_export - Database export operation
- crdb_create - Active-Active database creation
- node_join - Node joining cluster
- cluster_recovery - Cluster recovery operation
Action Statuses
Actions can have the following statuses:
- queued - Action is queued for execution
- running - Action is currently executing
- completed - Action completed successfully
- failed - Action failed with errors
- canceled - Action was canceled
Examples
Monitor Database Creation
# Create a database (returns action_uid)
ACTION_UID=$(redisctl enterprise database create --data @db.json -q "action_uid")
# Check status
redisctl enterprise action status $ACTION_UID
# Get full details when complete
redisctl enterprise action get $ACTION_UID
List Recent Failed Actions
# List failed actions in table format
redisctl enterprise action list --status failed -o table
# Get details of a failed action
redisctl enterprise action get <failed_action_uid> -q "{error: error_message, started: start_time}"
Cancel Long-Running Import
# List running imports
redisctl enterprise action list --status running --type bdb_import
# Cancel specific import
redisctl enterprise action cancel <import_action_uid>
Monitor All Database Actions
# Watch all actions for a database
watch -n 5 "redisctl enterprise action list-for-bdb 1 -o table"
Integration with Async Operations
The action commands work seamlessly with the --wait
flag available on create/update/delete operations:
# This uses action monitoring internally
redisctl enterprise database create --data @db.json --wait
# Equivalent to manually monitoring:
ACTION_UID=$(redisctl enterprise database create --data @db.json -q "action_uid")
while :; do
  STATUS=$(redisctl enterprise action status $ACTION_UID)
  # Treat queued as still pending; stop only on a terminal status
  case "$STATUS" in completed|failed|canceled) break ;; esac
  sleep 5
done
API Versions
The action commands support both v1 and v2 API endpoints:
- v2 endpoints (/v2/actions) are preferred when available
- v1 endpoints (/v1/actions) are used as fallback
- Both return the same data structure
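The same preference order can be expressed by hand through the raw API layer; a hedged sketch (it assumes the raw enterprise API subcommand is available, mirroring the cloud form shown elsewhere in these docs):

```shell
# Prefer the v2 actions endpoint, fall back to v1 (sketch; endpoint
# availability varies by cluster version).
list_actions() {
  redisctl api enterprise get /v2/actions 2>/dev/null \
    || redisctl api enterprise get /v1/actions
}

# list_actions
```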
Best Practices
- Always check action status for async operations before proceeding
- Use filtering to reduce output when listing many actions
- Save action UIDs from create/update operations for tracking
- Set up monitoring for critical long-running actions
- Check failed actions for error details to diagnose issues
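A reusable polling helper makes the last two practices easier to apply. A sketch built on action status (the timeout and interval defaults are illustrative):

```shell
#!/bin/bash
# wait_for_action: poll an action until a terminal status or a timeout.
set -euo pipefail

wait_for_action() {
  local uid="$1" timeout="${2:-600}" interval="${3:-5}" elapsed=0 status
  while [ "$elapsed" -lt "$timeout" ]; do
    status=$(redisctl enterprise action status "$uid")
    case "$status" in
      completed) echo "action $uid completed"; return 0 ;;
      failed|canceled) echo "action $uid ended with status: $status" >&2; return 1 ;;
      *) sleep "$interval"; elapsed=$((elapsed + interval)) ;;  # queued/running
    esac
  done
  echo "timed out waiting for action $uid" >&2
  return 2
}

# wait_for_action "$ACTION_UID" 300
```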
Related Commands
- enterprise database - Database operations that create actions
- enterprise cluster - Cluster operations that create actions
- enterprise crdb - Active-Active operations that create actions
Enterprise Workflows
Workflows are multi-step operations that automate complex Redis Enterprise management tasks. They combine multiple API calls, handle asynchronous operations, and provide progress feedback.
Available Workflows
List Workflows
# List all available workflows
redisctl enterprise workflow list
# JSON output for scripting
redisctl enterprise workflow list --output json
Initialize Cluster
The init-cluster
workflow automates the complete setup of a new Redis Enterprise cluster, including bootstrapping and optional database creation.
# Initialize with default settings
redisctl enterprise workflow init-cluster \
--username "admin@cluster.local" \
--password "YourSecurePassword"
# Initialize with custom cluster name and database
redisctl enterprise workflow init-cluster \
--name "production-cluster" \
--username "admin@redis.local" \
--password "YourSecurePassword" \
--database-name "my-database" \
--database-memory-gb 2
# Skip database creation
redisctl enterprise workflow init-cluster \
--username "admin@cluster.local" \
--password "YourSecurePassword" \
--skip-database
Parameters
Parameter | Description | Default |
---|---|---|
--name | Cluster name | redis-cluster |
--username | Admin username | admin@redis.local |
--password | Admin password (required) | - |
--skip-database | Skip creating default database | false |
--database-name | Name for default database | default-db |
--database-memory-gb | Memory size in GB for database | 1 |
--wait | Wait for operations to complete | true |
--wait-timeout | Maximum wait time in seconds | 600 |
What it does
- Checks cluster status - Verifies if cluster needs initialization
- Bootstraps cluster - Creates cluster with specified name and credentials
- Waits for stabilization - Ensures cluster is ready for operations
- Creates database (optional) - Sets up initial database with specified configuration
- Verifies connectivity - Tests database with PING command
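Under the hood this is roughly equivalent to running the individual commands yourself. A simplified sketch (the payloads and the verification endpoint are illustrative; the workflow additionally handles waiting and error recovery for you):

```shell
#!/bin/bash
# Rough manual equivalent of the init-cluster workflow.
set -euo pipefail

init_cluster() {
  local state
  state=$(redisctl enterprise bootstrap status -q 'bootstrap_status.state')

  # 1-2. Bootstrap only if the cluster has not been initialized yet
  if [ "$state" != "completed" ]; then
    redisctl enterprise bootstrap create-cluster --data '{
      "cluster_name": "redis-cluster",
      "credentials": {"username": "admin@redis.local", "password": "YourSecurePassword"}
    }'
  fi

  # 3-4. Create the default database and wait for the action to finish
  redisctl enterprise database create \
    --data '{"name": "default-db", "memory_size": 1073741824}' --wait

  # 5. Verify connectivity (host/port are placeholders for the new endpoint)
  redis-cli -h localhost -p 12000 PING
}

# init_cluster
```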
Output Formats
Workflows support structured output for automation:
# JSON output
redisctl enterprise workflow init-cluster \
--username "admin@cluster.local" \
--password "Redis123" \
--output json
# YAML output
redisctl enterprise workflow init-cluster \
--username "admin@cluster.local" \
--password "Redis123" \
--output yaml
Example JSON output:
{
"success": true,
"message": "Cluster initialized successfully",
"outputs": {
"cluster_name": "redis-cluster",
"username": "admin@cluster.local",
"database_created": true,
"database_name": "default-db"
}
}
Docker Development
For testing workflows with Docker:
# Start Redis Enterprise container
docker compose up -d
# Wait for container to be ready
sleep 10
# Initialize cluster
redisctl enterprise workflow init-cluster \
--username "admin@cluster.local" \
--password "Redis123"
# Clean up
docker compose down -v
Environment Variables
Workflows respect standard environment variables:
export REDIS_ENTERPRISE_URL="https://localhost:9443"
export REDIS_ENTERPRISE_INSECURE="true"
# Password can be set via environment
export REDIS_ENTERPRISE_INIT_PASSWORD="Redis123"
redisctl enterprise workflow init-cluster \
--username "admin@cluster.local"
Error Handling
Workflows provide clear error messages and maintain partial progress:
- If cluster is already initialized, workflow reports success without re-bootstrapping
- If database creation fails, cluster remains initialized and can be managed manually
- Network failures include retry logic with configurable timeouts
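Assuming the workflow exits non-zero on failure (standard CLI behavior), scripts can branch on its exit code and inspect the cluster before retrying; a hedged sketch:

```shell
#!/bin/bash
# Branch on the workflow exit code; a failed database step can still leave
# an initialized, manageable cluster behind.
set -euo pipefail

init_or_report() {
  if redisctl enterprise workflow init-cluster \
      --username "admin@cluster.local" \
      --password "$REDIS_ENTERPRISE_INIT_PASSWORD"; then
    echo "cluster ready"
  else
    # Check whether the cluster itself is reachable before retrying
    redisctl enterprise cluster get -q 'name' || echo "cluster unreachable" >&2
    return 1
  fi
}

# export REDIS_ENTERPRISE_INIT_PASSWORD="Redis123"
# init_or_report
```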
Future Workflows
Additional workflows are planned for common operations:
- upgrade-cluster - Orchestrate cluster version upgrades
- backup-restore - Automated backup and restore operations
- migrate-database - Database migration between clusters
- security-hardening - Apply security best practices
See the Workflows Feature Guide for architectural details and information about creating custom workflows.
Job Scheduler
The job scheduler commands allow you to manage and configure scheduled background jobs in Redis Enterprise. These jobs handle critical maintenance tasks like backups, log rotation, certificate renewal, and health checks.
Overview
Redis Enterprise runs several scheduled jobs automatically to maintain cluster health and perform routine maintenance. The job scheduler commands let you view and customize the schedule and configuration of these jobs.
Available Commands
Get Configuration
Retrieve the current job scheduler configuration:
# Get all job scheduler settings
redisctl enterprise job-scheduler get
# Get specific job configuration using JMESPath
redisctl enterprise job-scheduler get -q "backup_job_settings"
# Output as table
redisctl enterprise job-scheduler get -o table
Update Configuration
Modify job scheduler settings:
# Update from JSON file
redisctl enterprise job-scheduler update --data @scheduler-config.json
# Update from stdin
echo '{"backup_job_settings": {"cron_expression": "*/10 * * * *"}}' | \
redisctl enterprise job-scheduler update --data -
# Update inline
redisctl enterprise job-scheduler update --data '{
"log_rotation_job_settings": {
"cron_expression": "0 */6 * * *",
"enabled": true
}
}'
Scheduled Job Types
Backup Job
Manages automatic database backups:
{
"backup_job_settings": {
"cron_expression": "*/5 * * * *",
"enabled": true
}
}
Database Usage Report
Generates usage statistics for databases:
{
"bdb_usage_report_job_settings": {
"cron_expression": "0 */1 * * *",
"enabled": true,
"file_retention_days": 365
}
}
Certificate Rotation
Handles automatic certificate renewal:
{
"cert_rotation_job_settings": {
"cron_expression": "0 * * * *",
"enabled": true,
"expiry_days_before_rotation": 60
}
}
Log Rotation
Manages log file rotation and cleanup:
{
"log_rotation_job_settings": {
"cron_expression": "*/5 * * * *",
"enabled": true
}
}
Node Health Checks
Performs periodic node health validation:
{
"node_checks_job_settings": {
"cron_expression": "0 * * * *",
"enabled": true
}
}
Redis Cleanup
Cleans up temporary Redis data:
{
"redis_cleanup_job_settings": {
"cron_expression": "0 * * * *"
}
}
CCS Log Rotation
Rotates cluster configuration service logs:
{
"rotate_ccs_job_settings": {
"cron_expression": "*/5 * * * *",
"enabled": true,
"file_suffix": "5min",
"rotate_max_num": 24
}
}
Cron Expression Format
Job schedules use standard cron expression format:
┌───────────── minute (0 - 59)
│ ┌───────────── hour (0 - 23)
│ │ ┌───────────── day of month (1 - 31)
│ │ │ ┌───────────── month (1 - 12)
│ │ │ │ ┌───────────── day of week (0 - 6) (Sunday to Saturday)
│ │ │ │ │
│ │ │ │ │
* * * * *
Common Patterns
- */5 * * * * - Every 5 minutes
- 0 * * * * - Every hour
- 0 0 * * * - Daily at midnight
- 0 2 * * 0 - Weekly on Sunday at 2 AM
- 0 0 1 * * - Monthly on the 1st at midnight
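Since an invalid cron expression causes the update to fail (see Limitations below), a quick field-count check before submitting can save a round trip. A minimal sketch; it validates the shape only, not the value ranges of each field:

```shell
#!/bin/bash
# check_cron: verify an expression has exactly five whitespace-separated fields.
set -euo pipefail

check_cron() {
  local expr="$1"
  local -a fields
  read -ra fields <<< "$expr"   # read does not glob, so * is safe here
  if [ "${#fields[@]}" -ne 5 ]; then
    echo "invalid cron expression (expected 5 fields, got ${#fields[@]}): $expr" >&2
    return 1
  fi
}

# check_cron "*/30 * * * *" && redisctl enterprise job-scheduler update \
#   --data '{"backup_job_settings": {"cron_expression": "*/30 * * * *"}}'
```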
Examples
Adjust Backup Frequency
Change backups from every 5 minutes to every 30 minutes:
redisctl enterprise job-scheduler update --data '{
"backup_job_settings": {
"cron_expression": "*/30 * * * *"
}
}'
Configure Aggressive Log Rotation
Rotate logs every hour and keep fewer files:
redisctl enterprise job-scheduler update --data '{
"log_rotation_job_settings": {
"cron_expression": "0 * * * *",
"enabled": true
},
"rotate_ccs_job_settings": {
"cron_expression": "0 * * * *",
"file_suffix": "hourly",
"rotate_max_num": 12
}
}'
Extend Certificate Renewal Window
Check certificates 90 days before expiry:
redisctl enterprise job-scheduler update --data '{
"cert_rotation_job_settings": {
"expiry_days_before_rotation": 90
}
}'
Reduce Database Report Retention
Keep usage reports for only 30 days:
redisctl enterprise job-scheduler update --data '{
"bdb_usage_report_job_settings": {
"file_retention_days": 30
}
}'
Configuration Templates
Production Environment
High-frequency backups with extended retention:
{
"backup_job_settings": {
"cron_expression": "*/15 * * * *",
"enabled": true
},
"bdb_usage_report_job_settings": {
"cron_expression": "0 0 * * *",
"enabled": true,
"file_retention_days": 730
},
"cert_rotation_job_settings": {
"cron_expression": "0 0 * * *",
"enabled": true,
"expiry_days_before_rotation": 90
},
"log_rotation_job_settings": {
"cron_expression": "0 */4 * * *",
"enabled": true
}
}
Development Environment
Less frequent operations to reduce overhead:
{
"backup_job_settings": {
"cron_expression": "0 */6 * * *",
"enabled": true
},
"bdb_usage_report_job_settings": {
"cron_expression": "0 0 * * 0",
"enabled": true,
"file_retention_days": 7
},
"node_checks_job_settings": {
"cron_expression": "0 */12 * * *",
"enabled": true
}
}
Monitoring Job Execution
Jobs create actions that can be monitored:
# Check recent backup jobs
redisctl enterprise action list --type backup_job
# Monitor job execution
watch -n 60 'redisctl enterprise action list --status running -o table'
Best Practices
- Balance Frequency vs Load - More frequent jobs provide better protection but increase system load
- Align with Maintenance Windows - Schedule intensive jobs during low-traffic periods
- Monitor Job Success - Regularly check that scheduled jobs complete successfully
- Test Configuration Changes - Verify new schedules work as expected before production deployment
- Document Custom Schedules - Keep notes on why default schedules were modified
Limitations
- Some jobs cannot be disabled (marked as internal scheduled jobs)
- Cron expressions must be valid or the update will fail
- Changes take effect at the next scheduled run
- Job execution history is available through the actions API
Troubleshooting
Jobs Not Running
# Check if job is enabled
redisctl enterprise job-scheduler get -q "backup_job_settings.enabled"
# Verify cron expression
redisctl enterprise job-scheduler get -q "backup_job_settings.cron_expression"
Failed Job Updates
# Check current configuration
redisctl enterprise job-scheduler get
# Validate JSON before updating
echo '{"backup_job_settings": {"enabled": true}}' | jq .
# Try update with valid configuration
redisctl enterprise job-scheduler update --data '{"backup_job_settings": {"enabled": true}}'
Related Commands
- enterprise action - Monitor job execution status
- enterprise cluster - Cluster configuration that affects jobs
- enterprise database - Database backup operations
- enterprise logs - View logs generated by scheduled jobs
Bootstrap Commands
Initialize and manage Redis Enterprise cluster setup.
Overview
Bootstrap commands handle the initial setup and configuration of Redis Enterprise clusters, including node initialization, cluster creation, and joining existing clusters.
Available Commands
Get Bootstrap Status
redisctl enterprise bootstrap status
Returns the current bootstrap status and node information:
- Bootstrap state (not_started, in_progress, completed)
- Local node details (architecture, memory, storage paths)
- Available network addresses
- Supported database versions
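For scripting, the reported state can gate the next setup step; a small sketch:

```shell
#!/bin/bash
# Route setup scripts based on the bootstrap state field.
set -euo pipefail

describe_bootstrap() {
  local state
  state=$(redisctl enterprise bootstrap status -q 'bootstrap_status.state')
  case "$state" in
    completed)   echo "node already bootstrapped" ;;
    in_progress) echo "bootstrap running; wait before retrying" ;;
    not_started) echo "ready for create-cluster or join-cluster" ;;
    *)           echo "unknown state: $state" >&2; return 1 ;;
  esac
}

# describe_bootstrap
```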
Create New Cluster
redisctl enterprise bootstrap create-cluster --data '{
"cluster_name": "my-cluster",
"rack_aware": false,
"license": "...",
"nodes": [...]
}'
Initialize a new Redis Enterprise cluster with the specified configuration.
Join Existing Cluster
redisctl enterprise bootstrap join-cluster --data '{
"cluster_address": "192.168.1.100",
"username": "admin@redis.local",
"password": "password",
"replace_node": false
}'
Join this node to an existing Redis Enterprise cluster.
Validate Configuration
# Validate cluster creation config
redisctl enterprise bootstrap validate create_cluster --data '{...}'
# Validate join cluster config
redisctl enterprise bootstrap validate join_cluster --data '{...}'
Pre-flight validation of bootstrap configurations before execution.
Common Use Cases
Initial Cluster Setup
# 1. Check bootstrap status
redisctl enterprise bootstrap status
# 2. Validate configuration
redisctl enterprise bootstrap validate create_cluster --data @cluster-config.json
# 3. Create the cluster
redisctl enterprise bootstrap create-cluster --data @cluster-config.json
Adding Nodes to Cluster
# 1. On new node, check status
redisctl enterprise bootstrap status
# 2. Join the cluster
redisctl enterprise bootstrap join-cluster --data '{
"cluster_address": "node1.redis.local",
"username": "admin@redis.local",
"password": "${REDIS_PASSWORD}"
}'
Output Examples
Bootstrap Status
{
"bootstrap_status": {
"state": "completed",
"start_time": "2025-09-15T00:18:27Z",
"end_time": "2025-09-15T00:18:49Z"
},
"local_node_info": {
"uid": "1",
"architecture": "x86_64",
"total_memory": 8217473024,
"cores": 14,
"persistent_storage_path": "/var/opt/redislabs/persist",
"ephemeral_storage_path": "/var/opt/redislabs/tmp",
"os_version": "Red Hat Enterprise Linux 9.6"
}
}
Important Notes
- Bootstrap operations are typically one-time actions during initial cluster setup
- Most bootstrap operations require root or sudo privileges
- Always validate configurations before applying them
- Bootstrap operations cannot be undone - ensure backups exist
Related Commands
- Cluster Commands - Manage cluster after bootstrap
- Node Commands - Manage individual nodes
- Auth Commands - Configure authentication after bootstrap
Proxy Management
The proxy commands provide management and monitoring capabilities for Redis Enterprise proxy processes. Proxies handle connection management, load balancing, and request routing between clients and Redis database shards.
Available Commands
List Proxies
List all proxy processes in the cluster:
# List all proxies
redisctl enterprise proxy list
# List proxies as a table
redisctl enterprise proxy list -o table
# Filter to active proxies
redisctl enterprise proxy list -q "[?status == 'active']"
# List proxies by node
redisctl enterprise proxy list -q "[?node_uid == \`1\`]"
# Get proxy IDs and their nodes
redisctl enterprise proxy list -q "[].{id: uid, node: node_uid, status: status}"
Get Proxy Details
Get detailed information about a specific proxy:
# Get proxy details
redisctl enterprise proxy get 1
# Get proxy in YAML format
redisctl enterprise proxy get 1 -o yaml
# Extract specific fields
redisctl enterprise proxy get 1 -q '{uid: uid, port: port, threads: threads}'
# Check proxy load
redisctl enterprise proxy get 1 -q '{connections: connections, cpu: cpu_usage}'
Update Proxy Configuration
Update configuration for a specific proxy:
# Update proxy configuration from file
redisctl enterprise proxy update 1 --data @proxy-config.json
# Update proxy with inline JSON
redisctl enterprise proxy update 1 --data '{"threads": 8, "max_connections": 10000}'
# Update proxy from stdin
echo '{"threads": 4}' | redisctl enterprise proxy update 1 --data -
# Update with specific settings
cat <<EOF | redisctl enterprise proxy update 1 --data -
{
"threads": 8,
"max_connections": 10000,
"tcp_keepalive": 60,
"tcp_backlog": 512
}
EOF
Update All Proxies
Update configuration for all proxies simultaneously:
# Update all proxies from file
redisctl enterprise proxy update-all --data @global-proxy-config.json
# Update all proxies with inline configuration
redisctl enterprise proxy update-all --data '{"threads": 8}'
# Apply global proxy policy
cat <<EOF | redisctl enterprise proxy update-all --data -
{
"threads": 8,
"max_connections": 10000,
"tcp_keepalive": 60,
"timeout": 300
}
EOF
Output Examples
Proxy List
[
{
"uid": 1,
"node_uid": 1,
"port": 8080,
"status": "active",
"threads": 4,
"connections": 245,
"cpu_usage": 12.5,
"memory_usage": 128,
"databases": [1, 2, 3]
},
{
"uid": 2,
"node_uid": 2,
"port": 8080,
"status": "active",
"threads": 4,
"connections": 189,
"cpu_usage": 10.2,
"memory_usage": 115,
"databases": [1, 2, 3]
}
]
Proxy Details
{
"uid": 1,
"node_uid": 1,
"port": 8080,
"status": "active",
"threads": 4,
"max_connections": 10000,
"current_connections": 245,
"total_connections": 1234567,
"cpu_usage": 12.5,
"memory_usage": 128,
"tcp_keepalive": 60,
"tcp_backlog": 512,
"timeout": 300,
"databases": [
{
"bdb_uid": 1,
"name": "cache-db",
"connections": 89
},
{
"bdb_uid": 2,
"name": "session-db",
"connections": 67
}
],
"stats": {
"requests_per_sec": 5432,
"operations_per_sec": 8901,
"latency_avg": 0.8,
"errors_per_sec": 0.1
}
}
Common Use Cases
Monitoring Proxy Load
Monitor proxy load and performance:
# Check proxy connections across cluster
redisctl enterprise proxy list -q "[].{proxy: uid, node: node_uid, connections: connections}" -o table
# Find overloaded proxies
redisctl enterprise proxy list -q "[?connections > \`1000\`]"
# Monitor CPU usage
redisctl enterprise proxy list -q "[?cpu_usage > \`50\`].{proxy: uid, cpu: cpu_usage}"
# Check memory usage
redisctl enterprise proxy list -q "[].{proxy: uid, memory_mb: memory_usage}" -o table
Performance Tuning
Optimize proxy performance:
# Increase threads for high-load proxies
for proxy in $(redisctl enterprise proxy list -q "[?cpu_usage > \`75\`].uid" -o json | jq -r '.[]'); do
echo "Updating proxy $proxy"
redisctl enterprise proxy update "$proxy" --data '{"threads": 8}'
done
# Update connection limits
redisctl enterprise proxy update-all --data '{"max_connections": 20000}'
# Apply optimized settings
cat <<EOF | redisctl enterprise proxy update-all --data -
{
"threads": 8,
"max_connections": 15000,
"tcp_keepalive": 30,
"tcp_backlog": 1024,
"timeout": 600
}
EOF
Troubleshooting
Diagnose proxy issues:
# Find proxies with errors
redisctl enterprise proxy list -q "[?status != 'active']"
# Check proxy distribution
redisctl enterprise proxy list -q "[].node_uid" | jq 'group_by(.) | map({node: .[0], count: length})'
# Monitor connection distribution
for proxy in 1 2 3; do
echo "Proxy $proxy:"
redisctl enterprise proxy get "$proxy" -q 'databases[].{db: name, connections: connections}' -o table
done
# Check proxy resource usage
redisctl enterprise proxy list -q "[].{proxy: uid, cpu: cpu_usage, memory: memory_usage, connections: connections}" -o table
Capacity Planning
Plan proxy capacity:
# Calculate total connections
redisctl enterprise proxy list -q "[].connections" | jq 'add'
# Get average connections per proxy
redisctl enterprise proxy list -q "[].connections" | jq 'add/length'
# Find proxies near connection limit
# (JMESPath has no arithmetic operators, so compute the ratio with jq)
redisctl enterprise proxy list -o json | \
  jq '[.[] | select(.connections > .max_connections * 0.8)
        | {proxy: .uid, usage_pct: (.connections / .max_connections * 100)}]'
# Resource utilization summary
redisctl enterprise proxy list -q "{total_proxies: length(@), avg_cpu: avg([].cpu_usage), avg_memory: avg([].memory_usage), total_connections: sum([].connections)}"
Configuration Examples
Basic Proxy Configuration
{
"threads": 4,
"max_connections": 10000,
"timeout": 300
}
High-Performance Configuration
{
"threads": 16,
"max_connections": 50000,
"tcp_keepalive": 30,
"tcp_backlog": 2048,
"timeout": 600,
"tcp_nodelay": true
}
Resource-Constrained Configuration
{
"threads": 2,
"max_connections": 5000,
"tcp_keepalive": 120,
"tcp_backlog": 256,
"timeout": 120
}
Best Practices
- Load Distribution: Ensure proxies are evenly distributed across nodes
- Thread Tuning: Set threads based on CPU cores and expected load
- Connection Limits: Set appropriate connection limits based on available resources
- Monitoring: Regularly monitor proxy metrics for performance issues
- Gradual Changes: Test configuration changes on individual proxies before applying globally
- Resource Planning: Plan proxy resources based on expected client connections
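The "Gradual Changes" practice above can be scripted as a canary rollout: change one proxy, confirm it stays healthy, then apply the change everywhere. A sketch (the CPU threshold and settle time are illustrative; pick values that match your baseline):

```shell
#!/bin/bash
# Canary rollout: update one proxy, check its CPU, then update-all.
set -euo pipefail

canary_update() {
  local proxy="$1" config="$2" settle="${3:-30}"
  redisctl enterprise proxy update "$proxy" --data "$config"
  sleep "$settle"   # let metrics settle before judging the canary
  local cpu
  cpu=$(redisctl enterprise proxy get "$proxy" -q 'cpu_usage')
  # 80% is an illustrative health threshold
  if awk -v c="$cpu" 'BEGIN { exit !(c < 80) }'; then
    redisctl enterprise proxy update-all --data "$config"
  else
    echo "canary proxy $proxy unhealthy (cpu=$cpu); aborting rollout" >&2
    return 1
  fi
}

# canary_update 1 '{"threads": 8}'
```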
Integration with Monitoring
Export proxy metrics for monitoring systems:
# Export metrics to monitoring system
redisctl enterprise proxy list -o json | \
jq '.[] | {
timestamp: now,
proxy_id: .uid,
node_id: .node_uid,
connections: .connections,
cpu_usage: .cpu_usage,
memory_usage: .memory_usage
}' | \
curl -X POST http://metrics-collector/ingest -d @-
# Create Prometheus-compatible metrics
redisctl enterprise proxy list -q "[].{proxy: uid, metric: @}" | \
jq -r '.[] | "
redis_proxy_connections{proxy=\"\(.proxy)\"} \(.metric.connections)
redis_proxy_cpu_usage{proxy=\"\(.proxy)\"} \(.metric.cpu_usage)
redis_proxy_memory_mb{proxy=\"\(.proxy)\"} \(.metric.memory_usage)
"'
Troubleshooting
High CPU Usage
When proxies show high CPU usage:
# Identify high-CPU proxies
redisctl enterprise proxy list -q "[?cpu_usage > \`80\`]"
# Check thread configuration
redisctl enterprise proxy get <uid> -q 'threads'
# Increase threads
redisctl enterprise proxy update <uid> --data '{"threads": 8}'
# Monitor after change
watch -n 5 "redisctl enterprise proxy get <uid> -q 'cpu_usage'"
Connection Issues
When experiencing connection problems:
# Check connection limits
# (JMESPath has no arithmetic operators, so compute the percentage with jq)
redisctl enterprise proxy list -o json | \
  jq '[.[] | {proxy: .uid, current: .connections, max: .max_connections,
              pct: (.connections / .max_connections * 100)}]'
# Find proxies at capacity
redisctl enterprise proxy list -o json | \
  jq '[.[] | select(.connections >= .max_connections * 0.95)]'
# Increase connection limits
redisctl enterprise proxy update <uid> --data '{"max_connections": 20000}'
Related Commands
- redisctl enterprise node - View nodes hosting proxies
- redisctl enterprise database - Manage databases served by proxies
- redisctl enterprise stats - View detailed statistics including proxy metrics
- redisctl enterprise cluster - View cluster-wide proxy configuration
Endpoint Management
The endpoint commands provide access to Redis Enterprise database endpoint statistics and availability monitoring.
Note: Redis Enterprise manages most endpoint configurations through database commands. These commands provide monitoring and statistics capabilities.
Available Commands
Get Endpoint Statistics
Get aggregate statistics for all database endpoints in the cluster:
# Get all endpoint statistics
redisctl enterprise endpoint stats
# Get statistics as YAML
redisctl enterprise endpoint stats -o yaml
# Filter to specific metrics
redisctl enterprise endpoint stats -q '[].{name: endpoint_name, connections: current_connections}'
# Get statistics for endpoints with high connection counts
redisctl enterprise endpoint stats -q "[?current_connections > \`100\`]"
The statistics include:
- Connection metrics (current, total, failed)
- Request/response rates
- Latency information
- Error counts
- Bandwidth usage
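Assuming the JSON output is an array (as in the output example below), jq can derive cluster-wide totals from these statistics:

```shell
# Summarize endpoint statistics across the cluster with jq.
summarize_endpoints() {
  redisctl enterprise endpoint stats -o json | jq '{
    endpoints: length,
    total_connections: ([.[].current_connections] | add),
    worst_latency_ms: ([.[].avg_latency_ms] | max),
    total_errors_per_sec: ([.[].errors_per_sec] | add)
  }'
}

# summarize_endpoints
```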
Check Endpoint Availability
Check the availability status of a specific database endpoint:
# Check endpoint availability for database 1
redisctl enterprise endpoint availability 1
# Get availability as table
redisctl enterprise endpoint availability 1 -o table
# Extract specific availability information
redisctl enterprise endpoint availability 1 -q 'available'
Availability information includes:
- Current availability status
- Node availability
- Shard distribution
- Failover status
- Connection health
Output Examples
Endpoint Statistics
[
{
"endpoint_name": "redis-12345.cluster.local:16379",
"bdb_uid": 1,
"current_connections": 45,
"total_connections": 12543,
"failed_connections": 2,
"requests_per_sec": 5432,
"responses_per_sec": 5430,
"avg_latency_ms": 0.8,
"bandwidth_in_mbps": 12.5,
"bandwidth_out_mbps": 8.3,
"errors_per_sec": 0.1
}
]
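A quick way to act on these statistics is a jq filter over the JSON output. The snippet below inlines a sample payload matching the output above (field names taken from the example, values illustrative):

```shell
# Sample endpoint stats mirroring the output shown above
STATS='[{"endpoint_name":"redis-12345.cluster.local:16379","total_connections":12543,"failed_connections":2}]'

# List endpoints whose failed/total connection ratio exceeds 0.1%
echo "$STATS" | jq '[.[] | select(.failed_connections / .total_connections > 0.001) | .endpoint_name]'
```

In practice the input would come from `redisctl enterprise endpoint stats -o json` instead of the inline sample.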
Endpoint Availability
{
"bdb_uid": 1,
"available": true,
"endpoints": [
{
"addr": "redis-12345.cluster.local:16379",
"node": 1,
"role": "master",
"status": "active"
}
],
"shards_placement": "optimal",
"last_failover": null
}
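The availability payload lends itself to a simple health predicate: available overall, and every endpoint reporting active. A sketch against sample data shaped like the output above (field names assumed from the example):

```shell
# Sample availability payload mirroring the output shown above
AVAIL='{"bdb_uid":1,"available":true,"endpoints":[{"addr":"redis-12345.cluster.local:16379","node":1,"role":"master","status":"active"}]}'

# true only when the database is available and every endpoint is "active"
echo "$AVAIL" | jq '.available and ([.endpoints[].status == "active"] | all)'
```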
Common Use Cases
Monitoring Endpoint Health
Monitor endpoint statistics and set up alerts:
# Check endpoints with high error rates
redisctl enterprise endpoint stats -q "[?errors_per_sec > \`10\`]"
# Monitor endpoints with connection issues
redisctl enterprise endpoint stats -q "[?failed_connections > \`0\`].{name: endpoint_name, failed: failed_connections}"
# Check latency across all endpoints
redisctl enterprise endpoint stats -q "[].{endpoint: endpoint_name, latency: avg_latency_ms}" -o table
Availability Monitoring
Check database endpoint availability during maintenance:
# Check availability for critical databases
for db in 1 2 3; do
echo "Database $db:"
redisctl enterprise endpoint availability $db -q 'available'
done
# Get detailed availability for troubleshooting
redisctl enterprise endpoint availability 1 -o yaml
Performance Analysis
Analyze endpoint performance metrics:
# Get top endpoints by connection count
redisctl enterprise endpoint stats -q "reverse(sort_by(@, &current_connections))[:5]" -o table
# Find endpoints with bandwidth issues
redisctl enterprise endpoint stats -q "[?bandwidth_in_mbps > \`100\` || bandwidth_out_mbps > \`100\`]"
# Compare request/response rates
redisctl enterprise endpoint stats -q "[].{endpoint: endpoint_name, req_rate: requests_per_sec, resp_rate: responses_per_sec, diff: requests_per_sec - responses_per_sec}"
Integration with Monitoring
Export endpoint metrics for monitoring systems:
# Export to monitoring format
redisctl enterprise endpoint stats -o json > endpoint_metrics.json
# Create CSV for analysis
redisctl enterprise endpoint stats -q "[].{endpoint: endpoint_name, connections: current_connections, latency: avg_latency_ms, errors: errors_per_sec}" | jq -r '["endpoint","connections","latency","errors"], (.[] | [.endpoint, .connections, .latency, .errors]) | @csv'
# Stream to monitoring pipeline
while true; do
  redisctl enterprise endpoint stats -o json | jq --arg ts "$(date -u +%FT%TZ)" '{timestamp: $ts, metrics: .}' | \
curl -X POST http://metrics-collector/ingest -d @-
sleep 60
done
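The CSV conversion above can be checked offline before wiring it into a pipeline. The sketch below feeds the same jq filter inline sample data shaped like the projected stats (field names match the projection, values illustrative):

```shell
# Sample projected stats, as produced by the -q projection in the CSV example
SAMPLE='[{"endpoint":"redis-12345:16379","connections":45,"latency":0.8,"errors":0.1}]'

# Header row plus one CSV row per endpoint
echo "$SAMPLE" | jq -r '["endpoint","connections","latency","errors"], (.[] | [.endpoint, .connections, .latency, .errors]) | @csv'
```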
Troubleshooting
High Connection Counts
If endpoints show high connection counts:
# Identify affected endpoints
redisctl enterprise endpoint stats -q "[?current_connections > \`1000\`]"
# Check the database connection limit
redisctl enterprise database get <bdb_uid> -q 'max_connections'
# Monitor connection trends
for i in {1..10}; do
redisctl enterprise endpoint stats -q "[].{endpoint: endpoint_name, connections: current_connections}" -o table
sleep 30
done
Availability Issues
When endpoints report availability problems:
# Check specific database endpoint
redisctl enterprise endpoint availability <bdb_uid>
# Verify node status
redisctl enterprise node list -q "[?status != 'active']"
# Check shard distribution
redisctl enterprise database get <bdb_uid> -q 'shards_placement'
Best Practices
- Regular Monitoring: Set up regular checks of endpoint statistics to catch issues early
- Baseline Metrics: Establish baseline performance metrics for comparison
- Alert Thresholds: Configure alerts based on your specific workload patterns
- Correlation: Correlate endpoint metrics with database and node statistics
- Capacity Planning: Use connection and bandwidth metrics for capacity planning
Related Commands
- redisctl enterprise database - Manage databases and their endpoints
- redisctl enterprise stats - View detailed statistics
- redisctl enterprise node - Check node status affecting endpoints
- redisctl enterprise cluster - View cluster-wide endpoint configuration
Service Management Commands
Configure and manage internal Redis Enterprise services.
Overview
Service commands provide control over Redis Enterprise's internal services including the cluster manager, proxy, statistics collector, and other system components.
Available Commands
List Services
redisctl enterprise services list
Shows all available services and their current status.
Get Service Configuration
redisctl enterprise services get <service_name>
Retrieves configuration for a specific service.
Update Service Configuration
redisctl enterprise services update <service_name> --data '{
"enabled": true,
"port": 8080,
"log_level": "info"
}'
Restart Service
redisctl enterprise services restart <service_name>
Restarts a specific service across the cluster.
Get Service Status
redisctl enterprise services status <service_name>
Shows detailed status information for a service.
Enable Service
redisctl enterprise services enable <service_name>
Enables a previously disabled service.
Disable Service
redisctl enterprise services disable <service_name>
Disables a service (use with caution).
Common Services
Service | Description | Critical |
---|---|---|
cm_server | Cluster Manager Server | Yes |
crdb_coordinator | Active-Active Coordinator | For CRDB |
crdb_worker | Active-Active Worker | For CRDB |
mdns_server | Multicast DNS Server | No |
pdns_server | PowerDNS Server | Yes |
saslauthd | SASL Authentication | For LDAP |
stats_archiver | Statistics Archiver | No |
cnm_http | Cluster Node Manager | Yes |
cnm_https | Secure CNM | Yes |
Common Use Cases
Checking Service Health
# List all services with status
redisctl enterprise services list -o table
# Check specific critical service
redisctl enterprise services status cm_server
# Get services in JSON for monitoring
redisctl enterprise services list -o json | jq '.[] | select(.status != "running")'
Troubleshooting Service Issues
# 1. Check service status
redisctl enterprise services status pdns_server
# 2. Review service configuration
redisctl enterprise services get pdns_server
# 3. Restart if needed
redisctl enterprise services restart pdns_server
# 4. Verify after restart
sleep 10
redisctl enterprise services status pdns_server
Managing Statistics Collection
# Check stats archiver
redisctl enterprise services get stats_archiver
# Adjust retention settings
redisctl enterprise services update stats_archiver --data '{
"retention_days": 30,
"collection_interval": 60
}'
# Restart to apply changes
redisctl enterprise services restart stats_archiver
LDAP Service Management
# Enable SASL for LDAP authentication
redisctl enterprise services enable saslauthd
# Configure SASL service
redisctl enterprise services update saslauthd --data '{
"mechanisms": ["ldap"],
"ldap_servers": "ldap://ldap.company.com",
"ldap_search_base": "dc=company,dc=com"
}'
# Restart SASL service
redisctl enterprise services restart saslauthd
Service Configuration Examples
Cluster Manager Configuration
{
"enabled": true,
"port": 9443,
"bind_address": "0.0.0.0",
"log_level": "info",
"max_connections": 1000,
"timeout": 30
}
DNS Service Configuration
{
"enabled": true,
"port": 53,
"cache_size": 10000,
"negative_ttl": 60,
"query_timeout": 2,
"recursion": false
}
Monitoring Scripts
Service Health Check
#!/bin/bash
# Monitor critical services
CRITICAL_SERVICES="cm_server pdns_server cnm_https"
for service in $CRITICAL_SERVICES; do
STATUS=$(redisctl enterprise services status $service -q 'status')
if [[ "$STATUS" != "running" ]]; then
echo "ALERT: Service $service is $STATUS"
# Send notification
fi
done
Service Performance Monitoring
# Track service resource usage
redisctl enterprise services list -o json | jq -r '.[] |
"\(.name): CPU=\(.cpu_usage)% MEM=\(.memory_mb)MB"'
Safety Considerations
Critical Services
Never disable these services:
- cm_server - Cluster manager
- cnm_http/https - Node management
- pdns_server - DNS resolution
Pre-Restart Checks
# Before restarting a service
# 1. Check cluster health
redisctl enterprise cluster status
# 2. Verify no ongoing operations
redisctl enterprise action list
# 3. Consider maintenance window
echo "Current load:"
redisctl enterprise stats cluster -q 'operations_per_second'
Service Dependencies
Some services depend on others:
- saslauthd requires LDAP configuration
- crdb_* services require Active-Active setup
- stats_archiver requires sufficient disk space
Troubleshooting
Service Won't Start
# Check logs
redisctl enterprise logs list --filter "service_name=$SERVICE"
# Verify configuration
redisctl enterprise services get $SERVICE
# Check system resources
df -h # Disk space
free -m # Memory
Service Consuming High Resources
# Get detailed status
redisctl enterprise services status $SERVICE -o json
# Check configuration limits
redisctl enterprise services get $SERVICE -q 'resource_limits'
# Adjust if needed
redisctl enterprise services update $SERVICE --data '{
"max_memory": "2G",
"max_cpu": 2
}'
Output Examples
Service List Output
[
{
"name": "cm_server",
"status": "running",
"enabled": true,
"pid": 1234,
"uptime": "7d 2h 15m",
"cpu_usage": 2.5,
"memory_mb": 512
},
{
"name": "pdns_server",
"status": "running",
"enabled": true,
"pid": 1235,
"uptime": "7d 2h 15m",
"cpu_usage": 0.5,
"memory_mb": 128
}
]
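Output in this shape is easy to aggregate with jq. A sketch over sample data mirroring the list above (field names from the example, values illustrative):

```shell
# Sample service list mirroring the output shown above
SERVICES='[{"name":"cm_server","status":"running","memory_mb":512},{"name":"pdns_server","status":"running","memory_mb":128}]'

# Names of any services not currently running (empty array here)
echo "$SERVICES" | jq '[.[] | select(.status != "running") | .name]'

# Total memory footprint across services, in MB
echo "$SERVICES" | jq '[.[].memory_mb] | add'
```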
Service Status Output
{
"name": "cm_server",
"status": "running",
"enabled": true,
"configuration": {
"port": 9443,
"log_level": "info"
},
"statistics": {
"requests_processed": 1000000,
"errors": 0,
"average_response_ms": 50
},
"health": {
"status": "healthy",
"last_check": "2025-09-15T10:30:00Z"
}
}
Related Commands
- Cluster Commands - Cluster-wide operations
- Node Commands - Node-specific management
- Logs Commands - Service log viewing
Cluster Manager Settings
Cluster Manager (CM) settings control various cluster-wide behaviors and policies in Redis Enterprise. These settings affect how the cluster operates, manages resources, and handles various operations.
Overview
CM settings provide configuration for:
- Resource management policies
- Operational behaviors
- System defaults
- Performance tuning
- Security policies
- Maintenance settings
Warning: Modifying CM settings affects the entire cluster. Changes should be made carefully and tested in non-production environments first.
Available Commands
Get Settings
Retrieve current cluster manager settings:
# Get all settings
redisctl enterprise cm-settings get
# Get a specific setting by path
redisctl enterprise cm-settings get --setting "timezone"
# Get nested settings
redisctl enterprise cm-settings get --setting "backup_job_settings.enabled"
# Output as YAML
redisctl enterprise cm-settings get -o yaml
Update Settings
Update cluster manager settings:
# Update from JSON file
redisctl enterprise cm-settings set --data @settings.json
# Update from stdin
echo '{"timezone": "America/New_York"}' | redisctl enterprise cm-settings set --data -
# Update with force (skip confirmation)
redisctl enterprise cm-settings set --data @settings.json --force
Update Specific Setting
Update a single setting value:
# Update timezone
redisctl enterprise cm-settings set-value timezone --value "Europe/London"
# Update nested setting
redisctl enterprise cm-settings set-value backup_job_settings.enabled --value true
# Update with force
redisctl enterprise cm-settings set-value timezone --value "UTC" --force
Reset Settings
Reset settings to cluster defaults:
# Reset all settings (with confirmation)
redisctl enterprise cm-settings reset
# Reset without confirmation
redisctl enterprise cm-settings reset --force
Export/Import Settings
Export and import settings for backup or migration:
# Export to file
redisctl enterprise cm-settings export --output settings-backup.json
# Export to stdout
redisctl enterprise cm-settings export --output -
# Import from file
redisctl enterprise cm-settings import --file @settings-backup.json
# Import from stdin
cat settings.json | redisctl enterprise cm-settings import --file -
Validate Settings
Validate settings file before importing:
# Validate settings file
redisctl enterprise cm-settings validate --file @settings.json
# Validate from stdin
echo '{"timezone": "UTC"}' | redisctl enterprise cm-settings validate --file -
List Categories
View available setting categories:
# List all categories
redisctl enterprise cm-settings list-categories
# Output as table
redisctl enterprise cm-settings list-categories -o table
Get Category Settings
Get all settings within a specific category:
# Get all backup-related settings
redisctl enterprise cm-settings get-category backup_job_settings
# Get specific field from category
redisctl enterprise cm-settings get-category backup_job_settings -q "cron_expression"
Common Settings
Time Zone Configuration
{
"timezone": "UTC"
}
Common timezone values:
- UTC - Coordinated Universal Time
- America/New_York - Eastern Time
- America/Los_Angeles - Pacific Time
- Europe/London - British Time
- Asia/Tokyo - Japan Time
Backup Job Settings
{
"backup_job_settings": {
"enabled": true,
"cron_expression": "0 2 * * *",
"retention_days": 7
}
}
Resource Management
{
"resource_management": {
"memory_reserve_percent": 15,
"cpu_reserve_percent": 10,
"max_databases_per_node": 100
}
}
Security Settings
{
"security": {
"password_complexity": "high",
"session_timeout_minutes": 30,
"max_login_attempts": 5,
"audit_logging": true
}
}
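The fragments above can be merged into a single settings file with jq before importing (a sketch; keys as shown in the examples, values illustrative):

```shell
# Merge individual setting fragments into one JSON document
printf '%s\n' \
  '{"timezone": "UTC"}' \
  '{"security": {"audit_logging": true}}' \
  | jq -s 'add'
```

Redirect the result to a file and apply it with `redisctl enterprise cm-settings set --data @settings.json`.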
Examples
Backup Current Settings
#!/bin/bash
# Backup current settings with timestamp
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="cm_settings_backup_${TIMESTAMP}.json"
redisctl enterprise cm-settings export --output "$BACKUP_FILE"
echo "Settings backed up to: $BACKUP_FILE"
Configure for Production
# Production settings template
cat << EOF > production-settings.json
{
"timezone": "UTC",
"backup_job_settings": {
"enabled": true,
"cron_expression": "0 2 * * *",
"retention_days": 30
},
"security": {
"audit_logging": true,
"password_complexity": "high"
},
"resource_management": {
"memory_reserve_percent": 20
}
}
EOF
# Apply production settings
redisctl enterprise cm-settings import --file @production-settings.json
Compare Settings Between Clusters
#!/bin/bash
# Compare settings between two clusters
# Export from cluster 1
redisctl profile use cluster1
redisctl enterprise cm-settings export --output cluster1-settings.json
# Export from cluster 2
redisctl profile use cluster2
redisctl enterprise cm-settings export --output cluster2-settings.json
# Compare
diff -u cluster1-settings.json cluster2-settings.json
Audit Settings Changes
#!/bin/bash
# Track settings changes over time
AUDIT_DIR="cm_settings_audit"
mkdir -p "$AUDIT_DIR"
# Get current settings
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
redisctl enterprise cm-settings get > "$AUDIT_DIR/settings_${TIMESTAMP}.json"
# Show changes from last audit
if [ -f "$AUDIT_DIR/settings_latest.json" ]; then
diff -u "$AUDIT_DIR/settings_latest.json" "$AUDIT_DIR/settings_${TIMESTAMP}.json"
fi
# Update latest link
ln -sf "settings_${TIMESTAMP}.json" "$AUDIT_DIR/settings_latest.json"
Safe Settings Update
#!/bin/bash
# Safely update settings with validation and backup
NEW_SETTINGS="$1"
if [ -z "$NEW_SETTINGS" ]; then
echo "Usage: $0 <settings-file>"
exit 1
fi
# Validate new settings
echo "Validating settings..."
if ! redisctl enterprise cm-settings validate --file "@$NEW_SETTINGS"; then
echo "Settings validation failed!"
exit 1
fi
# Backup current settings
echo "Backing up current settings..."
redisctl enterprise cm-settings export --output settings-backup-$(date +%s).json
# Apply new settings
echo "Applying new settings..."
redisctl enterprise cm-settings import --file "@$NEW_SETTINGS"
echo "Settings updated successfully"
Settings Migration
Export from Source Cluster
# Export all settings
redisctl enterprise cm-settings export --output source-settings.json
# Review exported settings
jq '.' source-settings.json
Import to Target Cluster
# Validate before import
redisctl enterprise cm-settings validate --file @source-settings.json
# Import settings
redisctl enterprise cm-settings import --file @source-settings.json --force
Best Practices
- Always backup before changes - Export current settings before modifications
- Test in non-production - Validate changes in test environments first
- Document changes - Keep records of what was changed and why
- Use version control - Store settings files in Git for tracking
- Validate before import - Always validate settings files before importing
- Monitor after changes - Watch cluster behavior after settings updates
Troubleshooting
Settings Not Applied
# Check if settings were saved
redisctl enterprise cm-settings get
# Verify specific setting
redisctl enterprise cm-settings get --setting "your.setting.path"
# Check cluster logs for errors
redisctl enterprise logs list --type error
Invalid Settings Format
# Validate JSON syntax
jq '.' settings.json
# Validate against schema
redisctl enterprise cm-settings validate --file @settings.json
Reset to Defaults
If settings cause issues:
# Reset all settings to defaults
redisctl enterprise cm-settings reset --force
# Restart cluster services if needed
redisctl enterprise cluster restart-services
Permission Denied
CM settings require admin privileges:
# Check user permissions
redisctl enterprise user whoami
# Ensure admin role
redisctl enterprise user get <user_id> -q "role"
Related Commands
enterprise cluster
- Cluster configuration and managemententerprise job-scheduler
- Job scheduling configurationenterprise diagnostics
- Cluster diagnosticsapi enterprise
- Direct API access for advanced operations
OCSP Certificate Validation Commands
Manage Online Certificate Status Protocol (OCSP) for certificate validation in Redis Enterprise.
Overview
OCSP commands configure and manage certificate revocation checking for enhanced security in Redis Enterprise clusters. OCSP provides real-time certificate validation without requiring Certificate Revocation Lists (CRLs).
Available Commands
Get OCSP Configuration
redisctl enterprise ocsp get
Retrieves current OCSP settings including:
- OCSP functionality status (enabled/disabled)
- Responder URL
- Query frequency
- Recovery settings
- Response timeout
Update OCSP Configuration
redisctl enterprise ocsp update --data '{
"ocsp_functionality": true,
"responder_url": "http://ocsp.company.com",
"query_frequency": 3600,
"response_timeout": 5,
"recovery_frequency": 60,
"recovery_max_tries": 5
}'
Get OCSP Status
redisctl enterprise ocsp status
Shows the current operational status of OCSP validation.
Test OCSP Validation
redisctl enterprise ocsp test
Tests OCSP configuration and certificate validation.
Enable OCSP
redisctl enterprise ocsp enable
Quick command to enable OCSP validation with current settings.
Disable OCSP
redisctl enterprise ocsp disable
Quick command to disable OCSP validation.
Configuration Examples
Basic OCSP Setup
{
"ocsp_functionality": true,
"responder_url": "http://ocsp.digicert.com",
"query_frequency": 3600,
"response_timeout": 5
}
High-Security Configuration
{
"ocsp_functionality": true,
"responder_url": "https://ocsp.internal.company.com",
"query_frequency": 900,
"response_timeout": 3,
"recovery_frequency": 30,
"recovery_max_tries": 10,
"require_ocsp_response": true,
"cache_response": true,
"cache_duration": 3600
}
Common Use Cases
Initial OCSP Setup
# 1. Check current configuration
redisctl enterprise ocsp get
# 2. Configure OCSP responder
redisctl enterprise ocsp update --data '{
"responder_url": "http://ocsp.company.com",
"query_frequency": 3600
}'
# 3. Test configuration
redisctl enterprise ocsp test
# 4. Enable OCSP
redisctl enterprise ocsp enable
# 5. Verify status
redisctl enterprise ocsp status
Troubleshooting Certificate Issues
# Check if OCSP is causing connection issues
redisctl enterprise ocsp status
# Temporarily disable for testing
redisctl enterprise ocsp disable
# Test certificates manually
openssl ocsp -issuer issuer.crt \
-cert server.crt \
-url http://ocsp.company.com \
-resp_text
# Re-enable after fixing
redisctl enterprise ocsp enable
Monitoring OCSP Health
#!/bin/bash
# Monitor OCSP status and alert on failures
while true; do
STATUS=$(redisctl enterprise ocsp status -q 'validation_status')
if [[ "$STATUS" != "healthy" ]]; then
echo "OCSP validation unhealthy: $STATUS"
# Send alert
fi
sleep 300
done
Configuration Parameters
Parameter | Description | Default |
---|---|---|
ocsp_functionality | Enable/disable OCSP | false |
responder_url | OCSP responder endpoint | "" |
query_frequency | Seconds between OCSP queries | 3600 |
response_timeout | Timeout for OCSP responses | 1 |
recovery_frequency | Retry interval on failure | 60 |
recovery_max_tries | Maximum retry attempts | 5 |
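With the defaults above, the worst-case window before OCSP stops retrying a failed responder is recovery_frequency multiplied by recovery_max_tries:

```shell
# Worst-case recovery window under the default settings
RECOVERY_FREQUENCY=60   # seconds between retries
RECOVERY_MAX_TRIES=5    # maximum retry attempts
echo $((RECOVERY_FREQUENCY * RECOVERY_MAX_TRIES))  # 300 seconds (5 minutes)
```

Tune these two values together when sizing alert thresholds for OCSP outages.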
Security Best Practices
- Use HTTPS for the OCSP responder:
  redisctl enterprise ocsp update --data '{"responder_url": "https://ocsp.company.com"}'
- Configure appropriate timeouts - balance security against availability, and account for network latency to the responder
- Monitor OCSP health - set up alerts for OCSP failures and track response times and success rates
- Test before production - verify responder connectivity, test with actual certificates, and check failover behavior
Troubleshooting
OCSP Responder Unreachable
# Check network connectivity
curl -I http://ocsp.company.com
# Verify DNS resolution
nslookup ocsp.company.com
# Test with OpenSSL
openssl ocsp -url http://ocsp.company.com -timeout 5
Certificate Validation Failures
# Get detailed status
redisctl enterprise ocsp status -o json
# Check logs for OCSP errors
redisctl enterprise logs list --filter "OCSP"
# Test specific certificate
redisctl enterprise ocsp test --data '{
"certificate": "-----BEGIN CERTIFICATE-----..."
}'
Performance Impact
# Monitor query times
redisctl enterprise ocsp status -q 'average_response_time'
# Adjust query frequency if needed
redisctl enterprise ocsp update --data '{
"query_frequency": 7200
}'
Output Examples
Configuration Output
{
"ocsp_functionality": true,
"responder_url": "http://ocsp.company.com",
"query_frequency": 3600,
"response_timeout": 5,
"recovery_frequency": 60,
"recovery_max_tries": 5,
"last_check": "2025-09-15T10:30:00Z",
"next_check": "2025-09-15T11:30:00Z"
}
Status Output
{
"enabled": true,
"validation_status": "healthy",
"certificates_checked": 12,
"certificates_valid": 12,
"certificates_revoked": 0,
"last_success": "2025-09-15T10:30:00Z",
"failures_count": 0
}
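The status payload supports a simple health predicate: every checked certificate valid and none revoked. A sketch against sample data shaped like the output above (field names assumed from the example):

```shell
# Sample OCSP status mirroring the output shown above
STATUS='{"enabled":true,"validation_status":"healthy","certificates_checked":12,"certificates_valid":12,"certificates_revoked":0}'

# true only when all checked certificates are valid and none are revoked
echo "$STATUS" | jq '.certificates_valid == .certificates_checked and .certificates_revoked == 0'
```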
Related Commands
- Cluster Commands - Cluster security settings
- Auth Commands - Authentication configuration
- Certificate Commands - Certificate management
DNS Suffix Management
The suffix commands allow you to manage DNS suffixes for Redis Enterprise database endpoints. DNS suffixes provide custom domain names for database endpoints, useful for multi-tenant deployments and organizing databases by environment or purpose.
Available Commands
List DNS Suffixes
List all configured DNS suffixes in the cluster:
# List all DNS suffixes
redisctl enterprise suffix list
# List suffixes as a table
redisctl enterprise suffix list -o table
# Filter to active suffixes
redisctl enterprise suffix list -q "[?status == 'active']"
# Get suffix names only
redisctl enterprise suffix list -q "[].name"
Get DNS Suffix Details
Get detailed information about a specific DNS suffix:
# Get suffix details
redisctl enterprise suffix get example.redis.local
# Get suffix in YAML format
redisctl enterprise suffix get example.redis.local -o yaml
# Extract specific fields
redisctl enterprise suffix get example.redis.local -q '{name: name, dns_servers: dns_servers}'
# Check if suffix is in use
redisctl enterprise suffix get example.redis.local -q 'in_use'
Output Examples
DNS Suffix List
[
{
"name": "prod.redis.local",
"status": "active",
"dns_servers": ["10.0.1.53", "10.0.2.53"],
"databases": 5,
"created": "2024-01-15T10:30:00Z"
},
{
"name": "dev.redis.local",
"status": "active",
"dns_servers": ["10.0.3.53"],
"databases": 12,
"created": "2024-02-20T14:15:00Z"
}
]
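Output like this can be reduced directly with jq, for example to find the busiest suffix. A sketch over sample data mirroring the list above (field names from the example, values illustrative):

```shell
# Sample suffix list mirroring the output shown above
SUFFIXES='[{"name":"prod.redis.local","databases":5},{"name":"dev.redis.local","databases":12}]'

# Name of the suffix serving the most databases
echo "$SUFFIXES" | jq -r 'max_by(.databases).name'
```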
DNS Suffix Details
{
"name": "prod.redis.local",
"status": "active",
"dns_servers": ["10.0.1.53", "10.0.2.53"],
"dns_zone": "redis.local",
"ttl": 60,
"databases": [
{
"bdb_uid": 1,
"name": "cache-db",
"endpoint": "cache-db.prod.redis.local:16379"
},
{
"bdb_uid": 2,
"name": "session-db",
"endpoint": "session-db.prod.redis.local:16380"
}
],
"in_use": true,
"created": "2024-01-15T10:30:00Z",
"modified": "2024-03-01T09:45:00Z"
}
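The nested databases array makes it easy to pull all endpoints served by a suffix. A sketch over a sample payload shaped like the details above (field names from the example):

```shell
# Sample suffix details mirroring the output shown above
DETAILS='{"name":"prod.redis.local","databases":[{"name":"cache-db","endpoint":"cache-db.prod.redis.local:16379"},{"name":"session-db","endpoint":"session-db.prod.redis.local:16380"}]}'

# One endpoint per line, suitable for feeding into connectivity checks
echo "$DETAILS" | jq -r '.databases[].endpoint'
```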
Common Use Cases
Environment-Based Suffixes
Organize databases by environment using DNS suffixes:
# List production suffixes
redisctl enterprise suffix list -q "[?contains(name, 'prod')]"
# List development suffixes
redisctl enterprise suffix list -q "[?contains(name, 'dev')]"
# Check staging suffix configuration
redisctl enterprise suffix get staging.redis.local
Multi-Tenant Deployments
Manage suffixes for multi-tenant scenarios:
# List suffixes by tenant
redisctl enterprise suffix list -q "[?contains(name, 'tenant')]" -o table
# Get tenant-specific suffix
redisctl enterprise suffix get tenant-a.redis.local
# Count databases per suffix
redisctl enterprise suffix list -q "[].{suffix: name, database_count: databases}"
DNS Configuration Verification
Verify DNS suffix configurations:
# Check DNS servers for all suffixes
redisctl enterprise suffix list -q "[].{name: name, servers: dns_servers}"
# Find suffixes with specific DNS server
redisctl enterprise suffix list -q "[?contains(dns_servers, '10.0.1.53')]"
# Verify TTL settings
redisctl enterprise suffix list -q "[].{name: name, ttl: ttl}" -o table
Integration Examples
Database Creation with Suffix
When creating databases, specify the DNS suffix:
# Create database with specific suffix
cat <<EOF | redisctl enterprise database create --data -
{
"name": "app-cache",
"memory_size": 1073741824,
"dns_suffix_name": "prod.redis.local"
}
EOF
# Verify database endpoint
redisctl enterprise database get <bdb_uid> -q 'endpoint'
Monitoring Suffix Usage
Monitor DNS suffix utilization:
# Check suffix usage
for suffix in $(redisctl enterprise suffix list -q "[].name" -o json | jq -r '.[]'); do
echo "Suffix: $suffix"
redisctl enterprise suffix get "$suffix" -q 'databases' | jq length
done
# Find unused suffixes
redisctl enterprise suffix list -q "[?databases == \`0\`].name"
# Get suffix with most databases
redisctl enterprise suffix list -q "max_by(@, &databases).{name: name, count: databases}"
DNS Server Management
Manage DNS server configurations:
# List all unique DNS servers
redisctl enterprise suffix list -q "[].dns_servers[]" | jq 'unique'
# Find suffixes by DNS server count
redisctl enterprise suffix list -q "[?length(dns_servers) > \`1\`]"
# Check DNS server availability
for server in $(redisctl enterprise suffix list -q "[].dns_servers[]" | jq -r 'unique[]'); do
echo "Checking DNS server: $server"
dig @$server test.redis.local +short
done
Best Practices
- Naming Convention: Use consistent naming patterns for suffixes (e.g., <environment>.<domain>)
- DNS Server Redundancy: Configure multiple DNS servers for high availability
- TTL Settings: Set appropriate TTL values based on your DNS infrastructure
- Environment Separation: Use different suffixes for different environments
- Documentation: Maintain documentation of suffix assignments and purposes
Troubleshooting
Suffix Not Resolving
If DNS suffixes are not resolving:
# Check suffix configuration
redisctl enterprise suffix get <suffix_name>
# Verify DNS servers
redisctl enterprise suffix get <suffix_name> -q 'dns_servers'
# Check database endpoints using the suffix
redisctl enterprise database list -q "[?dns_suffix_name == '<suffix_name>']"
# Test DNS resolution
dig @<dns_server> <database>.<suffix_name>
Database Endpoint Issues
When databases aren't accessible via suffix:
# Check database suffix assignment
redisctl enterprise database get <bdb_uid> -q 'dns_suffix_name'
# Verify suffix is active
redisctl enterprise suffix get <suffix_name> -q 'status'
# List all endpoints for suffix
redisctl enterprise suffix get <suffix_name> -q 'databases[].endpoint'
Related Commands
- redisctl enterprise database - Create and manage databases with DNS suffixes
- redisctl enterprise cluster - View cluster-wide DNS configuration
- redisctl enterprise endpoint - Monitor endpoint availability and statistics
JSON Schema
The JSON schema command provides access to the Redis Enterprise API schema definitions, useful for validation, documentation, and code generation.
Available Commands
Get JSON Schema
Retrieve the complete JSON schema for the Redis Enterprise API:
# Get full JSON schema
redisctl enterprise jsonschema get
# Get schema as YAML
redisctl enterprise jsonschema get -o yaml
# Extract specific schema definitions
redisctl enterprise jsonschema get -q 'definitions'
# Get schema for a specific resource
redisctl enterprise jsonschema get -q 'definitions.bdb'
# List all available definitions
redisctl enterprise jsonschema get -q 'definitions | keys(@)'
Output Examples
Schema Structure
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Redis Enterprise API Schema",
"version": "1.0.0",
"definitions": {
"bdb": {
"type": "object",
"properties": {
"uid": {
"type": "integer",
"description": "Database unique ID"
},
"name": {
"type": "string",
"description": "Database name"
},
"memory_size": {
"type": "integer",
"description": "Memory limit in bytes"
},
"shards_count": {
"type": "integer",
"minimum": 1,
"description": "Number of database shards"
}
},
"required": ["name", "memory_size"]
},
"node": {
"type": "object",
"properties": {
"uid": {
"type": "integer",
"description": "Node unique ID"
},
"addr": {
"type": "string",
"format": "ipv4",
"description": "Node IP address"
}
}
}
},
"paths": {
"/v1/bdbs": {
"post": {
"requestBody": {
"$ref": "#/definitions/bdb"
}
}
}
}
}
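Given a schema in this shape, the per-definition required fields can be tabulated with jq. The sketch below uses a trimmed sample matching the structure above:

```shell
# Trimmed sample schema matching the structure shown above
SCHEMA='{"definitions":{"bdb":{"required":["name","memory_size"]},"node":{}}}'

# One object per definition with its required fields (empty array when none declared)
echo "$SCHEMA" | jq '.definitions | to_entries[] | {schema: .key, required: (.value.required // [])}'
```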
Common Use Cases
API Validation
Validate request payloads against the schema:
# Get schema for database creation
redisctl enterprise jsonschema get -q 'definitions.bdb'
# Extract required fields
redisctl enterprise jsonschema get -q 'definitions.bdb.required'
# Get property types
redisctl enterprise jsonschema get -o json | jq '.definitions.bdb.properties | to_entries[] | {property: .key, type: .value.type}'
Code Generation
Generate TypeScript or other language definitions:
# Export schema for code generation
redisctl enterprise jsonschema get -o json > redis-enterprise-schema.json
# Extract definitions for specific resources
redisctl enterprise jsonschema get -q 'definitions.{database: bdb, cluster: cluster, node: node}' > resources.json
# Generate TypeScript interfaces (using external tool)
redisctl enterprise jsonschema get | npx json-schema-to-typescript > redis-enterprise.d.ts
Documentation
Extract schema information for documentation:
# Get all resource definitions
redisctl enterprise jsonschema get -q 'definitions | keys(@)' -o json
# Get descriptions for properties
redisctl enterprise jsonschema get -o json | jq '.definitions.bdb.properties | to_entries[] | {property: .key, description: .value.description}'
# List all API paths
redisctl enterprise jsonschema get -q 'paths | keys(@)'
# Get operations for a path
redisctl enterprise jsonschema get -q 'paths."/v1/bdbs" | keys(@)'
Schema Discovery
Explore available schemas and their structures:
# List all top-level schema properties
redisctl enterprise jsonschema get -q 'keys(@)'
# Find schemas with specific properties (to_entries/select are jq, so pipe the JSON to jq)
redisctl enterprise jsonschema get -o json | jq -r '.definitions | to_entries[] | select(.value.properties.memory_size) | .key'
# Get enum values for properties
redisctl enterprise jsonschema get -o json | jq '.definitions[].properties | to_entries[] | select(.value.enum) | {property: .key, values: .value.enum}'
# Find required properties across all schemas
redisctl enterprise jsonschema get -o json | jq '.definitions | to_entries[] | {schema: .key, required: .value.required}'
Integration Examples
Validation Script
Create a validation script using the schema:
#!/bin/bash
# validate-payload.sh
SCHEMA=$(redisctl enterprise jsonschema get -q 'definitions.bdb')
PAYLOAD=$1
echo "$PAYLOAD" | jq --argjson schema "$SCHEMA" '
# Simple validation example
if .name == null then
error("name is required")
elif .memory_size == null then
error("memory_size is required")
else
.
end
'
OpenAPI Generation
Convert to OpenAPI specification:
# Extract and format for OpenAPI
redisctl enterprise jsonschema get -o json | jq '{
openapi: "3.0.0",
info: {
title: "Redis Enterprise API",
version: .version
},
components: {
schemas: .definitions
},
paths: .paths
}' > openapi.json
Schema Comparison
Compare schemas across versions:
# Save current schema
redisctl enterprise jsonschema get -o json > schema-current.json
# Later, compare with new version
redisctl enterprise jsonschema get -o json > schema-new.json
diff <(jq -S . schema-current.json) <(jq -S . schema-new.json)
# Find new properties
jq -r '.definitions | keys[]' schema-new.json | sort | \
  comm -13 <(jq -r '.definitions | keys[]' schema-current.json | sort) -
Best Practices
- Cache Schema: The schema doesn't change frequently, so cache it locally
- Version Control: Store schema snapshots in version control for tracking changes
- Validation: Use the schema to validate payloads before API calls
- Code Generation: Generate client code from schema for type safety
- Documentation: Keep schema-based documentation up to date
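The caching advice can be a few lines of shell. This is a minimal sketch: `fetch_schema` is a stub standing in for `redisctl enterprise jsonschema get -o json`, and the cache path is illustrative.

```shell
#!/bin/bash
# Refresh the cached schema at most once a day; the schema only
# changes across cluster upgrades, so daily is generous.
fetch_schema() { echo '{"definitions": {}}'; }  # stub for the real redisctl call

CACHE="${TMPDIR:-/tmp}/redisctl-schema.json"
MAX_AGE=86400  # seconds

now=$(date +%s)
mtime=0
if [ -f "$CACHE" ]; then
    # GNU stat uses -c %Y, BSD stat uses -f %m
    mtime=$(stat -c %Y "$CACHE" 2>/dev/null || stat -f %m "$CACHE")
fi
if [ $(( now - mtime )) -gt "$MAX_AGE" ]; then
    fetch_schema > "$CACHE"
fi
cat "$CACHE"
```

Swap the stub for the real command and every downstream script can read the cache instead of hitting the API.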
Troubleshooting
Schema Retrieval Issues
If schema retrieval fails:
# Check API connectivity
redisctl enterprise cluster get -q 'name'
# Try raw API access
redisctl api enterprise get /v1/jsonschema
# Check with curl
curl -k -u "$REDIS_ENTERPRISE_USER:$REDIS_ENTERPRISE_PASSWORD" \
https://$REDIS_ENTERPRISE_URL/v1/jsonschema
Schema Validation
Validate that the schema is well-formed:
# Check if valid JSON
redisctl enterprise jsonschema get | jq empty && echo "Valid JSON"
# Validate schema structure
redisctl enterprise jsonschema get | jq 'has("definitions") and has("$schema")'
# Check for required sections
redisctl enterprise jsonschema get | jq '[has("definitions"), has("properties"), has("paths")] | all'
Related Commands
- redisctl api enterprise - Direct API access for testing
- redisctl enterprise database create - Use schema for creating resources
- redisctl enterprise cluster - Cluster configuration that follows schema
Redis Enterprise Examples
Real-world examples of managing Redis Enterprise clusters.
Cluster Setup
Bootstrap Single-Node Cluster
# 1. Bootstrap the cluster
redisctl api enterprise post /v1/bootstrap \
--data '{
"action": "create_cluster",
"cluster": {
"name": "Development Cluster"
},
"node": {
"paths": {
"persistent_path": "/var/opt/redislabs/persist",
"ephemeral_path": "/var/opt/redislabs/tmp"
}
},
"credentials": {
"username": "admin@cluster.local",
"password": "SecurePassword123!"
}
}'
# 2. Check bootstrap status
redisctl api enterprise get /v1/bootstrap
# 3. Apply license
redisctl enterprise license set --license-file license.key
# 4. Verify cluster is ready
redisctl enterprise cluster info
Add Nodes to Cluster
# On new node, join existing cluster
redisctl api enterprise post /v1/bootstrap \
--data '{
"action": "join_cluster",
"cluster": {
"nodes": ["192.168.1.100"]
},
"credentials": {
"username": "admin@cluster.local",
"password": "SecurePassword123!"
}
}'
# Check all nodes
redisctl enterprise node list -o table
# Verify node status
redisctl enterprise node get 2 -q '{status:status,role:role}'
Database Management
Create High-Performance Database
# Create database optimized for caching
redisctl api enterprise post /v1/bdbs \
--data '{
"name": "cache-db",
"memory_size": 10737418240,
"type": "redis",
"port": 12000,
"replication": true,
"shards_count": 4,
"shard_key_regex": ".*{(.*)}.*",
"eviction_policy": "allkeys-lru",
"persistence": "disabled",
"redis_version": "7.2"
}'
# Get connection string
redisctl enterprise database get 1 \
-q '"redis://:" + password + "@" + endpoints[0].addr[0] + ":" + (endpoints[0].port|tostring)'
Enable Modules
# Upload custom module
redisctl enterprise module upload \
--file /path/to/module.so \
--name "CustomModule" \
--version "1.0.0"
# Create database with modules
redisctl api enterprise post /v1/bdbs \
--data '{
"name": "feature-db",
"memory_size": 5368709120,
"port": 12001,
"module_list": [
{"module_name": "search", "module_args": ""},
{"module_name": "timeseries", "module_args": ""},
{"module_name": "json", "module_args": ""}
]
}'
User Management
Set Up RBAC
# Create custom role
redisctl api enterprise post /v1/roles \
--data '{
"name": "developer",
"management": "db_member",
"data_access": "read-write"
}'
# Create users with different roles
redisctl enterprise user create \
--email "admin@company.com" \
--password "AdminPass123!" \
--role "admin"
redisctl enterprise user create \
--email "dev@company.com" \
--password "DevPass123!" \
--role "db_member"
redisctl enterprise user create \
--email "viewer@company.com" \
--password "ViewPass123!" \
--role "db_viewer"
# List users and their roles
redisctl enterprise user list \
-q "[].{email:email,role:role}" \
-o table
Monitoring and Maintenance
Health Check Script
#!/bin/bash
# Comprehensive cluster health check
echo "=== Cluster Health Check ==="
# Cluster status
echo -e "\nCluster Status:"
redisctl enterprise cluster info -q '{name:name,status:status}'
# Node health
echo -e "\nNode Status:"
redisctl enterprise node list \
-q "[].{id:uid,address:addr,status:status,role:role}" \
-o table
# Database health
echo -e "\nDatabase Status:"
redisctl enterprise database list \
-q "[].{name:name,status:status,memory_used:used_memory}" \
-o table
# Check for alerts
echo -e "\nActive Alerts:"
redisctl api enterprise get /v1/cluster/alerts \
-q "[?state=='active'].{severity:severity,alert:alert_name}"
Performance Monitoring
# Get database metrics
DB_ID=1
redisctl api enterprise get /v1/bdbs/$DB_ID/stats \
-q '{
ops_per_sec: avg_ops_per_sec,
memory_used: used_memory,
connections: conns,
cpu_percent: cpu_user
}'
# Monitor in real-time
watch -n 5 'redisctl api enterprise get /v1/bdbs/1/stats \
-q "{ops: avg_ops_per_sec, memory: used_memory, cpu: cpu_user}"'
# Export metrics for Prometheus
redisctl api enterprise get "/v1/bdbs/metrics?format=prometheus" > metrics.txt
Backup and Recovery
Automated Backup
#!/bin/bash
# Backup all databases
BACKUP_DIR="/backups/$(date +%Y%m%d)"
mkdir -p $BACKUP_DIR
# List all databases and backup
redisctl enterprise database list -q "[].uid" | while read db_id; do
echo "Backing up database $db_id..."
# Trigger backup
redisctl api enterprise post /v1/bdbs/$db_id/actions/export \
--data '{
"location": "'"$BACKUP_DIR"'/db-'"$db_id"'.rdb"
}'
done
# Create backup manifest
redisctl enterprise database list > $BACKUP_DIR/manifest.json
Database Migration
# Export from source database
redisctl api enterprise post /v1/bdbs/1/actions/export \
--data '{
"location": "/tmp/export.rdb"
}'
# Wait for export to complete
sleep 10
# Import to new database
redisctl api enterprise post /v1/bdbs/2/actions/import \
--data '{
"location": "/tmp/export.rdb",
"sync": "merge"
}'
High Availability Setup
Configure Database HA
# Create HA database with rack awareness
redisctl api enterprise post /v1/bdbs \
--data '{
"name": "ha-database",
"memory_size": 10737418240,
"replication": true,
"replica_ha": true,
"rack_aware": true,
"shards_count": 3,
"shard_key_regex": ".*{(.*)}.*",
"proxy_policy": "all-master-shards"
}'
# Configure automatic failover
redisctl api enterprise put /v1/bdbs/1 \
--data '{
"replica_ha": true,
"replica_ha_grace": 60
}'
CRDB (Active-Active) Setup
# Create CRDB on first cluster
redisctl api enterprise post /v1/crdbs \
--data '{
"name": "global-cache",
"memory_size": 5368709120,
"port": 12100,
"replication": false,
"sharding": true,
"shard_count": 2
}'
# Get CRDB configuration for other clusters
CRDB_GUID=$(redisctl api enterprise get /v1/crdbs/1 -q crdb_guid)
# On second cluster, create participating instance
redisctl api enterprise post /v1/crdbs \
--data '{
"name": "global-cache",
"memory_size": 5368709120,
"port": 12100,
"crdb_guid": "'"$CRDB_GUID"'",
"instance": {
"cluster": {
"url": "https://cluster1.example.com:9443",
"credentials": {
"username": "admin@cluster.local",
"password": "password"
}
}
}
}'
Maintenance Operations
Rolling Restart
#!/bin/bash
# Perform rolling restart of all databases
redisctl enterprise database list -q "[].uid" | while read db_id; do
echo "Restarting database $db_id..."
# Restart database
redisctl api enterprise post /v1/bdbs/$db_id/actions/restart
# Wait for database to be active
while [ "$(redisctl enterprise database get $db_id -q status)" != "active" ]; do
sleep 5
done
echo "Database $db_id restarted successfully"
done
Cluster Upgrade Preparation
# Check upgrade readiness
echo "=== Pre-Upgrade Check ==="
# Check cluster version
redisctl enterprise cluster info -q '{version:version}'
# Check node versions
redisctl enterprise node list -q "[].{node:uid,version:software_version}"
# Check for active alerts
ALERTS=$(redisctl api enterprise get /v1/cluster/alerts -q "[?state=='active'] | length(@)")
if [ "$ALERTS" -gt 0 ]; then
echo "WARNING: Active alerts found. Resolve before upgrading."
redisctl api enterprise get /v1/cluster/alerts -q "[?state=='active']"
fi
# Backup critical databases
redisctl enterprise database list -q "[?contains(name, 'prod')].uid" | while read db_id; do
redisctl api enterprise post /v1/bdbs/$db_id/actions/export \
--data '{"location": "/backup/pre-upgrade-db-'"$db_id"'.rdb"}'
done
Human-Friendly Commands
These commands provide a typed, user-friendly interface to common Redis Enterprise operations.
Cluster Management
# Get cluster information
redisctl enterprise cluster info
# Update cluster settings
redisctl enterprise cluster update \
--name "Production Cluster" \
--rack-aware true
# Get cluster license
redisctl enterprise license get
# Update license
redisctl enterprise license set --license-file license.key
Database Management
# List all databases
redisctl enterprise database list
# Get database details
redisctl enterprise database get <db-id>
# Create database
redisctl enterprise database create \
--name "cache-db" \
--memory-size 10gb \
--port 12000 \
--replication true \
--shards-count 2
# Update database
redisctl enterprise database update <db-id> \
--memory-size 20gb \
--eviction-policy allkeys-lru
# Delete database
redisctl enterprise database delete <db-id>
Node Management
# List all nodes
redisctl enterprise node list
# Get node details
redisctl enterprise node get <node-id>
# Add node to cluster
redisctl enterprise node join \
--address 192.168.1.100 \
--username admin@cluster.local \
--password node-password
# Remove node
redisctl enterprise node remove <node-id>
# Get node stats
redisctl enterprise node stats <node-id>
User Management
# List users
redisctl enterprise user list
# Create user
redisctl enterprise user create \
--email "user@example.com" \
--password "secure-password" \
--role "db-member"
# Update user
redisctl enterprise user update <user-id> \
--role "cluster-admin"
# Delete user
redisctl enterprise user delete <user-id>
Role Management
# List roles
redisctl enterprise role list
# Get role details
redisctl enterprise role get <role-id>
# Create custom role
redisctl enterprise role create \
--name "db-viewer" \
--permissions "view-db,view-stats"
Module Management
# List available modules
redisctl enterprise module list
# Upload module
redisctl enterprise module upload \
--file redisgraph.so \
--name "RedisGraph" \
--version "2.8.0"
# Delete module
redisctl enterprise module delete <module-id>
Alert Configuration
# List configured alerts
redisctl enterprise alert list
# Get alert details
redisctl enterprise alert get <alert-id>
# Create alert
redisctl enterprise alert create \
--name "high-memory" \
--threshold 80 \
--email "ops@example.com"
# Update alert
redisctl enterprise alert update <alert-id> \
--threshold 90
# Delete alert
redisctl enterprise alert delete <alert-id>
Bootstrap Operations
# Bootstrap single node cluster
redisctl enterprise bootstrap create \
--cluster-name "Dev Cluster" \
--username "admin@cluster.local" \
--password "admin-password" \
--license-file license.key
# Get bootstrap status
redisctl enterprise bootstrap status
# Join existing cluster
redisctl enterprise bootstrap join \
--cluster-address 192.168.1.100 \
--username admin@cluster.local \
--password cluster-password
CRDB (Active-Active) Operations
# List CRDB databases
redisctl enterprise crdb list
# Get CRDB details
redisctl enterprise crdb get <crdb-id>
# Create CRDB
redisctl enterprise crdb create \
--name "global-cache" \
--memory-size 10gb \
--participating-clusters "1,2,3"
# Add participating cluster
redisctl enterprise crdb add-instance <crdb-id> \
--cluster-id 4 \
--memory-size 10gb
Statistics and Monitoring
# Get database statistics
redisctl enterprise database stats <db-id>
# Get node statistics
redisctl enterprise node stats <node-id>
# Get cluster statistics
redisctl enterprise cluster stats
# Export metrics
redisctl enterprise stats export \
--format prometheus \
--output metrics.txt
Output Formatting
All commands support output formatting:
# Table format
redisctl enterprise database list -o table
# JSON (default)
redisctl enterprise database list -o json
# YAML
redisctl enterprise database list -o yaml
# Filtered with JMESPath
redisctl enterprise database list \
-q "[?status=='active'].{name:name,memory:memory_size,port:port}"
Redis Enterprise Commands
Redis Enterprise commands are organized into three layers:
1. Human-Friendly Commands
High-level commands with typed parameters and structured output.
redisctl enterprise <resource> <action> [options]
See Human-Friendly Commands for the complete reference.
2. Raw API Access
Direct access to any REST endpoint when you need full control.
redisctl api enterprise <method> <path> [options]
See Raw API Access for details.
3. Workflows (Coming Soon)
Multi-step orchestrated operations for complex tasks:
- Cluster bootstrap and setup
- Node addition and removal
- Database migration workflows
- Upgrade procedures
Quick Reference
Most Common Commands
# Cluster
redisctl enterprise cluster info
redisctl enterprise cluster update --name "Production"
# Databases
redisctl enterprise database list
redisctl enterprise database get 1
redisctl enterprise database create --name "cache"
# Nodes
redisctl enterprise node list
redisctl enterprise node get 1
# Direct API
redisctl api enterprise get /v1/cluster
redisctl api enterprise post /v1/bdbs --data @database.json
Raw API Access
Direct access to any Redis Enterprise REST API endpoint.
Basic Usage
redisctl api enterprise <method> <path> [options]
Methods: get, post, put, patch, delete
Examples
GET Requests
# Get cluster information
redisctl api enterprise get /v1/cluster
# Get all databases
redisctl api enterprise get /v1/bdbs
# Get specific database
redisctl api enterprise get /v1/bdbs/1
# Get all nodes
redisctl api enterprise get /v1/nodes
# Get node statistics
redisctl api enterprise get /v1/nodes/1/stats
# Get with query parameters
redisctl api enterprise get "/v1/bdbs?fields=name,port,status"
POST Requests
# Create database (with JSON file)
redisctl api enterprise post /v1/bdbs \
--data @database.json
# Create database (with inline JSON)
redisctl api enterprise post /v1/bdbs \
--data '{
"name": "my-database",
"memory_size": 10737418240,
"port": 12000,
"replication": true
}'
# Bootstrap cluster
redisctl api enterprise post /v1/bootstrap \
--data '{
"action": "create_cluster",
"cluster": {
"name": "my-cluster"
},
"credentials": {
"username": "admin@cluster.local",
"password": "admin-password"
}
}'
PUT Requests
# Update database configuration
redisctl api enterprise put /v1/bdbs/1 \
--data '{"memory_size": 21474836480}'
# Update cluster settings
redisctl api enterprise put /v1/cluster \
--data '{"name": "Production Cluster"}'
DELETE Requests
# Delete database
redisctl api enterprise delete /v1/bdbs/1
# Remove node from cluster
redisctl api enterprise delete /v1/nodes/3
Common Endpoints
Cluster Management
- /v1/cluster - Cluster information and settings
- /v1/bootstrap - Bootstrap operations
- /v1/license - License management
- /v1/ocsp - OCSP configuration
- /v1/cm_settings - Cluster manager settings
Database Operations (BDB)
- /v1/bdbs - Database list and creation
- /v1/bdbs/{id} - Database details and management
- /v1/bdbs/{id}/actions - Database actions (flush, restart)
- /v1/bdbs/{id}/stats - Database statistics
Node Management
- /v1/nodes - Node list
- /v1/nodes/{id} - Node details
- /v1/nodes/{id}/actions - Node actions
- /v1/nodes/{id}/stats - Node statistics
User & Access Control
- /v1/users - User management
- /v1/roles - Role definitions
- /v1/acl_roles - Redis ACL roles
- /v1/ldap_mappings - LDAP integration
Sharding & Replication
- /v1/shards - Shard management
- /v1/shards/{id}/actions - Shard operations
- /v1/bdbs/{id}/endpoints - Database endpoints
Active-Active (CRDB)
- /v1/crdbs - CRDB list and creation
- /v1/crdbs/{id} - CRDB management
- /v1/crdbs/{id}/participating_clusters - Participating clusters
Monitoring & Alerts
- /v1/bdbs/{id}/alerts - Database alerts
- /v1/nodes/{id}/alerts - Node alerts
- /v1/cluster/alerts - Cluster alerts
- /v1/logs - System logs
Modules
- /v1/modules - Module management
- /v1/modules/{id} - Module details
Working with Certificates
For self-signed certificates:
# Allow insecure connections
export REDIS_ENTERPRISE_INSECURE=true
# Or use system certificate store
redisctl api enterprise get /v1/cluster \
--ca-cert /path/to/ca.crt
Pagination
Many endpoints support pagination:
# Get databases with pagination
redisctl api enterprise get "/v1/bdbs?offset=0&limit=10"
# Get next page
redisctl api enterprise get "/v1/bdbs?offset=10&limit=10"
Filtering Results
# Get only specific fields
redisctl api enterprise get "/v1/bdbs?fields=name,port,status"
# Filter with JMESPath after retrieval
redisctl api enterprise get /v1/bdbs \
-q "[?status=='active'].{name:name,port:port}"
Async Operations
Some operations return task IDs:
# Create database (returns task)
TASK_ID=$(redisctl api enterprise post /v1/bdbs \
--data @database.json \
-q "task_id")
# Check task status
redisctl api enterprise get /v1/tasks/$TASK_ID
# Wait for completion
while [ "$(redisctl api enterprise get /v1/tasks/$TASK_ID -q status)" != "completed" ]; do
sleep 2
done
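The wait loop above polls forever; in practice you'll want a deadline so a stuck task doesn't hang your script. A sketch with a timeout follows — `get_status` is a stub standing in for `redisctl api enterprise get /v1/tasks/$TASK_ID -q status` (here it reports "completed" on the third poll so the sketch runs standalone):

```shell
#!/bin/bash
# Poll task status with an overall deadline instead of an unbounded loop.
STATE="${TMPDIR:-/tmp}/poll-count.$$"
echo 0 > "$STATE"
get_status() {
    # Stub: pretend the task completes on the third poll.
    n=$(( $(cat "$STATE") + 1 ))
    echo "$n" > "$STATE"
    if [ "$n" -ge 3 ]; then echo completed; else echo processing; fi
}

DEADLINE=$(( $(date +%s) + 600 ))  # overall budget: 10 minutes
final=""
while :; do
    s=$(get_status)
    if [ "$s" = "completed" ]; then final="$s"; break; fi
    if [ "$(date +%s)" -ge "$DEADLINE" ]; then
        echo "timed out waiting for task" >&2
        exit 1
    fi
    sleep 1
done
echo "task $final"
rm -f "$STATE"
```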
Error Handling
# Check HTTP status
if ! redisctl api enterprise get /v1/bdbs/999; then
echo "Database not found"
fi
# Get detailed error information
redisctl api enterprise get /v1/bdbs/999 --verbose
Tips
- Use /v1/swagger to get API documentation
- Most IDs in Enterprise are integers (unlike Cloud's UUIDs)
- Memory sizes are in bytes
- Use --insecure for development clusters with self-signed certs
- Check the Redis Enterprise API docs for detailed endpoint information
Output Formats
redisctl supports multiple output formats to suit different use cases.
Available Formats
Auto (Default)
Automatically selects the best format based on context:
# Auto-detects format
redisctl cloud database list
JSON
Machine-readable JSON output:
redisctl cloud database list -o json
YAML
Human-readable structured format:
redisctl cloud database list -o yaml
Table
Formatted tables for human consumption:
redisctl cloud database list -o table
JMESPath Filtering
Use the -q flag for powerful JSON queries:
# Get all database names
redisctl cloud database list -q "[].name"
# Filter by status
redisctl cloud database list -q "[?status=='active']"
# Custom projections
redisctl cloud database list -q "[].{name: name, memory: planMemoryLimit}"
Advanced Queries
# Sort by memory
redisctl cloud database list -q "sort_by(@, &planMemoryLimit)"
# Find databases with specific modules
redisctl cloud database list -q "[?modules[?name=='RediSearch']]"
# Complex filtering (memory > 250MB)
redisctl cloud database list -q '[?planMemoryLimit > `250`].{name: name, region: region, memory: planMemoryLimit}'
Working with Other Tools
jq Integration
# Filter with jq
redisctl cloud database list -o json | jq '.[] | select(.name | contains("prod"))'
# Extract IDs
redisctl cloud database list -o json | jq -r '.[].databaseId'
yq for YAML
redisctl cloud database list -o yaml | yq '.[] | select(.status == "active")'
Scripting Examples
Batch Operations
# Get all database IDs
IDS=($(redisctl cloud database list -q "[].databaseId" -o json | jq -r '.[]'))
# Process each database
for ID in "${IDS[@]}"; do
redisctl cloud database get $ID
done
Output Redirection
# Save to file
redisctl cloud database list -o json > databases.json
# Append to log
redisctl cloud database list >> operations.log
# Error handling
redisctl cloud database list 2> errors.log || echo "Failed"
Environment Detection
redisctl automatically detects the output environment:
- Terminal: Defaults to table format for readability
- Pipe: Defaults to JSON for processing
- Redirect: Defaults to JSON for storage
Override with the -o flag when needed.
Format-Specific Features
Table Features
- Automatic column width adjustment
- Row highlighting for important data
- Pagination for large datasets
- Color support when terminal supports it
JSON Features
- Pretty-printed by default
- Compact mode available with --compact
- Proper escaping for special characters
- Null values handled correctly
YAML Features
- Comments for clarity
- Multi-line string support
- Proper indentation
- Type preservation
Error Handling
Different formats handle errors differently:
JSON Errors
{
"error": "Authentication failed",
"details": "Invalid API key"
}
Table Errors
Error: Authentication failed
Details: Invalid API key
YAML Errors
error: Authentication failed
details: Invalid API key
Performance Considerations
- JSON: Fastest parsing, smallest size
- YAML: Human-readable, larger size
- Table: Terminal rendering overhead
Examples
Save Configuration
redisctl cloud database get 12345 -o yaml > database-config.yaml
Generate Reports
# CSV-like output for spreadsheets
redisctl cloud database list -o json | \
jq -r '.[] | [.name, .status, .memory] | @csv'
Monitor Changes
# Watch for inactive databases
watch -n 10 'redisctl cloud database list -o table -q "[?status!='"'"'active'"'"']"'
Tips and Tricks
- Default Format: Set the REDISCTL_OUTPUT environment variable
  export REDISCTL_OUTPUT=json
- Raw Output: Use -r or --raw for unformatted output
  redisctl cloud database list -q "[].id" -r
- Silent Mode: Suppress non-essential output
  redisctl cloud database create --data @db.json 2> errors.log
- Discard Errors: Send error output to /dev/null
  redisctl cloud database list 2>/dev/null
Complex Workflows
Health Dashboard
#!/bin/bash
while true; do
clear
echo "=== Database Health ==="
redisctl cloud database list -o table -q "[?status!='active']"
echo ""
echo "=== Resource Usage ==="
redisctl cloud database list -o json | \
jq -r '.[] | "\(.name): \(.usedMemoryInMb)MB / \(.memoryLimitInGb)GB"'
sleep 60
done
Automated Reporting
#!/bin/bash
REPORT_DATE=$(date +%Y-%m-%d)
REPORT_FILE="database-report-${REPORT_DATE}.json"
# Collect all database information
{
echo "{"
echo " \"report_date\": \"${REPORT_DATE}\","
echo " \"databases\": "
redisctl cloud database list -o json | jq -r '
map({
name: .name,
status: .status,
region: .region,
memory_gb: .memoryLimitInGb,
throughput: .throughputMeasurement
})
'
echo "}"
} > "$REPORT_FILE"
echo "Report saved to $REPORT_FILE"
Best Practices
- Use JSON for automation - Most reliable for parsing
- Use Table for human review - Easiest to read
- Use YAML for configuration - Best for config files
- Use JMESPath for filtering - More powerful than jq for simple queries
- Combine tools - Use redisctl with jq, yq, awk for complex processing
JMESPath Queries
JMESPath is a query language for JSON that allows you to extract and transform data from command output. redisctl supports JMESPath queries via the -q or --query flag.
Basic Usage
redisctl [command] -o json -q "query_expression"
Quick Start Examples
# Get just one field
redisctl enterprise cluster get -o json -q 'name'
# Output: "docker-cluster"
# Get multiple fields as object
redisctl enterprise database get 1 -o json -q '{name: name, port: port}'
# Output: {"name": "default-db", "port": 12000}
# Get field from all items in a list
redisctl enterprise database list -o json -q '[].name'
# Output: ["default-db", "cache-db", "persistent-db"]
# Filter list by condition
redisctl enterprise database list -o json -q '[?port > `12000`].name'
# Output: ["cache-db", "persistent-db"]
# Count items
redisctl enterprise database list -o json -q 'length(@)'
# Output: 3
Common Query Patterns
Select Specific Fields
# Get just database names and ports
redisctl cloud database list -o json -q "[].{name:name, port:port}"
# Output:
# [
# {"name": "cache-prod", "port": 12000},
# {"name": "sessions", "port": 12001}
# ]
Filter Results
# Find active databases only
redisctl cloud database list -o json -q "[?status=='active']"
# Databases with specific memory size
redisctl enterprise database list -o json -q '[?memory_size > `1073741824`]'
# Multiple conditions
redisctl cloud subscription list -o json \
-q "[?status=='active' && paymentMethodId=='12345']"
Array Operations
# First 3 results
redisctl cloud database list -o json -q "[0:3]"
# Last result
redisctl cloud database list -o json -q "[-1]"
# Count results
redisctl cloud database list -o json -q "length(@)"
Nested Data Access
# Access nested fields
redisctl cloud subscription get 123456 -o json \
-q "databases[].{id:databaseId, name:name}"
# Flatten nested arrays
redisctl enterprise cluster get -o json \
-q "nodes[].{node:name, shards:shards[].name}"
Advanced Queries
Sorting
# Sort by memory size descending
redisctl enterprise database list -o json \
-q "reverse(sort_by(@, &memory_size))"
# Sort by name
redisctl cloud database list -o json \
-q "sort_by(@, &name)"
Aggregations
# Sum total memory across databases
redisctl enterprise database list -o json \
-q "sum([].memory_size)"
# Get max port number
redisctl enterprise database list -o json \
-q "max([].port)"
Complex Transformations
# Group databases by status
redisctl cloud database list -o json \
-q "group_by(@, &status)"
# Multi-level filtering and projection
redisctl cloud subscription list -o json \
-q "[?databases[?status=='active']].{
subscription: name,
active_databases: databases[?status=='active'].name
}"
Enterprise-Specific Examples
Database Management
# Get all database names and their persistence settings
redisctl enterprise database list -o json \
-q '[].{name: name, persistence: data_persistence}'
# Find databases using AOF persistence
redisctl enterprise database list -o json \
-q "[?data_persistence=='aof'].name"
# Get database endpoints for connection strings
redisctl enterprise database get 1 -o json \
-q 'endpoints[0].{host: addr[0], port: port}'
# Monitor database creation status
redisctl enterprise database list -o json \
-q "[?status!='active'].{name: name, status: status}"
Node and Cluster Monitoring
# Get node addresses with their status
redisctl enterprise node list -o json \
-q '[].{address: addr, status: status, shards: shard_count}'
# Extract specific node details
redisctl enterprise node get 1 -o json \
-q '{address: addr, cores: cores, memory_gb: total_memory / `1073741824`}'
# Check cluster resource usage
redisctl enterprise cluster stats -o json \
-q '{cpu: cpu_usage, memory: memory_usage, databases: total_databases}'
# Get cluster version and license status
redisctl enterprise cluster get -o json \
-q '{name: name, version: software_version, licensed: !license_expired}'
Module Management
# List all module names and versions
redisctl enterprise module list -o json \
-q '[].{name: module_name, version: semantic_version}'
# Find specific module version
redisctl enterprise module list -o json \
-q "[?module_name=='search'].semantic_version | [0]"
# Get modules configured for a database
redisctl enterprise database get 1 -o json \
-q 'module_list[].{name: module_name, args: module_args}'
License and Compliance
# Check license expiration
redisctl enterprise license get -o json \
-q '{expired: expired, expires_on: expiration_date}'
# Count total shards across all databases
redisctl enterprise database list -o json \
-q 'sum([].shards_count)'
Alert Monitoring
# Count active alerts
redisctl api enterprise get /v1/cluster/alerts -o json \
-q 'length(@)'
# Get alert details if any exist
redisctl api enterprise get /v1/cluster/alerts -o json \
-q '[].{severity: severity, message: message}'
Cloud-Specific Examples
Find Resources by Tags
# Find subscriptions with specific tags
redisctl cloud subscription list -o json \
-q "[?tags.environment=='production']"
Monitor Resource Usage
# Get high memory usage databases
redisctl enterprise database stats all -o json \
  -q '[?used_memory > `858993459`].{
    name: name,
    usage_percent: (used_memory / memory_size) * `100`
  }'
Extract Connection Info
# Get connection strings for all databases
redisctl cloud database list -o json \
-q "[].{
name: name,
connection: join('', ['redis://', publicEndpoint, ':', to_string(port)])
}"
Audit Configuration
# Find databases without replication
redisctl enterprise database list -o json \
  -q '[?replication == `false`].name'
# Check backup settings
redisctl enterprise database list -o json \
  -q "[?backup_interval == \`0\`].{
    name: name,
    warning: 'No automatic backups configured'
  }"
Query Testing Tips
Test with Sample Data
# Save output to test queries
redisctl cloud database list -o json > databases.json
# Test queries offline
cat databases.json | jp "[?status=='active'].name"
Debug Complex Queries
# Build queries incrementally
redisctl cloud subscription list -o json -q "@" # All data
redisctl cloud subscription list -o json -q "[0]" # First item
redisctl cloud subscription list -o json -q "[0].databases" # Databases of first
Common Gotchas
- String literals need quotes: [?status=='active'] not [?status==active]
- Numbers use backticks: [?port > `12000`] not [?port > 12000]
- Escape in shell: use single quotes around queries to avoid shell interpretation
- Null handling: use [?field != null] to filter out null values
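The shell-escaping point is worth a concrete check: backticks inside double quotes are command substitution, so the shell would try to execute `12000` as a command, while single quotes pass the JMESPath literal through untouched.

```shell
#!/bin/bash
# Single quotes preserve JMESPath backtick literals exactly;
# the same string in double quotes would be mangled by the shell.
query='[?port > `12000`].name'
echo "$query"
```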
Performance Considerations
- JMESPath queries are applied client-side after receiving the full response
- For large result sets, consider using API pagination parameters first
- Complex queries may impact performance on very large JSON responses
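A sketch of the pagination-first approach: page through the collection server-side and filter each page client-side, instead of pulling everything in one response. `fetch_page` is a stub standing in for `redisctl api enterprise get "/v1/bdbs?offset=$1&limit=$2" -q "[?status=='active'].name"`.

```shell
#!/bin/bash
# Filter page by page; an empty page terminates the loop.
fetch_page() {
    case "$1" in
        0) printf 'db-a\ndb-b\n' ;;  # stub: first page of matches
        *) : ;;                      # stub: no further pages
    esac
}

offset=0
limit=100
results=""
while :; do
    page=$(fetch_page "$offset" "$limit")
    [ -z "$page" ] && break
    results="${results}${page}"$'\n'
    offset=$((offset + limit))
done
printf '%s' "$results"
```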
Reference
For complete JMESPath syntax, see the JMESPath specification and tutorial at jmespath.org.
Async Operations
The redisctl CLI provides comprehensive support for asynchronous operations across both Redis Cloud and Redis Enterprise APIs. All create, update, and delete operations support the --wait flag family for tracking long-running operations.
Overview
Many Redis Cloud API operations are asynchronous, returning immediately with a task ID while the operation continues in the background. The --wait flags allow you to:
- Wait for operations to complete before returning
- Track progress with visual indicators
- Set custom timeouts for long operations
- Configure polling intervals
Wait Flag Options
Flag | Description | Default |
---|---|---|
--wait | Wait for operation to complete | Timeout: 600s |
--wait-timeout <seconds> | Custom timeout duration | 600 |
--wait-interval <seconds> | Polling interval | 10 |
Basic Usage
# Create database and wait for completion
redisctl cloud database create --subscription-id 12345 \
--data @database.json --wait
# With custom timeout for large operations
redisctl cloud database create --subscription-id 12345 \
--data @large-db.json --wait --wait-timeout 1800
# With faster polling for quick operations
redisctl cloud database update --subscription-id 12345 \
--database-id 67890 --data @updates.json \
--wait --wait-interval 2
Progress Tracking
When using the --wait flag, redisctl provides real-time progress tracking:
Creating database...
⠋ Waiting for task 12345 to complete... (10s)
⠙ Status: processing (20s)
⠹ Status: processing (30s)
✓ Database creation completed successfully
Supported Operations
Async operations are supported across all major command categories:
- Database Operations - Create, update, delete, import, backup, migrate
- Subscription Management - Regular and fixed subscriptions
- Network Connectivity - VPC Peering, PSC, Transit Gateway
- ACL Management - Rules, roles, and users
- User & Account Management - Users and provider accounts
Error Handling
Timeout Behavior
If an operation exceeds the timeout:
- The CLI exits with an error
- The task continues running in the background
- You can check status using the task ID
# Operation times out
Error: Operation timed out after 600 seconds. Task 12345 is still running.
# Check task status manually
redisctl cloud task get 12345
Recovery Options
# Retry with longer timeout
redisctl cloud database create --data @database.json \
--wait --wait-timeout 1800
# Check task status without waiting
redisctl cloud task list --status pending
Best Practices
Choosing Timeouts
- Small operations: Default 600s is usually sufficient
- Large databases: Increase to 1800s (30 min) or more
- Bulk operations: Consider 3600s (1 hour) for very large datasets
- Network operations: May need longer timeouts in some regions
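The thresholds above can be condensed into a small helper. This is purely illustrative (the function name and cutoffs are not part of redisctl; they just mirror the guidance listed here):

```python
def suggest_timeout_seconds(memory_gb: float, is_network_op: bool = False) -> int:
    """Map an operation's rough size to a --wait-timeout value.

    Illustrative heuristic only: 600s default for small operations,
    1800s for large databases, 3600s for bulk work or slow network
    operations, mirroring the bullets above.
    """
    if is_network_op or memory_gb > 100:
        return 3600
    if memory_gb > 10:
        return 1800
    return 600

# A 25 GB database gets the 30-minute timeout
print(suggest_timeout_seconds(25))  # 1800
```

You could feed the result into `--wait-timeout` from a wrapper script.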
Polling Intervals
- Default (10s): Good balance for most operations
- Fast operations (2-5s): For operations you expect to complete quickly
- Long operations (30-60s): Reduce API calls for very long operations
Automation
The --wait flags are designed for automation:
#!/bin/bash
# CI/CD pipeline example
set -e # Exit on error
# Create infrastructure
redisctl cloud subscription create --data @prod-sub.json \
--wait --wait-timeout 1800
SUB_ID=$(redisctl cloud subscription list -q "[0].id" -o json)
redisctl cloud database create --subscription-id $SUB_ID \
--data @prod-db.json --wait --wait-timeout 900
echo "Infrastructure ready!"
Parallel Operations
You can run multiple async operations in parallel:
#!/bin/bash
# Create multiple databases in parallel
for i in {1..5}; do
redisctl cloud database create --subscription-id 12345 \
--data @db-$i.json --wait &
done
# Wait for all background jobs
wait
echo "All databases created!"
Implementation Details
All async operations use the centralized handle_async_response function, which:
- Extracts task IDs from API responses
- Polls for task completion
- Provides consistent progress indicators
- Handles timeouts and errors uniformly
The system automatically detects task IDs from various response formats:
- taskId field in the response
- links array with task references
- Nested task objects
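A sketch of that detection logic in Python. The field names here follow the list above; the exact link shape (a "rel" of "task" with an "href") is an assumption for illustration, not redisctl's actual internals:

```python
def extract_task_id(response: dict):
    """Find a task ID in an API response, trying each shape listed above."""
    # 1. Top-level taskId field
    if "taskId" in response:
        return response["taskId"]
    # 2. links array with a task reference (assumed shape)
    for link in response.get("links", []):
        if link.get("rel") == "task" and "href" in link:
            return link["href"].rstrip("/").rsplit("/", 1)[-1]
    # 3. Nested task object
    task = response.get("task")
    if isinstance(task, dict) and "taskId" in task:
        return task["taskId"]
    return None

print(extract_task_id({"taskId": "abc-123"}))  # abc-123
```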
Raw API Access
Direct access to any Redis Cloud REST API endpoint.
Basic Usage
redisctl api cloud <method> <path> [options]
Methods: get, post, put, patch, delete
Examples
GET Requests
# Get account information
redisctl api cloud get /
# Get all subscriptions
redisctl api cloud get /subscriptions
# Get specific subscription
redisctl api cloud get /subscriptions/12345
# Get databases in subscription
redisctl api cloud get /subscriptions/12345/databases
# Get with query parameters
redisctl api cloud get "/subscriptions?limit=10&offset=20"
POST Requests
# Create database (with JSON file)
redisctl api cloud post /subscriptions/12345/databases \
--data @database.json
# Create database (with inline JSON)
redisctl api cloud post /subscriptions/12345/databases \
--data '{
"name": "my-database",
"memoryLimitInGb": 1,
"modules": ["RedisJSON", "RediSearch"]
}'
# Create with data from stdin
echo '{"name": "test"}' | redisctl api cloud post /subscriptions/12345/databases \
--data @-
PUT Requests
# Update database
redisctl api cloud put /subscriptions/12345/databases/67890 \
--data '{"memoryLimitInGb": 2}'
DELETE Requests
# Delete database
redisctl api cloud delete /subscriptions/12345/databases/67890
Request Options
Headers
# Add custom headers
redisctl api cloud get / \
--header "X-Request-ID: abc123" \
--header "X-Custom: value"
Output Control
# Get raw response body only
redisctl api cloud get / --raw
# Include response headers
redisctl api cloud get / --include-headers
# Verbose output (shows request details)
redisctl api cloud get / --verbose
Working with Files
Request Body from File
# JSON file
redisctl api cloud post /subscriptions/12345/databases \
--data @create-database.json
# YAML file (converted to JSON)
redisctl api cloud post /subscriptions/12345/databases \
--data @create-database.yaml
Save Response to File
# Save response
redisctl api cloud get /subscriptions > subscriptions.json
# Pretty print and save
redisctl api cloud get /subscriptions | jq '.' > subscriptions.json
Common Endpoints
Account & Billing
- / - Account information (root endpoint)
- /payment-methods - Payment methods
- /cloud-accounts - Cloud provider accounts
Subscriptions
- /subscriptions - List subscriptions
- /subscriptions/{id} - Subscription details
- /subscriptions/{id}/databases - Databases in subscription
- /subscriptions/{id}/pricing - Pricing information
Databases
- /subscriptions/{sub}/databases - List databases
- /subscriptions/{sub}/databases/{db} - Database details
- /subscriptions/{sub}/databases/{db}/backup - Backup operations
- /subscriptions/{sub}/databases/{db}/import - Import data
Users & Access
- /users - User management
- /roles - Role definitions
- /subscriptions/{id}/redis-acl - ACL rules
Operations
- /tasks/{id} - Task status
- /logs - System logs
- /metrics - Performance metrics
Error Handling
API errors are returned with appropriate HTTP status codes:
# Check response code
redisctl api cloud get /invalid-endpoint
# Error: 404 Not Found
# Verbose mode shows full error
redisctl api cloud get /invalid-endpoint --verbose
# Shows full error response with details
Tips
- Use --data @- to read from stdin for piping
- Use -q with JMESPath to filter responses
- Use --raw to get just the response body for scripting
- Check the Redis Cloud API docs for endpoint details
Secure Storage
redisctl supports secure credential storage using your operating system's native keyring service. This keeps your API keys and passwords encrypted instead of storing them as plaintext in configuration files.
Overview
When secure storage is enabled, redisctl integrates with:
- macOS: Keychain
- Windows: Windows Credential Store
- Linux: Secret Service API (GNOME Keyring, KWallet, etc.)
Installation
The secure storage feature is optional. To enable it, install redisctl with the secure-storage feature:
# Build from source with secure storage
cargo install redisctl --features secure-storage
# Or build locally
cargo build --release --features secure-storage
Setting Up Secure Storage
Store Credentials in Keyring
Use the --use-keyring flag when setting up profiles:
# Store Cloud API credentials securely
redisctl profile set cloud-prod \
--deployment-type cloud \
--api-key YOUR_API_KEY \
--api-secret YOUR_SECRET_KEY \
--use-keyring
# Store Enterprise credentials securely
redisctl profile set enterprise-prod \
--deployment-type enterprise \
--url https://cluster.example.com:9443 \
--username admin@cluster.local \
--password YOUR_PASSWORD \
--use-keyring
Verify Secure Storage
# Check profile configuration
redisctl profile get cloud-prod
# Output shows keyring references instead of plaintext:
# Profile: cloud-prod
# Deployment Type: cloud
# API Key: keyring:cloud-prod-api-key
# API Secret: keyring:cloud-prod-api-secret
How It Works
When you use --use-keyring:
- Credentials are stored in your OS keyring service
- The config file contains references like keyring:profile-field instead of actual values
- At runtime, redisctl retrieves credentials from the keyring
- No plaintext secrets are written to disk
Configuration File
Your config file (~/.config/redisctl/config.toml) will look like:
[profiles.cloud-prod]
deployment_type = "cloud"
api_key = "keyring:cloud-prod-api-key"
api_secret = "keyring:cloud-prod-api-secret"
[profiles.enterprise-prod]
deployment_type = "enterprise"
url = "https://cluster.example.com:9443"
username = "admin@cluster.local"
password = "keyring:enterprise-prod-password"
Migration from Plaintext
Migrate Existing Profiles
If you have existing profiles with plaintext credentials:
# Update existing profile to use keyring
redisctl profile set cloud-prod \
--api-key YOUR_API_KEY \
--api-secret YOUR_SECRET_KEY \
--use-keyring
# The command will:
# 1. Store credentials in keyring
# 2. Update config to use keyring references
# 3. Remove plaintext from config file
Bulk Migration Script
#!/bin/bash
# Migrate all profiles to secure storage
for profile in $(redisctl profile list | grep -v "Available profiles:" | awk '{print $1}'); do
echo "Migrating profile: $profile"
# Get current values (you'll need to provide these)
# This is a manual step for security reasons
read -p "Enter API key for $profile: " api_key
read -s -p "Enter API secret for $profile: " api_secret
echo
# Update profile with keyring storage
redisctl profile set "$profile" \
--api-key "$api_key" \
--api-secret "$api_secret" \
--use-keyring
done
Credential Resolution Order
redisctl resolves credentials in this order:
1. Environment variables (highest priority): REDIS_CLOUD_API_KEY, REDIS_ENTERPRISE_PASSWORD, etc.
2. Keyring (if the value starts with keyring:)
3. Config file plaintext (lowest priority)
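That precedence can be sketched as a small resolver. This is illustrative only (the function and the stand-in keyring lookup are not redisctl's actual internals):

```python
import os

def resolve_credential(env_var, config_value, keyring_lookup):
    """Resolve one credential using the precedence described above.

    keyring_lookup stands in for an OS keyring query and takes an
    account name, returning the stored secret or None.
    """
    # 1. Environment variable wins if set
    value = os.environ.get(env_var)
    if value:
        return value
    # 2. keyring: references are looked up in the OS keyring
    if config_value.startswith("keyring:"):
        return keyring_lookup(config_value[len("keyring:"):])
    # 3. Otherwise the plaintext config value is used as-is
    return config_value

# Example with a stand-in keyring
fake_keyring = {"cloud-prod-api-key": "secret-from-keyring"}
print(resolve_credential("REDIS_CLOUD_API_KEY_EXAMPLE_UNSET",
                         "keyring:cloud-prod-api-key",
                         fake_keyring.get))  # secret-from-keyring
```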
Environment Variable Override
Environment variables always take precedence:
# Temporarily override keyring-stored credentials
export REDIS_CLOUD_API_KEY="temporary-key"
redisctl cloud subscription list # Uses env var, not keyring
Security Best Practices
Do's
- Use unique credentials per environment (dev, staging, prod)
- Enable keyring storage for all production profiles
- Regularly rotate API keys and passwords
- Set appropriate permissions on config file (0600)
- Use environment variables in CI/CD pipelines
Don'ts
- Don't commit config files with plaintext credentials
- Don't share keyring-stored credentials between users
- Don't use the same credentials across environments
- Don't disable secure storage for production profiles
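The 0600 recommendation above maps directly to os.chmod. A minimal sketch against a scratch file (equivalent to running chmod 600 on the config file; the scratch path is just for demonstration):

```python
import os
import stat
import tempfile

def lock_down(path):
    """Restrict a file to owner read/write (mode 0600)."""
    os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)

# Demonstrate on a scratch file rather than a real config
fd, path = tempfile.mkstemp(suffix=".toml")
os.close(fd)
lock_down(path)
print(oct(os.stat(path).st_mode & 0o777))  # 0o600 on POSIX systems
os.remove(path)
```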
Troubleshooting
Keyring Service Not Available
If you get a keyring error:
Error: No keyring service available
Solutions:
- Linux: Install a keyring service
  # Ubuntu/Debian
  sudo apt-get install gnome-keyring
  # Fedora/RHEL
  sudo dnf install gnome-keyring
- macOS: Keychain should be available by default
- Windows: Credential Store should be available by default
- Fallback: Use environment variables instead
  export REDIS_CLOUD_API_KEY="your-key"
  export REDIS_CLOUD_API_SECRET="your-secret"
Permission Denied
If you can't access stored credentials:
Error: Failed to access keyring: Permission denied
Solutions:
- Unlock your keyring (Linux)
- Check Keychain Access permissions (macOS)
- Run as the same user who stored the credentials
Lost Keyring Access
If you lose access to your keyring:
- Reset the profile with new credentials:
  redisctl profile set cloud-prod \
    --api-key NEW_KEY \
    --api-secret NEW_SECRET \
    --use-keyring
- Or switch to plaintext (not recommended for production):
  redisctl profile set cloud-prod \
    --api-key NEW_KEY \
    --api-secret NEW_SECRET
Platform-Specific Notes
macOS
- Credentials are stored in the login keychain
- You may be prompted to allow access on first use
- Use Keychain Access app to manage stored credentials
Windows
- Credentials are stored in Windows Credential Manager
- Access via: Control Panel → Credential Manager → Windows Credentials
- Look for entries starting with "redisctl:"
Linux
- Requires a Secret Service provider (GNOME Keyring, KWallet, etc.)
- May need to unlock keyring on login
- Use seahorse or similar tools to manage credentials
Comparison: Secure vs Plaintext
Aspect | Secure Storage | Plaintext |
---|---|---|
Security | Encrypted in OS keyring | Visible in config file |
Portability | Tied to user account | Config file portable |
CI/CD | Use env vars | Can use config file |
Setup | Requires keyring service | No dependencies |
Recommended for | Production, personal use | Development only |
Advanced Usage
Custom Keyring Service Name
The service name in the keyring is "redisctl" by default. This cannot be customized in the current version.
Keyring Entry Format
Keyring entries are stored as:
- Service: redisctl
- Account: profile-field (e.g., cloud-prod-api-key)
- Password: The actual credential value
Direct Keyring Access
You can directly manage keyring entries using OS tools:
# macOS - using security command
security find-generic-password -s "redisctl" -a "cloud-prod-api-key"
# Linux - using secret-tool
secret-tool lookup service redisctl account cloud-prod-api-key
# Windows - PowerShell (requires the CredentialManager module)
$cred = Get-StoredCredential -Target "redisctl:cloud-prod-api-key"
$cred.GetNetworkCredential().Password
Next Steps
- Set up profiles with secure storage: Configuration
- Learn about environment variables: Environment Variables
- Explore security best practices: Security
CI/CD Integration
Integrate redisctl into your continuous integration and deployment pipelines.
Overview
This tutorial covers:
- Automated database provisioning
- Environment promotion
- Testing with Redis
- GitOps workflows
- Blue-green deployments
GitHub Actions
Database Provisioning Workflow
# .github/workflows/provision-redis.yml
name: Provision Redis Database
on:
workflow_dispatch:
inputs:
environment:
description: 'Environment to deploy to'
required: true
type: choice
options:
- development
- staging
- production
memory_gb:
description: 'Memory limit in GB'
required: true
default: '4'
jobs:
provision:
runs-on: ubuntu-latest
environment: ${{ github.event.inputs.environment }}
steps:
- uses: actions/checkout@v3
- name: Install redisctl
run: |
curl -L https://github.com/joshrotenberg/redisctl/releases/latest/download/redisctl-linux-amd64.tar.gz | tar xz
sudo mv redisctl /usr/local/bin/
redisctl --version
- name: Configure credentials
env:
REDIS_CLOUD_API_KEY: ${{ secrets.REDIS_CLOUD_API_KEY }}
REDIS_CLOUD_API_SECRET: ${{ secrets.REDIS_CLOUD_API_SECRET }}
run: |
redisctl profile set ci-${{ github.event.inputs.environment }} \
--deployment cloud \
--api-key "$REDIS_CLOUD_API_KEY" \
--api-secret "$REDIS_CLOUD_API_SECRET"
- name: Create database configuration
run: |
cat > database.json <<EOF
{
"name": "${{ github.event.inputs.environment }}-${{ github.sha }}",
"memoryLimitInGb": ${{ github.event.inputs.memory_gb }},
"protocol": "redis",
"replication": true,
"dataPersistence": "aof-every-1-second",
"dataEvictionPolicy": "allkeys-lru"
}
EOF
- name: Provision database
id: provision
run: |
OUTPUT=$(redisctl cloud database create \
--subscription-id ${{ vars.SUBSCRIPTION_ID }} \
--data @database.json \
--wait \
--output json)
DB_ID=$(echo "$OUTPUT" | jq -r .databaseId)
ENDPOINT=$(echo "$OUTPUT" | jq -r .publicEndpoint)
PASSWORD=$(echo "$OUTPUT" | jq -r .password)
echo "database_id=$DB_ID" >> $GITHUB_OUTPUT
echo "endpoint=$ENDPOINT" >> $GITHUB_OUTPUT
echo "::add-mask::$PASSWORD"
echo "password=$PASSWORD" >> $GITHUB_OUTPUT
- name: Update deployment configuration
run: |
# Update Kubernetes secret
kubectl create secret generic redis-${{ github.event.inputs.environment }} \
--from-literal=endpoint=${{ steps.provision.outputs.endpoint }} \
--from-literal=password=${{ steps.provision.outputs.password }} \
--dry-run=client -o yaml | kubectl apply -f -
- name: Run smoke tests
run: |
redis-cli -h ${{ steps.provision.outputs.endpoint }} \
-a ${{ steps.provision.outputs.password }} \
PING
Environment Promotion
# .github/workflows/promote-environment.yml
name: Promote Database Configuration
on:
workflow_dispatch:
inputs:
from_env:
description: 'Source environment'
required: true
type: choice
options:
- development
- staging
to_env:
description: 'Target environment'
required: true
type: choice
options:
- staging
- production
jobs:
promote:
runs-on: ubuntu-latest
steps:
- name: Export source configuration
run: |
SOURCE_DB=$(redisctl cloud database get \
--subscription-id ${{ vars.SUBSCRIPTION_ID }} \
--database-id ${{ vars[format('{0}_DATABASE_ID', inputs.from_env)] }} \
--output json)
# Extract configuration
echo "$SOURCE_DB" | jq '{
memoryLimitInGb: .memoryLimitInGb,
throughputMeasurement: .throughputMeasurement,
modules: .modules,
alerts: .alerts,
dataEvictionPolicy: .dataEvictionPolicy,
dataPersistence: .dataPersistence
}' > config.json
- name: Apply to target environment
run: |
redisctl cloud database update \
--subscription-id ${{ vars.SUBSCRIPTION_ID }} \
--database-id ${{ vars[format('{0}_DATABASE_ID', inputs.to_env)] }} \
--data @config.json \
--wait
- name: Verify promotion
run: |
TARGET_CONFIG=$(redisctl cloud database get \
--subscription-id ${{ vars.SUBSCRIPTION_ID }} \
--database-id ${{ vars[format('{0}_DATABASE_ID', inputs.to_env)] }} \
--output json)
# Compare configurations
SOURCE_MEMORY=$(cat config.json | jq .memoryLimitInGb)
TARGET_MEMORY=$(echo "$TARGET_CONFIG" | jq .memoryLimitInGb)
if [ "$SOURCE_MEMORY" != "$TARGET_MEMORY" ]; then
echo "Configuration mismatch!"
exit 1
fi
GitLab CI/CD
Pipeline Configuration
# .gitlab-ci.yml
stages:
- validate
- deploy
- test
- promote
variables:
REDISCTL_VERSION: "latest"
before_script:
- curl -L https://github.com/joshrotenberg/redisctl/releases/${REDISCTL_VERSION}/download/redisctl-linux-amd64.tar.gz | tar xz
- mv redisctl /usr/local/bin/
- redisctl profile set gitlab --deployment cloud --api-key "$REDIS_API_KEY" --api-secret "$REDIS_API_SECRET"
validate:config:
stage: validate
script:
- |
for file in configs/*.json; do
echo "Validating $file"
jq empty "$file" || exit 1
done
deploy:development:
stage: deploy
environment: development
script:
- |
redisctl cloud database update \
--subscription-id "$DEV_SUBSCRIPTION_ID" \
--database-id "$DEV_DATABASE_ID" \
--data @configs/development.json \
--wait
only:
- develop
deploy:staging:
stage: deploy
environment: staging
script:
- |
redisctl cloud database update \
--subscription-id "$STAGING_SUBSCRIPTION_ID" \
--database-id "$STAGING_DATABASE_ID" \
--data @configs/staging.json \
--wait
only:
- main
test:integration:
stage: test
script:
- |
# Get database endpoint
ENDPOINT=$(redisctl cloud database get \
--subscription-id "$DEV_SUBSCRIPTION_ID" \
--database-id "$DEV_DATABASE_ID" \
-q "publicEndpoint")
# Run tests
npm test -- --redis-endpoint="$ENDPOINT"
dependencies:
- deploy:development
promote:to:production:
stage: promote
environment: production
when: manual
script:
- |
# Export staging config
redisctl cloud database get \
--subscription-id "$STAGING_SUBSCRIPTION_ID" \
--database-id "$STAGING_DATABASE_ID" \
-o json > staging-config.json
# Apply to production
redisctl cloud database update \
--subscription-id "$PROD_SUBSCRIPTION_ID" \
--database-id "$PROD_DATABASE_ID" \
--data @staging-config.json \
--wait
only:
- main
Jenkins Pipeline
Jenkinsfile
// Jenkinsfile
pipeline {
agent any
environment {
REDIS_CLOUD_API_KEY = credentials('redis-cloud-api-key')
REDIS_CLOUD_API_SECRET = credentials('redis-cloud-api-secret')
}
stages {
stage('Setup') {
steps {
sh '''
curl -L https://github.com/joshrotenberg/redisctl/releases/latest/download/redisctl-linux-amd64.tar.gz | tar xz
chmod +x redisctl
./redisctl profile set jenkins \
--deployment cloud \
--api-key "$REDIS_CLOUD_API_KEY" \
--api-secret "$REDIS_CLOUD_API_SECRET"
'''
}
}
stage('Provision Database') {
when {
expression { params.PROVISION_NEW == true }
}
steps {
script {
def dbConfig = readJSON file: 'database-config.json'
dbConfig.name = "${env.JOB_NAME}-${env.BUILD_NUMBER}"
writeJSON file: 'temp-config.json', json: dbConfig
def result = sh(
script: """
./redisctl cloud database create \
--subscription-id ${params.SUBSCRIPTION_ID} \
--data @temp-config.json \
--wait \
--output json
""",
returnStdout: true
).trim()
def db = readJSON text: result
env.DATABASE_ID = db.databaseId
env.DATABASE_ENDPOINT = db.publicEndpoint
}
}
}
stage('Run Tests') {
steps {
sh '''
export REDIS_ENDPOINT="${DATABASE_ENDPOINT}"
npm test
'''
}
}
stage('Cleanup') {
when {
expression { params.CLEANUP == true }
}
steps {
sh """
./redisctl cloud database delete \
--subscription-id ${params.SUBSCRIPTION_ID} \
--database-id ${env.DATABASE_ID} \
--wait
"""
}
}
}
post {
always {
cleanWs()
}
}
}
ArgoCD GitOps
Application Manifest
# argocd/redis-app.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: redis-databases
namespace: argocd
spec:
project: default
source:
repoURL: https://github.com/your-org/redis-config
targetRevision: HEAD
path: environments/production
destination:
server: https://kubernetes.default.svc
syncPolicy:
automated:
prune: false
selfHeal: true
syncOptions:
- CreateNamespace=true
hooks:
- name: provision-redis
manifest: |
apiVersion: batch/v1
kind: Job
metadata:
name: provision-redis
spec:
template:
spec:
containers:
- name: redisctl
image: redisctl/redisctl:latest
command:
- /bin/sh
- -c
- |
redisctl cloud database update \
--subscription-id $SUBSCRIPTION_ID \
--database-id $DATABASE_ID \
--data @/config/database.json \
--wait
volumeMounts:
- name: config
mountPath: /config
volumes:
- name: config
configMap:
name: redis-config
Terraform Integration
Redis Resource Management
# terraform/redis.tf
terraform {
required_providers {
shell = {
source = "scottwinkler/shell"
version = "~> 1.7"
}
}
}
resource "shell_script" "redis_database" {
lifecycle_commands {
create = <<-EOT
redisctl cloud database create \
--subscription-id ${var.subscription_id} \
--data '${jsonencode(var.database_config)}' \
--wait \
--output json
EOT
read = <<-EOT
redisctl cloud database get \
--subscription-id ${var.subscription_id} \
--database-id $(cat database_id.txt) \
--output json
EOT
update = <<-EOT
redisctl cloud database update \
--subscription-id ${var.subscription_id} \
--database-id $(cat database_id.txt) \
--data '${jsonencode(var.database_config)}' \
--wait \
--output json
EOT
delete = <<-EOT
redisctl cloud database delete \
--subscription-id ${var.subscription_id} \
--database-id $(cat database_id.txt) \
--wait
EOT
}
environment = {
REDIS_CLOUD_API_KEY = var.redis_api_key
REDIS_CLOUD_API_SECRET = var.redis_api_secret
}
}
output "redis_endpoint" {
value = jsondecode(shell_script.redis_database.output)["publicEndpoint"]
}
output "redis_password" {
value = jsondecode(shell_script.redis_database.output)["password"]
sensitive = true
}
Blue-Green Deployments
Deployment Script
#!/bin/bash
# blue-green-deploy.sh
set -euo pipefail
# Configuration
SUBSCRIPTION_ID="${SUBSCRIPTION_ID}"
BLUE_DB_ID="${BLUE_DATABASE_ID}"
GREEN_DB_ID="${GREEN_DATABASE_ID}"
LOAD_BALANCER="${LOAD_BALANCER_NAME}"
# Determine current active environment
CURRENT_ACTIVE=$(kubectl get service redis-active -o jsonpath='{.spec.selector.version}')
echo "Current active: $CURRENT_ACTIVE"
if [ "$CURRENT_ACTIVE" = "blue" ]; then
TARGET="green"
TARGET_DB_ID="$GREEN_DB_ID"
else
TARGET="blue"
TARGET_DB_ID="$BLUE_DB_ID"
fi
echo "Deploying to $TARGET environment (Database: $TARGET_DB_ID)"
# Update target database
echo "Updating $TARGET database configuration..."
redisctl cloud database update \
--subscription-id "$SUBSCRIPTION_ID" \
--database-id "$TARGET_DB_ID" \
--data @new-config.json \
--wait
# Run health checks
echo "Running health checks on $TARGET..."
ENDPOINT=$(redisctl cloud database get \
--subscription-id "$SUBSCRIPTION_ID" \
--database-id "$TARGET_DB_ID" \
-q "publicEndpoint")
for i in {1..10}; do
if redis-cli -h "$ENDPOINT" PING | grep -q PONG; then
echo "Health check passed"
break
fi
sleep 5
done
# Run smoke tests
echo "Running smoke tests..."
./run-smoke-tests.sh "$ENDPOINT"
# Switch traffic
echo "Switching traffic to $TARGET..."
kubectl patch service redis-active \
-p '{"spec":{"selector":{"version":"'$TARGET'"}}}'
# Monitor for errors
echo "Monitoring for errors..."
sleep 30
ERROR_COUNT=$(kubectl logs -l app=redis,version=$TARGET --tail=100 | grep -c ERROR || true)
if [ "$ERROR_COUNT" -gt 0 ]; then
echo "Errors detected! Rolling back..."
kubectl patch service redis-active \
-p '{"spec":{"selector":{"version":"'$CURRENT_ACTIVE'"}}}'
exit 1
fi
echo "Deployment successful! $TARGET is now active"
Best Practices
- Store credentials securely - Use secrets management systems
- Use dedicated CI/CD profiles - Don't reuse production credentials
- Implement rollback mechanisms - Always have a way to revert
- Test in staging first - Never deploy directly to production
- Monitor deployments - Watch for errors during and after deployment
- Use infrastructure as code - Version control your Redis configurations
- Implement approval gates - Require manual approval for production
- Audit all changes - Log who changed what and when
- Use immutable deployments - Create new resources rather than updating
- Automate validation - Test configurations before applying
Next Steps
Common Recipes
Note: We're developing a powerful workflow system that will automate many of these common tasks. Soon you'll be able to run pre-built workflows like
redisctl workflow run create-ha-database instead of manually executing multiple commands. Stay tuned!
This page will contain common recipes and patterns for using redisctl effectively. In the meantime, check out the quick examples below:
Quick Examples
Create a High-Availability Database
# Coming soon as a workflow!
# redisctl workflow run create-ha-database --name prod-cache --size 2gb
# Current manual process:
redisctl cloud database create --data '{
"name": "prod-cache",
"memoryLimitInGb": 2,
"replication": true,
"dataPersistence": "aof-every-1-second"
}' --wait
Migrate Database Between Regions
# Coming soon as a workflow!
# redisctl workflow run migrate-database --source 12345 --target-region us-west-2
# Current manual process involves multiple steps...
Set Up Monitoring
# Coming soon as a workflow!
# redisctl workflow run setup-monitoring --database 12345 --prometheus-url http://prometheus:9090
Workflow System Preview
The upcoming workflow system will provide:
- Pre-built workflows for common operations
- Custom workflow definitions in YAML/JSON
- Parameterized templates for reusable patterns
- Conditional logic and error handling
- Progress tracking with detailed output
- Rollback capabilities for safety
Example workflow definition (coming soon):
name: create-ha-database
description: Create a high-availability database with best practices
parameters:
- name: database_name
required: true
- name: size_gb
default: 1
- name: region
default: us-east-1
steps:
- name: create_subscription
command: cloud subscription create
data:
name: "{{ database_name }}-subscription"
- name: create_database
command: cloud database create
data:
name: "{{ database_name }}"
memoryLimitInGb: "{{ size_gb }}"
replication: true
dataPersistence: aof-every-1-second
wait: true
- name: configure_alerts
command: cloud database alert create
data:
threshold: 80
metric: memory-usage
Current Best Practices
Until workflows are available, here are some patterns:
Use JSON Files for Complex Operations
# Save configuration in files
cat > database.json <<EOF
{
"name": "production-db",
"memoryLimitInGb": 4,
"replication": true
}
EOF
redisctl cloud database create --data @database.json --wait
Chain Commands with Shell Scripts
#!/bin/bash
# Create database and wait for completion
DB_ID=$(redisctl cloud database create --data @config.json --wait -o json | jq -r '.resourceId')
# Configure ACL
redisctl cloud acl create --database $DB_ID --data @acl.json
# Set up monitoring
redisctl cloud metrics enable --database $DB_ID
Use Profiles for Different Environments
# Development
redisctl --profile dev database list
# Staging
redisctl --profile staging database list
# Production
redisctl --profile prod database list
Disaster Recovery
Learn how to prepare for and recover from disasters using redisctl.
Topics Covered
- Backup strategies
- Recovery procedures
- Failover processes
- Data validation
- RTO/RPO planning
Full tutorial coming soon - see Managing Production Databases for backup examples in the meantime.
Setting Up Monitoring
Learn how to monitor Redis Cloud and Enterprise deployments using redisctl with various monitoring stacks.
Overview
Effective monitoring requires:
- Regular health checks
- Metric collection
- Alert configuration
- Dashboard visualization
- Log aggregation
Monitoring Architecture
┌─────────────┐ ┌──────────────┐ ┌─────────────┐
│ redisctl │────▶│ Redis APIs │────▶│ Metrics │
│ Scripts │ │ Cloud/Ent. │ │ Exporters │
└─────────────┘ └──────────────┘ └─────────────┘
│
▼
┌─────────────┐ ┌──────────────┐ ┌─────────────┐
│ Grafana │◀────│ Prometheus │◀────│ Format │
│ Dashboards │ │ Storage │ │ Conversion │
└─────────────┘ └──────────────┘ └─────────────┘
Basic Health Monitoring
Health Check Script
Create a basic health monitor:
#!/bin/bash
# health-check.sh
set -euo pipefail
# Configuration
PROFILE="${REDIS_PROFILE:-prod-cloud}"
CHECK_INTERVAL="${CHECK_INTERVAL:-60}"
ALERT_WEBHOOK="${ALERT_WEBHOOK}"
# Color codes for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m'
log() {
echo "[$(date +'%Y-%m-%d %H:%M:%S')] $1"
}
send_alert() {
local level=$1
local message=$2
if [ -n "$ALERT_WEBHOOK" ]; then
curl -X POST "$ALERT_WEBHOOK" \
-H 'Content-Type: application/json' \
-d "{\"level\": \"$level\", \"message\": \"$message\"}"
fi
case $level in
ERROR) echo -e "${RED}[ERROR]${NC} $message" ;;
WARNING) echo -e "${YELLOW}[WARN]${NC} $message" ;;
INFO) echo -e "${GREEN}[INFO]${NC} $message" ;;
esac
}
check_databases() {
local subscription_id=$1
# Get all databases
local databases=$(redisctl --profile $PROFILE cloud database list \
--subscription-id $subscription_id \
-q "[].{id: databaseId, name: name, status: status}" 2>/dev/null)
if [ -z "$databases" ]; then
send_alert "ERROR" "Failed to fetch databases for subscription $subscription_id"
return 1
fi
echo "$databases" | jq -c '.[]' | while read db; do
local id=$(echo $db | jq -r .id)
local name=$(echo $db | jq -r .name)
local status=$(echo $db | jq -r .status)
if [ "$status" != "active" ]; then
send_alert "ERROR" "Database $name ($id) is not active: $status"
else
log "Database $name ($id) is healthy"
fi
done
}
# Main monitoring loop
while true; do
log "Starting health check..."
# Get all subscriptions
SUBSCRIPTIONS=$(redisctl --profile $PROFILE cloud subscription list \
-q "[].id" 2>/dev/null | jq -r '.[]')
for sub_id in $SUBSCRIPTIONS; do
check_databases $sub_id
done
log "Health check complete. Sleeping for ${CHECK_INTERVAL}s..."
sleep $CHECK_INTERVAL
done
Prometheus Integration
Metrics Exporter
Create a Prometheus exporter for Redis metrics:
#!/usr/bin/env python3
# redis_exporter.py
import json
import subprocess
import time
from prometheus_client import start_http_server, Gauge, Counter
import os
# Prometheus metrics
db_memory_used = Gauge('redis_memory_used_mb', 'Memory used in MB', ['database', 'subscription'])
db_memory_limit = Gauge('redis_memory_limit_gb', 'Memory limit in GB', ['database', 'subscription'])
db_connections = Gauge('redis_connections_used', 'Connections used', ['database', 'subscription'])
db_ops = Gauge('redis_operations_per_second', 'Operations per second', ['database', 'subscription'])
db_status = Gauge('redis_database_status', 'Database status (1=active, 0=inactive)', ['database', 'subscription'])
def get_databases(profile, subscription_id):
"""Fetch database list using redisctl"""
cmd = [
'redisctl', '--profile', profile, 'cloud', 'database', 'list',
'--subscription-id', str(subscription_id), '-o', 'json'
]
try:
result = subprocess.run(cmd, capture_output=True, text=True, check=True)
return json.loads(result.stdout)
except Exception as e:
print(f"Error fetching databases: {e}")
return []
def get_database_details(profile, subscription_id, database_id):
"""Fetch detailed database metrics"""
cmd = [
'redisctl', '--profile', profile, 'cloud', 'database', 'get',
'--subscription-id', str(subscription_id),
'--database-id', str(database_id),
'-o', 'json'
]
try:
result = subprocess.run(cmd, capture_output=True, text=True, check=True)
return json.loads(result.stdout)
except Exception as e:
print(f"Error fetching database {database_id}: {e}")
return None
def collect_metrics():
"""Collect metrics from all databases"""
profile = os.getenv('REDIS_PROFILE', 'prod-cloud')
subscriptions = os.getenv('REDIS_SUBSCRIPTIONS', '').split(',')
for sub_id in subscriptions:
if not sub_id:
continue
databases = get_databases(profile, sub_id)
for db in databases:
db_id = db.get('databaseId')
db_name = db.get('name', f'db-{db_id}')
# Get detailed metrics
details = get_database_details(profile, sub_id, db_id)
if not details:
continue
# Update Prometheus metrics
labels = {'database': db_name, 'subscription': sub_id}
db_memory_used.labels(**labels).set(details.get('memoryUsageInMB', 0))
db_memory_limit.labels(**labels).set(details.get('memoryLimitInGb', 0))
db_connections.labels(**labels).set(details.get('connectionsUsed', 0))
throughput = details.get('throughputMeasurement', {})
db_ops.labels(**labels).set(throughput.get('value', 0))
status_value = 1 if details.get('status') == 'active' else 0
db_status.labels(**labels).set(status_value)
print(f"Updated metrics for {db_name}")
def main():
"""Main exporter loop"""
port = int(os.getenv('EXPORTER_PORT', '9090'))
interval = int(os.getenv('SCRAPE_INTERVAL', '30'))
# Start Prometheus HTTP server
start_http_server(port)
print(f"Exporter listening on port {port}")
while True:
try:
collect_metrics()
except Exception as e:
print(f"Error collecting metrics: {e}")
time.sleep(interval)
if __name__ == '__main__':
main()
Prometheus Configuration
Configure Prometheus to scrape the exporter:
# prometheus.yml
global:
  scrape_interval: 30s
  evaluation_interval: 30s

scrape_configs:
  - job_name: 'redis-metrics'
    static_configs:
      - targets: ['localhost:9090']
        labels:
          environment: 'production'
          service: 'redis'

# Alert rules
rule_files:
  - 'redis_alerts.yml'
Alert Rules
Define Prometheus alert rules:
# redis_alerts.yml
groups:
  - name: redis_alerts
    interval: 30s
    rules:
      - alert: RedisHighMemoryUsage
        expr: |
          (redis_memory_used_mb / (redis_memory_limit_gb * 1024)) > 0.8
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "High memory usage on {{ $labels.database }}"
          description: "Database {{ $labels.database }} is using {{ $value | humanizePercentage }} of available memory"

      - alert: RedisDatabaseDown
        expr: redis_database_status == 0
        for: 2m
        labels:
          severity: critical
        annotations:
          summary: "Database {{ $labels.database }} is down"
          description: "Database {{ $labels.database }} has been inactive for more than 2 minutes"

      - alert: RedisHighConnections
        expr: redis_connections_used > 900
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "High connection count on {{ $labels.database }}"
          description: "Database {{ $labels.database }} has {{ $value }} active connections"

      - alert: RedisLowThroughput
        expr: redis_operations_per_second < 100
        for: 10m
        labels:
          severity: info
        annotations:
          summary: "Low throughput on {{ $labels.database }}"
          description: "Database {{ $labels.database }} has only {{ $value }} ops/sec"
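Before deploying a threshold change, the memory expression can be sanity-checked offline with a few lines of shell that mirror the PromQL above (integer percent math, so this is an approximation):

```shell
# Rough shell mirror of the RedisHighMemoryUsage expression:
# (used_mb / (limit_gb * 1024)) > 0.8
memory_alert() {
  local used_mb=$1 limit_gb=$2
  local limit_mb=$((limit_gb * 1024))
  local pct=$((used_mb * 100 / limit_mb))
  [ "$pct" -gt 80 ]
}

memory_alert 900 1 && echo "would fire"      # 900 MB of 1 GB is ~88%
memory_alert 400 1 || echo "would not fire"  # 400 MB of 1 GB is ~39%
```

This is only a spot check; the authoritative evaluation still happens in Prometheus against the exported gauges.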
Grafana Dashboards
Dashboard Configuration
Create a comprehensive Grafana dashboard:
{
"dashboard": {
"title": "Redis Production Monitoring",
"panels": [
{
"title": "Database Status",
"type": "stat",
"targets": [
{
"expr": "sum(redis_database_status)",
"legendFormat": "Active Databases"
}
]
},
{
"title": "Memory Usage",
"type": "graph",
"targets": [
{
"expr": "redis_memory_used_mb",
"legendFormat": "{{ database }}"
}
]
},
{
"title": "Operations/Second",
"type": "graph",
"targets": [
{
"expr": "redis_operations_per_second",
"legendFormat": "{{ database }}"
}
]
},
{
"title": "Connection Count",
"type": "graph",
"targets": [
{
"expr": "redis_connections_used",
"legendFormat": "{{ database }}"
}
]
}
]
}
}
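The dashboard JSON can be pushed through Grafana's dashboard API. A minimal helper sketch (GRAFANA_URL and GRAFANA_API_TOKEN are placeholders you must supply):

```shell
# Hypothetical import helper for the dashboard JSON above.
# Assumes the file wraps panels in a top-level "dashboard" object,
# as Grafana's POST /api/dashboards/db endpoint expects.
import_dashboard() {
  local file=$1
  curl -s -X POST "${GRAFANA_URL:-http://localhost:3000}/api/dashboards/db" \
    -H "Authorization: Bearer $GRAFANA_API_TOKEN" \
    -H "Content-Type: application/json" \
    -d @"$file"
}

# Usage: import_dashboard redis-dashboard.json
```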
Log Monitoring
Centralized Logging with ELK
Ship Redis logs to Elasticsearch:
#!/bin/bash
# ship-logs.sh
# For Redis Enterprise
redisctl enterprise logs list \
--profile prod-enterprise \
--output json | \
jq -c '.[] | {
"@timestamp": .time,
"level": .severity,
"message": .message,
"node": .node_uid,
"component": .component
}' | \
while read -r log; do
curl -X POST "http://elasticsearch:9200/redis-logs/_doc" \
-H 'Content-Type: application/json' \
-d "$log"
done
Logstash Configuration
Process logs with Logstash:
# logstash.conf
input {
exec {
command => "redisctl enterprise logs list --output json"
interval => 60
codec => "json"
}
}
filter {
date {
match => [ "time", "ISO8601" ]
target => "@timestamp"
}
mutate {
add_field => { "environment" => "production" }
}
if [severity] == "error" {
mutate {
add_tag => [ "alert" ]
}
}
}
output {
elasticsearch {
hosts => ["localhost:9200"]
index => "redis-logs-%{+YYYY.MM.dd}"
}
if "alert" in [tags] {
email {
to => "ops-team@example.com"
subject => "Redis Error Alert"
body => "Error detected: %{message}"
}
}
}
Alerting Integration
Slack Notifications
Send alerts to Slack:
#!/bin/bash
# slack-alert.sh
send_slack_alert() {
local level=$1
local message=$2
local webhook_url="${SLACK_WEBHOOK_URL}"
local color="good"
case $level in
ERROR) color="danger" ;;
WARNING) color="warning" ;;
esac
curl -X POST "$webhook_url" \
-H 'Content-Type: application/json' \
-d "{
\"attachments\": [{
\"color\": \"$color\",
\"title\": \"Redis Alert: $level\",
\"text\": \"$message\",
\"footer\": \"redisctl monitoring\",
\"ts\": $(date +%s)
}]
}"
}
# Monitor and alert
while true; do
STATUS=$(redisctl cloud database get \
--subscription-id 123456 \
--database-id 789 \
-q "status")
if [ "$STATUS" != "active" ]; then
send_slack_alert "ERROR" "Database 789 is $STATUS"
fi
sleep 60
done
PagerDuty Integration
Integrate with PagerDuty for critical alerts:
#!/usr/bin/env python3
# pagerduty_alert.py
import pdpyras
import subprocess
import json
import os

def check_redis_health():
    """Check Redis database health"""
    cmd = [
        'redisctl', 'cloud', 'database', 'list',
        '--subscription-id', os.getenv('SUBSCRIPTION_ID'),
        '-o', 'json'
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    databases = json.loads(result.stdout)
    alerts = []
    for db in databases:
        if db['status'] != 'active':
            alerts.append({
                'database': db['name'],
                'status': db['status'],
                'id': db['databaseId']
            })
    return alerts

def send_pagerduty_alert(session, alerts):
    """Trigger PagerDuty incidents via the Events API"""
    for alert in alerts:
        session.trigger(
            summary=f"Redis database {alert['database']} is {alert['status']}",
            source="redisctl-monitoring",
            severity="error",
            custom_details=alert
        )

def main():
    routing_key = os.getenv('PAGERDUTY_ROUTING_KEY')
    session = pdpyras.EventsAPISession(routing_key)
    alerts = check_redis_health()
    if alerts:
        send_pagerduty_alert(session, alerts)

if __name__ == '__main__':
    main()
Custom Metrics Collection
Performance Baseline
Establish performance baselines:
#!/bin/bash
# baseline.sh
# Collect baseline metrics for 24 hours
DURATION=86400
INTERVAL=60
OUTPUT="baseline_$(date +%Y%m%d).csv"
echo "timestamp,database,ops,latency,memory,cpu" > $OUTPUT
END=$(($(date +%s) + DURATION))
while [ $(date +%s) -lt $END ]; do
TIMESTAMP=$(date +%s)
redisctl cloud database get \
--subscription-id 123456 \
--database-id 789 \
-o json | \
jq -r "\"$TIMESTAMP,prod-db,\(.throughputMeasurement.value),\(.latency),\(.memoryUsageInMB),\(.cpuUsagePercentage)\"" \
>> $OUTPUT
sleep $INTERVAL
done
# Analyze baseline
echo "Baseline collection complete. Analyzing..."
python3 analyze_baseline.py $OUTPUT
Automation with Cron
Schedule monitoring tasks:
# crontab -e
# Health check every 5 minutes
*/5 * * * * /opt/monitoring/health-check.sh
# Collect metrics every minute
* * * * * /opt/monitoring/collect-metrics.sh
# Daily report
0 8 * * * /opt/monitoring/daily-report.sh
# Weekly capacity planning
0 0 * * 0 /opt/monitoring/capacity-planning.sh
# Backup monitoring config
0 2 * * * /opt/monitoring/backup-monitoring.sh
Best Practices
- Monitor proactively - Set up alerts before issues occur
- Use multiple data sources - Combine metrics, logs, and traces
- Set appropriate thresholds - Avoid alert fatigue
- Automate responses - Use runbooks for common issues
- Track trends - Look for patterns over time
- Test alert paths - Ensure alerts reach the right people
- Document procedures - Have clear escalation paths
- Review regularly - Update monitoring as systems evolve
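The "automate responses" item can start very small: map each alert name to the runbook script that handles it. A sketch (the runbook paths are hypothetical; the alert names match the rules defined earlier):

```shell
# Map an alert name to the runbook that handles it (hypothetical paths)
runbook_for() {
  case "$1" in
    RedisHighMemoryUsage) echo "runbooks/scale-memory.sh" ;;
    RedisDatabaseDown)    echo "runbooks/check-failover.sh" ;;
    *)                    echo "runbooks/manual-triage.sh" ;;
  esac
}

runbook_for RedisDatabaseDown   # prints runbooks/check-failover.sh
```

A webhook receiver (Alertmanager, for example) can call `runbook_for` and execute the result, keeping the alert-to-action mapping in one reviewable place.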
Next Steps
Network Security
Secure your Redis deployments with proper network configuration.
Topics Covered
- VPC peering setup
- Private endpoints
- Security groups
- SSL/TLS configuration
- Access control
Full tutorial coming soon - see Network Connectivity for basic setup.
Managing Production Databases
This tutorial covers best practices for managing Redis databases in production using redisctl.
Prerequisites
- redisctl installed and configured
- Appropriate API credentials with production access
- Understanding of Redis concepts (memory, persistence, replication)
Setting Up Production Profiles
First, create separate profiles for different environments:
# Development environment
redisctl profile set dev-cloud \
--deployment cloud \
--api-key "$DEV_API_KEY" \
--api-secret "$DEV_API_SECRET"
# Staging environment
redisctl profile set staging-cloud \
--deployment cloud \
--api-key "$STAGING_API_KEY" \
--api-secret "$STAGING_API_SECRET"
# Production environment
redisctl profile set prod-cloud \
--deployment cloud \
--api-key "$PROD_API_KEY" \
--api-secret "$PROD_API_SECRET"
# Set production as default
redisctl profile default prod-cloud
Creating a Production Database
Step 1: Prepare Database Configuration
Create a production database configuration file:
{
"name": "prod-cache-01",
"memoryLimitInGb": 16,
"protocol": "redis",
"port": 10000,
"throughputMeasurement": {
"by": "operations-per-second",
"value": 100000
},
"replication": true,
"dataPersistence": "aof-every-1-second",
"dataEvictionPolicy": "allkeys-lru",
"modules": [
{
"name": "RedisJSON"
},
{
"name": "RediSearch"
}
],
"alerts": [
{
"name": "dataset-size",
"value": 80
},
{
"name": "throughput-higher-than",
"value": 90000
},
{
"name": "throughput-lower-than",
"value": 1000
},
{
"name": "latency",
"value": 5
}
],
"backup": {
"interval": 6,
"enabled": true
},
"clustering": {
"enabled": true,
"shardCount": 3
}
}
Step 2: Create the Database
# Create database and wait for completion
redisctl cloud database create \
--subscription-id 123456 \
--data @prod-database.json \
--wait \
--wait-timeout 900
# Verify creation
redisctl cloud database list --subscription-id 123456 -o table
Step 3: Configure Network Access
Set up VPC peering for secure access:
# Create VPC peering
redisctl cloud connectivity create-vpc \
--subscription-id 123456 \
--data @vpc-peering.json \
--wait
# Verify connection
redisctl cloud connectivity list-vpc --subscription-id 123456
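For reference, the peering request file used above might look like the following for AWS. The field names are assumptions based on the Cloud API's VPC peering schema; verify them against the current API reference before use:

```shell
# Hypothetical AWS peering request (field names are assumptions --
# check the Redis Cloud API reference for the current schema)
cat > vpc-peering.json <<'EOF'
{
  "region": "us-east-1",
  "awsAccountId": "123456789012",
  "vpcId": "vpc-0abc123",
  "vpcCidr": "10.0.0.0/24"
}
EOF
```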
Monitoring Production Databases
Health Checks
Create a monitoring script:
#!/bin/bash
# monitor-redis.sh
PROFILE="prod-cloud"
SUBSCRIPTION_ID="123456"
# Check all databases
DATABASES=$(redisctl --profile $PROFILE cloud database list \
--subscription-id $SUBSCRIPTION_ID \
-q "[].{id: databaseId, name: name, status: status}")
echo "$DATABASES" | jq -c '.[]' | while read -r db; do
ID=$(echo "$db" | jq -r .id)
NAME=$(echo "$db" | jq -r .name)
STATUS=$(echo "$db" | jq -r .status)
if [ "$STATUS" != "active" ]; then
echo "ALERT: Database $NAME ($ID) is not active: $STATUS"
# Send alert (PagerDuty, Slack, etc.)
fi
done
# Check memory usage
for db_id in $(echo "$DATABASES" | jq -r '.[].id'); do
DB_INFO=$(redisctl --profile $PROFILE cloud database get \
--subscription-id $SUBSCRIPTION_ID \
--database-id $db_id)
MEMORY_USED=$(echo "$DB_INFO" | jq -r .memoryUsageInMB)
MEMORY_LIMIT=$(echo "$DB_INFO" | jq -r .memoryLimitInGB)
MEMORY_LIMIT_MB=$((MEMORY_LIMIT * 1024))
USAGE_PERCENT=$((MEMORY_USED * 100 / MEMORY_LIMIT_MB))
if [ $USAGE_PERCENT -gt 80 ]; then
echo "WARNING: Database $db_id memory usage at ${USAGE_PERCENT}%"
fi
done
Performance Metrics
Track key performance indicators:
# Get database metrics
redisctl cloud database get \
--subscription-id 123456 \
--database-id 789 \
-q "{
name: name,
ops: throughputMeasurement.value,
connections: connectionsUsed,
memory: memoryUsageInMB,
evicted: evictedObjects
}"
# Monitor over time
while true; do
redisctl cloud database get \
--subscription-id 123456 \
--database-id 789 \
-q "throughputMeasurement.value" >> ops.log
sleep 60
done
Scaling Operations
Vertical Scaling (Resize)
# Increase memory limit
redisctl cloud database update \
--subscription-id 123456 \
--database-id 789 \
--data '{"memoryLimitInGb": 32}' \
--wait
# Increase throughput
redisctl cloud database update \
--subscription-id 123456 \
--database-id 789 \
--data '{
"throughputMeasurement": {
"by": "operations-per-second",
"value": 200000
}
}' \
--wait
Horizontal Scaling (Sharding)
For Redis Enterprise:
# Add shards
redisctl enterprise database update \
--database-id 1 \
--data '{"shardCount": 5}' \
--wait
Backup and Recovery
Automated Backups
Configure backup schedule:
# Enable backups every 4 hours
redisctl cloud database update \
--subscription-id 123456 \
--database-id 789 \
--data '{
"backup": {
"enabled": true,
"interval": 4
}
}'
Manual Backups
# Create manual backup before maintenance
redisctl cloud database backup \
--subscription-id 123456 \
--database-id 789 \
--wait
# List available backups
redisctl cloud database list-backups \
--subscription-id 123456 \
--database-id 789
Restore from Backup
# Prepare import configuration
cat > import.json <<EOF
{
"sourceType": "s3",
"importFromUri": ["s3://backup-bucket/backup-2024-01-15.rdb"],
"s3Credentials": {
"accessKey": "$AWS_ACCESS_KEY",
"secretKey": "$AWS_SECRET_KEY"
}
}
EOF
# Import data
redisctl cloud database import \
--subscription-id 123456 \
--database-id 789 \
--data @import.json \
--wait
Maintenance Operations
Rolling Updates
Update databases with zero downtime:
#!/bin/bash
# rolling-update.sh
DATABASES=(789 790 791)
UPDATE='{"dataEvictionPolicy": "volatile-lru"}'
for db_id in "${DATABASES[@]}"; do
echo "Updating database $db_id..."
# Remove from load balancer
remove_from_lb $db_id
# Update database
redisctl cloud database update \
--subscription-id 123456 \
--database-id $db_id \
--data "$UPDATE" \
--wait
# Health check
while true; do
STATUS=$(redisctl cloud database get \
--subscription-id 123456 \
--database-id $db_id \
-q "status")
if [ "$STATUS" = "active" ]; then
break
fi
sleep 10
done
# Add back to load balancer
add_to_lb $db_id
echo "Database $db_id updated successfully"
sleep 30 # Wait before next update
done
Module Management
Add or update modules:
# Add RedisTimeSeries module
redisctl cloud database update \
--subscription-id 123456 \
--database-id 789 \
--data '{
"modules": [
{"name": "RedisJSON"},
{"name": "RediSearch"},
{"name": "RedisTimeSeries"}
]
}' \
--wait
Security Best Practices
Access Control
Configure ACL rules:
# Create ACL rule
redisctl cloud acl create-rule \
--subscription-id 123456 \
--database-id 789 \
--data '{
"name": "read-only-user",
"rule": "+@read ~* -@dangerous"
}'
# Create user with ACL (generate the password first; command
# substitution does not expand inside single quotes)
PASSWORD=$(openssl rand -base64 32)
redisctl cloud acl create-user \
--subscription-id 123456 \
--database-id 789 \
--data "{
\"username\": \"app-reader\",
\"password\": \"$PASSWORD\",
\"aclRule\": \"read-only-user\"
}"
Password Rotation
#!/bin/bash
# rotate-passwords.sh
# Generate new password
NEW_PASSWORD=$(openssl rand -base64 32)
# Update database password
redisctl cloud database update \
--subscription-id 123456 \
--database-id 789 \
--data "{\"password\": \"$NEW_PASSWORD\"}" \
--wait
# Store in secret manager
aws secretsmanager update-secret \
--secret-id redis-prod-password \
--secret-string "$NEW_PASSWORD"
# Update application configuration
kubectl create secret generic redis-secret \
--from-literal=password="$NEW_PASSWORD" \
--dry-run=client -o yaml | kubectl apply -f -
Troubleshooting Common Issues
High Memory Usage
# Check memory stats
redisctl cloud database get \
--subscription-id 123456 \
--database-id 789 \
-q "{
used: memoryUsageInMB,
limit: memoryLimitInGB,
evicted: evictedObjects
}"
# If evictions are happening, increase memory or adjust policy
redisctl cloud database update \
--subscription-id 123456 \
--database-id 789 \
--data '{"memoryLimitInGb": 24}'
Connection Issues
# Check connection limit
DB_INFO=$(redisctl cloud database get \
--subscription-id 123456 \
--database-id 789)
CONNECTIONS_USED=$(echo $DB_INFO | jq -r .connectionsUsed)
CONNECTIONS_LIMIT=$(echo $DB_INFO | jq -r .connectionsLimit)
if [ $CONNECTIONS_USED -gt $((CONNECTIONS_LIMIT * 80 / 100)) ]; then
echo "Warning: Using $CONNECTIONS_USED of $CONNECTIONS_LIMIT connections"
# Increase connection limit or investigate connection leaks
fi
Performance Degradation
# Check slow log equivalent (through metrics)
redisctl cloud database get \
--subscription-id 123456 \
--database-id 789 \
-q "{
latency: latency,
ops: throughputMeasurement.value,
cpu: cpuUsagePercentage
}"
# If CPU is high, consider sharding or upgrading
Best Practices Summary
- Always use profiles for different environments
- Enable replication for production databases
- Configure appropriate persistence (AOF or RDB)
- Set up monitoring and alerts before issues occur
- Automate backups and test restore procedures
- Use VPC peering for secure network access
- Implement proper ACLs for security
- Plan for scaling before you need it
- Document your database configurations
- Test changes in staging before production
Next Steps
Environment Variables
Complete reference of the environment variables supported by redisctl.
Redis Cloud
Variable | Description | Example |
---|---|---|
REDIS_CLOUD_API_KEY | API account key | A3qcymrvqpn9rr... |
REDIS_CLOUD_API_SECRET | API secret key | S3s8ecrrnaguqk... |
REDIS_CLOUD_API_URL | API endpoint (optional) | https://api.redislabs.com/v1 |
Redis Enterprise
Variable | Description | Example |
---|---|---|
REDIS_ENTERPRISE_URL | Cluster API URL | https://cluster:9443 |
REDIS_ENTERPRISE_USER | Username | admin@cluster.local |
REDIS_ENTERPRISE_PASSWORD | Password | your-password |
REDIS_ENTERPRISE_INSECURE | Allow self-signed certs | true or false |
General
Variable | Description | Example |
---|---|---|
REDISCTL_PROFILE | Default profile name | production |
REDISCTL_OUTPUT | Default output format | json, yaml, table |
RUST_LOG | Logging level | error, warn, info, debug |
NO_COLOR | Disable colored output | 1 or any value |
Usage Examples
Basic Setup
# Redis Cloud
export REDIS_CLOUD_API_KEY="your-key"
export REDIS_CLOUD_API_SECRET="your-secret"
# Redis Enterprise
export REDIS_ENTERPRISE_URL="https://localhost:9443"
export REDIS_ENTERPRISE_USER="admin@cluster.local"
export REDIS_ENTERPRISE_PASSWORD="password"
export REDIS_ENTERPRISE_INSECURE="true"
Debugging
# Enable debug logging
export RUST_LOG=debug
redisctl api cloud get /
# Trace specific modules
export RUST_LOG=redisctl=debug,redis_cloud=trace
CI/CD
# GitHub Actions
env:
  REDIS_CLOUD_API_KEY: ${{ secrets.REDIS_API_KEY }}
  REDIS_CLOUD_API_SECRET: ${{ secrets.REDIS_API_SECRET }}
Precedence
Settings are resolved in this order, highest priority first:
- Command-line flags
- Configuration file settings
- Environment variables
- Default values
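As an illustration, resolving a single setting (the output format, say) behaves like this small lookup. The `table` fallback is an assumption for the example, not redisctl's documented default:

```shell
# Sketch of how one setting (output format) is resolved:
# flag > config file > environment > built-in default
resolve_output() {
  local flag="$1" config="$2"
  if [ -n "$flag" ]; then echo "$flag"
  elif [ -n "$config" ]; then echo "$config"
  elif [ -n "$REDISCTL_OUTPUT" ]; then echo "$REDISCTL_OUTPUT"
  else echo "table"   # hypothetical built-in default
  fi
}

REDISCTL_OUTPUT=yaml
resolve_output "" ""      # prints yaml (environment beats the default)
resolve_output "json" ""  # prints json (flag beats everything)
```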
Configuration File
Complete reference for the redisctl configuration file format and options.
File Location
The configuration file is stored at:
- Linux/macOS: ~/.config/redisctl/config.toml
- Windows: %APPDATA%\redis\redisctl\config.toml
View the exact path:
redisctl profile path
File Format
The configuration file uses TOML format:
# Default profile to use when none specified
default_profile = "production"
# Profile definitions
[profiles.production]
deployment_type = "cloud"
api_key = "your-api-key"
api_secret = "your-api-secret"
api_url = "https://api.redislabs.com/v1"
[profiles.enterprise-local]
deployment_type = "enterprise"
url = "https://localhost:9443"
username = "admin@cluster.local"
password = "your-password"
insecure = true
Profile Configuration
Cloud Profile
All available options for Redis Cloud profiles:
[profiles.cloud-example]
# Required: Deployment type
deployment_type = "cloud"
# Required: API credentials
api_key = "A3qcymrvqpn9rrgdt40sv5f9yfxob26vx64hwddh8vminqnkgfq"
api_secret = "S3s8ecrrnaguqkvwfvealoe3sn25zqs4wc4lwgo4rb0ud3qm77c"
# Optional: API endpoint (defaults to production)
api_url = "https://api.redislabs.com/v1"
# Optional: Custom timeout (seconds)
timeout = 30
# Optional: Retry configuration
max_retries = 3
retry_delay = 1
Enterprise Profile
All available options for Redis Enterprise profiles:
[profiles.enterprise-example]
# Required: Deployment type
deployment_type = "enterprise"
# Required: Cluster URL
url = "https://cluster.example.com:9443"
# Required: Authentication
username = "admin@example.com"
password = "secure-password"
# Optional: Allow self-signed certificates
insecure = false
# Optional: Custom timeout (seconds)
timeout = 60
# Optional: Client certificate authentication
client_cert = "/path/to/client.crt"
client_key = "/path/to/client.key"
# Optional: Custom CA certificate
ca_cert = "/path/to/ca.crt"
Environment Variable Expansion
The configuration file supports environment variable expansion using ${VAR} syntax:
Basic Expansion
[profiles.production]
deployment_type = "cloud"
api_key = "${REDIS_CLOUD_API_KEY}"
api_secret = "${REDIS_CLOUD_API_SECRET}"
With Default Values
[profiles.staging]
deployment_type = "cloud"
api_key = "${STAGING_API_KEY}"
api_secret = "${STAGING_API_SECRET}"
# Use production URL if STAGING_API_URL not set
api_url = "${STAGING_API_URL:-https://api.redislabs.com/v1}"
Complex Example
default_profile = "${REDISCTL_DEFAULT_PROFILE:-development}"
[profiles.development]
deployment_type = "cloud"
api_key = "${DEV_API_KEY}"
api_secret = "${DEV_API_SECRET}"
api_url = "${DEV_API_URL:-https://api.redislabs.com/v1}"
[profiles.production]
deployment_type = "cloud"
api_key = "${PROD_API_KEY}"
api_secret = "${PROD_API_SECRET}"
api_url = "${PROD_API_URL:-https://api.redislabs.com/v1}"
[profiles."${DYNAMIC_PROFILE_NAME:-custom}"]
deployment_type = "${DYNAMIC_DEPLOYMENT:-cloud}"
api_key = "${DYNAMIC_API_KEY}"
api_secret = "${DYNAMIC_API_SECRET}"
Multiple Profiles
Organizing by Environment
# Development environments
[profiles.dev-cloud]
deployment_type = "cloud"
api_key = "${DEV_CLOUD_KEY}"
api_secret = "${DEV_CLOUD_SECRET}"
[profiles.dev-enterprise]
deployment_type = "enterprise"
url = "https://dev-cluster:9443"
username = "dev-admin"
password = "${DEV_ENTERPRISE_PASSWORD}"
insecure = true
# Staging environments
[profiles.staging-cloud]
deployment_type = "cloud"
api_key = "${STAGING_CLOUD_KEY}"
api_secret = "${STAGING_CLOUD_SECRET}"
[profiles.staging-enterprise]
deployment_type = "enterprise"
url = "https://staging-cluster:9443"
username = "staging-admin"
password = "${STAGING_ENTERPRISE_PASSWORD}"
# Production environments
[profiles.prod-cloud]
deployment_type = "cloud"
api_key = "${PROD_CLOUD_KEY}"
api_secret = "${PROD_CLOUD_SECRET}"
[profiles.prod-enterprise]
deployment_type = "enterprise"
url = "https://prod-cluster:9443"
username = "prod-admin"
password = "${PROD_ENTERPRISE_PASSWORD}"
Organizing by Region
[profiles.us-east-1]
deployment_type = "cloud"
api_key = "${US_EAST_API_KEY}"
api_secret = "${US_EAST_SECRET}"
[profiles.eu-west-1]
deployment_type = "cloud"
api_key = "${EU_WEST_API_KEY}"
api_secret = "${EU_WEST_SECRET}"
[profiles.ap-southeast-1]
deployment_type = "cloud"
api_key = "${APAC_API_KEY}"
api_secret = "${APAC_SECRET}"
Advanced Configuration
Team Shared Configuration
Create a shared base configuration:
# team-config.toml (checked into git)
[profiles.team-base]
deployment_type = "cloud"
api_url = "https://api.redislabs.com/v1"
# Local overrides (not in git)
# ~/.config/redisctl/config.toml
[profiles.team]
deployment_type = "cloud"
api_url = "https://api.redislabs.com/v1"
api_key = "${MY_API_KEY}"
api_secret = "${MY_API_SECRET}"
CI/CD Configuration
# CI/CD specific profiles
[profiles.ci-test]
deployment_type = "cloud"
api_key = "${CI_TEST_API_KEY}"
api_secret = "${CI_TEST_API_SECRET}"
api_url = "${CI_API_URL:-https://api.redislabs.com/v1}"
[profiles.ci-deploy]
deployment_type = "enterprise"
url = "${CI_CLUSTER_URL}"
username = "${CI_USERNAME}"
password = "${CI_PASSWORD}"
insecure = true # CI uses self-signed certs
Security Considerations
File Permissions
Set restrictive permissions on the configuration file:
# Linux/macOS
chmod 600 ~/.config/redisctl/config.toml
# Verify permissions
ls -la ~/.config/redisctl/config.toml
# Should show: -rw-------
Credential Storage Best Practices
- Never commit credentials to version control:
  # .gitignore
  config.toml
  *.secret
- Use environment variables for sensitive data:
  [profiles.secure]
  deployment_type = "cloud"
  api_key = "${REDIS_API_KEY}"       # Set in environment
  api_secret = "${REDIS_API_SECRET}" # Set in environment
- Integrate with secret managers:
  # Set environment variables from a secret manager
  export REDIS_API_KEY=$(vault kv get -field=api_key secret/redis)
  export REDIS_API_SECRET=$(vault kv get -field=api_secret secret/redis)
Migration from Other Formats
From Environment Variables Only
If currently using only environment variables:
# Create profile from environment
redisctl profile set migrated \
--deployment cloud \
--api-key "$REDIS_CLOUD_API_KEY" \
--api-secret "$REDIS_CLOUD_API_SECRET"
From JSON Configuration
Convert JSON to TOML:
# old-config.json
{
"profiles": {
"production": {
"type": "cloud",
"apiKey": "key",
"apiSecret": "secret"
}
}
}
# Convert to config.toml
[profiles.production]
deployment_type = "cloud"
api_key = "key"
api_secret = "secret"
Validation
Check Configuration
# Validate profile configuration
redisctl profile show production
# Test authentication
redisctl auth test --profile production
# List all profiles
redisctl profile list
Common Issues
Invalid TOML syntax
# Wrong - missing quotes
[profiles.prod]
deployment_type = cloud # Should be "cloud"
# Correct
[profiles.prod]
deployment_type = "cloud"
Environment variable not found
# This will fail if MY_VAR is not set
api_key = "${MY_VAR}"
# Use default value to prevent failure
api_key = "${MY_VAR:-default-key}"
Profile name with special characters
# Use quotes for profile names with special characters
[profiles."prod-us-east-1"]
deployment_type = "cloud"
Backup and Recovery
Backup Configuration
# Backup current configuration
cp ~/.config/redisctl/config.toml ~/.config/redisctl/config.toml.backup
# Backup with timestamp
cp ~/.config/redisctl/config.toml \
~/.config/redisctl/config.toml.$(date +%Y%m%d_%H%M%S)
Restore Configuration
# Restore from backup
cp ~/.config/redisctl/config.toml.backup ~/.config/redisctl/config.toml
# Verify restoration
redisctl profile list
Example Configurations
Minimal Configuration
# Minimal working configuration
[profiles.default]
deployment_type = "cloud"
api_key = "your-key"
api_secret = "your-secret"
Full-Featured Configuration
# Complete example with all features
default_profile = "production"
# Production Cloud
[profiles.production]
deployment_type = "cloud"
api_key = "${PROD_API_KEY}"
api_secret = "${PROD_API_SECRET}"
api_url = "${PROD_API_URL:-https://api.redislabs.com/v1}"
# Staging Cloud with defaults
[profiles.staging]
deployment_type = "cloud"
api_key = "${STAGING_API_KEY}"
api_secret = "${STAGING_API_SECRET}"
api_url = "https://api.redislabs.com/v1"
# Development Enterprise
[profiles.dev-enterprise]
deployment_type = "enterprise"
url = "https://dev-cluster:9443"
username = "admin@dev.local"
password = "${DEV_PASSWORD}"
insecure = true
# DR Enterprise with client certs
[profiles.dr-enterprise]
deployment_type = "enterprise"
url = "https://dr-cluster:9443"
username = "admin@dr.local"
password = "${DR_PASSWORD}"
client_cert = "/etc/ssl/client.crt"
client_key = "/etc/ssl/client.key"
ca_cert = "/etc/ssl/ca.crt"
# Local testing
[profiles.local]
deployment_type = "enterprise"
url = "https://localhost:9443"
username = "admin@cluster.local"
password = "test123"
insecure = true
Security Best Practices
This guide covers security best practices for using redisctl in production environments.
Credential Storage
Storage Methods Comparison
Method | Security Level | Use Case | Pros | Cons |
---|---|---|---|---|
OS Keyring | ⭐⭐⭐⭐⭐ High | Production | Encrypted by OS, Most secure | Requires secure-storage feature |
Environment Variables | ⭐⭐⭐⭐ Good | CI/CD, Containers | No file storage, Easy rotation | Must be set each session |
Config File (Plaintext) | ⭐⭐ Low | Development only | Simple setup | Credentials visible in file |
Using OS Keyring (Recommended for Production)
The most secure way to store credentials is using your operating system's keyring:
# Install with secure storage support
cargo install redisctl --features secure-storage
# Create secure profile
redisctl profile set production \
--deployment cloud \
--api-key "your-api-key" \
--api-secret "your-api-secret" \
--use-keyring
Platform Support
- macOS: Uses Keychain (automatic)
- Windows: Uses Credential Manager (automatic)
- Linux: Uses Secret Service (requires GNOME Keyring or KWallet)
How Keyring Storage Works
- Initial Setup: When you use --use-keyring, credentials are stored in the OS keyring
- Config Reference: The config file stores references like keyring:production-api-key
- Automatic Retrieval: redisctl automatically retrieves credentials from the keyring when needed
- Secure Updates: Credentials can be updated without exposing them in files
Example config with keyring references:
[profiles.production]
deployment_type = "cloud"
api_key = "keyring:production-api-key" # Actual value in keyring
api_secret = "keyring:production-api-secret" # Actual value in keyring
api_url = "https://api.redislabs.com/v1" # Non-sensitive, plaintext
Environment Variables (CI/CD)
For automated environments, use environment variables:
# Set credentials
export REDIS_CLOUD_API_KEY="your-key"
export REDIS_CLOUD_API_SECRET="your-secret"
# Use in commands (overrides config)
redisctl cloud database list
# Or reference in config
cat > config.toml <<EOF
[profiles.ci]
deployment_type = "cloud"
api_key = "\${REDIS_CLOUD_API_KEY}"
api_secret = "\${REDIS_CLOUD_API_SECRET}"
EOF
GitHub Actions Example
- name: Deploy Database
  env:
    REDIS_CLOUD_API_KEY: ${{ secrets.REDIS_API_KEY }}
    REDIS_CLOUD_API_SECRET: ${{ secrets.REDIS_API_SECRET }}
  run: |
    redisctl cloud database create \
      --subscription-id 12345 \
      --data @database.json \
      --wait
File Permissions
Protect configuration files containing credentials:
# Restrict to owner only
chmod 600 ~/.config/redisctl/config.toml
# Verify permissions
ls -la ~/.config/redisctl/config.toml
# -rw------- 1 user user 1234 Jan 15 10:00 config.toml
Credential Rotation
Regular Rotation Schedule
- Generate new credentials in Redis Cloud/Enterprise console
- Update keyring with new credentials:
redisctl profile set production \
  --api-key "new-key" \
  --api-secret "new-secret" \
  --use-keyring
- Test access with new credentials
- Revoke old credentials in console
Automated Rotation Script
#!/bin/bash
# rotate-credentials.sh
PROFILE="production"
NEW_KEY=$(generate-api-key) # Your key generation method
NEW_SECRET=$(generate-api-secret)
# Update credentials
redisctl profile set "$PROFILE" \
--api-key "$NEW_KEY" \
--api-secret "$NEW_SECRET" \
--use-keyring
# Test new credentials
if redisctl --profile "$PROFILE" cloud subscription list > /dev/null; then
echo "Credential rotation successful"
# Notify old credentials can be revoked
else
echo "Credential rotation failed"
exit 1
fi
Secure Development Practices
Never Commit Credentials
Add to .gitignore:
# Redis configuration
~/.config/redisctl/config.toml
.redisctl/
*.secret
*_credentials.toml
Use Git Hooks
Pre-commit hook to detect credentials:
#!/bin/bash
# .git/hooks/pre-commit
# Check for API keys
if git diff --cached | grep -E "api_key|api_secret|password" | grep -v "keyring:"; then
echo "ERROR: Potential credentials detected in commit"
echo "Use --use-keyring or environment variables instead"
exit 1
fi
Separate Development and Production
Use different profiles for each environment:
# Development (with keyring for safety)
[profiles.dev]
deployment_type = "cloud"
api_key = "keyring:dev-api-key"
api_secret = "keyring:dev-api-secret"
# Staging
[profiles.staging]
deployment_type = "cloud"
api_key = "keyring:staging-api-key"
api_secret = "keyring:staging-api-secret"
# Production
[profiles.production]
deployment_type = "cloud"
api_key = "keyring:production-api-key"
api_secret = "keyring:production-api-secret"
Audit and Monitoring
Profile Usage Audit
Monitor which profiles are being used:
# Enable debug logging
export RUST_LOG=debug
# Commands will log profile usage
redisctl --profile production cloud database list
# [DEBUG] Using Redis Cloud profile: production
Access Logging
Create wrapper script for audit logging:
#!/bin/bash
# /usr/local/bin/redisctl-audit
# Log command execution
echo "[$(date)] User: $USER, Command: redisctl $*" >> /var/log/redisctl-audit.log
# Execute actual command
exec /usr/local/bin/redisctl "$@"
Credential Access Monitoring
Monitor keyring access (macOS example):
# View keychain access logs
log show --predicate 'subsystem == "com.apple.securityd"' --last 1h
Network Security
TLS/SSL Verification
Always verify SSL certificates in production:
[profiles.production]
deployment_type = "enterprise"
url = "https://cluster.example.com:9443"
username = "admin@example.com"
password = "keyring:production-password"
insecure = false # Never true in production
IP Whitelisting
Configure API access from specific IPs only:
- In Redis Cloud console, set IP whitelist
- In Redis Enterprise, configure firewall rules
- Document allowed IPs in team runbook
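As a lightweight client-side complement to the server-side controls above, a wrapper can refuse to run unless the caller's IP is on the documented allowlist. A minimal sketch; the addresses are placeholders from the TEST-NET ranges, not real values:

```shell
# Sketch: refuse to proceed unless the given IP is on the documented allowlist.
ALLOWED_IPS="203.0.113.10 203.0.113.11"

ip_allowed() {
  for ip in $ALLOWED_IPS; do
    [ "$ip" = "$1" ] && return 0
  done
  return 1
}

ip_allowed "203.0.113.10" && echo "allowed"
ip_allowed "198.51.100.7" || echo "blocked"
```

This only guards well-behaved scripts; the API-side whitelist remains the actual enforcement point.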
Incident Response
Compromised Credentials
If credentials are compromised:
- Immediately revoke compromised credentials in console
- Generate new credentials
- Update all systems using the credentials:
# Update all profiles using compromised credentials
for profile in $(redisctl profile list | grep production); do
  redisctl profile set "$profile" \
    --api-key "new-key" \
    --api-secret "new-secret" \
    --use-keyring
done
- Audit access logs for unauthorized usage
- Document incident and update security procedures
Security Checklist
- Using OS keyring for production credentials
- Config files have restricted permissions (600)
- Credentials not committed to version control
- Environment variables used in CI/CD
- Regular credential rotation scheduled
- Audit logging enabled
- SSL verification enabled
- IP whitelisting configured
- Incident response plan documented
- Team trained on security procedures
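Two of these items (restricted file permissions and keyring usage) can be spot-checked from the shell. This sketch runs against a temporary sample config; in practice CONFIG would point at ~/.config/redisctl/config.toml:

```shell
# Sketch: spot-check two checklist items against a sample config file
CONFIG=$(mktemp)
printf 'api_key = "keyring:prod-api-key"\n' > "$CONFIG"
chmod 600 "$CONFIG"

# Permissions should be 600 (GNU stat, with a BSD/macOS fallback)
perms=$(stat -c %a "$CONFIG" 2>/dev/null || stat -f %Lp "$CONFIG")
[ "$perms" = "600" ] && echo "permissions ok" || echo "fix: chmod 600 $CONFIG"

# Credentials should reference the keyring, not plaintext values
grep -q 'keyring:' "$CONFIG" && echo "keyring in use" || echo "consider --use-keyring"

rm -f "$CONFIG"
```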
Additional Resources
Best Practices
Recommended patterns and practices for using redisctl effectively.
Profile Management
Use Separate Profiles for Each Environment
# Development
redisctl profile set dev \
--deployment cloud \
--api-key "$DEV_API_KEY" \
--api-secret "$DEV_SECRET"
# Staging
redisctl profile set staging \
--deployment cloud \
--api-key "$STAGING_API_KEY" \
--api-secret "$STAGING_SECRET"
# Production
redisctl profile set prod \
--deployment cloud \
--api-key "$PROD_API_KEY" \
--api-secret "$PROD_SECRET"
Naming Conventions
Use consistent, descriptive profile names:
# Good
cloud-prod
cloud-staging
enterprise-prod
enterprise-dr
# Avoid
prod1
test
my-profile
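A convention is easier to keep when it is enforced. This sketch accepts names following a platform-environment pattern; the exact accepted prefixes are an assumption for your team to adjust:

```shell
# Sketch: accept only profile names following the <platform>-<environment> convention
valid_profile_name() {
  case "$1" in
    cloud-*|enterprise-*) return 0 ;;
    *) return 1 ;;
  esac
}

valid_profile_name "cloud-prod" && echo "ok"
valid_profile_name "prod1" || echo "rejected"
```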
Secure Credential Storage
# Use environment variables in config
# ~/.config/redisctl/config.toml
[profiles.prod]
deployment_type = "cloud"
api_key = "${REDIS_PROD_API_KEY}"
api_secret = "${REDIS_PROD_SECRET}"
# Set restrictive permissions
chmod 600 ~/.config/redisctl/config.toml
# Use secret management tools
export REDIS_PROD_API_KEY=$(vault kv get -field=api_key secret/redis/prod)
Command Usage
Always Specify Profile for Production
# Explicit is better than implicit
redisctl --profile prod cloud database list --subscription-id 123
# Avoid relying on default profile for production
redisctl cloud database delete --subscription-id 123 --database-id 456 # Dangerous!
Use Output Formats Appropriately
# Human reading: table
redisctl cloud subscription list -o table
# Scripting: json with jq
redisctl cloud subscription list -o json | jq -r '.[].id'
# Quick checks: query
redisctl cloud database get --subscription-id 123 --database-id 456 -q "status"
Implement Idempotent Operations
# Check before create
check_database_exists() {
local name=$1
redisctl cloud database list --subscription-id 123 \
-q "[?name=='$name'].databaseId" | jq -r '.[]'
}
# Only create if doesn't exist
DB_ID=$(check_database_exists "my-database")
if [ -z "$DB_ID" ]; then
redisctl cloud database create --subscription-id 123 --data @db.json --wait
fi
Error Handling
Always Check Exit Codes
#!/bin/bash
set -euo pipefail # Exit on error, undefined variables, pipe failures
# Check individual commands
if ! redisctl cloud subscription list > /dev/null 2>&1; then
echo "Failed to list subscriptions"
exit 1
fi
# Or use && and ||
redisctl cloud database create --subscription-id 123 --data @db.json --wait \
&& echo "Database created successfully" \
|| { echo "Database creation failed"; exit 1; }
Implement Retry Logic
retry_command() {
local max_attempts=${MAX_ATTEMPTS:-3}
local delay=${RETRY_DELAY:-5}
local attempt=1
while [ $attempt -le $max_attempts ]; do
if "$@"; then
return 0
fi
echo "Attempt $attempt failed. Retrying in ${delay}s..." >&2
sleep $delay
attempt=$((attempt + 1))
delay=$((delay * 2)) # Exponential backoff
done
echo "Command failed after $max_attempts attempts" >&2
return 1
}
# Usage
retry_command redisctl cloud database list --subscription-id 123
Log Operations
# Create audit log
log_operation() {
local timestamp=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
local user=$(whoami)
local command="$*"
echo "[$timestamp] User: $user, Command: $command" >> ~/.redisctl/audit.log
# Execute and log result
if "$@"; then
echo "[$timestamp] Result: SUCCESS" >> ~/.redisctl/audit.log
return 0
else
local exit_code=$?
echo "[$timestamp] Result: FAILED (exit: $exit_code)" >> ~/.redisctl/audit.log
return $exit_code
fi
}
# Usage
log_operation redisctl --profile prod cloud database delete \
--subscription-id 123 --database-id 456 --wait
Async Operations
Always Use --wait for Critical Operations
# Good: Wait for completion
redisctl cloud database create \
--subscription-id 123 \
--data @database.json \
--wait \
--wait-timeout 900
# Risky: Fire and forget
redisctl cloud database create \
--subscription-id 123 \
--data @database.json # Might fail silently
Handle Timeouts Gracefully
# Create with timeout handling
create_database_with_retry() {
local attempt=1
local max_attempts=3
while [ $attempt -le $max_attempts ]; do
echo "Creating database (attempt $attempt)..."
if redisctl cloud database create \
--subscription-id 123 \
--data @database.json \
--wait \
--wait-timeout 600; then
echo "Database created successfully"
return 0
fi
echo "Creation failed or timed out"
attempt=$((attempt + 1))
# Check if partially created
DB_ID=$(redisctl cloud database list --subscription-id 123 \
-q "[?name=='my-database'].databaseId" | jq -r '.[]')
if [ -n "$DB_ID" ]; then
echo "Database partially created with ID: $DB_ID"
# Clean up or continue based on state
return 1
fi
done
return 1
}
Security
Never Hardcode Credentials
# Bad
redisctl profile set prod \
--api-key "abc123def456" \
--api-secret "secret789xyz"
# Good
redisctl profile set prod \
--api-key "$REDIS_API_KEY" \
--api-secret "$REDIS_API_SECRET"
# Better
redisctl profile set prod \
--api-key "$(vault kv get -field=api_key secret/redis)" \
--api-secret "$(vault kv get -field=api_secret secret/redis)"
Rotate Credentials Regularly
#!/bin/bash
# rotate-credentials.sh
# Generate new API key (via Redis Cloud UI or API)
NEW_API_KEY=$(generate_new_api_key)
NEW_API_SECRET=$(generate_new_api_secret)
# Update profile
redisctl profile set prod \
--api-key "$NEW_API_KEY" \
--api-secret "$NEW_API_SECRET"
# Test new credentials
if redisctl --profile prod cloud subscription list > /dev/null 2>&1; then
echo "New credentials working"
# Revoke old credentials
revoke_old_credentials
else
echo "New credentials failed, keeping old ones"
exit 1
fi
Audit Access
# Track who uses production credentials
alias redisctl-prod='log_operation redisctl --profile prod'
# Review audit logs regularly
grep "profile prod" ~/.redisctl/audit.log | tail -20
Performance
Cache Frequently Used Data
# Cache subscription list for 5 minutes
get_subscriptions() {
local cache_file="/tmp/redisctl-subs-cache.json"
local cache_age=$((5 * 60)) # 5 minutes
# Check cache age
if [ -f "$cache_file" ]; then
local file_age=$(($(date +%s) - $(stat -f %m "$cache_file" 2>/dev/null || stat -c %Y "$cache_file")))
if [ $file_age -lt $cache_age ]; then
cat "$cache_file"
return 0
fi
fi
# Refresh cache
redisctl cloud subscription list -o json | tee "$cache_file"
}
Batch Operations
# Good: Single command with multiple operations
redisctl cloud database update \
--subscription-id 123 \
--database-id 456 \
--data '{
"memoryLimitInGb": 16,
"throughputMeasurement": {"by": "operations-per-second", "value": 50000},
"alerts": [{"name": "dataset-size", "value": 90}]
}'
# Avoid: Multiple separate updates
redisctl cloud database update --subscription-id 123 --database-id 456 \
--data '{"memoryLimitInGb": 16}'
redisctl cloud database update --subscription-id 123 --database-id 456 \
--data '{"throughputMeasurement": {"by": "operations-per-second", "value": 50000}}'
Use Appropriate Query Filters
# Efficient: Filter at API level
redisctl api cloud get /subscriptions --query-params "status=active"
# Less efficient: Filter after fetching
redisctl cloud subscription list -o json | jq '.[] | select(.status == "active")'
Automation
Create Reusable Scripts
#!/bin/bash
# provision-database.sh
set -euo pipefail
# Required parameters
ENVIRONMENT=${1:?Environment required (dev/staging/prod)}
DATABASE_NAME=${2:?Database name required}
MEMORY_GB=${3:-4}
# Load environment config
source "config/${ENVIRONMENT}.env"
# Create database config
cat > /tmp/database.json <<EOF
{
"name": "${DATABASE_NAME}-${ENVIRONMENT}",
"memoryLimitInGb": ${MEMORY_GB},
"replication": $([ "$ENVIRONMENT" = "prod" ] && echo "true" || echo "false"),
"dataPersistence": "$([ "$ENVIRONMENT" = "prod" ] && echo "aof-every-1-second" || echo "none")"
}
EOF
# Create database
redisctl --profile "${ENVIRONMENT}-cloud" cloud database create \
--subscription-id "${SUBSCRIPTION_ID}" \
--data @/tmp/database.json \
--wait
# Clean up
rm /tmp/database.json
Use Configuration Files
# config/environments.yaml
environments:
development:
profile: dev-cloud
subscription_id: 12345
defaults:
memory_gb: 2
replication: false
persistence: none
production:
profile: prod-cloud
subscription_id: 67890
defaults:
memory_gb: 16
replication: true
persistence: aof-every-1-second
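One way to consume such a file without extra tooling is a small awk helper. This sketch writes a sample copy to a temp file and assumes the simple two-level layout shown above; a real setup might prefer yq:

```shell
# Sketch: pull per-environment values out of the YAML layout above using awk.
# Assumes the key exists under the named environment.
CONFIG=$(mktemp)
cat > "$CONFIG" <<'EOF'
environments:
  development:
    profile: dev-cloud
    subscription_id: 12345
  production:
    profile: prod-cloud
    subscription_id: 67890
EOF

env_value() {  # env_value <environment> <key>
  awk -v env="$1:" -v key="$2:" '
    $1 == env { in_env = 1; next }
    in_env && $1 == key { print $2; exit }
  ' "$CONFIG"
}

env_value production subscription_id   # → 67890
env_value development profile          # → dev-cloud
```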
Implement GitOps
# .github/workflows/redis-sync.yml
name: Sync Redis Configuration
on:
push:
paths:
- 'redis-config/*.json'
jobs:
sync:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Apply configurations
run: |
for config in redis-config/*.json; do
db_name=$(basename "$config" .json)
redisctl cloud database update \
--subscription-id ${{ secrets.SUBSCRIPTION_ID }} \
--database-id $(cat "redis-config/${db_name}.id") \
--data @"$config" \
--wait
done
Monitoring and Alerting
Regular Health Checks
#!/bin/bash
# health-check.sh
check_database_health() {
local sub_id=$1
local db_id=$2
local status=$(redisctl cloud database get \
--subscription-id "$sub_id" \
--database-id "$db_id" \
-q "status")
if [ "$status" != "active" ]; then
alert "Database $db_id is $status"
return 1
fi
return 0
}
# Run checks
while read -r sub_id db_id; do
check_database_health "$sub_id" "$db_id"
done < databases.txt
Track Changes
# Before making changes
backup_configuration() {
local timestamp=$(date +%Y%m%d_%H%M%S)
local backup_dir="backups/${timestamp}"
mkdir -p "$backup_dir"
# Backup all database configs
while read -r sub_id; do
redisctl cloud database list --subscription-id "$sub_id" \
-o json > "${backup_dir}/sub_${sub_id}_databases.json"
done < subscriptions.txt
echo "Configuration backed up to $backup_dir"
}
Documentation
Document Your Setup
# Create README for your Redis setup
cat > Redis-Setup.md <<'EOF'
# Redis Infrastructure
## Profiles
- `prod-cloud`: Production Cloud environment
- `prod-enterprise`: Production Enterprise cluster
- `dr-enterprise`: Disaster recovery cluster
## Key Databases
- `user-sessions`: Session storage (16GB, 100k ops/sec)
- `product-cache`: Product catalog cache (8GB, 50k ops/sec)
- `analytics-stream`: Analytics event stream (32GB, 200k ops/sec)
## Maintenance Windows
- Production: Sunday 2-4 AM UTC
- Staging: Any time
## Runbooks
- [Database Creation](./runbooks/create-database.md)
- [Scaling Operations](./runbooks/scaling.md)
- [Disaster Recovery](./runbooks/dr.md)
EOF
Maintain Runbooks
# Runbook: Database Scaling
## When to Scale
- Memory usage > 80% for 30 minutes
- Throughput > 90% of limit
- Latency > 5ms p99
## How to Scale
1. Check current metrics:
   ./scripts/check-metrics.sh prod-database
2. Calculate new size:
   - Memory: Current usage * 1.5
   - Throughput: Current peak * 2
3. Apply scaling:
   ./scripts/scale-database.sh prod-database --memory 32 --throughput 200000
4. Verify:
   ./scripts/verify-scaling.sh prod-database
## Summary Checklist
✅ **Profiles**: Use separate profiles for each environment
✅ **Security**: Never hardcode credentials
✅ **Error Handling**: Check exit codes and implement retries
✅ **Async Ops**: Always use --wait for critical operations
✅ **Logging**: Audit all production operations
✅ **Automation**: Create reusable, parameterized scripts
✅ **Monitoring**: Implement regular health checks
✅ **Documentation**: Maintain runbooks and setup documentation
✅ **Testing**: Test changes in non-production first
✅ **Backups**: Backup configurations before changes
Troubleshooting
Solutions for common issues when using redisctl.
Installation Issues
Binary Not Found
Problem: command not found: redisctl
Solutions:
# Check if binary is in PATH
which redisctl
# Add to PATH (Linux/macOS)
export PATH="$PATH:/path/to/redisctl"
echo 'export PATH="$PATH:/path/to/redisctl"' >> ~/.bashrc
# Make executable
chmod +x /path/to/redisctl
# Verify installation
redisctl --version
Permission Denied
Problem: permission denied: redisctl
Solutions:
# Make executable
chmod +x redisctl
# If installed system-wide
sudo chmod +x /usr/local/bin/redisctl
# Check ownership
ls -la $(which redisctl)
SSL Certificate Errors
Problem: Certificate verification failed
Solutions:
# For self-signed certificates (Enterprise)
export REDIS_ENTERPRISE_INSECURE=true
# Update CA certificates (Linux)
sudo update-ca-certificates
# macOS
brew install ca-certificates
Authentication Issues
Invalid Credentials
Problem: 401 Unauthorized or Authentication failed
Diagnosis:
# Test credentials directly
redisctl auth test --profile prod
# Check environment variables
env | grep REDIS
# Verify profile configuration
redisctl profile show prod
Solutions:
# Re-set credentials
redisctl profile set prod \
--deployment cloud \
--api-key "correct-key" \
--api-secret "correct-secret"
# For Enterprise with special characters in password
redisctl profile set enterprise \
--deployment enterprise \
--url "https://cluster:9443" \
--username "admin@domain.com" \
--password 'p@$$w0rd!' # Use single quotes
Profile Not Found
Problem: Profile 'name' not found
Solutions:
# List available profiles
redisctl profile list
# Check config file location
redisctl profile path
# Create missing profile
redisctl profile set missing-profile \
--deployment cloud \
--api-key "$API_KEY" \
--api-secret "$SECRET"
# Set default profile
redisctl profile default prod
Environment Variable Issues
Problem: Environment variables not being read
Solutions:
# Export variables properly
export REDIS_CLOUD_API_KEY="key"
export REDIS_CLOUD_API_SECRET="secret"
# Check if set
echo $REDIS_CLOUD_API_KEY
# Use in same shell or source
source ~/.bashrc
# Debug with trace logging
RUST_LOG=trace redisctl cloud subscription list 2>&1 | grep -i env
Connection Issues
Network Timeout
Problem: Connection timeout or Failed to connect
Diagnosis:
# Test connectivity
curl -I https://api.redislabs.com/v1/
ping api.redislabs.com
# For Enterprise
curl -k https://your-cluster:9443/v1/bootstrap
# Check DNS
nslookup api.redislabs.com
Solutions:
# Increase timeout (if supported in future versions)
export REDISCTL_TIMEOUT=60
# Check proxy settings
export HTTP_PROXY=http://proxy:8080
export HTTPS_PROXY=http://proxy:8080
# Bypass proxy for local
export NO_PROXY=localhost,127.0.0.1
# Test with curl first
curl -x $HTTPS_PROXY https://api.redislabs.com/v1/
SSL/TLS Errors
Problem: SSL certificate problem or Certificate verify failed
Solutions for Enterprise:
# Allow self-signed certificates
export REDIS_ENTERPRISE_INSECURE=true
# Or in profile
redisctl profile set enterprise \
--deployment enterprise \
--url "https://cluster:9443" \
--username "admin" \
--password "pass" \
--insecure
# Import certificate
# Linux
sudo cp cluster-cert.pem /usr/local/share/ca-certificates/
sudo update-ca-certificates
# macOS
sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain cluster-cert.pem
Port Blocked
Problem: Connection refused
Solutions:
# Check if port is open
nc -zv api.redislabs.com 443
nc -zv your-cluster 9443
# Check firewall rules
# Linux
sudo iptables -L -n | grep 9443
# macOS
sudo pfctl -s rules
# Windows
netsh advfirewall firewall show rule name=all
API Errors
Rate Limiting
Problem: 429 Too Many Requests
Solutions:
# Add delay between requests
for sub in $(cat subscriptions.txt); do
redisctl cloud subscription get $sub
sleep 2 # Wait 2 seconds
done
# Implement exponential backoff
retry_with_backoff() {
local max_attempts=5
local attempt=0
local delay=1
while [ $attempt -lt $max_attempts ]; do
if "$@"; then
return 0
fi
echo "Rate limited, waiting ${delay}s..."
sleep $delay
attempt=$((attempt + 1))
delay=$((delay * 2))
done
return 1
}
retry_with_backoff redisctl cloud subscription list
Resource Not Found
Problem: 404 Not Found
Diagnosis:
# Verify resource exists
redisctl cloud subscription list
redisctl cloud database list --subscription-id 123456
# Check ID format
# Cloud: subscription_id:database_id
# Enterprise: numeric
Solutions:
# Use correct ID format
# Cloud
redisctl cloud database get \
--subscription-id 123456 \
--database-id 789
# Enterprise
redisctl enterprise database get 1
# List to find correct ID
redisctl cloud subscription list -q "[].{id: id, name: name}"
Invalid Request
Problem: 400 Bad Request
Solutions:
# Validate JSON
cat payload.json | jq .
# Check required fields
# Example: database creation requires name and memoryLimitInGb
cat > database.json <<EOF
{
  "name": "my-database",
  "memoryLimitInGb": 1
}
EOF
# Use schema validation (if available)
redisctl validate database.json
# Test with minimal payload first
echo '{"name": "test", "memoryLimitInGb": 1}' | \
redisctl api cloud post /subscriptions/123/databases --data @-
Command Issues
Command Not Recognized
Problem: Unknown command
Solutions:
# Check available commands
redisctl --help
redisctl cloud --help
redisctl enterprise --help
# Update to latest version
# Download latest from GitHub releases
# Check command syntax
redisctl cloud database list --subscription-id 123 # Correct
redisctl cloud database list 123 # Incorrect
Missing Required Arguments
Problem: Missing required argument
Solutions:
# Check command requirements
redisctl cloud database get --help
# Provide all required arguments
# (--subscription-id and --database-id are both required)
redisctl cloud database get \
  --subscription-id 123456 \
  --database-id 789
# Use environment variables for defaults
export REDIS_SUBSCRIPTION_ID=123456
Output Parsing Errors
Problem: JMESPath query errors or unexpected output
Solutions:
# Test query separately
redisctl cloud subscription list -o json | jq .
redisctl cloud subscription list -q "[].name"
# Escape special characters
redisctl cloud database list -q "[?name=='my-db']" # Correct
redisctl cloud database list -q '[?name==`my-db`]' # Also correct
# Debug output format
redisctl cloud subscription list -o json > output.json
cat output.json | jq '.[] | keys'
Async Operation Issues
Operation Timeout
Problem: Operation timeout when using --wait
Solutions:
# Increase timeout
redisctl cloud database create \
--subscription-id 123 \
--data @db.json \
--wait \
--wait-timeout 1200 # 20 minutes
# Check operation status manually
TASK_ID=$(redisctl cloud database create \
--subscription-id 123 \
--data @db.json \
-q "taskId")
# Poll manually
while true; do
STATUS=$(redisctl api cloud get /tasks/$TASK_ID -q "status")
echo "Status: $STATUS"
if [ "$STATUS" = "completed" ] || [ "$STATUS" = "failed" ]; then
break
fi
sleep 30
done
Task Not Found
Problem: Cannot find task ID for async operation
Solutions:
# Check if operation returns task ID
redisctl cloud database create \
--subscription-id 123 \
--data @db.json \
-o json | jq .
# Some operations might not be async
# Check API documentation
# List recent tasks
redisctl api cloud get /tasks --query-params "limit=10"
Configuration Issues
Config File Not Found
Problem: Configuration file not loading
Solutions:
# Check file location
redisctl profile path
# Create config directory
mkdir -p ~/.config/redisctl
# Initialize config
redisctl profile set default \
--deployment cloud \
--api-key "key" \
--api-secret "secret"
# Check permissions
chmod 600 ~/.config/redisctl/config.toml
Environment Variable Expansion
Problem: Variables in config not expanding
Solutions:
# config.toml
[profiles.prod]
deployment_type = "cloud"
api_key = "${REDIS_API_KEY}" # Will expand
api_secret = "$REDIS_SECRET" # Won't expand - needs braces
# With defaults
api_url = "${REDIS_API_URL:-https://api.redislabs.com/v1}"
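The ${VAR:-default} form in the config mirrors standard shell parameter expansion, which you can verify directly (the URL below is a placeholder):

```shell
# ${VAR:-default} falls back to the default only when the variable is unset or empty
unset REDIS_API_URL
echo "${REDIS_API_URL:-https://api.redislabs.com/v1}"   # prints the default

REDIS_API_URL="https://api.example.com/v1"              # placeholder value
echo "${REDIS_API_URL:-https://api.redislabs.com/v1}"   # prints the set value
```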
Performance Issues
Slow Response Times
Solutions:
# Enable caching (if implemented)
export REDISCTL_CACHE=true
# Reduce response size
redisctl cloud subscription list --query-params "fields=id,name"
# Use specific queries
redisctl cloud database list -q "[0:5]" # First 5 only
# Parallel processing
for id in $(cat database-ids.txt); do
redisctl cloud database get --subscription-id 123 --database-id $id &
done
wait
Large Output Handling
Solutions:
# Paginate results (stop when a page comes back empty)
LIMIT=50
OFFSET=0
while true; do
  RESULTS=$(redisctl api cloud get /subscriptions \
    --query-params "limit=$LIMIT&offset=$OFFSET")
  COUNT=$(echo "$RESULTS" | jq 'length')
  [ "$COUNT" -eq 0 ] && break
  # Process results here
  OFFSET=$((OFFSET + LIMIT))
done
# Stream to file
redisctl cloud database list --subscription-id 123 > databases.json
# Process with streaming tools
redisctl cloud database list --subscription-id 123 | jq -c '.[]' | while read db; do
echo "Processing: $(echo $db | jq -r .name)"
done
Debug Techniques
Enable Debug Logging
# Basic debug
export RUST_LOG=debug
redisctl cloud subscription list
# Trace everything
export RUST_LOG=trace
# Specific modules
export RUST_LOG=redisctl=debug,redis_cloud=trace
# Save debug output
RUST_LOG=trace redisctl cloud subscription list 2> debug.log
Inspect HTTP Traffic
# Use proxy for inspection
export HTTP_PROXY=http://localhost:8888
# Run Charles Proxy or similar
# Or use trace logging
RUST_LOG=trace redisctl api cloud get /subscriptions 2>&1 | grep -i "http"
Test with Curl
# Replicate redisctl request with curl
# Cloud
curl -H "x-api-key: $API_KEY" \
-H "x-api-secret-key: $SECRET" \
https://api.redislabs.com/v1/subscriptions
# Enterprise
curl -k -u "admin:password" \
https://cluster:9443/v1/cluster
Getting Help
Resources
- Check documentation:
  redisctl --help
  redisctl <command> --help
- View debug information:
  redisctl --version
  RUST_LOG=debug redisctl profile list
- Report issues:
  - GitHub Issues: https://github.com/joshrotenberg/redisctl/issues
  - Include: version, command, error message, debug output
- Community support:
  - Redis Discord
  - Stack Overflow with tag redisctl
Information to Provide
When reporting issues, include:
# Version
redisctl --version
# Command that failed
redisctl cloud database list --subscription-id 123
# Error message
# Full error output
# Debug output
RUST_LOG=debug redisctl cloud database list --subscription-id 123 2>&1
# Environment
uname -a
echo $SHELL
# Config (sanitized)
redisctl profile show prod | sed 's/api_key=.*/api_key=REDACTED/'
API Reference
Complete reference for direct API access using redisctl.
Overview
The api command provides direct access to REST endpoints for both the Redis Cloud and Redis Enterprise APIs.
redisctl api <deployment> <method> <path> [OPTIONS]
Command Structure
Deployment Types
- cloud - Redis Cloud API
- enterprise - Redis Enterprise API
HTTP Methods
- get - HTTP GET request
- post - HTTP POST request
- put - HTTP PUT request
- patch - HTTP PATCH request
- delete - HTTP DELETE request
Path Format
- Must start with /
- Can include path parameters
- Query parameters passed via --query-params
Options
| Option | Description | Example |
|---|---|---|
| --data <JSON> | Request body (inline or @file) | --data @payload.json |
| --query-params <PARAMS> | URL query parameters | --query-params "limit=10&offset=0" |
| --headers <HEADERS> | Additional HTTP headers | --headers "X-Custom: value" |
| -o, --output <FORMAT> | Output format | -o json |
| -q, --query <JMESPATH> | JMESPath filter | -q "[].name" |
| --profile <NAME> | Use specific profile | --profile prod |
Redis Cloud API
Base URL
https://api.redislabs.com/v1
Authentication
- Header: x-api-key: <api-key>
- Header: x-api-secret-key: <secret-key>
Common Endpoints
Account
# Get account info
redisctl api cloud get /
# Get payment methods
redisctl api cloud get /payment-methods
# Get regions
redisctl api cloud get /regions
Subscriptions
# List subscriptions
redisctl api cloud get /subscriptions
# Get specific subscription
redisctl api cloud get /subscriptions/123456
# Create subscription
redisctl api cloud post /subscriptions --data @subscription.json
# Update subscription
redisctl api cloud put /subscriptions/123456 --data '{"name": "New Name"}'
# Delete subscription
redisctl api cloud delete /subscriptions/123456
Databases
# List databases
redisctl api cloud get /subscriptions/123456/databases
# Get database
redisctl api cloud get /subscriptions/123456/databases/789
# Create database
redisctl api cloud post /subscriptions/123456/databases --data @database.json
# Update database
redisctl api cloud put /subscriptions/123456/databases/789 --data '{"memoryLimitInGb": 8}'
# Delete database
redisctl api cloud delete /subscriptions/123456/databases/789
VPC Peering
# List VPC peerings
redisctl api cloud get /subscriptions/123456/peerings
# Create VPC peering
redisctl api cloud post /subscriptions/123456/peerings --data @vpc.json
# Get peering status
redisctl api cloud get /subscriptions/123456/peerings/abc123
# Delete peering
redisctl api cloud delete /subscriptions/123456/peerings/abc123
Tasks
# Get task status
redisctl api cloud get /tasks/task-123
# List tasks
redisctl api cloud get /tasks --query-params "status=processing"
ACL
# List ACL rules
redisctl api cloud get /subscriptions/123456/databases/789/acl/rules
# Create ACL rule
redisctl api cloud post /subscriptions/123456/databases/789/acl/rules --data @rule.json
# List ACL users
redisctl api cloud get /subscriptions/123456/databases/789/acl/users
# Create ACL user
redisctl api cloud post /subscriptions/123456/databases/789/acl/users --data @user.json
Response Codes
| Code | Meaning | Action |
|---|---|---|
| 200 | Success | Request completed |
| 201 | Created | Resource created |
| 202 | Accepted | Async operation started |
| 400 | Bad Request | Check request format |
| 401 | Unauthorized | Check API credentials |
| 403 | Forbidden | Check permissions |
| 404 | Not Found | Verify resource exists |
| 409 | Conflict | Resource state conflict |
| 429 | Rate Limited | Retry after delay |
| 500 | Server Error | Contact support |
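These codes imply a simple retry policy: 429 and transient 5xx responses are worth retrying with backoff, while other 4xx codes indicate a request that must be fixed instead. A minimal sketch:

```shell
# Sketch: map an HTTP status code to a retry decision
should_retry() {
  case "$1" in
    429|500|502|503) return 0 ;;  # transient: retry with backoff
    *) return 1 ;;                # success or client error: do not retry
  esac
}

should_retry 429 && echo "retry"
should_retry 404 || echo "give up"
```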
Redis Enterprise API
Base URL
https://<cluster-address>:9443
Authentication
- Basic Auth: username:password
- Header: Authorization: Basic <base64>
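The Basic header value is just the base64 encoding of username:password, which can be built by hand when testing with curl. The credentials below are placeholders:

```shell
# Build the Basic auth header value manually (placeholder credentials)
USERNAME="admin@cluster.local"
PASSWORD="password"
TOKEN=$(printf '%s:%s' "$USERNAME" "$PASSWORD" | base64)
echo "Authorization: Basic $TOKEN"
```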
Common Endpoints
Cluster
# Get cluster info
redisctl api enterprise get /v1/cluster
# Update cluster
redisctl api enterprise put /v1/cluster --data '{"name": "Production"}'
# Get cluster policy
redisctl api enterprise get /v1/cluster/policy
# Update policy
redisctl api enterprise put /v1/cluster/policy --data @policy.json
Databases (BDB)
# List databases
redisctl api enterprise get /v1/bdbs
# Get database
redisctl api enterprise get /v1/bdbs/1
# Create database
redisctl api enterprise post /v1/bdbs --data @bdb.json
# Update database
redisctl api enterprise put /v1/bdbs/1 --data '{"memory_size": 10737418240}'
# Delete database
redisctl api enterprise delete /v1/bdbs/1
Nodes
# List nodes
redisctl api enterprise get /v1/nodes
# Get node
redisctl api enterprise get /v1/nodes/1
# Update node
redisctl api enterprise put /v1/nodes/1 --data '{"rack_id": "rack-1"}'
# Node actions
redisctl api enterprise post /v1/nodes/1/actions/check
Users & RBAC
# List users
redisctl api enterprise get /v1/users
# Create user
redisctl api enterprise post /v1/users --data @user.json
# Get user
redisctl api enterprise get /v1/users/1
# Update user
redisctl api enterprise put /v1/users/1 --data '{"name": "Updated Name"}'
# Delete user
redisctl api enterprise delete /v1/users/1
# List roles
redisctl api enterprise get /v1/roles
Statistics
# Cluster stats
redisctl api enterprise get /v1/cluster/stats/last
# Database stats
redisctl api enterprise get /v1/bdbs/stats/last
# Node stats
redisctl api enterprise get /v1/nodes/stats/last
# Shard stats
redisctl api enterprise get /v1/shards/stats/last
Modules
# List modules
redisctl api enterprise get /v1/modules
# Upload module (requires multipart)
# Use module command instead: redisctl enterprise module upload --file module.zip
# Get module
redisctl api enterprise get /v1/modules/1
# Delete module
redisctl api enterprise delete /v1/modules/1
Logs
# Get cluster logs
redisctl api enterprise get /v1/logs --query-params "limit=100"
# Filter logs by time
redisctl api enterprise get /v1/logs --query-params "stime=2024-01-01T00:00:00Z&etime=2024-01-02T00:00:00Z"
API Versions
Redis Enterprise supports both v1 and v2 endpoints:
| Version | Status | Usage |
|---|---|---|
| v1 | Stable | Most operations |
| v2 | Preview | New features, async operations |
# v1 endpoint
redisctl api enterprise get /v1/bdbs
# v2 endpoint (if available)
redisctl api enterprise get /v2/bdbs
Query Parameters
Common query parameters across APIs:
| Parameter | Description | Example |
|---|---|---|
| limit | Max results | limit=50 |
| offset | Skip results | offset=100 |
| sort | Sort field | sort=name |
| order | Sort order | order=desc |
| fields | Select fields | fields=name,status |
| filter | Filter results | filter=status:active |
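Since --query-params takes a single &-joined string, a small helper keeps scripts readable when combining several parameters. A sketch:

```shell
# Sketch: join key=value pairs into a single --query-params string
build_query() {
  local IFS='&'
  echo "$*"
}

build_query "limit=50" "offset=100" "sort=name" "order=desc"
# → limit=50&offset=100&sort=name&order=desc
```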
Request Body Formats
JSON Payload
# Inline JSON
redisctl api cloud post /path --data '{"key": "value"}'
# From file
redisctl api cloud post /path --data @payload.json
# From stdin
echo '{"key": "value"}' | redisctl api cloud post /path --data @-
Complex Examples
Create Database with Full Configuration
{
"name": "production-cache",
"memoryLimitInGb": 16,
"protocol": "redis",
"port": 10000,
"throughputMeasurement": {
"by": "operations-per-second",
"value": 100000
},
"replication": true,
"dataPersistence": "aof-every-1-second",
"dataEvictionPolicy": "allkeys-lru",
"modules": [
{"name": "RedisJSON"},
{"name": "RediSearch"}
],
"alerts": [
{"name": "dataset-size", "value": 80}
],
"backup": {
"interval": 6,
"enabled": true
}
}
Update Multiple Properties
redisctl api cloud put /subscriptions/123/databases/456 --data '{
"memoryLimitInGb": 32,
"throughputMeasurement": {
"by": "operations-per-second",
"value": 200000
},
"alerts": [
{"name": "dataset-size", "value": 90},
{"name": "throughput-higher-than", "value": 180000}
]
}'
Response Handling
Success Response
# Pretty print JSON
redisctl api cloud get /subscriptions -o json | jq .
# Extract specific fields
redisctl api cloud get /subscriptions -q "[].{id: id, name: name}"
# Table format
redisctl api cloud get /subscriptions -o table
Error Response
{
"error": {
"code": "INVALID_REQUEST",
"message": "Missing required field: name",
"details": {
"field": "name",
"constraint": "required"
}
}
}
Async Operations
# Create returns task ID
TASK_ID=$(redisctl api cloud post /subscriptions/123/databases \
--data @database.json \
-q "taskId")
# Poll task status
while true; do
STATUS=$(redisctl api cloud get /tasks/$TASK_ID -q "status")
if [ "$STATUS" = "completed" ]; then
break
elif [ "$STATUS" = "failed" ]; then
echo "Task failed!"
exit 1
fi
sleep 10
done
Rate Limiting
Both APIs implement rate limiting:
Redis Cloud
- Default: 100 requests per minute
- Burst: 150 requests
- Headers: X-RateLimit-Limit, X-RateLimit-Remaining
Redis Enterprise
- Configurable per cluster
- Default: No rate limiting
- Can be enabled in cluster settings
Handling Rate Limits
# Retry with exponential backoff
retry_with_backoff() {
local max_attempts=5
local attempt=0
local delay=1
while [ $attempt -lt $max_attempts ]; do
if redisctl api cloud get /subscriptions; then
return 0
fi
echo "Rate limited, waiting ${delay}s..."
sleep $delay
attempt=$((attempt + 1))
delay=$((delay * 2))
done
return 1
}
Pagination
Handle paginated results:
#!/bin/bash
# Fetch all pages
LIMIT=100
OFFSET=0
ALL_RESULTS=()
while true; do
RESULTS=$(redisctl api cloud get /subscriptions \
--query-params "limit=$LIMIT&offset=$OFFSET" \
-o json)
COUNT=$(echo "$RESULTS" | jq '. | length')
if [ "$COUNT" -eq 0 ]; then
break
fi
ALL_RESULTS+=("$RESULTS")
OFFSET=$((OFFSET + LIMIT))
done
# Combine results
echo "${ALL_RESULTS[@]}" | jq -s 'flatten'
Best Practices
- Use profiles for credential management
- Handle errors gracefully with proper error checking
- Implement retries for transient failures
- Respect rate limits with backoff strategies
- Use pagination for large result sets
- Cache responses when appropriate
- Log API calls for audit trails
- Validate JSON before sending
- Use query filters to reduce response size
- Monitor API usage to stay within limits
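The "validate JSON before sending" practice can be done with jq, whose `empty` filter produces no output but exits non-zero on malformed input. A minimal sketch (`validate_json` and `database.json` are illustrative names):

```shell
# Validate a request body file before posting it
validate_json() {
    jq empty "$1" 2>/dev/null
}

if validate_json database.json; then
    redisctl api cloud post /subscriptions/123/databases --data @database.json
else
    echo "database.json is not valid JSON" >&2
fi
```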
Troubleshooting
Debug API Calls
# Enable debug logging
RUST_LOG=debug redisctl api cloud get /subscriptions
# View request headers
RUST_LOG=trace redisctl api cloud get /subscriptions 2>&1 | grep -i header
# Test with curl
curl -H "x-api-key: $API_KEY" \
-H "x-api-secret-key: $SECRET" \
https://api.redislabs.com/v1/subscriptions
Common Issues
401 Unauthorized
- Check API credentials
- Verify profile configuration
- Ensure credentials have necessary permissions
404 Not Found
- Verify endpoint path
- Check resource IDs
- Ensure API version is correct
429 Rate Limited
- Implement retry logic
- Add delays between requests
- Consider caching responses
500 Server Error
- Check API status page
- Retry with exponential backoff
- Contact support if persistent
Architecture
Coming soon - this section will cover:
- Overall project structure
- Command routing system
- API client design
- Profile management
- Error handling strategy
Using the Libraries
The redis-cloud and redis-enterprise crates can be used independently in your Rust projects.
Installation
[dependencies]
redis-cloud = "0.2"
redis-enterprise = "0.2"
Basic Usage
Redis Cloud Client
use redis_cloud::Client;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::new(
        "your-api-key",
        "your-api-secret",
    )?;

    // Get account info (root endpoint)
    let account = client.get_raw("/").await?;
    println!("{}", account);
    Ok(())
}
Redis Enterprise Client
use redis_enterprise::Client;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::new(
        "https://cluster:9443",
        "admin@cluster.local",
        "password",
        true, // insecure
    )?;

    // Get cluster info
    let cluster = client.get_raw("/v1/cluster").await?;
    println!("{}", cluster);
    Ok(())
}
More documentation coming soon.
Contributing
Contributions are welcome! Please see the GitHub repository for guidelines.