Introduction
redisctl is a unified command-line tool for managing Redis Cloud and Redis Enterprise deployments through their REST APIs.
Why redisctl?
- Single Tool - One CLI for both Cloud and Enterprise deployments
- Explicit Commands - Clear separation between Cloud and Enterprise operations
- Multiple Interfaces - Raw API access, human-friendly commands, and orchestrated workflows
- Flexible Output - JSON, YAML, or formatted tables with JMESPath filtering
Command Layers
The CLI provides three layers of interaction:
- Raw API Access - Direct REST calls to any endpoint
- Human-Friendly Commands - Typed wrappers around common operations
- Workflows - Multi-step orchestrated operations
Quick Example
# Configure your profile
export REDIS_CLOUD_API_KEY="your-key"
export REDIS_CLOUD_API_SECRET="your-secret"
# List all databases
redisctl cloud database list
# Get specific database details
redisctl cloud database get 12345
# Direct API call
redisctl api cloud get /subscriptions
Next Steps
- Installation - Get redisctl installed
- Configuration - Set up your profiles
- Quick Start - Your first commands
Installation
Binary Releases
Download the latest release for your platform from the GitHub releases page.
Linux/macOS
# Download the binary (replace VERSION and PLATFORM)
curl -L https://github.com/joshrotenberg/redisctl/releases/download/vVERSION/redisctl-PLATFORM.tar.gz | tar xz
# Move to PATH
sudo mv redisctl /usr/local/bin/
# Make executable
chmod +x /usr/local/bin/redisctl
Windows
Download the .zip file from the releases page and extract it to a directory in your PATH.
From Cargo
If you have Rust installed:
# Basic installation
cargo install redisctl
# With secure credential storage support (recommended)
cargo install redisctl --features secure-storage
Feature Flags
| Feature | Description |
|---|---|
| secure-storage | Enables OS keyring support for secure credential storage (recommended) |
| cloud-only | Builds only Cloud functionality (smaller binary) |
| enterprise-only | Builds only Enterprise functionality (smaller binary) |
From Source
git clone https://github.com/joshrotenberg/redisctl.git
cd redisctl
# Basic installation
cargo install --path crates/redisctl
# With secure storage support (recommended)
cargo install --path crates/redisctl --features secure-storage
# Development build with all features
cargo build --release --all-features
Docker
# Pull the image
docker pull joshrotenberg/redisctl:latest
# Run commands
docker run --rm joshrotenberg/redisctl --help
# With environment variables
docker run --rm \
-e REDIS_CLOUD_API_KEY="your-key" \
-e REDIS_CLOUD_API_SECRET="your-secret" \
joshrotenberg/redisctl cloud database list
Shell Completions
redisctl can generate shell completions for a better command-line experience.
Bash
# Generate completion
redisctl completions bash > ~/.local/share/bash-completion/completions/redisctl
# Or system-wide (requires sudo)
redisctl completions bash | sudo tee /usr/share/bash-completion/completions/redisctl
# Reload your shell or source the completion
source ~/.local/share/bash-completion/completions/redisctl
Zsh
# Create the completions directory and add it to your fpath (usually in ~/.zshrc)
mkdir -p ~/.zsh/completions
redisctl completions zsh > ~/.zsh/completions/_redisctl
# Or use oh-my-zsh custom completions
redisctl completions zsh > ~/.oh-my-zsh/custom/completions/_redisctl
# Reload shell
exec zsh
Fish
# Generate completion
redisctl completions fish > ~/.config/fish/completions/redisctl.fish
# Completions are loaded automatically
PowerShell
# Generate completion
redisctl completions powershell | Out-String | Invoke-Expression
# To make permanent, add to your PowerShell profile
redisctl completions powershell >> $PROFILE
Elvish
# Generate completion
redisctl completions elvish > ~/.config/elvish/lib/redisctl.elv
# Add to rc.elv
echo "use redisctl" >> ~/.config/elvish/rc.elv
Verify Installation
redisctl --version
Platform-Specific Binaries
For specific deployment scenarios, you can build platform-specific binaries:
# Cloud-only binary (smaller size)
cargo build --release --features cloud-only --bin redis-cloud
# Enterprise-only binary
cargo build --release --features enterprise-only --bin redis-enterprise
Next Steps
- Configuration - Set up your credentials
- Quick Start - Run your first commands
Configuration
redisctl can be configured using either a configuration file or environment variables.
Configuration File
Create ~/.config/redisctl/config.toml:
# Set default profile
default_profile = "cloud-prod"
# Redis Cloud Profile
[profiles.cloud-prod]
deployment_type = "cloud"
api_key = "your-account-key"
api_secret = "your-secret-key"
api_url = "https://api.redislabs.com/v1" # Optional, this is the default
# Redis Enterprise Profile
[profiles.enterprise-dev]
deployment_type = "enterprise"
url = "https://localhost:9443"
username = "admin@cluster.local"
password = "your-password"
insecure = true # Allow self-signed certificates
# You can have multiple profiles
[profiles.cloud-staging]
deployment_type = "cloud"
api_key = "staging-key"
api_secret = "staging-secret"
File Locations
- Linux/macOS: ~/.config/redisctl/config.toml
- Windows: %APPDATA%\redis\redisctl\config.toml
Security
Set appropriate file permissions:
chmod 600 ~/.config/redisctl/config.toml
Environment Variables
Alternatively, use environment variables:
Cloud Variables
export REDIS_CLOUD_API_KEY="your-account-key"
export REDIS_CLOUD_API_SECRET="your-secret-key"
export REDIS_CLOUD_API_URL="https://api.redislabs.com/v1" # Optional
Enterprise Variables
export REDIS_ENTERPRISE_URL="https://cluster:9443"
export REDIS_ENTERPRISE_USER="admin@cluster.local"
export REDIS_ENTERPRISE_PASSWORD="your-password"
export REDIS_ENTERPRISE_INSECURE="true" # For self-signed certs
Getting Your Credentials
Redis Cloud
- Log in to Redis Cloud Console
- Go to Account Settings → API Keys
- Click Add API Key
- Copy the Account Key and Secret
Redis Enterprise
Get credentials from your cluster administrator or use the default:
- Username: admin@cluster.local
- Password: Set during cluster setup
Testing Your Configuration
# Test Cloud connection
redisctl api cloud get /
# Test Enterprise connection
redisctl api enterprise get /v1/cluster
Using Profiles
# Use default profile
redisctl cloud database list
# Use specific profile
redisctl cloud database list --profile cloud-staging
# List all profiles
redisctl profile list
Next Steps
- Quick Start - Start using redisctl
Quick Start
This guide will get you running your first commands in 5 minutes.
Step 1: Configure Credentials
Choose one method:
Option A: Environment Variables (Quickest)
# For Redis Cloud
export REDIS_CLOUD_API_KEY="your-key"
export REDIS_CLOUD_API_SECRET="your-secret"
# For Redis Enterprise
export REDIS_ENTERPRISE_URL="https://cluster:9443"
export REDIS_ENTERPRISE_USER="admin@cluster.local"
export REDIS_ENTERPRISE_PASSWORD="your-password"
Option B: Configuration File
Create ~/.config/redisctl/config.toml:
[profiles.default]
deployment_type = "cloud" # or "enterprise"
api_key = "your-key"
api_secret = "your-secret"
Step 2: Test Connection
# For Cloud
redisctl api cloud get /
# For Enterprise
redisctl api enterprise get /v1/cluster
Step 3: Common Commands
List Resources
# List all databases
redisctl cloud database list
# List in table format
redisctl cloud database list -o table
# Filter active databases only
redisctl cloud database list -q "[?status=='active']"
Get Details
# Get database details
redisctl cloud database get 12345
# Get as YAML
redisctl cloud database get 12345 -o yaml
Direct API Access
# Any Cloud API endpoint
redisctl api cloud get /subscriptions
redisctl api cloud get /subscriptions/12345/databases
# Any Enterprise API endpoint
redisctl api enterprise get /v1/bdbs
redisctl api enterprise get /v1/nodes
Step 4: Using Workflows
Initialize Enterprise Cluster
For new Redis Enterprise installations, use the init-cluster workflow:
# Complete cluster setup with one command
redisctl enterprise workflow init-cluster \
--username "admin@cluster.local" \
--password "YourSecurePassword"
# This workflow will:
# 1. Bootstrap the cluster
# 2. Set up authentication
# 3. Create a default database
# 4. Verify connectivity
Step 5: Explore More
Cloud Operations
# Cloud-specific commands
redisctl cloud subscription list
redisctl cloud database list --subscription-id 12345
Enterprise Operations
# Enterprise-specific commands
redisctl enterprise cluster info
redisctl enterprise database list
redisctl enterprise node list
Output Options
# JSON (default)
redisctl enterprise database list
# Table format
redisctl enterprise database list -o table
# YAML
redisctl enterprise database list -o yaml
# Filter with JMESPath
redisctl enterprise database list -q "[].{name:name,memory:memory_size}"
What's Next?
- Redis Cloud Guide - Cloud-specific operations
- Redis Enterprise Guide - Enterprise-specific operations
- Workflows - Automating complex operations
- Examples - More detailed examples
Authentication
redisctl supports authentication for both Redis Cloud and Redis Enterprise deployments.
Redis Cloud
Redis Cloud uses API key authentication:
- API Key - Your account key (public identifier)
- API Secret - Your secret key (keep this private!)
Getting Your API Keys
- Log in to app.redislabs.com
- Click your name → Account Settings → API Keys
- Click "Add API Key" and give it a name
- Copy both the Account key and Secret (you won't see the secret again!)
Setting Up Authentication
Option 1: Secure OS Keyring (Recommended)
When compiled with the secure-storage feature, redisctl can store credentials securely in your OS keyring:
# Install with secure storage support
cargo install redisctl --features secure-storage
# Create secure profile
redisctl profile set cloud \
--deployment cloud \
--api-key "your-account-key" \
--api-secret "your-secret-key" \
--use-keyring # Stores in OS keyring
# Test it works
redisctl --profile cloud api cloud get /
Your config will contain secure references:
[profiles.cloud]
deployment_type = "cloud"
api_key = "keyring:cloud-api-key" # Actual value in OS keyring
api_secret = "keyring:cloud-api-secret" # Actual value in OS keyring
Option 2: Environment Variables
Use environment variables (good for CI/CD):
export REDIS_CLOUD_API_KEY="your-account-key"
export REDIS_CLOUD_API_SECRET="your-secret-key"
# Test it works
redisctl api cloud get /
Option 3: Configuration File (Development Only)
For development only, you can use a plaintext config at ~/.config/redisctl/config.toml:
[profiles.cloud]
deployment_type = "cloud"
api_key = "your-account-key"
api_secret = "your-secret-key"
⚠️ Warning: This stores credentials in plaintext. Use keyring or environment variables for production!
Redis Enterprise
Redis Enterprise uses basic authentication with username/password.
Default Credentials
- Username: admin@cluster.local (default)
- Password: Set during cluster setup
Setting Up Authentication
Option 1: Secure OS Keyring (Recommended)
Store credentials securely in your OS keyring:
# Create secure profile
redisctl profile set enterprise \
--deployment enterprise \
--url "https://cluster.example.com:9443" \
--username "admin@cluster.local" \
--password "your-password" \
--use-keyring # Stores in OS keyring
# For self-signed certificates
redisctl profile set enterprise --insecure true
# Test it works
redisctl --profile enterprise api enterprise get /v1/cluster
Your config will contain secure references:
[profiles.enterprise]
deployment_type = "enterprise"
url = "https://cluster.example.com:9443"
username = "keyring:enterprise-username" # Actual value in OS keyring
password = "keyring:enterprise-password" # Actual value in OS keyring
insecure = false
Option 2: Environment Variables
Use environment variables:
export REDIS_ENTERPRISE_URL="https://cluster.example.com:9443"
export REDIS_ENTERPRISE_USER="admin@cluster.local"
export REDIS_ENTERPRISE_PASSWORD="your-password"
# For self-signed certificates
export REDIS_ENTERPRISE_INSECURE="true"
# Test it works
redisctl api enterprise get /v1/cluster
Option 3: Configuration File (Development Only)
For development only, add to ~/.config/redisctl/config.toml:
[profiles.enterprise]
deployment_type = "enterprise"
url = "https://cluster.example.com:9443"
username = "admin@cluster.local"
password = "your-password"
insecure = true # For self-signed certs
⚠️ Warning: This stores credentials in plaintext. Use keyring or environment variables for production!
Security Tips
- Never commit credentials - Use environment variables or secure vaults
- Use read-only API keys when possible for Cloud
- Rotate credentials regularly
- Set file permissions: chmod 600 ~/.config/redisctl/config.toml
Troubleshooting
Authentication Failed
Check your credentials:
# Enable debug logging to see what's happening
RUST_LOG=debug redisctl api cloud get /
Connection Refused
Verify the URL and port are correct:
curl -k https://your-cluster:9443/v1/cluster
Certificate Errors
For development/testing with self-signed certificates:
export REDIS_ENTERPRISE_INSECURE=true
See Also
- Configuration - Profile management
- Environment Variables - All supported variables
Shell Completions
redisctl supports tab completion for all major shells. This guide shows how to install and configure completions for your shell.
Generating Completions
First, generate the completion script for your shell:
# Bash
redisctl completions bash > redisctl.bash
# Zsh
redisctl completions zsh > _redisctl
# Fish
redisctl completions fish > redisctl.fish
# PowerShell
redisctl completions powershell > redisctl.ps1
# Elvish
redisctl completions elvish > redisctl.elv
Installing Completions
Bash
# Linux - User-specific
redisctl completions bash > ~/.local/share/bash-completion/completions/redisctl
# Linux - System-wide (requires sudo)
redisctl completions bash | sudo tee /usr/share/bash-completion/completions/redisctl
# macOS with Homebrew
redisctl completions bash > $(brew --prefix)/etc/bash_completion.d/redisctl
# Reload your shell
source ~/.bashrc
# or start a new terminal
Zsh
# Add to your fpath (usually in ~/.zshrc)
echo 'fpath=(~/.zsh/completions $fpath)' >> ~/.zshrc
# Create directory if needed
mkdir -p ~/.zsh/completions
# Generate completion file
redisctl completions zsh > ~/.zsh/completions/_redisctl
# Reload your shell
source ~/.zshrc
# or start a new terminal
Fish
# Generate completion file
redisctl completions fish > ~/.config/fish/completions/redisctl.fish
# Completions are loaded automatically in new shells
# or reload current shell:
source ~/.config/fish/config.fish
PowerShell
# Add to your PowerShell profile
redisctl completions powershell >> $PROFILE
# Or save to a file and source it
redisctl completions powershell > redisctl.ps1
Add-Content $PROFILE ". $PWD\redisctl.ps1"
# Reload profile
. $PROFILE
Elvish
# Generate completion file
redisctl completions elvish > ~/.elvish/lib/redisctl.elv
# Add to rc.elv
echo "use redisctl" >> ~/.elvish/rc.elv
# Reload shell
exec elvish
Testing Completions
After installation, test that completions work:
# Type and press Tab
redisctl <Tab>
# Should show: api, auth, cloud, enterprise, profile, etc.
# Try sub-commands
redisctl cloud <Tab>
# Should show: database, subscription, user, etc.
# Try options
redisctl --<Tab>
# Should show: --help, --version, --profile, --output, etc.
Troubleshooting
Completions Not Working
- Check shell configuration:
# Bash - verify completion is enabled
echo $BASH_COMPLETION_COMPAT_DIR
# Zsh - check fpath
echo $fpath
# Fish - check completion directory
ls ~/.config/fish/completions/
- Reload your shell:
# Option 1: Source config file
source ~/.bashrc  # or ~/.zshrc, etc.
# Option 2: Start new shell
exec $SHELL
# Option 3: Open new terminal
- Verify file permissions:
# Check completion file exists and is readable
ls -la ~/.local/share/bash-completion/completions/redisctl
# or your shell's completion directory
Updating Completions
When updating redisctl, regenerate completions to get new commands:
# Example for Bash
redisctl completions bash > ~/.local/share/bash-completion/completions/redisctl
source ~/.bashrc
Custom Completion Directories
If using non-standard directories:
# Bash - add to .bashrc
source /path/to/redisctl.bash
# Zsh - add to .zshrc
fpath=(/path/to/completions $fpath)
autoload -U compinit && compinit
# Fish - add to config.fish
source /path/to/redisctl.fish
Tips
- Auto-update completions: Add completion generation to your dotfiles setup
- Multiple shells: Generate completions for all shells you use
- Container usage: Mount completion files when using Docker:
docker run -v ~/.local/share/bash-completion:/etc/bash_completion.d:ro ...
- CI/CD: Include completion generation in your deployment scripts
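For the auto-update tip above, a minimal sketch of a dotfiles hook (paths match the Bash layout used earlier; the guard makes it safe on machines without redisctl):

```shell
# Hypothetical dotfiles hook: regenerate Bash completions whenever redisctl is available
if command -v redisctl >/dev/null 2>&1; then
  mkdir -p ~/.local/share/bash-completion/completions
  redisctl completions bash > ~/.local/share/bash-completion/completions/redisctl
fi
```

Adapt the output path for Zsh or Fish as shown in the per-shell sections.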
See Also
- Installation Guide - Installing redisctl
- Configuration - Setting up profiles
- Quick Start - Getting started with redisctl
Docker Development Environment
redisctl includes a comprehensive Docker setup for development and testing. This environment provides a real Redis Enterprise cluster without requiring manual setup.
Overview
Our Docker environment includes:
- Redis Enterprise cluster for local development
- Automated cluster initialization using our CLI workflows
- Multiple service profiles for different testing scenarios
- Development tooling with live code mounting
- Performance testing and debugging capabilities
Quick Start
# Start Redis Enterprise cluster
docker compose up -d
# Access the cluster
export REDIS_ENTERPRISE_URL="https://localhost:9443"
export REDIS_ENTERPRISE_INSECURE="true"
redisctl enterprise cluster info
# Clean up
docker compose down -v
Using Docker Compose
The Docker Compose setup provides a Redis Enterprise cluster with automatic initialization:
# Start Redis Enterprise with auto-initialization
docker compose up -d
# Check cluster status
export REDIS_ENTERPRISE_URL="https://localhost:9443"
export REDIS_ENTERPRISE_USER="admin@redis.local"
export REDIS_ENTERPRISE_PASSWORD="Redis123!"
export REDIS_ENTERPRISE_INSECURE="true"
redisctl enterprise cluster info
# View databases
redisctl enterprise database list
# Clean up when done
docker compose down -v
Includes:
- Redis Enterprise server
- Automatic cluster initialization via workflow
- Ready-to-use configuration
Testing with Docker
Running Tests Against the Cluster
Once your cluster is running, you can test various commands:
# Test cluster commands
redisctl enterprise cluster info
redisctl enterprise node list
redisctl enterprise database list
# Create a test database
redisctl enterprise database create --data '{
"name": "test-db",
"memory_size": 1073741824,
"port": 12000
}'
# Test with different output formats
redisctl enterprise database list -o yaml
redisctl enterprise database list -o table
# Use verbose logging for debugging
RUST_LOG=debug redisctl enterprise cluster info
Interactive Testing
For interactive testing, you can use a temporary container:
# Run interactive shell with redisctl
docker run --rm -it \
--network redisctl_redisctl-network \
-e REDIS_ENTERPRISE_URL="https://redis-enterprise:9443" \
-e REDIS_ENTERPRISE_INSECURE="true" \
-e REDIS_ENTERPRISE_USER="admin@redis.local" \
-e REDIS_ENTERPRISE_PASSWORD="Redis123!" \
joshrotenberg/redisctl:latest \
/bin/sh
# Inside the container, run commands
redisctl enterprise cluster info
redisctl enterprise database list
Environment Variables
Configure the Docker environment via a .env file (if needed):
# Copy example environment file (optional)
cp .env.example .env
# Edit .env to customize:
# - REDIS_ENTERPRISE_IMAGE: Docker image to use
# - REDIS_ENTERPRISE_PLATFORM: Platform architecture
Control logging and behavior:
# Set log level
RUST_LOG=debug docker compose up
# Component-specific logging
RUST_LOG="redis_enterprise=trace,redisctl=debug" docker compose up
Development Workflow
Typical Development Session
# 1. Start development environment
docker compose up -d
# 2. Build and test your changes locally
cargo build --release
./target/release/redisctl enterprise cluster info
# 3. Test with Docker image
docker build -t redisctl:dev .
docker run --rm \
--network redisctl_redisctl-network \
-e REDIS_ENTERPRISE_URL="https://redis-enterprise:9443" \
-e REDIS_ENTERPRISE_INSECURE="true" \
-e REDIS_ENTERPRISE_USER="admin@redis.local" \
-e REDIS_ENTERPRISE_PASSWORD="Redis123!" \
redisctl:dev enterprise cluster info
# 4. Clean up
docker compose down -v
Testing New Features
# Start basic environment
docker compose up -d
# Test your new command locally
export REDIS_ENTERPRISE_URL="https://localhost:9443"
export REDIS_ENTERPRISE_USER="admin@redis.local"
export REDIS_ENTERPRISE_PASSWORD="Redis123!"
export REDIS_ENTERPRISE_INSECURE="true"
# Run your new command
cargo run -- enterprise your-new-command
# Or test with the release build
cargo build --release
./target/release/redisctl enterprise your-new-command
Debugging Connection Issues
# Check Redis Enterprise health
docker compose ps
docker compose logs redis-enterprise
# Test connectivity directly
curl -k https://localhost:9443/v1/bootstrap
# Test with verbose logging
RUST_LOG=debug redisctl enterprise cluster info
# Check network connectivity from container
docker run --rm \
--network redisctl_redisctl-network \
alpine/curl \
curl -k https://redis-enterprise:9443/v1/bootstrap
Service Architecture
Main Services
- redis-enterprise: Redis Enterprise server
- redis-enterprise-init: Automatic cluster initialization using the workflow command
Networking
All services use the redisctl-network bridge network:
- Redis Enterprise API: https://redis-enterprise:9443 (external: https://localhost:9443)
- Web UI: https://redis-enterprise:8443 (external: https://localhost:8443)
- Database ports: 12000-12010
Volumes
- enterprise-data: Persistent Redis Enterprise data
- Source mounting: Development containers access project files
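As a sketch, the networking and volume layout described above corresponds to a compose file roughly like the following. The image name and container data path are assumptions; consult the repository's actual docker-compose.yml for the real definitions:

```yaml
# Hypothetical sketch of the topology described above -- not the shipped compose file
services:
  redis-enterprise:
    image: redislabs/redis:latest            # assumed image name
    ports:
      - "9443:9443"                          # REST API
      - "8443:8443"                          # Web UI
      - "12000-12010:12000-12010"            # database ports
    networks:
      - redisctl-network
    volumes:
      - enterprise-data:/var/opt/redislabs   # assumed data path
networks:
  redisctl-network:
    driver: bridge
volumes:
  enterprise-data:
```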
Troubleshooting
Common Issues
Port Conflicts:
# Check if ports are in use
lsof -i :9443
lsof -i :8443
# Stop conflicting services
docker compose down
Platform Compatibility Issues:
# If you encounter platform issues, check Docker settings
docker version
# Ensure Docker Desktop is configured for your platform
# Try pulling the image manually
docker pull redislabs/redis:latest
Permission Issues:
# Reset Docker volumes
docker compose down -v
docker compose up -d
Build Issues:
# Force rebuild
docker compose build --no-cache
docker compose up --force-recreate
Debugging Commands
# Check service status
docker compose ps
# View logs
docker compose logs -f redis-enterprise
docker compose logs -f redis-enterprise-init
# Execute commands in the running container
docker compose exec redis-enterprise bash
# Check connectivity from inside the container
docker compose exec redis-enterprise curl -k https://localhost:9443/v1/bootstrap
Best Practices
Development
- Use docker compose up -d for complete environment setup
- Build locally with cargo build --release for development
- Use verbose logging (RUST_LOG=debug) for debugging
- Always clean up with docker compose down -v
Testing
- Always test against real Redis Enterprise
- Test all output formats (JSON, YAML, table)
- Clean up test data between runs
- Verify error handling with invalid inputs
Performance
- Use the performance profile to validate changes
- Monitor resource usage during development
- Test with realistic data sizes
- Validate API response times
Redis Cloud Overview
Redis Cloud is a fully managed database-as-a-service offering. redisctl provides comprehensive access to the Redis Cloud REST API.
Authentication
Redis Cloud uses API key authentication:
# Set credentials
export REDIS_CLOUD_API_KEY="your-account-key"
export REDIS_CLOUD_API_SECRET="your-secret-key"
# Test connection
redisctl api cloud get /
Command Structure
Redis Cloud commands follow this pattern:
redisctl cloud <resource> <action> [options]
Resources include:
- subscription - Manage subscriptions
- database - Manage databases
- account - Account information
- user - User management
- acl - Access control lists
- backup - Backup operations
Common Operations
# List all subscriptions
redisctl cloud subscription list
# Get subscription details
redisctl cloud subscription get 12345
# List databases in a subscription
redisctl cloud database list --subscription-id 12345
# Get database details
redisctl cloud database get --subscription-id 12345 --database-id 67890
Next Steps
- Human-Friendly Commands - High-level command reference
- Raw API Access - Direct API endpoint access
- Examples - Real-world usage examples
Subscriptions
Manage Redis Cloud subscriptions - the containers for your databases and configuration.
Commands
List Subscriptions
List all subscriptions in your account.
redisctl cloud subscription list [OPTIONS]
Options:
- -o, --output <FORMAT> - Output format: json, yaml, or table (default: auto)
- -q, --query <JMESPATH> - JMESPath query to filter output
Examples:
# List all subscriptions
redisctl cloud subscription list
# Table format with specific fields
redisctl cloud subscription list -o table
# Get only subscription IDs and names
redisctl cloud subscription list -q "[].{id: id, name: name}"
# Filter by status
redisctl cloud subscription list -q "[?status=='active']"
Get Subscription
Get details of a specific subscription.
redisctl cloud subscription get <ID> [OPTIONS]
Arguments:
- <ID> - Subscription ID
Options:
- -o, --output <FORMAT> - Output format: json, yaml, or table
- -q, --query <JMESPATH> - JMESPath query to filter output
Examples:
# Get subscription details
redisctl cloud subscription get 123456
# Get specific fields in YAML
redisctl cloud subscription get 123456 -o yaml -q "{name: name, status: status, databases: numberOfDatabases}"
Create Subscription
Create a new subscription.
redisctl cloud subscription create --data <JSON> [OPTIONS]
Options:
- --data <JSON> - JSON payload (inline or @file.json)
- --wait - Wait for operation to complete
- --wait-timeout <SECONDS> - Maximum time to wait (default: 600)
- --wait-interval <SECONDS> - Polling interval (default: 10)
Example Payload:
{
"name": "Production Subscription",
"cloudProvider": {
"provider": "AWS",
"regions": [
{
"region": "us-east-1",
"multipleAvailabilityZones": true,
"networking": {
"deploymentCIDR": "10.0.0.0/24"
}
}
]
},
"databases": [
{
"name": "cache-db",
"memoryLimitInGb": 1,
"throughputMeasurement": {
"by": "operations-per-second",
"value": 10000
}
}
]
}
Examples:
# Create subscription from file
redisctl cloud subscription create --data @subscription.json
# Create and wait for completion
redisctl cloud subscription create --data @subscription.json --wait
# Create with inline JSON
redisctl cloud subscription create --data '{
"name": "Test Subscription",
"cloudProvider": {"provider": "AWS", "regions": [{"region": "us-east-1"}]}
}'
Update Subscription
Update an existing subscription.
redisctl cloud subscription update <ID> --data <JSON> [OPTIONS]
Arguments:
- <ID> - Subscription ID
Options:
- --data <JSON> - JSON payload with updates
- --wait - Wait for operation to complete
- --wait-timeout <SECONDS> - Maximum time to wait
- --wait-interval <SECONDS> - Polling interval
Examples:
# Update subscription name
redisctl cloud subscription update 123456 --data '{"name": "New Name"}'
# Update payment method
redisctl cloud subscription update 123456 --data '{"paymentMethodId": 8840}' --wait
Delete Subscription
Delete a subscription (requires all databases to be deleted first).
redisctl cloud subscription delete <ID> [OPTIONS]
Arguments:
- <ID> - Subscription ID
Options:
- --wait - Wait for deletion to complete
- --wait-timeout <SECONDS> - Maximum time to wait
- --wait-interval <SECONDS> - Polling interval
Examples:
# Delete subscription
redisctl cloud subscription delete 123456
# Delete and wait for completion
redisctl cloud subscription delete 123456 --wait
Fixed Subscriptions
Fixed subscriptions offer reserved capacity with predictable pricing.
List Fixed Subscriptions
redisctl cloud fixed-subscription list
Get Fixed Subscription
redisctl cloud fixed-subscription get <ID>
Create Fixed Subscription
redisctl cloud fixed-subscription create --data @fixed-subscription.json --wait
Example Payload:
{
"name": "Fixed Production",
"plan": {
"provider": "AWS",
"region": "us-east-1",
"size": "r5.xlarge"
},
"quantity": 2
}
Related Commands
- Databases - Manage databases within subscriptions
- Network Connectivity - Configure VPC peering and private endpoints
- Provider Accounts - Manage cloud provider integrations
Common Patterns
List All Databases Across Subscriptions
# Get all subscription IDs
SUBS=$(redisctl cloud subscription list -q "[].id" | jq -r '.[]')
# List databases for each subscription
for sub in $SUBS; do
echo "Subscription $sub:"
redisctl cloud database list --subscription-id $sub
done
Monitor Subscription Usage
# Get memory usage across all databases
redisctl cloud subscription get 123456 -q "databases[].{name: name, memory: memoryLimitInGb}" | \
jq -r '.[] | "\(.name): \(.memory)GB"'
Troubleshooting
Common Issues
"Subscription not found"
- Verify the subscription ID is correct
- Check that your API key has access to the subscription
"Cannot delete subscription with active databases"
- List the remaining databases: redisctl cloud database list --subscription-id <ID>
- Delete each database, then delete the subscription
"Operation timeout"
- Increase the timeout: --wait-timeout 1200
- Check operation status: redisctl cloud task get <TASK_ID>
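The "delete databases first" cleanup can be scripted. A sketch, assuming jq is installed and that the list response exposes a databaseId field (verify the field name against your actual output before relying on it):

```shell
# Hypothetical cleanup: remove every database in a subscription, then the subscription itself
SUB_ID=123456
for db in $(redisctl cloud database list --subscription-id "$SUB_ID" -q "[].databaseId" | jq -r '.[]'); do
  redisctl cloud database delete --subscription-id "$SUB_ID" --database-id "$db" --wait
done
redisctl cloud subscription delete "$SUB_ID" --wait
```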
API Reference
These commands use the following REST endpoints:
- GET /v1/subscriptions - List subscriptions
- GET /v1/subscriptions/{id} - Get subscription
- POST /v1/subscriptions - Create subscription
- PUT /v1/subscriptions/{id} - Update subscription
- DELETE /v1/subscriptions/{id} - Delete subscription
For direct API access, use: redisctl api cloud get /subscriptions
Databases
Manage Redis Cloud databases within subscriptions.
Commands
List Databases
List all databases in a subscription.
redisctl cloud database list --subscription-id <ID> [OPTIONS]
Required Options:
- --subscription-id <ID> - The subscription ID
Options:
- -o, --output <FORMAT> - Output format: json, yaml, or table
- -q, --query <JMESPATH> - JMESPath query to filter output
Examples:
# List all databases in subscription
redisctl cloud database list --subscription-id 123456
# Show specific fields in table format
redisctl cloud database list --subscription-id 123456 -o table
# Filter active databases only
redisctl cloud database list --subscription-id 123456 -q "[?status=='active']"
# Get database names and endpoints
redisctl cloud database list --subscription-id 123456 \
-q "[].{name: name, endpoint: publicEndpoint}"
Get Database
Get details of a specific database.
redisctl cloud database get --subscription-id <SUB_ID> --database-id <DB_ID> [OPTIONS]
Required Options:
- --subscription-id <SUB_ID> - The subscription ID
- --database-id <DB_ID> - The database ID
Examples:
# Get database details
redisctl cloud database get --subscription-id 123456 --database-id 789
# Get connection details
redisctl cloud database get --subscription-id 123456 --database-id 789 \
-q "{endpoint: publicEndpoint, port: port, password: password}"
Create Database
Create a new database in a subscription.
redisctl cloud database create --subscription-id <ID> --data <JSON> [OPTIONS]
Required Options:
- --subscription-id <ID> - The subscription ID
- --data <JSON> - Database configuration (inline or @file.json)
Async Options:
- --wait - Wait for database creation to complete
- --wait-timeout <SECONDS> - Maximum time to wait (default: 600)
- --wait-interval <SECONDS> - Polling interval (default: 10)
Example Payload:
{
"name": "production-cache",
"memoryLimitInGb": 4,
"protocol": "redis",
"port": 10000,
"throughputMeasurement": {
"by": "operations-per-second",
"value": 25000
},
"replication": true,
"dataPersistence": "aof-every-write",
"dataEvictionPolicy": "allkeys-lru",
"modules": [
{
"name": "RedisJSON"
},
{
"name": "RediSearch"
}
],
"alerts": [
{
"name": "dataset-size",
"value": 80
}
]
}
Examples:
# Create database from file
redisctl cloud database create --subscription-id 123456 --data @database.json
# Create and wait for completion
redisctl cloud database create --subscription-id 123456 --data @database.json --wait
# Create minimal database
redisctl cloud database create --subscription-id 123456 --data '{
"name": "test-db",
"memoryLimitInGb": 1
}'
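Creation requests fail fast on malformed or incomplete payloads, so it can save a round trip to sanity-check the JSON locally first. A minimal sketch using python3's standard library; the payload and the required-field list here are illustrative, not the API's full validation rules:

```shell
# Sanity-check a creation payload locally before calling the API
# (payload and required-field list are illustrative).
payload='{"name": "test-db", "memoryLimitInGb": 1}'

echo "$payload" | python3 -c '
import json, sys

p = json.load(sys.stdin)  # raises ValueError on malformed JSON
for field in ("name", "memoryLimitInGb"):
    assert field in p, f"missing required field: {field}"
print("payload looks valid")
'
```

The same check works on a file with `python3 -c ... < database.json` before passing it as `--data @database.json`.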
Update Database
Update database configuration.
redisctl cloud database update --subscription-id <SUB_ID> --database-id <DB_ID> --data <JSON> [OPTIONS]
Required Options:
- --subscription-id <SUB_ID> - The subscription ID
- --database-id <DB_ID> - The database ID
- --data <JSON> - Updates to apply
Async Options:
- --wait - Wait for update to complete
- --wait-timeout <SECONDS> - Maximum time to wait
- --wait-interval <SECONDS> - Polling interval
Examples:
# Increase memory limit
redisctl cloud database update \
--subscription-id 123456 \
--database-id 789 \
--data '{"memoryLimitInGb": 8}' \
--wait
# Update eviction policy
redisctl cloud database update \
--subscription-id 123456 \
--database-id 789 \
--data '{"dataEvictionPolicy": "volatile-lru"}'
# Add modules
redisctl cloud database update \
--subscription-id 123456 \
--database-id 789 \
--data '{"modules": [{"name": "RedisTimeSeries"}]}'
Delete Database
Delete a database.
redisctl cloud database delete --subscription-id <SUB_ID> --database-id <DB_ID> [OPTIONS]
Required Options:
- --subscription-id <SUB_ID> - The subscription ID
- --database-id <DB_ID> - The database ID
Async Options:
- --wait - Wait for deletion to complete
Examples:
# Delete database
redisctl cloud database delete --subscription-id 123456 --database-id 789
# Delete and wait for completion
redisctl cloud database delete --subscription-id 123456 --database-id 789 --wait
Database Operations
Backup Database
Create a manual backup.
redisctl cloud database backup --subscription-id <SUB_ID> --database-id <DB_ID> [OPTIONS]
Examples:
# Create backup
redisctl cloud database backup --subscription-id 123456 --database-id 789
# Create and wait
redisctl cloud database backup --subscription-id 123456 --database-id 789 --wait
Import Data
Import data from a backup.
redisctl cloud database import --subscription-id <SUB_ID> --database-id <DB_ID> --data <JSON> [OPTIONS]
Example Payload:
{
"sourceType": "s3",
"importFromUri": ["s3://bucket/backup.rdb"],
"s3Credentials": {
"accessKey": "AWS_ACCESS_KEY",
"secretKey": "AWS_SECRET_KEY"
}
}
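Hardcoding credentials in a payload file is an easy way to leak them into version control. One alternative, sketched below, is to assemble the payload from environment variables at call time; the variable names, bucket path, and filename are placeholders:

```shell
# Build the import payload at call time so credentials never
# sit in a committed file (all values below are placeholders).
export S3_ACCESS_KEY="example-access-key"
export S3_SECRET_KEY="example-secret-key"

python3 -c '
import json, os

payload = {
    "sourceType": "s3",
    "importFromUri": ["s3://bucket/backup.rdb"],
    "s3Credentials": {
        "accessKey": os.environ["S3_ACCESS_KEY"],
        "secretKey": os.environ["S3_SECRET_KEY"],
    },
}
print(json.dumps(payload, indent=2))
' > import-payload.json

# Then pass it to the import command:
# redisctl cloud database import --subscription-id 123456 \
#   --database-id 789 --data @import-payload.json
```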
Export Data
Export database data.
redisctl cloud database export --subscription-id <SUB_ID> --database-id <DB_ID> --data <JSON> [OPTIONS]
Fixed Databases
Fixed databases run on reserved infrastructure.
List Fixed Databases
redisctl cloud fixed-database list --subscription-id <ID>
Create Fixed Database
redisctl cloud fixed-database create --subscription-id <ID> --data @fixed-db.json --wait
Active-Active Databases
Multi-region Active-Active (CRDB) databases.
Create Active-Active Database
redisctl cloud database create-active-active --subscription-id <ID> --data @crdb.json --wait
Example Payload:
{
"name": "global-cache",
"memoryLimitInGb": 10,
"regions": [
{
"region": "us-east-1",
"localThroughputMeasurement": {
"by": "operations-per-second",
"value": 10000
}
},
{
"region": "eu-west-1",
"localThroughputMeasurement": {
"by": "operations-per-second",
"value": 10000
}
}
]
}
Common Patterns
Get Database Connection String
# Get Redis URI
DB=$(redisctl cloud database get --subscription-id 123456 --database-id 789)
echo "redis://:$(echo "$DB" | jq -r .password)@$(echo "$DB" | jq -r .publicEndpoint)"
Monitor Database Metrics
# Check memory usage
redisctl cloud database get --subscription-id 123456 --database-id 789 \
-q "{used: usedMemoryInMB, limit: memoryLimitInGb}" | \
jq -r '"Memory: \(.used)MB / \(.limit)GB"'
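Turning those raw numbers into a utilization figure is just used MB against limit GB times 1024. A sketch with the values captured from a hypothetical response:

```shell
# Memory utilization from a previous `database get` call
# (the numbers below are illustrative).
USED_MB=2048
LIMIT_GB=4

python3 -c "print(f'{$USED_MB / ($LIMIT_GB * 1024) * 100:.1f}% of memory used')"
# prints: 50.0% of memory used
```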
Bulk Operations
# Update all databases in subscription
for db in $(redisctl cloud database list --subscription-id 123456 -q "[].databaseId" | jq -r '.[]'); do
echo "Updating database $db"
redisctl cloud database update \
--subscription-id 123456 \
--database-id "$db" \
--data '{"alerts": [{"name": "dataset-size", "value": 90}]}'
done
Troubleshooting
Common Issues
"Database creation failed"
- Check subscription has available resources
- Verify region supports requested features
- Check module compatibility
"Cannot connect to database"
- Verify security group/firewall rules
- Check if database is active: status == 'active'
- Ensure correct endpoint and port
"Module not available"
- Some modules require specific Redis versions
- Check module compatibility in subscription settings
Related Commands
- Subscriptions - Manage parent subscriptions
- ACL - Configure access control
- Connectivity - Set up VPC peering
API Reference
These commands use the following REST endpoints:
- GET /v1/subscriptions/{subId}/databases - List databases
- GET /v1/subscriptions/{subId}/databases/{dbId} - Get database
- POST /v1/subscriptions/{subId}/databases - Create database
- PUT /v1/subscriptions/{subId}/databases/{dbId} - Update database
- DELETE /v1/subscriptions/{subId}/databases/{dbId} - Delete database
For direct API access: redisctl api cloud get /subscriptions/123456/databases
Network Connectivity
Configure VPC peering, Private Service Connect, and Transit Gateway connections for secure network access.
VPC Peering
List VPC Peerings
redisctl cloud connectivity list-vpc --subscription-id <ID>
Create VPC Peering
redisctl cloud connectivity create-vpc --subscription-id <ID> --data @vpc.json --wait
Example Payload:
{
"region": "us-east-1",
"awsAccountId": "123456789012",
"vpcId": "vpc-0abc123def456",
"vpcCidr": "10.0.0.0/16"
}
Delete VPC Peering
redisctl cloud connectivity delete-vpc --subscription-id <ID> --peering-id <PEERING_ID> --wait
Private Service Connect (GCP)
Create PSC
redisctl cloud connectivity create-psc --subscription-id <ID> --data @psc.json --wait
Transit Gateway (AWS)
Create Transit Gateway Attachment
redisctl cloud connectivity create-tgw --subscription-id <ID> --data @tgw.json --wait
Common Patterns
Verify Connectivity
# List all connections
redisctl cloud connectivity list-vpc --subscription-id 123456
# Check connection status
redisctl cloud connectivity get-vpc --subscription-id 123456 --peering-id abc123 \
-q "status"
Troubleshooting
- Ensure CIDR blocks don't overlap
- Verify cloud provider permissions
- Check security group rules allow Redis ports
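The CIDR-overlap check can be done locally before filing the peering request. A sketch using Python's standard ipaddress module; both ranges are examples:

```shell
# Verify the VPC CIDR does not overlap the Redis deployment CIDR
# before requesting peering (both ranges are examples).
python3 - <<'EOF'
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")         # your VPC
deployment = ipaddress.ip_network("10.1.0.0/24")  # Redis deployment

if vpc.overlaps(deployment):
    raise SystemExit("CIDRs overlap: pick a different range")
print("no overlap")
EOF
```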
Cloud Workflows
Workflows orchestrate multiple API calls to accomplish common tasks. This guide covers the available Cloud workflows.
Subscription Setup Workflow
The subscription-setup workflow creates a complete Redis Cloud subscription with optional database configuration.
Basic Usage
# Create a subscription with default settings
redisctl cloud workflow subscription-setup \
--name "my-subscription" \
--wait
# Create with custom configuration
redisctl cloud workflow subscription-setup \
--name "production" \
--provider AWS \
--region us-west-2 \
--database-name "main-db" \
--database-memory-gb 2 \
--wait
Options
- --name: Subscription name (default: redisctl-test)
- --provider: Cloud provider - AWS, GCP, or Azure (default: AWS)
- --region: Cloud region (default: us-east-1)
- --payment-method-id: Payment method ID (auto-detected if not specified)
- --database-name: Database name (default: default-db)
- --database-memory-gb: Database memory in GB (default: 1)
- --database-throughput: Operations per second (default: 1000)
- --modules: Comma-separated list of modules (e.g., "RedisJSON,RediSearch")
- --high-availability: Enable HA replication
- --data-persistence: Enable data persistence (default: true)
- --skip-database: Only create subscription without database
- --wait: Wait for operations to complete (default: true)
- --wait-timeout: Maximum wait time in seconds (default: 600)
- --wait-interval: Polling interval in seconds (default: 10)
- --dry-run: Preview what would be created without executing
What It Does
- Validates payment method: Looks up your account's payment method
- Creates subscription: Provisions infrastructure in the specified cloud/region
- Creates database: Sets up a Redis database with your configuration
- Waits for completion: Monitors async operations until resources are ready
- Returns connection details: Provides endpoints and credentials
Output Formats
# Human-readable output (default)
redisctl cloud workflow subscription-setup --name "test"
# JSON output for automation
redisctl cloud workflow subscription-setup --name "test" --output json
# YAML output
redisctl cloud workflow subscription-setup --name "test" --output yaml
Example JSON Output
{
"success": true,
"message": "Subscription setup completed successfully",
"outputs": {
"subscription_id": 12345,
"subscription_name": "test",
"database_id": 67890,
"database_name": "default-db",
"connection_string": "redis://redis-12345.c1.us-east-1.ec2.cloud.redislabs.com:12345",
"provider": "AWS",
"region": "us-east-1",
"status": "active"
}
}
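In automation, the JSON output is typically captured and the connection string pulled out for the next step. A sketch using python3 (jq works just as well); the response is inlined in a variable for illustration:

```shell
# Extract the connection string from the workflow's JSON output
# (captured here in a shell variable for illustration).
RESULT='{"success": true, "outputs": {"connection_string": "redis://redis-12345.c1.us-east-1.ec2.cloud.redislabs.com:12345"}}'

echo "$RESULT" | python3 -c '
import json, sys
print(json.load(sys.stdin)["outputs"]["connection_string"])
'
```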
Use Cases
Quick Development Environment
redisctl cloud workflow subscription-setup \
  --name "dev-env" \
  --database-memory-gb 0.1 \
  --wait
Production Setup with Modules
redisctl cloud workflow subscription-setup \
  --name "production" \
  --database-memory-gb 10 \
  --modules "RedisJSON,RediSearch,RedisTimeSeries" \
  --high-availability \
  --wait
Multi-Region Preparation (subscription only)
redisctl cloud workflow subscription-setup \
  --name "global-app" \
  --region eu-west-1 \
  --skip-database \
  --wait
Future Workflows
Additional workflows are planned:
- active-active-setup: Multi-region Active-Active configuration
- database-migration: Migrate databases between subscriptions
- acl-setup: Configure comprehensive ACL security
See issue tracker for workflow development status.
Raw API Access
Direct access to any Redis Cloud REST API endpoint.
Basic Usage
redisctl api cloud <method> <path> [options]
Methods: get, post, put, patch, delete
Examples
GET Requests
# Get account information
redisctl api cloud get /
# Get all subscriptions
redisctl api cloud get /subscriptions
# Get specific subscription
redisctl api cloud get /subscriptions/12345
# Get databases in subscription
redisctl api cloud get /subscriptions/12345/databases
# Get with query parameters
redisctl api cloud get "/subscriptions?limit=10&offset=20"
POST Requests
# Create database (with JSON file)
redisctl api cloud post /subscriptions/12345/databases \
--data @database.json
# Create database (with inline JSON)
redisctl api cloud post /subscriptions/12345/databases \
--data '{
"name": "my-database",
"memoryLimitInGb": 1,
"modules": [{"name": "RedisJSON"}, {"name": "RediSearch"}]
}'
# Create with data from stdin
echo '{"name": "test"}' | redisctl api cloud post /subscriptions/12345/databases \
--data @-
PUT Requests
# Update database
redisctl api cloud put /subscriptions/12345/databases/67890 \
--data '{"memoryLimitInGb": 2}'
DELETE Requests
# Delete database
redisctl api cloud delete /subscriptions/12345/databases/67890
Request Options
Headers
# Add custom headers
redisctl api cloud get / \
--header "X-Request-ID: abc123" \
--header "X-Custom: value"
Output Control
# Get raw response body only
redisctl api cloud get / --raw
# Include response headers
redisctl api cloud get / --include-headers
# Verbose output (shows request details)
redisctl api cloud get / --verbose
Working with Files
Request Body from File
# JSON file
redisctl api cloud post /subscriptions/12345/databases \
--data @create-database.json
# YAML file (converted to JSON)
redisctl api cloud post /subscriptions/12345/databases \
--data @create-database.yaml
Save Response to File
# Save response
redisctl api cloud get /subscriptions > subscriptions.json
# Pretty print and save
redisctl api cloud get /subscriptions | jq '.' > subscriptions.json
Common Endpoints
Account & Billing
- /account - Account information
- /payment-methods - Payment methods
- /cloud-accounts - Cloud provider accounts
Subscriptions
- /subscriptions - List subscriptions
- /subscriptions/{id} - Subscription details
- /subscriptions/{id}/databases - Databases in subscription
- /subscriptions/{id}/pricing - Pricing information
Databases
- /subscriptions/{sub}/databases - List databases
- /subscriptions/{sub}/databases/{db} - Database details
- /subscriptions/{sub}/databases/{db}/backup - Backup operations
- /subscriptions/{sub}/databases/{db}/import - Import data
Users & Access
- /users - User management
- /roles - Role definitions
- /subscriptions/{id}/redis-acl - ACL rules
Operations
- /tasks/{id} - Task status
- /logs - System logs
- /metrics - Performance metrics
Error Handling
API errors are returned with appropriate HTTP status codes:
# Check response code
redisctl api cloud get /invalid-endpoint
# Error: 404 Not Found
# Verbose mode shows full error
redisctl api cloud get /invalid-endpoint --verbose
# Shows full error response with details
Tips
- Use --data @- to read from stdin for piping
- Use -q with JMESPath to filter responses
- Use --raw to get just the response body for scripting
- Check the Redis Cloud API docs for endpoint details
Redis Enterprise Overview
Redis Enterprise is a self-managed database platform that can be deployed on-premises or in your cloud account. redisctl provides comprehensive access to the Redis Enterprise REST API.
Authentication
Redis Enterprise uses basic authentication:
# Set credentials
export REDIS_ENTERPRISE_URL="https://cluster.example.com:9443"
export REDIS_ENTERPRISE_USER="admin@cluster.local"
export REDIS_ENTERPRISE_PASSWORD="your-password"
export REDIS_ENTERPRISE_INSECURE="true" # For self-signed certificates
# Test connection
redisctl api enterprise get /v1/cluster
Command Structure
Redis Enterprise commands follow this pattern:
redisctl enterprise <resource> <action> [options]
Resources include:
- cluster - Cluster management
- database - Database operations
- node - Node management
- user - User management
- role - Role-based access control
- alert - Alert configuration
- workflow - Multi-step automated operations
Common Operations
# Get cluster information
redisctl enterprise cluster info
# List all databases
redisctl enterprise database list
# Get database details
redisctl enterprise database get 1
# List nodes
redisctl enterprise node list
# Initialize a new cluster (workflow)
redisctl enterprise workflow init-cluster \
--username "admin@cluster.local" \
--password "SecurePassword"
Next Steps
- Human-Friendly Commands - High-level command reference
- Workflows - Automated multi-step operations
- Raw API Access - Direct API endpoint access
- Examples - Real-world usage examples
Cluster
Manage Redis Enterprise cluster configuration and operations.
Commands
Get Cluster Info
Get current cluster configuration and status.
redisctl enterprise cluster info [OPTIONS]
Options:
- -o, --output <FORMAT> - Output format: json, yaml, or table
- -q, --query <JMESPATH> - JMESPath query to filter output
Examples:
# Get full cluster information
redisctl enterprise cluster info
# Get specific fields in table format
redisctl enterprise cluster info -o table
# Get cluster name and version
redisctl enterprise cluster info -q "{name: name, version: version}"
# Check cluster health
redisctl enterprise cluster info -q "alert_settings"
Update Cluster
Update cluster configuration.
redisctl enterprise cluster update --data <JSON> [OPTIONS]
Options:
- --data <JSON> - Configuration updates (inline or @file.json)
Examples:
# Update cluster name
redisctl enterprise cluster update --data '{"name": "production-cluster"}'
# Update alert settings
redisctl enterprise cluster update --data '{
"alert_settings": {
"cluster_certs_about_to_expire": {"enabled": true, "threshold": 30}
}
}'
# Update from file
redisctl enterprise cluster update --data @cluster-config.json
Get Cluster Policy
Get cluster-wide policies.
redisctl enterprise cluster get-policy [OPTIONS]
Examples:
# Get all policies
redisctl enterprise cluster get-policy
# Get specific policy in YAML
redisctl enterprise cluster get-policy -o yaml -q "rack_aware"
Update Cluster Policy
Update cluster policies.
redisctl enterprise cluster update-policy --data <JSON> [OPTIONS]
Examples:
# Enable rack awareness
redisctl enterprise cluster update-policy --data '{"rack_aware": true}'
# Update multiple policies
redisctl enterprise cluster update-policy --data '{
"rack_aware": true,
"default_non_sharded_proxy_policy": "all-master-shards"
}'
Certificate Management
List Certificates
List cluster certificates.
redisctl enterprise cluster list-certificates [OPTIONS]
Examples:
# List all certificates
redisctl enterprise cluster list-certificates
# Check certificate expiration
redisctl enterprise cluster list-certificates -q "[].{name: name, expires: expiry_date}"
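If the listing returns an expiry date rather than a day count, the remaining days can be computed locally. A sketch; the expiry date below is illustrative, and the field name in your cluster's response may differ:

```shell
# Days remaining until a certificate expiry date
# (the date below is illustrative).
python3 - <<'EOF'
from datetime import date

expiry = date(2026, 6, 30)
days_left = (expiry - date.today()).days
print(f"{days_left} days until expiry")
EOF
```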
Update Certificate
Update cluster certificate.
redisctl enterprise cluster update-certificate --data <JSON> [OPTIONS]
Example Payload:
{
"name": "api-cert",
"key": "-----BEGIN RSA PRIVATE KEY-----\n...",
"certificate": "-----BEGIN CERTIFICATE-----\n..."
}
Examples:
# Update API certificate
redisctl enterprise cluster update-certificate --data @new-cert.json
# Update proxy certificate
redisctl enterprise cluster update-certificate --data '{
"name": "proxy-cert",
"key": "...",
"certificate": "..."
}'
Rotate Certificates
Rotate cluster certificates.
redisctl enterprise cluster rotate-certificates [OPTIONS]
Examples:
# Rotate all certificates
redisctl enterprise cluster rotate-certificates
# Rotate with custom validity period
redisctl enterprise cluster rotate-certificates --days 365
Cluster Operations
Check Cluster Status
Get detailed cluster status.
redisctl enterprise cluster status [OPTIONS]
Examples:
# Full status check
redisctl enterprise cluster status
# Check specific components
redisctl enterprise cluster status -q "services"
Get Cluster Stats
Get cluster statistics.
redisctl enterprise cluster stats [OPTIONS]
Options:
- --interval <INTERVAL> - Stats interval (1sec, 1min, 5min, 15min, 1hour, 1day)
Examples:
# Get current stats
redisctl enterprise cluster stats
# Get hourly stats
redisctl enterprise cluster stats --interval 1hour
# Get memory usage
redisctl enterprise cluster stats -q "{used: used_memory, total: total_memory}"
License Management
Get License
redisctl enterprise cluster get-license
Update License
redisctl enterprise cluster update-license --data <JSON>
Example:
# Update license
redisctl enterprise cluster update-license --data '{
"license": "-----BEGIN LICENSE-----\n...\n-----END LICENSE-----"
}'
Module Management
List Modules
List available Redis modules.
redisctl enterprise module list
Upload Module
Upload a new module.
redisctl enterprise module upload --file <PATH>
Examples:
# Upload module
redisctl enterprise module upload --file redisgraph.zip
# Upload and get module ID
MODULE_ID=$(redisctl enterprise module upload --file module.zip -q "uid")
Common Patterns
Health Check Script
#!/bin/bash
# Check cluster health
STATUS=$(redisctl enterprise cluster info -q "status")
if [ "$STATUS" != "active" ]; then
echo "Cluster not healthy: $STATUS"
exit 1
fi
# Check certificate expiration
DAYS_LEFT=$(redisctl enterprise cluster list-certificates \
-q "[0].days_until_expiry")
if [ "$DAYS_LEFT" -lt 30 ]; then
echo "Certificate expiring soon: $DAYS_LEFT days"
fi
Monitor Cluster Resources
# Get resource utilization
redisctl enterprise cluster stats -q "{
cpu: cpu_usage_percent,
memory: memory_usage_percent,
disk: persistent_storage_usage_percent
}" | jq
Backup Cluster Configuration
# Export cluster config
redisctl enterprise cluster info > cluster-backup-$(date +%Y%m%d).json
# Export policies
redisctl enterprise cluster get-policy > policies-backup-$(date +%Y%m%d).json
Troubleshooting
Common Issues
"Cluster not responding"
- Check network connectivity to cluster endpoint
- Verify credentials are correct
- Check if API is enabled on cluster
"Certificate expired"
- Rotate certificates: redisctl enterprise cluster rotate-certificates
- Or update manually with new certificate
"License expired"
- Update license: redisctl enterprise cluster update-license --data @license.json
- Contact Redis support for new license
"Policy update failed"
- Some policies require cluster restart
- Check policy compatibility with cluster version
Related Commands
API Reference
These commands use the following REST endpoints:
- GET /v1/cluster - Get cluster info
- PUT /v1/cluster - Update cluster
- GET /v1/cluster/policy - Get policies
- PUT /v1/cluster/policy - Update policies
- GET /v1/cluster/certificates - List certificates
- PUT /v1/cluster/update_cert - Update certificate
- POST /v1/cluster/certificates/rotate - Rotate certificates
For direct API access: redisctl api enterprise get /v1/cluster
Bootstrap Commands
Initialize and manage Redis Enterprise cluster setup.
Overview
Bootstrap commands handle the initial setup and configuration of Redis Enterprise clusters, including node initialization, cluster creation, and joining existing clusters.
Available Commands
Get Bootstrap Status
redisctl enterprise bootstrap status
Returns the current bootstrap status and node information:
- Bootstrap state (not_started, in_progress, completed)
- Local node details (architecture, memory, storage paths)
- Available network addresses
- Supported database versions
Create New Cluster
redisctl enterprise bootstrap create-cluster --data '{
"cluster_name": "my-cluster",
"rack_aware": false,
"license": "...",
"nodes": [...]
}'
Initialize a new Redis Enterprise cluster with the specified configuration.
Join Existing Cluster
redisctl enterprise bootstrap join-cluster --data '{
"cluster_address": "192.168.1.100",
"username": "admin@redis.local",
"password": "password",
"replace_node": false
}'
Join this node to an existing Redis Enterprise cluster.
Validate Configuration
# Validate cluster creation config
redisctl enterprise bootstrap validate create_cluster --data '{...}'
# Validate join cluster config
redisctl enterprise bootstrap validate join_cluster --data '{...}'
Pre-flight validation of bootstrap configurations before execution.
Common Use Cases
Initial Cluster Setup
# 1. Check bootstrap status
redisctl enterprise bootstrap status
# 2. Validate configuration
redisctl enterprise bootstrap validate create_cluster --data @cluster-config.json
# 3. Create the cluster
redisctl enterprise bootstrap create-cluster --data @cluster-config.json
Adding Nodes to Cluster
# 1. On new node, check status
redisctl enterprise bootstrap status
# 2. Join the cluster (double quotes so ${REDIS_PASSWORD} expands)
redisctl enterprise bootstrap join-cluster --data "{
  \"cluster_address\": \"node1.redis.local\",
  \"username\": \"admin@redis.local\",
  \"password\": \"${REDIS_PASSWORD}\"
}"
Output Examples
Bootstrap Status
{
"bootstrap_status": {
"state": "completed",
"start_time": "2025-09-15T00:18:27Z",
"end_time": "2025-09-15T00:18:49Z"
},
"local_node_info": {
"uid": "1",
"architecture": "x86_64",
"total_memory": 8217473024,
"cores": 14,
"persistent_storage_path": "/var/opt/redislabs/persist",
"ephemeral_storage_path": "/var/opt/redislabs/tmp",
"os_version": "Red Hat Enterprise Linux 9.6"
}
}
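The start and end timestamps in the status output can be used to measure how long bootstrap took. A sketch using the sample timestamps from the output above:

```shell
# Bootstrap duration from the status timestamps
# (timestamps taken from the sample output).
python3 - <<'EOF'
from datetime import datetime

fmt = "%Y-%m-%dT%H:%M:%SZ"
start = datetime.strptime("2025-09-15T00:18:27Z", fmt)
end = datetime.strptime("2025-09-15T00:18:49Z", fmt)
print(f"bootstrap took {int((end - start).total_seconds())}s")
EOF
# prints: bootstrap took 22s
```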
Important Notes
- Bootstrap operations are typically one-time actions during initial cluster setup
- Most bootstrap operations require root or sudo privileges
- Always validate configurations before applying them
- Bootstrap operations cannot be undone - ensure backups exist
Related Commands
- Cluster Commands - Manage cluster after bootstrap
- Node Commands - Manage individual nodes
- Auth Commands - Configure authentication after bootstrap
Cluster Manager Settings
Cluster Manager (CM) settings control various cluster-wide behaviors and policies in Redis Enterprise. These settings affect how the cluster operates, manages resources, and handles various operations.
Overview
CM settings provide configuration for:
- Resource management policies
- Operational behaviors
- System defaults
- Performance tuning
- Security policies
- Maintenance settings
Warning: Modifying CM settings affects the entire cluster. Changes should be made carefully and tested in non-production environments first.
Available Commands
Get Settings
Retrieve current cluster manager settings:
# Get all settings
redisctl enterprise cm-settings get
# Get specific setting using JMESPath
redisctl enterprise cm-settings get --setting "timezone"
# Get nested settings
redisctl enterprise cm-settings get --setting "backup_job_settings.enabled"
# Output as YAML
redisctl enterprise cm-settings get -o yaml
Update Settings
Update cluster manager settings:
# Update from JSON file
redisctl enterprise cm-settings set --data @settings.json
# Update from stdin
echo '{"timezone": "America/New_York"}' | redisctl enterprise cm-settings set --data -
# Update with force (skip confirmation)
redisctl enterprise cm-settings set --data @settings.json --force
Update Specific Setting
Update a single setting value:
# Update timezone
redisctl enterprise cm-settings set-value timezone --value "Europe/London"
# Update nested setting
redisctl enterprise cm-settings set-value backup_job_settings.enabled --value true
# Update with force
redisctl enterprise cm-settings set-value timezone --value "UTC" --force
Reset Settings
Reset settings to cluster defaults:
# Reset all settings (with confirmation)
redisctl enterprise cm-settings reset
# Reset without confirmation
redisctl enterprise cm-settings reset --force
Export/Import Settings
Export and import settings for backup or migration:
# Export to file
redisctl enterprise cm-settings export --output settings-backup.json
# Export to stdout
redisctl enterprise cm-settings export --output -
# Import from file
redisctl enterprise cm-settings import --file @settings-backup.json
# Import from stdin
cat settings.json | redisctl enterprise cm-settings import --file -
Validate Settings
Validate settings file before importing:
# Validate settings file
redisctl enterprise cm-settings validate --file @settings.json
# Validate from stdin
echo '{"timezone": "UTC"}' | redisctl enterprise cm-settings validate --file -
List Categories
View available setting categories:
# List all categories
redisctl enterprise cm-settings list-categories
# Output as table
redisctl enterprise cm-settings list-categories -o table
Get Category Settings
Get all settings within a specific category:
# Get all backup-related settings
redisctl enterprise cm-settings get-category backup_job_settings
# Get specific field from category
redisctl enterprise cm-settings get-category backup_job_settings -q "cron_expression"
Common Settings
Time Zone Configuration
{
"timezone": "UTC"
}
Common timezone values:
- UTC - Coordinated Universal Time
- America/New_York - Eastern Time
- America/Los_Angeles - Pacific Time
- Europe/London - British Time
- Asia/Tokyo - Japan Time
Backup Job Settings
{
"backup_job_settings": {
"enabled": true,
"cron_expression": "0 2 * * *",
"retention_days": 7
}
}
Resource Management
{
"resource_management": {
"memory_reserve_percent": 15,
"cpu_reserve_percent": 10,
"max_databases_per_node": 100
}
}
Security Settings
{
"security": {
"password_complexity": "high",
"session_timeout_minutes": 30,
"max_login_attempts": 5,
"audit_logging": true
}
}
Examples
Backup Current Settings
#!/bin/bash
# Backup current settings with timestamp
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="cm_settings_backup_${TIMESTAMP}.json"
redisctl enterprise cm-settings export --output "$BACKUP_FILE"
echo "Settings backed up to: $BACKUP_FILE"
Configure for Production
# Production settings template
cat << EOF > production-settings.json
{
"timezone": "UTC",
"backup_job_settings": {
"enabled": true,
"cron_expression": "0 2 * * *",
"retention_days": 30
},
"security": {
"audit_logging": true,
"password_complexity": "high"
},
"resource_management": {
"memory_reserve_percent": 20
}
}
EOF
# Apply production settings
redisctl enterprise cm-settings import --file @production-settings.json
Compare Settings Between Clusters
#!/bin/bash
# Compare settings between two clusters
# Export from cluster 1
redisctl profile use cluster1
redisctl enterprise cm-settings export --output cluster1-settings.json
# Export from cluster 2
redisctl profile use cluster2
redisctl enterprise cm-settings export --output cluster2-settings.json
# Compare
diff -u cluster1-settings.json cluster2-settings.json
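Exports may serialize keys in a different order, which makes a raw diff noisy. Normalizing both files first keeps the diff to real differences; the sample files here are created inline for illustration, in place of the exports above:

```shell
# Two exports with identical settings but different key order
echo '{"timezone": "UTC", "rack_aware": true}' > cluster1-settings.json
echo '{"rack_aware": true, "timezone": "UTC"}' > cluster2-settings.json

# Normalize key order before diffing
python3 -m json.tool --sort-keys cluster1-settings.json > c1.json
python3 -m json.tool --sort-keys cluster2-settings.json > c2.json
diff -u c1.json c2.json && echo "settings match"
```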
Audit Settings Changes
#!/bin/bash
# Track settings changes over time
AUDIT_DIR="cm_settings_audit"
mkdir -p "$AUDIT_DIR"
# Get current settings
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
redisctl enterprise cm-settings get > "$AUDIT_DIR/settings_${TIMESTAMP}.json"
# Show changes from last audit
if [ -f "$AUDIT_DIR/settings_latest.json" ]; then
diff -u "$AUDIT_DIR/settings_latest.json" "$AUDIT_DIR/settings_${TIMESTAMP}.json"
fi
# Update latest link
ln -sf "settings_${TIMESTAMP}.json" "$AUDIT_DIR/settings_latest.json"
Safe Settings Update
#!/bin/bash
# Safely update settings with validation and backup
NEW_SETTINGS="$1"
if [ -z "$NEW_SETTINGS" ]; then
echo "Usage: $0 <settings-file>"
exit 1
fi
# Validate new settings
echo "Validating settings..."
if ! redisctl enterprise cm-settings validate --file "@$NEW_SETTINGS"; then
echo "Settings validation failed!"
exit 1
fi
# Backup current settings
echo "Backing up current settings..."
redisctl enterprise cm-settings export --output settings-backup-$(date +%s).json
# Apply new settings
echo "Applying new settings..."
redisctl enterprise cm-settings import --file "@$NEW_SETTINGS"
echo "Settings updated successfully"
Settings Migration
Export from Source Cluster
# Export all settings
redisctl enterprise cm-settings export --output source-settings.json
# Review exported settings
jq '.' source-settings.json
Import to Target Cluster
# Validate before import
redisctl enterprise cm-settings validate --file @source-settings.json
# Import settings
redisctl enterprise cm-settings import --file @source-settings.json --force
Best Practices
- Always backup before changes - Export current settings before modifications
- Test in non-production - Validate changes in test environments first
- Document changes - Keep records of what was changed and why
- Use version control - Store settings files in Git for tracking
- Validate before import - Always validate settings files before importing
- Monitor after changes - Watch cluster behavior after settings updates
Troubleshooting
Settings Not Applied
# Check if settings were saved
redisctl enterprise cm-settings get
# Verify specific setting
redisctl enterprise cm-settings get --setting "your.setting.path"
# Check cluster logs for errors
redisctl enterprise logs list --type error
Invalid Settings Format
# Validate JSON syntax
jq '.' settings.json
# Validate against schema
redisctl enterprise cm-settings validate --file @settings.json
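If jq is not installed, python3's built-in json.tool gives the same syntax check; the sample settings file is created inline here for illustration:

```shell
# Syntax-check a settings file without jq
echo '{"timezone": "UTC"}' > settings.json

# json.tool exits non-zero on malformed JSON
python3 -m json.tool settings.json > /dev/null && echo "valid JSON"
```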
Reset to Defaults
If settings cause issues:
# Reset all settings to defaults
redisctl enterprise cm-settings reset --force
# Restart cluster services if needed
redisctl enterprise cluster restart-services
Permission Denied
CM settings require admin privileges:
# Check user permissions
redisctl enterprise user whoami
# Ensure admin role
redisctl enterprise user get <user_id> -q "role"
Related Commands
- enterprise cluster - Cluster configuration and management
- enterprise job-scheduler - Job scheduling configuration
- enterprise diagnostics - Cluster diagnostics
- api enterprise - Direct API access for advanced operations
Database Groups
Database groups (BDB groups) allow you to organize and manage related databases as a single unit in Redis Enterprise. This is useful for grouping databases that belong to the same application, tenant, or environment.
Overview
Database groups provide a way to:
- Organize databases logically by application or purpose
- Apply bulk operations to related databases
- Simplify management of multi-database deployments
- Track and monitor groups of databases together
Available Commands
List Database Groups
List all database groups in the cluster:
# List all groups
redisctl enterprise bdb-group list
# List groups with specific fields
redisctl enterprise bdb-group list -q "[].{uid: uid, databases: bdbs}"
# Output as table
redisctl enterprise bdb-group list -o table
Get Group Details
Get detailed information about a specific database group:
# Get group by UID
redisctl enterprise bdb-group get <uid>
# Get specific fields
redisctl enterprise bdb-group get <uid> -q "bdbs"
Create Database Group
Create a new database group:
# Create from JSON file
redisctl enterprise bdb-group create --data @group.json
# Create from stdin
echo '{"name": "web-app-group"}' | redisctl enterprise bdb-group create --data -
# Create with initial databases
redisctl enterprise bdb-group create --data '{"name": "api-group", "bdbs": [1, 2, 3]}'
Update Database Group
Update an existing database group:
# Update from JSON file
redisctl enterprise bdb-group update <uid> --data @updates.json
# Update inline
redisctl enterprise bdb-group update <uid> --data '{"name": "new-name"}'
Delete Database Group
Delete a database group:
# Delete with confirmation
redisctl enterprise bdb-group delete <uid>
# Delete without confirmation
redisctl enterprise bdb-group delete <uid> --force
Manage Group Membership
Add or remove databases from a group:
# Add database to group
redisctl enterprise bdb-group add-database <group_uid> --database <bdb_uid>
# Remove database from group
redisctl enterprise bdb-group remove-database <group_uid> --database <bdb_uid>
# List databases in group
redisctl enterprise bdb-group list-databases <group_uid>
Group Configuration
Basic Group Structure
{
"uid": 1,
"name": "production-group",
"bdbs": [1, 2, 3, 4],
"description": "Production application databases"
}
Creating Groups
When creating a group, the UID is auto-assigned by the cluster:
{
"name": "staging-group",
"bdbs": [],
"description": "Staging environment databases"
}
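The create response includes the assigned UID, so it can be captured for immediate follow-up commands. A minimal sketch, using a simulated response in place of the live call (the `uid` field matches the group structure shown above):

```shell
# Simulated create response; in practice:
#   create_json=$(redisctl enterprise bdb-group create --data '{"name": "staging-group"}')
create_json='{"uid": 42, "name": "staging-group", "bdbs": []}'
# Capture the cluster-assigned UID for follow-up commands
GROUP_UID=$(echo "$create_json" | jq -r '.uid')
echo "Created group $GROUP_UID"
```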
Use Cases
Application Grouping
Group all databases for a specific application:
# Create application group
redisctl enterprise bdb-group create --data '{
"name": "ecommerce-app",
"description": "E-commerce platform databases"
}'
# Add databases to the group
redisctl enterprise bdb-group add-database 1 --database 10 # Session store
redisctl enterprise bdb-group add-database 1 --database 11 # Product cache
redisctl enterprise bdb-group add-database 1 --database 12 # Shopping cart
Environment Separation
Organize databases by environment:
# Create environment groups
redisctl enterprise bdb-group create --data '{"name": "dev-databases"}'
redisctl enterprise bdb-group create --data '{"name": "staging-databases"}'
redisctl enterprise bdb-group create --data '{"name": "production-databases"}'
# Add databases to appropriate groups
for db in 1 2 3; do
redisctl enterprise bdb-group add-database 1 --database $db # Dev
done
for db in 4 5 6; do
redisctl enterprise bdb-group add-database 2 --database $db # Staging
done
Multi-Tenant Organization
Group databases by tenant:
# Create tenant groups
redisctl enterprise bdb-group create --data '{
"name": "tenant-acme",
"description": "ACME Corp databases"
}'
redisctl enterprise bdb-group create --data '{
"name": "tenant-globex",
"description": "Globex Inc databases"
}'
Practical Examples
Bulk Operations Script
Perform operations on all databases in a group:
#!/bin/bash
# Get all databases in a group
GROUP_UID=1
DATABASES=$(redisctl enterprise bdb-group get $GROUP_UID -q "bdbs[]")
# Perform operation on each database
for db in $DATABASES; do
echo "Processing database $db..."
redisctl enterprise database get $db
done
Group Health Check
Monitor all databases in a group:
# Get group databases
GROUP_UID=1
redisctl enterprise bdb-group list-databases $GROUP_UID | \
jq -r '.[]' | \
while read db_uid; do
echo "Checking database $db_uid..."
redisctl enterprise database get $db_uid -q "{uid: uid, status: status}"
done
Migration Helper
Move databases between groups:
# Move database from one group to another
move_database() {
local db_uid=$1
local from_group=$2
local to_group=$3
# Remove from old group
redisctl enterprise bdb-group remove-database $from_group --database $db_uid
# Add to new group
redisctl enterprise bdb-group add-database $to_group --database $db_uid
echo "Moved database $db_uid from group $from_group to $to_group"
}
# Usage
move_database 5 1 2
Group Report
Generate a report of all groups and their databases:
# Generate group report
redisctl enterprise bdb-group list -q '[]' | jq -r '.[] | .uid' | \
while read group_uid; do
group_info=$(redisctl enterprise bdb-group get $group_uid)
name=$(echo "$group_info" | jq -r '.name // "unnamed"')
db_count=$(echo "$group_info" | jq '.bdbs | length')
echo "Group $group_uid: $name ($db_count databases)"
echo "$group_info" | jq -r '.bdbs[]' | \
while read db_uid; do
db_name=$(redisctl enterprise database get $db_uid -q "name")
echo " - Database $db_uid: $db_name"
done
echo
done
Best Practices
- Logical Organization - Group databases by application, environment, or purpose
- Naming Conventions - Use clear, descriptive names for groups
- Documentation - Include descriptions to explain group purpose
- Regular Review - Periodically review group membership
- Avoid Overlaps - Each database should typically belong to one logical group
- Use for Bulk Operations - Leverage groups for maintenance and monitoring
Limitations
- Database groups are a logical organization feature
- They don't affect database performance or configuration
- Groups don't enforce any policies on member databases
- A database can belong to multiple groups
- Deleting a group doesn't delete the databases
Troubleshooting
Group Creation Fails
# Check cluster status
redisctl enterprise cluster get -q "cluster_state"
# Verify required fields
redisctl api enterprise get /v1/jsonschema | jq '.bdb_group'
Database Not Added to Group
# Verify database exists
redisctl enterprise database get <bdb_uid>
# Check current group membership
redisctl enterprise bdb-group get <group_uid> -q "bdbs"
Group Operations Slow
# Check number of databases in group
redisctl enterprise bdb-group get <group_uid> -q "length(bdbs)"
# Consider splitting large groups
Related Commands
- enterprise database - Individual database management
- enterprise cluster - Cluster-wide operations
- enterprise stats - Statistics for grouped databases
Shard Management
Shards are the fundamental units of data storage and processing in Redis Enterprise. Each database is composed of one or more shards distributed across cluster nodes. The shard commands provide tools for monitoring, managing, and optimizing shard placement and performance.
Overview
Shards in Redis Enterprise:
- Primary building blocks of databases
- Distributed across nodes for high availability
- Replicated for data redundancy
- Can be migrated between nodes for load balancing
- Support failover for high availability
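Role distribution across nodes can be checked locally from the list output. A sketch, with sample data standing in for `redisctl enterprise shard list -o json` (field names follow the shard structure shown later in this section):

```shell
# Count master shards per node from a shard list
shards='[{"uid":1,"node":1,"role":"master"},
         {"uid":2,"node":2,"role":"slave"},
         {"uid":3,"node":2,"role":"master"}]'
echo "$shards" | jq -r 'group_by(.node)[]
  | "node \(.[0].node): \(map(select(.role == "master")) | length) master(s)"'
```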
Available Commands
List Shards
List all shards in the cluster with optional filtering:
# List all shards
redisctl enterprise shard list
# Filter by node
redisctl enterprise shard list --node 1
# Filter by database
redisctl enterprise shard list --database 1
# Filter by role (master/slave)
redisctl enterprise shard list --role master
# Combine filters
redisctl enterprise shard list --node 1 --role slave
# Output as table
redisctl enterprise shard list -o table
Get Shard Details
Get detailed information about a specific shard:
# Get shard details
redisctl enterprise shard get <shard_uid>
# Get specific fields
redisctl enterprise shard get <shard_uid> -q "role"
redisctl enterprise shard get <shard_uid> -q "{uid: uid, node: node, role: role, status: status}"
List Database Shards
List all shards for a specific database:
# List shards for database
redisctl enterprise shard list-by-database <bdb_uid>
# Get shard distribution
redisctl enterprise shard list-by-database <bdb_uid> -q "[].{shard: uid, node: node, role: role}"
Shard Failover
Perform manual failover of a shard to its replica:
# Failover with confirmation
redisctl enterprise shard failover <shard_uid>
# Failover without confirmation
redisctl enterprise shard failover <shard_uid> --force
Shard Migration
Migrate a shard to a different node:
# Migrate shard to target node
redisctl enterprise shard migrate <shard_uid> --target-node <node_uid>
# Migrate without confirmation
redisctl enterprise shard migrate <shard_uid> --target-node <node_uid> --force
Bulk Operations
Perform failover or migration on multiple shards:
# Bulk failover from JSON file
redisctl enterprise shard bulk-failover --data @failover-list.json
# Bulk migration from stdin
echo '{"shards": [{"uid": 1, "target_node": 2}]}' | \
redisctl enterprise shard bulk-migrate --data -
# Force bulk operations
redisctl enterprise shard bulk-failover --data @failover.json --force
Shard Statistics
Get performance statistics for shards:
# Get stats for specific shard
redisctl enterprise shard stats <shard_uid>
# Get stats for all shards
redisctl enterprise shard stats
# Specify time interval
redisctl enterprise shard stats --interval 1hour
redisctl enterprise shard stats --interval 1day
# Specify time range
redisctl enterprise shard stats \
--stime "2024-01-01T00:00:00Z" \
--etime "2024-01-02T00:00:00Z"
# Get latest stats
redisctl enterprise shard stats-last
# Get latest stats for specific shard
redisctl enterprise shard stats-last <shard_uid> --interval 1sec
Shard Health & Configuration
Check shard health and configuration:
# Check shard health
redisctl enterprise shard health <shard_uid>
# Get shard configuration
redisctl enterprise shard config <shard_uid>
Shard Structure
A typical shard object contains:
{
"uid": 1,
"bdb_uid": 1,
"node": 1,
"role": "master",
"status": "active",
"loading": false,
"memory_usage": 1048576,
"cpu_usage": 0.5,
"connections": 10,
"shard_key_regex": ".*",
"backup": true,
"replication": {
"status": "in-sync",
"lag": 0
},
"persistence": {
"type": "aof",
"status": "active"
}
}
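Given this structure, replica sync state can be filtered locally without extra API calls. A sketch, with sample data standing in for `redisctl enterprise shard list -o json`:

```shell
# Flag replica shards that are not in-sync
shards='[{"uid":1,"role":"slave","replication":{"status":"in-sync","lag":0}},
         {"uid":2,"role":"slave","replication":{"status":"syncing","lag":250}}]'
echo "$shards" | jq -r '.[]
  | select(.role == "slave" and .replication.status != "in-sync")
  | "shard \(.uid): \(.replication.status), lag \(.replication.lag)"'
```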
Use Cases
Load Balancing
Redistribute shards across nodes for better resource utilization:
#!/bin/bash
# Balance shards across nodes
# Get shard distribution
for node in 1 2 3; do
COUNT=$(redisctl enterprise shard list --node $node -q "length(@)")
echo "Node $node: $COUNT shards"
done
# Migrate shards from overloaded node
redisctl enterprise shard list --node 1 --role master -q "[].uid" | \
head -2 | while read shard; do
echo "Migrating shard $shard to node 2"
redisctl enterprise shard migrate $shard --target-node 2
done
Failover Management
Handle node maintenance with controlled failovers:
#!/bin/bash
# Failover all master shards on a node before maintenance
NODE_ID=1
# Get all master shards on the node
SHARDS=$(redisctl enterprise shard list --node $NODE_ID --role master -q "[].uid")
# Failover each shard
for shard in $SHARDS; do
echo "Failing over shard $shard"
redisctl enterprise shard failover $shard --force
sleep 5
done
echo "All master shards failed over from node $NODE_ID"
Performance Monitoring
Monitor shard performance metrics:
#!/bin/bash
# Monitor shard performance
# Get top memory-consuming shards
redisctl enterprise shard list -q "[] | sort_by(@, &memory_usage) | reverse(@) | [:5]"
# Check for lagging replicas
redisctl enterprise shard list --role slave -q \
"[?replication.lag > \`100\`].{shard: uid, lag: replication.lag, node: node}"
# Monitor shard connections
while true; do
clear
echo "=== Shard Connection Count ==="
redisctl enterprise shard list -q \
"[].{shard: uid, connections: connections}" -o table
sleep 10
done
Shard Health Check
Comprehensive health check script:
#!/bin/bash
# Check shard health across cluster
echo "=== Shard Health Report ==="
# Check for inactive shards
INACTIVE=$(redisctl enterprise shard list -q "[?status != 'active'].uid")
if [ -n "$INACTIVE" ]; then
echo "WARNING: Inactive shards found: $INACTIVE"
fi
# Check for loading shards
LOADING=$(redisctl enterprise shard list -q "[?loading == \`true\`].uid")
if [ -n "$LOADING" ]; then
echo "INFO: Shards currently loading: $LOADING"
fi
# Check replication lag
HIGH_LAG=$(redisctl enterprise shard list --role slave -q \
"[?replication.lag > \`1000\`].uid")
if [ -n "$HIGH_LAG" ]; then
echo "WARNING: High replication lag on shards: $HIGH_LAG"
fi
# Check memory usage
for shard in $(redisctl enterprise shard list -q "[].uid"); do
MEMORY=$(redisctl enterprise shard get $shard -q "memory_usage")
if [ "$MEMORY" -gt 1073741824 ]; then # 1GB
echo "INFO: Shard $shard using $(($MEMORY / 1048576))MB"
fi
done
Bulk Operation Examples
Bulk Failover Configuration
{
"shards": [1, 2, 3, 4]
}
Bulk Migration Configuration
{
"migrations": [
{
"shard_uid": 1,
"target_node": 2
},
{
"shard_uid": 3,
"target_node": 3
}
]
}
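The migration payload can be generated rather than hand-written, for example when draining a node. A sketch, with sample data standing in for `redisctl enterprise shard list --node 1 -o json`; pipe the result to `redisctl enterprise shard bulk-migrate --data -`:

```shell
# Build a bulk-migration payload moving every listed shard to node 2
shards='[{"uid": 1, "node": 1}, {"uid": 3, "node": 1}]'
echo "$shards" | jq '{migrations: [.[] | {shard_uid: .uid, target_node: 2}]}'
```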
Best Practices
- Monitor shard distribution - Ensure even distribution across nodes
- Check replication lag - High lag indicates performance issues
- Plan migrations carefully - Migrations consume resources
- Use controlled failovers - For planned maintenance
- Monitor memory usage - Prevent out-of-memory situations
- Regular health checks - Detect issues early
Troubleshooting
Shard Not Responding
# Check shard status
redisctl enterprise shard get <shard_uid> -q "status"
# Check node status
NODE=$(redisctl enterprise shard get <shard_uid> -q "node")
redisctl enterprise node get $NODE -q "status"
# Force failover if needed
redisctl enterprise shard failover <shard_uid> --force
Migration Stuck
# Check migration status
redisctl enterprise action list --type shard_migration --status running
# Cancel if needed
redisctl enterprise action cancel <action_uid>
# Retry migration
redisctl enterprise shard migrate <shard_uid> --target-node <node_uid>
High Memory Usage
# Identify high-memory shards
redisctl enterprise shard list -q \
"[] | sort_by(@, &memory_usage) | reverse(@) | [:10]"
# Check database configuration
BDB=$(redisctl enterprise shard get <shard_uid> -q "bdb_uid")
redisctl enterprise database get $BDB -q "memory_size"
# Consider adding shards to database
redisctl enterprise database update $BDB --data '{"shards_count": 4}'
Replication Issues
# Check replication status
redisctl enterprise shard list --role slave -q \
"[].{shard: uid, status: replication.status, lag: replication.lag}"
# Force re-sync if needed
redisctl enterprise shard get <shard_uid> -q "replication"
Integration with Other Commands
Shard commands work with:
# Get database shard count
redisctl enterprise database get 1 -q "shards_count"
# Check node shard capacity
redisctl enterprise node get 1 -q "max_shards"
# Monitor shard-related actions
redisctl enterprise action list --type shard_migration
Performance Considerations
- Migration impact: Shard migrations consume network and CPU resources
- Failover time: Typically completes in seconds but depends on data size
- Replication overhead: More replicas mean more network traffic
- Memory overhead: Each shard has memory overhead for metadata
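Because migrations consume resources, it is safer to run them one at a time and gate each on the previous action reaching a terminal state. A sketch of that gate, assuming `action get` returns JSON with a `status` field (a simulated payload is used here):

```shell
# Return success only when an action has reached a terminal status
action_done() {
  status=$(echo "$1" | jq -r '.status')
  [ "$status" = "completed" ] || [ "$status" = "failed" ]
}
# Simulated payload; in practice:
#   action_json=$(redisctl enterprise action get "$ACTION_UID")
action_json='{"uid": "a1", "status": "completed"}'
if action_done "$action_json"; then
  echo "safe to start next migration"
fi
```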
Related Commands
- enterprise database - Database configuration affects shards
- enterprise node - Node capacity and shard placement
- enterprise action - Monitor shard operations
- enterprise stats - Detailed performance metrics
Proxy Management
The proxy commands provide management and monitoring capabilities for Redis Enterprise proxy processes. Proxies handle connection management, load balancing, and request routing between clients and Redis database shards.
Available Commands
List Proxies
List all proxy processes in the cluster:
# List all proxies
redisctl enterprise proxy list
# List proxies as a table
redisctl enterprise proxy list -o table
# Filter to active proxies
redisctl enterprise proxy list -q "[?status == 'active']"
# List proxies by node
redisctl enterprise proxy list -q "[?node_uid == \`1\`]"
# Get proxy IDs and their nodes
redisctl enterprise proxy list -q "[].{id: uid, node: node_uid, status: status}"
Get Proxy Details
Get detailed information about a specific proxy:
# Get proxy details
redisctl enterprise proxy get 1
# Get proxy in YAML format
redisctl enterprise proxy get 1 -o yaml
# Extract specific fields
redisctl enterprise proxy get 1 -q '{uid: uid, port: port, threads: threads}'
# Check proxy load
redisctl enterprise proxy get 1 -q '{connections: connections, cpu: cpu_usage}'
Update Proxy Configuration
Update configuration for a specific proxy:
# Update proxy configuration from file
redisctl enterprise proxy update 1 --data @proxy-config.json
# Update proxy with inline JSON
redisctl enterprise proxy update 1 --data '{"threads": 8, "max_connections": 10000}'
# Update proxy from stdin
echo '{"threads": 4}' | redisctl enterprise proxy update 1 --data -
# Update with specific settings
cat <<EOF | redisctl enterprise proxy update 1 --data -
{
"threads": 8,
"max_connections": 10000,
"tcp_keepalive": 60,
"tcp_backlog": 512
}
EOF
Update All Proxies
Update configuration for all proxies simultaneously:
# Update all proxies from file
redisctl enterprise proxy update-all --data @global-proxy-config.json
# Update all proxies with inline configuration
redisctl enterprise proxy update-all --data '{"threads": 8}'
# Apply global proxy policy
cat <<EOF | redisctl enterprise proxy update-all --data -
{
"threads": 8,
"max_connections": 10000,
"tcp_keepalive": 60,
"timeout": 300
}
EOF
Output Examples
Proxy List
[
{
"uid": 1,
"node_uid": 1,
"port": 8080,
"status": "active",
"threads": 4,
"connections": 245,
"cpu_usage": 12.5,
"memory_usage": 128,
"databases": [1, 2, 3]
},
{
"uid": 2,
"node_uid": 2,
"port": 8080,
"status": "active",
"threads": 4,
"connections": 189,
"cpu_usage": 10.2,
"memory_usage": 115,
"databases": [1, 2, 3]
}
]
Proxy Details
{
"uid": 1,
"node_uid": 1,
"port": 8080,
"status": "active",
"threads": 4,
"max_connections": 10000,
"current_connections": 245,
"total_connections": 1234567,
"cpu_usage": 12.5,
"memory_usage": 128,
"tcp_keepalive": 60,
"tcp_backlog": 512,
"timeout": 300,
"databases": [
{
"bdb_uid": 1,
"name": "cache-db",
"connections": 89
},
{
"bdb_uid": 2,
"name": "session-db",
"connections": 67
}
],
"stats": {
"requests_per_sec": 5432,
"operations_per_sec": 8901,
"latency_avg": 0.8,
"errors_per_sec": 0.1
}
}
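Connection utilization can be derived directly from this object. A sketch, with sample data standing in for `redisctl enterprise proxy get 1 -o json` (the percentage is floored for readability):

```shell
# Compute connection utilization from a proxy details object
proxy='{"uid": 1, "max_connections": 10000, "current_connections": 245}'
echo "$proxy" | jq -r \
  '"proxy \(.uid): \(.current_connections * 100 / .max_connections | floor)% of connection limit"'
```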
Common Use Cases
Monitoring Proxy Load
Monitor proxy load and performance:
# Check proxy connections across cluster
redisctl enterprise proxy list -q "[].{proxy: uid, node: node_uid, connections: connections}" -o table
# Find overloaded proxies
redisctl enterprise proxy list -q "[?connections > \`1000\`]"
# Monitor CPU usage
redisctl enterprise proxy list -q "[?cpu_usage > \`50\`].{proxy: uid, cpu: cpu_usage}"
# Check memory usage
redisctl enterprise proxy list -q "[].{proxy: uid, memory_mb: memory_usage}" -o table
Performance Tuning
Optimize proxy performance:
# Increase threads for high-load proxies
for proxy in $(redisctl enterprise proxy list -q "[?cpu_usage > \`75\`].uid" -o json | jq -r '.[]'); do
echo "Updating proxy $proxy"
redisctl enterprise proxy update "$proxy" --data '{"threads": 8}'
done
# Update connection limits
redisctl enterprise proxy update-all --data '{"max_connections": 20000}'
# Apply optimized settings
cat <<EOF | redisctl enterprise proxy update-all --data -
{
"threads": 8,
"max_connections": 15000,
"tcp_keepalive": 30,
"tcp_backlog": 1024,
"timeout": 600
}
EOF
Troubleshooting
Diagnose proxy issues:
# Find proxies with errors
redisctl enterprise proxy list -q "[?status != 'active']"
# Check proxy distribution
redisctl enterprise proxy list -q "[].node_uid" | jq 'group_by(.) | map({node: .[0], count: length})'
# Monitor connection distribution
for proxy in 1 2 3; do
echo "Proxy $proxy:"
redisctl enterprise proxy get "$proxy" -q 'databases[].{db: name, connections: connections}' -o table
done
# Check proxy resource usage
redisctl enterprise proxy list -q "[].{proxy: uid, cpu: cpu_usage, memory: memory_usage, connections: connections}" -o table
Capacity Planning
Plan proxy capacity:
# Calculate total connections
redisctl enterprise proxy list -q "[].connections" | jq -s 'add'
# Get average connections per proxy
redisctl enterprise proxy list -q "[].connections" | jq -s 'add/length'
# Find proxies near connection limit
redisctl enterprise proxy list -q "[?connections > max_connections * \`0.8\`].{proxy: uid, usage_pct: (connections / max_connections * \`100\`)}"
# Resource utilization summary
redisctl enterprise proxy list -q "{total_proxies: length(@), avg_cpu: avg([].cpu_usage), avg_memory: avg([].memory_usage), total_connections: sum([].connections)}"
Configuration Examples
Basic Proxy Configuration
{
"threads": 4,
"max_connections": 10000,
"timeout": 300
}
High-Performance Configuration
{
"threads": 16,
"max_connections": 50000,
"tcp_keepalive": 30,
"tcp_backlog": 2048,
"timeout": 600,
"tcp_nodelay": true
}
Resource-Constrained Configuration
{
"threads": 2,
"max_connections": 5000,
"tcp_keepalive": 120,
"tcp_backlog": 256,
"timeout": 120
}
Best Practices
- Load Distribution: Ensure proxies are evenly distributed across nodes
- Thread Tuning: Set threads based on CPU cores and expected load
- Connection Limits: Set appropriate connection limits based on available resources
- Monitoring: Regularly monitor proxy metrics for performance issues
- Gradual Changes: Test configuration changes on individual proxies before applying globally
- Resource Planning: Plan proxy resources based on expected client connections
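The gradual-change practice can be scripted as a simple canary gate: apply the new settings to one proxy, check its health, and only then roll out cluster-wide. A sketch; the 80% CPU threshold and the 60-second settle time are arbitrary placeholders:

```shell
# Canary rollout for a proxy configuration change
CANARY=1
NEW_CONF='{"threads": 8}'
redisctl enterprise proxy update "$CANARY" --data "$NEW_CONF"
sleep 60  # let the canary settle under load
CPU=$(redisctl enterprise proxy get "$CANARY" -q "cpu_usage")
# Only roll out cluster-wide if the canary stayed healthy
awk -v cpu="$CPU" 'BEGIN { exit (cpu < 80) ? 0 : 1 }' && \
  redisctl enterprise proxy update-all --data "$NEW_CONF"
```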
Integration with Monitoring
Export proxy metrics for monitoring systems:
# Export metrics to monitoring system
redisctl enterprise proxy list -o json | \
jq '.[] | {
timestamp: now,
proxy_id: .uid,
node_id: .node_uid,
connections: .connections,
cpu_usage: .cpu_usage,
memory_usage: .memory_usage
}' | \
curl -X POST http://metrics-collector/ingest -d @-
# Create Prometheus-compatible metrics
redisctl enterprise proxy list -q "[].{proxy: uid, metric: @}" | \
jq -r '.[] | "
redis_proxy_connections{proxy=\"\(.proxy)\"} \(.metric.connections)
redis_proxy_cpu_usage{proxy=\"\(.proxy)\"} \(.metric.cpu_usage)
redis_proxy_memory_mb{proxy=\"\(.proxy)\"} \(.metric.memory_usage)
"'
Troubleshooting
High CPU Usage
When proxies show high CPU usage:
# Identify high-CPU proxies
redisctl enterprise proxy list -q "[?cpu_usage > \`80\`]"
# Check thread configuration
redisctl enterprise proxy get <uid> -q 'threads'
# Increase threads
redisctl enterprise proxy update <uid> --data '{"threads": 8}'
# Monitor after change
watch -n 5 "redisctl enterprise proxy get <uid> -q 'cpu_usage'"
Connection Issues
When experiencing connection problems:
# Check connection limits
redisctl enterprise proxy list -q "[].{proxy: uid, current: connections, max: max_connections, pct: (connections / max_connections * \`100\`)}"
# Find proxies at capacity
redisctl enterprise proxy list -q "[?connections >= max_connections * \`0.95\`]"
# Increase connection limits
redisctl enterprise proxy update <uid> --data '{"max_connections": 20000}'
Related Commands
- redisctl enterprise node - View nodes hosting proxies
- redisctl enterprise database - Manage databases served by proxies
- redisctl enterprise stats - View detailed statistics including proxy metrics
- redisctl enterprise cluster - View cluster-wide proxy configuration
Usage Report
The usage report commands provide access to Redis Enterprise cluster usage data, which is useful for license compliance, capacity planning, and resource utilization analysis.
Available Commands
Get Usage Report
Retrieve the current usage report for the cluster:
# Get full usage report
redisctl enterprise usage-report get
# Get usage report as YAML
redisctl enterprise usage-report get -o yaml
# Extract specific metrics
redisctl enterprise usage-report get -q '{cluster: cluster_name, databases: total_databases, memory_gb: total_memory_gb}'
# Get database-specific usage
redisctl enterprise usage-report get -q 'databases[].{name: name, memory: memory_mb, shards: shard_count}'
Export Usage Report
Export usage report to a file for reporting or analysis:
# Export to JSON file
redisctl enterprise usage-report export --output usage-report.json
# Export to CSV for spreadsheet analysis
redisctl enterprise usage-report export --output usage-report.csv --format csv
# Export with specific date in filename
redisctl enterprise usage-report export -o "usage-$(date +%Y%m%d).json"
# Export filtered data
redisctl enterprise usage-report export -o databases.json -q 'databases'
Output Examples
Usage Report Structure
{
"cluster_name": "production-cluster",
"cluster_uid": "cluster-12345",
"report_date": "2024-03-15T10:00:00Z",
"license": {
"type": "enterprise",
"expiry": "2025-01-01T00:00:00Z",
"shards_limit": 100,
"memory_limit_gb": 1024
},
"usage": {
"total_databases": 25,
"total_shards": 75,
"total_memory_gb": 512,
"total_nodes": 5,
"total_cpus": 40
},
"databases": [
{
"uid": 1,
"name": "cache-db",
"memory_mb": 8192,
"shard_count": 4,
"replication": true,
"persistence": "aof",
"modules": ["search", "json"]
},
{
"uid": 2,
"name": "session-store",
"memory_mb": 4096,
"shard_count": 2,
"replication": false,
"persistence": "none",
"modules": []
}
],
"nodes": [
{
"uid": 1,
"address": "node1.cluster.local",
"cpus": 8,
"memory_gb": 128,
"databases": 5,
"shards": 15
}
]
}
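Headroom against license limits can be computed from this structure. A sketch, with sample data standing in for `redisctl enterprise usage-report get -o json`:

```shell
# Summarize usage against licensed limits
report='{"license": {"shards_limit": 100, "memory_limit_gb": 1024},
         "usage": {"total_shards": 75, "total_memory_gb": 512}}'
echo "$report" | jq -r \
  '"shards: \(.usage.total_shards)/\(.license.shards_limit), memory: \(.usage.total_memory_gb)/\(.license.memory_limit_gb) GB"'
```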
Common Use Cases
License Compliance
Monitor usage against license limits:
# Check current usage vs limits
redisctl enterprise usage-report get -q '{
shards_used: usage.total_shards,
shards_limit: license.shards_limit,
shards_available: license.shards_limit - usage.total_shards,
memory_used_gb: usage.total_memory_gb,
memory_limit_gb: license.memory_limit_gb,
memory_available_gb: license.memory_limit_gb - usage.total_memory_gb
}'
# Check license expiry
redisctl enterprise usage-report get -q 'license.expiry'
# Alert if approaching limits
usage=$(redisctl enterprise usage-report get -q '{
shard_pct: (usage.total_shards / license.shards_limit * 100),
memory_pct: (usage.total_memory_gb / license.memory_limit_gb * 100)
}')
Capacity Planning
Analyze resource utilization for capacity planning:
# Get growth metrics
redisctl enterprise usage-report export -o usage-$(date +%Y%m).json
# Database memory distribution
redisctl enterprise usage-report get -q 'databases | sort_by(@, &memory_mb) | reverse(@)[:10]' -o table
# Shards per database
redisctl enterprise usage-report get -q 'databases[].{name: name, shards: shard_count}' -o table
# Node utilization
redisctl enterprise usage-report get -q 'nodes[].{node: address, memory_gb: memory_gb, databases: databases, shards: shards}' -o table
Module Usage Analysis
Track module adoption and usage:
# List databases with modules
redisctl enterprise usage-report get -q 'databases[?length(modules) > `0`].{name: name, modules: modules}'
# Count module usage
redisctl enterprise usage-report get -q 'databases[].modules[]' | jq 'group_by(.) | map({module: .[0], count: length})'
# Find databases with specific module
redisctl enterprise usage-report get -q "databases[?contains(modules, 'search')].name"
Regular Reporting
Create automated usage reports:
#!/bin/bash
# Monthly usage report script
REPORT_DIR="/var/reports/redis"
DATE=$(date +%Y%m%d)
MONTH=$(date +%B-%Y)
# Create report directory
mkdir -p "$REPORT_DIR"
# Export full report
redisctl enterprise usage-report export -o "$REPORT_DIR/usage-$DATE.json"
# Create summary CSV
redisctl enterprise usage-report get -q '{
date: report_date,
databases: usage.total_databases,
shards: usage.total_shards,
memory_gb: usage.total_memory_gb,
nodes: usage.total_nodes
}' | jq -r '[.date, .databases, .shards, .memory_gb, .nodes] | @csv' >> "$REPORT_DIR/usage-summary.csv"
# Email report
echo "Redis Enterprise Usage Report for $MONTH" | \
mail -s "Redis Usage Report - $MONTH" \
-a "$REPORT_DIR/usage-$DATE.json" \
ops-team@company.com
Chargeback/Showback
Generate department or team usage reports:
# Assuming database names include team identifiers
# e.g., "team-a-cache", "team-b-sessions"
# Group databases by team
for team in team-a team-b team-c; do
echo "Usage for $team:"
redisctl enterprise usage-report get \
-q "databases[?contains(name, '$team')].{name: name, memory_mb: memory_mb, shards: shard_count}" \
-o table
done
# Calculate team memory usage (team = name prefix before the first "-")
redisctl enterprise usage-report get -q 'databases[].{name: name, memory_mb: memory_mb}' | \
jq 'group_by(.name | split("-")[0]) | map({
  team: (.[0].name | split("-")[0]),
  total_memory_mb: map(.memory_mb) | add,
  database_count: length
})'
Export Formats
JSON Export
Full structured data for programmatic processing:
# Export and process with jq
redisctl enterprise usage-report export -o report.json
cat report.json | jq '.databases | length'
# Export and upload to S3
redisctl enterprise usage-report export -o /tmp/usage.json
aws s3 cp /tmp/usage.json s3://bucket/redis-reports/$(date +%Y/%m)/usage.json
CSV Export
Tabular format for spreadsheet analysis:
# Export to CSV
redisctl enterprise usage-report export -o report.csv -f csv
# Export specific data as CSV
redisctl enterprise usage-report get -q 'databases' | \
jq -r '["name","memory_mb","shards"], (.[] | [.name, .memory_mb, .shard_count]) | @csv' > databases.csv
# Import to Google Sheets
redisctl enterprise usage-report export -o /tmp/usage.csv -f csv
gcloud auth login
gdrive upload /tmp/usage.csv
Integration Examples
Monitoring Systems
Send usage metrics to monitoring systems:
# Prometheus metrics format
redisctl enterprise usage-report get -o json | jq -r '
"redis_cluster_databases \(.usage.total_databases)",
"redis_cluster_shards \(.usage.total_shards)",
"redis_cluster_memory_gb \(.usage.total_memory_gb)",
"redis_cluster_nodes \(.usage.total_nodes)",
"redis_license_shards_limit \(.license.shards_limit)",
"redis_license_memory_limit_gb \(.license.memory_limit_gb)"
' | curl -X POST http://pushgateway:9091/metrics/job/redis-usage --data-binary @-
# Datadog metrics
redisctl enterprise usage-report get -o json | \
python -c "
import json, sys
from datadog import initialize, api
data = json.load(sys.stdin)
api.Metric.send([
{'metric': 'redis.usage.databases', 'points': data['usage']['total_databases']},
{'metric': 'redis.usage.shards', 'points': data['usage']['total_shards']},
{'metric': 'redis.usage.memory_gb', 'points': data['usage']['total_memory_gb']}
])
"
Ticketing Systems
Create tickets for capacity warnings:
#!/bin/bash
# Check usage and create tickets
USAGE=$(redisctl enterprise usage-report get -o json)
SHARD_PCT=$(echo $USAGE | jq '.usage.total_shards / .license.shards_limit * 100')
MEMORY_PCT=$(echo $USAGE | jq '.usage.total_memory_gb / .license.memory_limit_gb * 100')
if (( $(echo "$SHARD_PCT > 80" | bc -l) )); then
echo "High shard usage: ${SHARD_PCT}%" | \
gh issue create --title "Redis Cluster: High Shard Usage Alert" \
--body "Shard usage is at ${SHARD_PCT}% of licensed capacity"
fi
if (( $(echo "$MEMORY_PCT > 80" | bc -l) )); then
echo "High memory usage: ${MEMORY_PCT}%" | \
jira create --project OPS --type Alert \
--summary "Redis Cluster: High Memory Usage" \
--description "Memory usage is at ${MEMORY_PCT}% of licensed capacity"
fi
Best Practices
- Regular Exports: Schedule regular exports for historical tracking
- Automated Monitoring: Set up automated checks for license limits
- Trend Analysis: Compare reports over time to identify growth patterns
- Capacity Alerts: Configure alerts when approaching license limits
- Cost Attribution: Use naming conventions to enable chargeback/showback
- Archive Reports: Keep historical reports for compliance and auditing
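Trend analysis reduces to diffing two exported reports. A sketch using inline samples in place of two monthly export files (field names follow the report structure shown earlier):

```shell
# Month-over-month growth from two usage report exports
prev='{"usage": {"total_shards": 60, "total_memory_gb": 400}}'
curr='{"usage": {"total_shards": 75, "total_memory_gb": 512}}'
jq -n --argjson a "$prev" --argjson b "$curr" \
  '{shard_growth: ($b.usage.total_shards - $a.usage.total_shards),
    memory_growth_gb: ($b.usage.total_memory_gb - $a.usage.total_memory_gb)}'
```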
Troubleshooting
Report Generation Issues
If usage reports fail to generate:
# Check cluster status
redisctl enterprise cluster get -q 'name'
# Verify authentication
redisctl enterprise auth test
# Check with raw API
redisctl api enterprise get /v1/usage_report
Export Failures
When exports fail:
# Check write permissions
touch test-file.json && rm test-file.json
# Verify disk space
df -h .
# Try different format
redisctl enterprise usage-report export -o report.json
redisctl enterprise usage-report export -o report.csv -f csv
Related Commands
- redisctl enterprise cluster - View cluster information
- redisctl enterprise database list - List all databases
- redisctl enterprise stats - View detailed statistics
- redisctl enterprise node list - View node resources
Database Migration
The migration commands provide tools for database import/export operations and migration status tracking in Redis Enterprise.
Available Commands
Get Migration Status
Check the status of a specific migration operation:
# Get migration status
redisctl enterprise migration get 12345
# Get migration status as YAML
redisctl enterprise migration get 12345 -o yaml
# Extract specific fields
redisctl enterprise migration get 12345 -q '{status: status, progress: progress_percentage}'
# Check if migration is complete
redisctl enterprise migration get 12345 -q "status == 'completed'"
Export Database
Export database data for backup or migration:
# Export database
redisctl enterprise migration export 1
# Export and save task ID
TASK_ID=$(redisctl enterprise migration export 1 -q 'task_id')
# Monitor export progress
redisctl enterprise action get $TASK_ID
# Export with specific options (via database commands)
redisctl enterprise database export 1 --data '{
"export_type": "rdb",
"compression": "gzip"
}'
Import Database
Import data into a database:
# Import from RDB file URL
cat <<EOF | redisctl enterprise migration import 1 --data -
{
"source_type": "url",
"source_url": "https://storage.example.com/backup.rdb.gz",
"import_type": "rdb"
}
EOF
# Import from another database
redisctl enterprise migration import 2 --data '{
"source_type": "database",
"source_database_uid": 1
}'
# Import from file
redisctl enterprise migration import 3 --data @import-config.json
Output Examples
Migration Status
{
"uid": 12345,
"status": "in_progress",
"type": "import",
"database_uid": 1,
"started": "2024-03-15T10:00:00Z",
"progress_percentage": 65,
"estimated_completion": "2024-03-15T10:30:00Z",
"bytes_transferred": 1073741824,
"total_bytes": 1649267441
}
Export Response
{
"task_id": "task-export-67890",
"status": "queued",
"database_uid": 1,
"export_location": "s3://backups/db1-20240315.rdb.gz"
}
Import Response
{
"task_id": "task-import-11111",
"status": "started",
"database_uid": 2,
"source": "https://storage.example.com/backup.rdb.gz"
}
Common Use Cases
Database Backup
Create and manage database backups:
# Export database for backup
redisctl enterprise migration export 1
# Check export status
redisctl enterprise action list -q "[?contains(name, 'export')]"
# Download exported file (if accessible)
EXPORT_URL=$(redisctl enterprise action get <task_id> -q 'result.export_url')
curl -o backup.rdb.gz "$EXPORT_URL"
Database Cloning
Clone a database within the cluster:
# Export source database
EXPORT_TASK=$(redisctl enterprise migration export 1 -q 'task_id')
# Wait for export to complete
redisctl enterprise action wait $EXPORT_TASK
# Get export location
EXPORT_LOC=$(redisctl enterprise action get $EXPORT_TASK -q 'result.location')
# Import to new database
cat <<EOF | redisctl enterprise migration import 2 --data -
{
"source_type": "internal",
"source_location": "$EXPORT_LOC"
}
EOF
Cross-Cluster Migration
Migrate databases between clusters:
# On source cluster: Export database
redisctl enterprise migration export 1
# Note the export location
# Transfer file to destination cluster storage
# (Use appropriate method: S3, FTP, SCP, etc.)
# On destination cluster: Import database
cat <<EOF | redisctl enterprise migration import 1 --data -
{
"source_type": "url",
"source_url": "https://storage.example.com/export.rdb.gz",
"skip_verify_ssl": false
}
EOF
Scheduled Backups
Automate regular database exports:
#!/bin/bash
# backup.sh - Daily backup script
DBS=$(redisctl enterprise database list -q '[].uid' -o json | jq -r '.[]')
for DB in $DBS; do
echo "Backing up database $DB"
TASK=$(redisctl enterprise migration export $DB -q 'task_id')
# Store task IDs for monitoring
echo "$TASK:$DB:$(date +%Y%m%d)" >> backup-tasks.log
done
# Monitor all backup tasks
while read -r line; do
TASK=$(echo "$line" | cut -d: -f1)
DB=$(echo "$line" | cut -d: -f2)
STATUS=$(redisctl enterprise action get $TASK -q 'status')
echo "Database $DB backup: $STATUS"
done < backup-tasks.log
Migration Monitoring
Track migration progress and handle issues:
# List all migration-related tasks
redisctl enterprise action list -q "[?contains(name, 'migration') || contains(name, 'import') || contains(name, 'export')]"
# Monitor specific migration
MIGRATION_ID=12345
while true; do
STATUS=$(redisctl enterprise migration get $MIGRATION_ID -q 'status')
PROGRESS=$(redisctl enterprise migration get $MIGRATION_ID -q 'progress_percentage')
echo "Status: $STATUS, Progress: $PROGRESS%"
[ "$STATUS" = "completed" ] && break
sleep 10
done
# Check for errors
redisctl enterprise migration get $MIGRATION_ID -q 'error'
Error Handling
Handle migration failures:
# Check migration error details
redisctl enterprise migration get <uid> -q '{status: status, error: error_message, failed_at: failed_timestamp}'
# List failed migrations
redisctl enterprise action list -q "[?status == 'failed' && contains(name, 'migration')]"
# Retry failed import
FAILED_CONFIG=$(redisctl enterprise migration get <uid> -q 'configuration')
echo "$FAILED_CONFIG" | redisctl enterprise migration import <bdb_uid> --data -
Best Practices
- Pre-Migration Checks: Verify source and target compatibility
- Test Migrations: Always test with non-production data first
- Monitor Progress: Track migration status throughout the process
- Verify Data: Confirm data integrity after migration
- Schedule Wisely: Run large migrations during maintenance windows
- Keep Backups: Maintain backups before starting migrations
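The first practice, verifying compatibility, can be sketched as a quick memory-fit check before starting a migration. This is a rough sketch only; the `used_memory` and `memory_size` field names follow the troubleshooting examples later in this page:

```shell
# Rough pre-migration check: does the target's configured memory
# exceed the source's current usage? Field names are assumptions
# based on the database object shown elsewhere in this guide.
check_migration_fit() {
  local src=$1 dst=$2
  local used cap
  used=$(redisctl enterprise database get "$src" -q 'used_memory')
  cap=$(redisctl enterprise database get "$dst" -q 'memory_size')
  if [ "$used" -lt "$cap" ]; then
    echo "ok: $used bytes used on source, $cap bytes configured on target"
  else
    echo "insufficient: target memory_size ($cap) <= source used_memory ($used)"
    return 1
  fi
}
```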
Troubleshooting
Import Failures
When imports fail:
# Check database status
redisctl enterprise database get <bdb_uid> -q 'status'
# Verify available memory
redisctl enterprise database get <bdb_uid> -q '{memory_size: memory_size, used_memory: used_memory}'
# Check cluster resources
redisctl enterprise cluster get -q 'resources'
# Review error logs
redisctl enterprise logs get --filter "database=$BDB_UID"
Export Issues
When exports fail:
# Check disk space on nodes
redisctl enterprise node list -q '[].{node: uid, disk_free: disk_free_size}'
# Verify database is accessible
redisctl enterprise database get <bdb_uid> -q 'status'
# Check export permissions
redisctl enterprise database get <bdb_uid> -q 'backup_configuration'
Related Commands
- redisctl enterprise database - Database management including import/export
- redisctl enterprise action - Track migration tasks
- redisctl enterprise cluster - Check cluster resources
- redisctl enterprise logs - View migration-related logs
CRDB Tasks
CRDB tasks are background operations related to Active-Active (CRDB) databases in Redis Enterprise. These commands allow you to monitor and manage async tasks for CRDB operations like synchronization, migration, and backup.
Overview
CRDB tasks include:
- Database synchronization between participating clusters
- Data migration operations
- Backup and restore operations for Active-Active databases
- Replication and conflict resolution tasks
- Schema changes across participating clusters
Available Commands
List All CRDB Tasks
List all CRDB tasks with optional filtering:
# List all CRDB tasks
redisctl enterprise crdb-task list
# Filter by task status
redisctl enterprise crdb-task list --status running
redisctl enterprise crdb-task list --status completed
redisctl enterprise crdb-task list --status failed
# Filter by task type
redisctl enterprise crdb-task list --type sync
redisctl enterprise crdb-task list --type migration
redisctl enterprise crdb-task list --type backup
# Filter by CRDB UID
redisctl enterprise crdb-task list --crdb-uid 1
# Combine filters
redisctl enterprise crdb-task list --status running --type sync --crdb-uid 1
# Output as table
redisctl enterprise crdb-task list -o table
Get Task Details
Get detailed information about a specific CRDB task:
# Get task by ID
redisctl enterprise crdb-task get <task_id>
# Get specific fields using JMESPath
redisctl enterprise crdb-task get <task_id> -q "status"
redisctl enterprise crdb-task get <task_id> -q "{id: task_id, status: status, type: type}"
Check Task Status
Quick status check for a CRDB task:
# Get just the status
redisctl enterprise crdb-task status <task_id>
Get Task Progress
Monitor task progress information:
# Get progress details
redisctl enterprise crdb-task progress <task_id>
# Get progress percentage only
redisctl enterprise crdb-task progress <task_id> -q "progress_percent"
Get Task Logs
Retrieve logs for a CRDB task:
# Get task logs
redisctl enterprise crdb-task logs <task_id>
List Tasks by CRDB
List all tasks for a specific Active-Active database:
# List all tasks for a CRDB
redisctl enterprise crdb-task list-by-crdb <crdb_uid>
# Filter by status for specific CRDB
redisctl enterprise crdb-task list-by-crdb <crdb_uid> --status running
# Filter by type for specific CRDB
redisctl enterprise crdb-task list-by-crdb <crdb_uid> --type sync
Task Control Operations
Cancel Task
Cancel a running CRDB task:
# Cancel with confirmation
redisctl enterprise crdb-task cancel <task_id>
# Cancel without confirmation
redisctl enterprise crdb-task cancel <task_id> --force
Retry Failed Task
Retry a failed CRDB task:
redisctl enterprise crdb-task retry <task_id>
Note: Retry functionality may not be available for all task types or Redis Enterprise versions.
Pause/Resume Tasks
Pause and resume CRDB tasks:
# Pause a running task
redisctl enterprise crdb-task pause <task_id>
# Resume a paused task
redisctl enterprise crdb-task resume <task_id>
Note: Pause/resume functionality may not be supported for all task types.
Task Types
Common CRDB task types include:
- sync - Data synchronization between clusters
- migration - Data migration operations
- backup - CRDB backup operations
- restore - CRDB restore operations
- rebalance - Shard rebalancing across clusters
- schema_change - Schema modifications across participating clusters
- conflict_resolution - Resolving data conflicts between clusters
Task Statuses
CRDB tasks can have the following statuses:
- pending - Task is queued for execution
- running - Task is currently executing
- completed - Task completed successfully
- failed - Task failed with errors
- canceled - Task was canceled by user
- paused - Task is paused (if supported)
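Since only completed, failed, and canceled are terminal, the statuses above can be polled until one of them is reached. A minimal helper (the default 10-second interval is an arbitrary choice):

```shell
# Poll a CRDB task until it reaches a terminal status and print that status.
wait_for_crdb_task() {
  local task_id=$1 interval=${2:-10} status
  while :; do
    status=$(redisctl enterprise crdb-task status "$task_id")
    case "$status" in
      completed|failed|canceled) echo "$status"; return ;;
    esac
    sleep "$interval"
  done
}
```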
Examples
Monitor CRDB Synchronization
# List all sync tasks
redisctl enterprise crdb-task list --type sync
# Check status of specific sync task
TASK_ID="task-12345"
redisctl enterprise crdb-task status $TASK_ID
# Monitor progress
watch -n 5 "redisctl enterprise crdb-task progress $TASK_ID"
Handle Failed Migration
# Find failed migration tasks
redisctl enterprise crdb-task list --type migration --status failed
# Get error details
redisctl enterprise crdb-task get <failed_task_id> -q "error"
# Retry the migration
redisctl enterprise crdb-task retry <failed_task_id>
Monitor CRDB Backup
# Start monitoring backup task
CRDB_UID=1
redisctl enterprise crdb-task list-by-crdb $CRDB_UID --type backup --status running
# Get progress updates
BACKUP_TASK="backup-task-123"
while [ "$(redisctl enterprise crdb-task status $BACKUP_TASK)" = "running" ]; do
echo "Progress: $(redisctl enterprise crdb-task progress $BACKUP_TASK -q progress_percent)%"
sleep 10
done
Cancel Long-Running Task
# Find long-running tasks
redisctl enterprise crdb-task list --status running -o table
# Cancel specific task
redisctl enterprise crdb-task cancel <task_id> --force
Practical Scripts
Task Monitoring Script
#!/bin/bash
# Monitor all CRDB tasks for a specific database
CRDB_UID=$1
if [ -z "$CRDB_UID" ]; then
echo "Usage: $0 <crdb_uid>"
exit 1
fi
echo "Monitoring tasks for CRDB $CRDB_UID..."
while true; do
clear
echo "=== CRDB $CRDB_UID Tasks ==="
echo ""
# Get running tasks
echo "Running Tasks:"
redisctl enterprise crdb-task list-by-crdb $CRDB_UID --status running -o table
# Get failed tasks
echo -e "\nFailed Tasks:"
redisctl enterprise crdb-task list-by-crdb $CRDB_UID --status failed -o table
# Get completed tasks (last 5)
echo -e "\nRecent Completed Tasks:"
redisctl enterprise crdb-task list-by-crdb $CRDB_UID --status completed -q "tasks[:5]" -o table
sleep 30
done
Task Health Check
#!/bin/bash
# Check health of all CRDB tasks
echo "CRDB Task Health Report"
echo "======================="
# Check for failed tasks
FAILED_COUNT=$(redisctl enterprise crdb-task list --status failed -q "length(tasks)")
echo "Failed tasks: $FAILED_COUNT"
if [ "$FAILED_COUNT" -gt 0 ]; then
echo "Failed task details:"
redisctl enterprise crdb-task list --status failed -q "tasks[].{id: task_id, type: type, error: error_message}"
fi
# Check for stuck tasks (running > 1 hour)
echo -e "\nLong-running tasks (>1 hour):"
redisctl enterprise crdb-task list --status running -q "tasks[?duration_seconds > \`3600\`]"
# Check task distribution by type
echo -e "\nTask distribution by type:"
for type in sync migration backup restore; do
COUNT=$(redisctl enterprise crdb-task list --type $type -q "length(tasks)")
echo " $type: $COUNT"
done
Automated Task Retry
#!/bin/bash
# Automatically retry failed tasks
# Get all failed tasks
FAILED_TASKS=$(redisctl enterprise crdb-task list --status failed -q "tasks[].task_id" -o json | jq -r '.[]')
for task_id in $FAILED_TASKS; do
echo "Retrying task $task_id..."
# Get task type for logging
TASK_TYPE=$(redisctl enterprise crdb-task get $task_id -q "type")
# Attempt retry
if redisctl enterprise crdb-task retry $task_id; then
echo "Successfully initiated retry for $TASK_TYPE task $task_id"
else
echo "Failed to retry $TASK_TYPE task $task_id - manual intervention required"
fi
sleep 5
done
Integration with CRDB Commands
CRDB task commands work alongside regular CRDB commands:
# Create a CRDB (returns task_id)
TASK_ID=$(redisctl enterprise crdb create --data @crdb.json -q "task_id")
# Monitor the creation task
redisctl enterprise crdb-task progress $TASK_ID
# Wait for completion
while [ "$(redisctl enterprise crdb-task status $TASK_ID)" = "running" ]; do
sleep 10
done
# Check if successful
if [ "$(redisctl enterprise crdb-task status $TASK_ID)" = "completed" ]; then
echo "CRDB created successfully"
else
echo "CRDB creation failed"
redisctl enterprise crdb-task get $TASK_ID -q "error"
fi
Best Practices
- Monitor Critical Tasks - Set up monitoring for backup and migration tasks
- Handle Failures Promptly - Check failed tasks regularly and retry or escalate
- Track Long-Running Tasks - Monitor tasks that run longer than expected
- Use Filtering - Filter by status and type to focus on relevant tasks
- Automate Monitoring - Create scripts to track task health
- Log Task History - Keep records of completed and failed tasks for auditing
Troubleshooting
Tasks Not Listed
# Verify CRDB exists
redisctl enterprise crdb list
# Check if tasks endpoint is available
redisctl api enterprise get /v1/crdb_tasks
Cannot Cancel Task
# Check task status first
redisctl enterprise crdb-task get <task_id> -q "status"
# Only running tasks can be canceled
# Completed or failed tasks cannot be canceled
Retry Not Available
Some task types or Redis Enterprise versions may not support retry:
- Check Redis Enterprise version compatibility
- Consider creating a new task instead of retrying
- Review task configuration for issues
Progress Not Updating
# Check if task supports progress reporting
redisctl enterprise crdb-task get <task_id> -q "supports_progress"
# Some quick tasks may complete before progress is reported
Related Commands
- enterprise crdb - CRDB management operations
- enterprise action - General action/task monitoring
- enterprise database - Regular database operations
- api enterprise - Direct API access for advanced operations
Actions (Async Tasks)
Actions in Redis Enterprise represent asynchronous operations or tasks that are running or have completed. The action commands allow you to monitor and manage these background operations.
Overview
Many Redis Enterprise operations are asynchronous, returning an action ID that can be used to track progress. Actions include database creation/deletion, backup operations, imports/exports, and cluster maintenance tasks.
Available Commands
List All Actions
List all actions in the cluster with optional filtering:
# List all actions
redisctl enterprise action list
# Filter by status
redisctl enterprise action list --status completed
redisctl enterprise action list --status running
# Filter by type
redisctl enterprise action list --type bdb_backup
# Combine filters
redisctl enterprise action list --status running --type bdb_import
# Output as table
redisctl enterprise action list -o table
Get Action Details
Get detailed information about a specific action:
# Get action by UID
redisctl enterprise action get <action_uid>
# Get action with specific fields using JMESPath
redisctl enterprise action get <action_uid> -q "status"
Check Action Status
Quick status check for an action (returns just the status field):
redisctl enterprise action status <action_uid>
Cancel Running Action
Cancel a running action:
redisctl enterprise action cancel <action_uid>
List Actions for Database
List all actions for a specific database:
redisctl enterprise action list-for-bdb <bdb_uid>
# Filter by status for specific database
redisctl enterprise action list-for-bdb <bdb_uid> --status running
Action Types
Common action types you'll encounter:
- bdb_create - Database creation
- bdb_delete - Database deletion
- bdb_update - Database configuration update
- bdb_backup - Database backup operation
- bdb_import - Database import operation
- bdb_export - Database export operation
- crdb_create - Active-Active database creation
- node_join - Node joining cluster
- cluster_recovery - Cluster recovery operation
Action Statuses
Actions can have the following statuses:
- queued - Action is queued for execution
- running - Action is currently executing
- completed - Action completed successfully
- failed - Action failed with errors
- canceled - Action was canceled
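Because queued and running are the only non-terminal states, a wait helper with a timeout can be built on `action status`. The 5-second interval and 600-second default timeout here are arbitrary choices:

```shell
# Wait for an action to leave queued/running, with a timeout in seconds.
wait_for_action() {
  local uid=$1 timeout=${2:-600} interval=5 elapsed=0 status
  while :; do
    status=$(redisctl enterprise action status "$uid")
    case "$status" in
      queued|running) ;;                 # still in flight, keep polling
      *) echo "$status"; return 0 ;;     # completed/failed/canceled
    esac
    if [ "$elapsed" -ge "$timeout" ]; then
      echo "timeout after ${timeout}s (last status: $status)" >&2
      return 1
    fi
    sleep "$interval"
    elapsed=$((elapsed + interval))
  done
}
```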
Examples
Monitor Database Creation
# Create a database (returns action_uid)
ACTION_UID=$(redisctl enterprise database create --data @db.json -q "action_uid")
# Check status
redisctl enterprise action status $ACTION_UID
# Get full details when complete
redisctl enterprise action get $ACTION_UID
List Recent Failed Actions
# List failed actions in table format
redisctl enterprise action list --status failed -o table
# Get details of a failed action
redisctl enterprise action get <failed_action_uid> -q "{error: error_message, started: start_time}"
Cancel Long-Running Import
# List running imports
redisctl enterprise action list --status running --type bdb_import
# Cancel specific import
redisctl enterprise action cancel <import_action_uid>
Monitor All Database Actions
# Watch all actions for a database
watch -n 5 "redisctl enterprise action list-for-bdb 1 -o table"
Integration with Async Operations
The action commands work seamlessly with the --wait flag available on create/update/delete operations:
# This uses action monitoring internally
redisctl enterprise database create --data @db.json --wait
# Equivalent to manually monitoring:
ACTION_UID=$(redisctl enterprise database create --data @db.json -q "action_uid")
while [ "$(redisctl enterprise action status $ACTION_UID)" = "running" ]; do
sleep 5
done
API Versions
The action commands support both v1 and v2 API endpoints:
- v2 endpoints (/v2/actions) are preferred when available
- v1 endpoints (/v1/actions) are used as fallback
- Both return the same data structure
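If you are working through the raw API layer yourself, the same fallback can be expressed directly. This sketch assumes a failed raw call exits non-zero:

```shell
# Fetch actions via the v2 endpoint, falling back to v1 if it is unavailable.
list_actions_raw() {
  redisctl api enterprise get /v2/actions 2>/dev/null \
    || redisctl api enterprise get /v1/actions
}
```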
Best Practices
- Always check action status for async operations before proceeding
- Use filtering to reduce output when listing many actions
- Save action UIDs from create/update operations for tracking
- Set up monitoring for critical long-running actions
- Check failed actions for error details to diagnose issues
Related Commands
enterprise database
- Database operations that create actionsenterprise cluster
- Cluster operations that create actionsenterprise crdb
- Active-Active operations that create actions
Alert Management Commands
Manage alerts for Redis Enterprise clusters, nodes, and databases.
Overview
The alerts commands provide comprehensive monitoring and management of alerts across your Redis Enterprise deployment. You can:
- List and filter alerts by type and severity
- Get details on specific alerts
- Manage alert settings
- View alerts at cluster, node, and database levels
Commands
List All Alerts
List all alerts across the cluster with optional filtering:
# List all alerts
redisctl enterprise alerts list
# Filter by alert type (cluster, node, bdb)
redisctl enterprise alerts list --filter-type cluster
# Filter by severity (info, warning, error, critical)
redisctl enterprise alerts list --severity error
# Combine filters
redisctl enterprise alerts list --filter-type node --severity warning
Get Specific Alert
Get details for a specific alert by UID:
redisctl enterprise alerts get 123
Cluster Alerts
View alerts at the cluster level:
# Get all cluster alerts
redisctl enterprise alerts cluster
# Get specific cluster alert by name
redisctl enterprise alerts cluster --alert cluster_license_about_to_expire
Node Alerts
View alerts for nodes:
# Get all node alerts
redisctl enterprise alerts node
# Get alerts for specific node
redisctl enterprise alerts node 1
# Get specific alert for a node
redisctl enterprise alerts node 1 --alert node_ephemeral_storage
Database Alerts
View alerts for databases:
# Get all database alerts
redisctl enterprise alerts database
# Get alerts for specific database
redisctl enterprise alerts database 1
# Get specific alert for a database
redisctl enterprise alerts database 1 --alert bdb_backup_failed
Alert Settings
Manage alert configuration settings:
# Get current alert settings
redisctl enterprise alerts settings-get
# Update alert settings
redisctl enterprise alerts settings-update --data '{
"cluster_license_about_to_expire": {
"enabled": true,
"threshold": "30"
}
}'
# Update from file
redisctl enterprise alerts settings-update --data @alert-settings.json
# Update from stdin
echo '{"node_ephemeral_storage": {"enabled": true, "threshold": "80"}}' | \
redisctl enterprise alerts settings-update --data -
Output Formats
All commands support multiple output formats:
# JSON output (default)
redisctl enterprise alerts list -o json
# YAML output
redisctl enterprise alerts list -o yaml
# Table output
redisctl enterprise alerts list -o table
JMESPath Filtering
Use JMESPath queries to filter and transform output:
# Get only alert names
redisctl enterprise alerts list -q '[].name'
# Get alerts with severity error or critical
redisctl enterprise alerts list -q "[?severity=='error' || severity=='critical']"
# Get alert count by type
redisctl enterprise alerts list -q 'length(@)'
# Get specific fields
redisctl enterprise alerts settings-get -q 'node_ephemeral_storage'
Common Use Cases
Monitor Critical Alerts
# List all critical alerts
redisctl enterprise alerts list --severity critical -o table
# Check for license expiration
redisctl enterprise alerts cluster --alert cluster_license_about_to_expire
Alert Monitoring Script
#!/bin/bash
# Monitor for critical alerts
CRITICAL_ALERTS=$(redisctl enterprise alerts list --severity critical -o json)
if [ "$(echo "$CRITICAL_ALERTS" | jq 'length')" -gt 0 ]; then
echo "Critical alerts found:"
echo "$CRITICAL_ALERTS" | jq -r '.[] | "\(.type): \(.name) - \(.description)"'
exit 1
fi
Adjust Alert Thresholds
# Set more aggressive storage thresholds
redisctl enterprise alerts settings-update --data '{
"node_ephemeral_storage": {
"enabled": true,
"threshold": "60"
},
"node_persistent_storage": {
"enabled": true,
"threshold": "60"
}
}'
Check Database Health
# Get all database alerts for monitoring
for db_id in $(redisctl enterprise database list -q '[].uid'); do
echo "Checking database $db_id..."
redisctl enterprise alerts database $db_id
done
Alert Types
Cluster Alerts
- cluster_ca_cert_about_to_expire - CA certificate expiration warning
- cluster_certs_about_to_expire - SSL certificate expiration warning
- cluster_license_about_to_expire - License expiration warning
- cluster_node_operation_failed - Node operation failure
- cluster_ocsp_query_failed - OCSP query failure
- cluster_ocsp_status_revoked - Certificate revoked via OCSP
Node Alerts
- node_checks_error - Node health check errors
- node_ephemeral_storage - Ephemeral storage threshold exceeded
- node_free_flash - Flash storage threshold exceeded
- node_internal_certs_about_to_expire - Internal certificate expiration
- node_persistent_storage - Persistent storage threshold exceeded
Database Alerts
- bdb_backup_failed - Database backup failure
- bdb_crdt_sync_error - Active-Active synchronization error
- bdb_high_latency - High latency detected
- bdb_high_memory - Memory usage threshold exceeded
- bdb_replica_sync_error - Replica synchronization error
Notes
- Alert thresholds are configured in the cluster settings
- Some alerts have configurable thresholds (e.g., storage, certificate expiration)
- Critical alerts should be addressed immediately
- Use profiles to manage multiple Redis Enterprise deployments:
redisctl -p production enterprise alerts list --severity critical
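Extending that idea, a loop over profiles gives a single critical-alert sweep across all your deployments; the profile names passed in are placeholders for whatever you have configured:

```shell
# Print critical alerts for each named profile in turn.
sweep_critical_alerts() {
  local profile
  for profile in "$@"; do
    echo "== $profile =="
    redisctl -p "$profile" enterprise alerts list --severity critical -o table
  done
}
# Example: sweep_critical_alerts production staging
```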
Debug Info Commands
Collect diagnostic information and support packages for troubleshooting Redis Enterprise clusters.
Overview
Debug info commands gather comprehensive diagnostic data from Redis Enterprise clusters, nodes, and databases. As of Phase 1 improvements, these commands now properly download binary tar.gz support packages that can be directly uploaded to Redis Support.
Available Commands
Collect Cluster Support Package
# Download cluster-wide support package (recommended)
redisctl enterprise debug-info all
# With custom output file
redisctl enterprise debug-info all --file /tmp/cluster-support.tar.gz
# Use new API endpoint (for Redis Enterprise 7.4+)
redisctl enterprise debug-info all --use-new-api
Output: Downloads a tar.gz file containing:
- Complete cluster configuration
- All node information and logs
- Database configurations
- System metrics and diagnostics
- Network configuration
- Performance data
Default filename: support-package-cluster-{timestamp}.tar.gz
Collect Node Support Package
# Download support package for all nodes
redisctl enterprise debug-info node
# Download for specific node
redisctl enterprise debug-info node 1
# With custom output
redisctl enterprise debug-info node 1 --file /tmp/node1-support.tar.gz
Output: Downloads a tar.gz file containing:
- Node configuration and state
- System resources and metrics
- Local log files
- Process information
- Network configuration
Default filename:
- All nodes: support-package-nodes-{timestamp}.tar.gz
- Specific node: support-package-node-{uid}-{timestamp}.tar.gz
Collect Database Support Package
# Download support package for specific database
redisctl enterprise debug-info database 1
# With custom output
redisctl enterprise debug-info database 1 --file /tmp/db1-support.tar.gz
# Use new API endpoint
redisctl enterprise debug-info database 1 --use-new-api
Output: Downloads a tar.gz file containing:
- Database configuration
- Shard distribution and state
- Replication information
- Performance metrics
- Recent operations and logs
Default filename: support-package-db-{uid}-{timestamp}.tar.gz
Binary Download Support (Phase 1)
Starting with v0.5.1, all debug-info commands properly handle binary responses:
# Downloads actual tar.gz file (not JSON)
redisctl enterprise debug-info all
# Verify the downloaded file
file support-package-cluster-*.tar.gz
# Output: gzip compressed data, from Unix
# Extract and view contents
tar -tzf support-package-cluster-*.tar.gz | head
API Endpoint Compatibility
The tool supports both old (deprecated) and new API endpoints:
Command | Old Endpoint (default) | New Endpoint (--use-new-api) |
---|---|---|
all | /v1/debuginfo/all | /v1/cluster/debuginfo |
node | /v1/debuginfo/node | /v1/nodes/{uid}/debuginfo |
database | /v1/debuginfo/all/bdb/{uid} | /v1/bdbs/{uid}/debuginfo |
Note: Old endpoints are deprecated as of Redis Enterprise 7.4. Use --use-new-api for newer clusters.
Common Use Cases
Quick Support Package for Troubleshooting
# Generate support package with automatic naming
redisctl enterprise debug-info all
# Output shows:
# ✓ Support package created successfully
# File: support-package-cluster-20250916-110539.tar.gz
# Size: 305.7 KB
Preparing for Support Ticket
# 1. Generate cluster support package
redisctl enterprise debug-info all --file support-case-12345.tar.gz
# 2. Verify the file
ls -lh support-case-12345.tar.gz
file support-case-12345.tar.gz
# 3. Upload to Redis Support portal
# Reference your case number: 12345
Database-Specific Issues
# Generate package for problematic database
redisctl enterprise debug-info database 1
# The package includes database-specific logs and metrics
# Upload directly to support ticket
Automated Collection Script
#!/bin/bash
# Collect support packages for all components
TIMESTAMP=$(date +%Y%m%d-%H%M%S)
OUTPUT_DIR="./support-$TIMESTAMP"
mkdir -p "$OUTPUT_DIR"
echo "Collecting cluster support package..."
redisctl enterprise debug-info all \
--file "$OUTPUT_DIR/cluster.tar.gz"
echo "Collecting node support packages..."
for node_id in 1 2 3; do
redisctl enterprise debug-info node $node_id \
--file "$OUTPUT_DIR/node-$node_id.tar.gz"
done
echo "Support packages saved to $OUTPUT_DIR"
Important Notes
Security Considerations
- Support packages contain sensitive information (hostnames, IPs, configurations)
- Review contents before sharing if needed
- Delete local copies after uploading to support
- Use secure channels for transmission
Performance Impact
- Package generation may temporarily impact cluster performance
- Large clusters can generate packages over 1GB
- Run during maintenance windows when possible
- Network bandwidth considerations for remote clusters
File Management
- Files are saved in current directory by default
- Use --file to specify a custom location
- Automatic timestamps prevent overwriting
- Clean up old support packages regularly
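A pruning helper for that last point; the 30-day retention and directory default are illustrative:

```shell
# Delete support packages older than N days (default 30) from a directory.
prune_support_packages() {
  local days=${1:-30} dir=${2:-.}
  find "$dir" -name 'support-package-*.tar.gz' -mtime "+$days" -delete
}
```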
Progress Indicators
The tool now shows progress during package generation:
⠋ Generating support package...
✓ Support package created successfully
File: support-package-cluster-20250916-110539.tar.gz
Size: 305.7 KB
Troubleshooting
Authentication Errors
If you get authentication errors, ensure correct credentials:
# Check your profile
redisctl profile list
# Use environment variables for testing
export REDIS_ENTERPRISE_URL="https://localhost:9443"
export REDIS_ENTERPRISE_USER="admin@redis.local"
export REDIS_ENTERPRISE_PASSWORD="your_password"
export REDIS_ENTERPRISE_INSECURE="true"
Large File Sizes
For very large support packages:
# Write to a filesystem with enough free space
# (the package is already gzip-compressed; recompressing it gains nothing)
redisctl enterprise debug-info all --file /path/with/space/support.tar.gz
# Split large files for upload
split -b 100M support-package.tar.gz support-part-
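On the receiving side, the parts can be rejoined and checked for corruption. This assumes the `support-part-` prefix from the split command above:

```shell
# Rejoin split parts into the original archive and verify it is a valid gzip.
reassemble_parts() {
  local prefix=$1 out=$2
  cat "$prefix"* > "$out"
  gzip -t "$out"   # exits non-zero if the rejoined archive is corrupt
}
# Example: reassemble_parts support-part- support-package.tar.gz
```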
Verify Package Contents
# List contents without extracting
tar -tzf support-package-cluster-*.tar.gz
# Extract specific files
tar -xzf support-package-cluster-*.tar.gz logs/
# View package info
gzip -l support-package-cluster-*.tar.gz
Related Commands
- Support Package Commands - Enhanced support package workflow (Phase 2)
- Logs Commands - View cluster logs directly
- Stats Commands - Monitor performance metrics
- Cluster Commands - Check cluster health
Diagnostics
The diagnostics commands provide tools for monitoring and troubleshooting Redis Enterprise cluster health, running diagnostic checks, and generating diagnostic reports.
Overview
Redis Enterprise includes a built-in diagnostics system that performs various health checks on the cluster, nodes, and databases. These checks help identify potential issues before they become critical problems.
Available Commands
Get Diagnostics Configuration
Retrieve the current diagnostics configuration:
# Get full diagnostics config
redisctl enterprise diagnostics get
# Get specific configuration fields
redisctl enterprise diagnostics get -q "enabled_checks"
Update Diagnostics Configuration
Modify diagnostics settings:
# Update from JSON file
redisctl enterprise diagnostics update --data @diagnostics-config.json
# Update from stdin
echo '{"check_interval": 300}' | redisctl enterprise diagnostics update --data -
# Disable specific checks
redisctl enterprise diagnostics update --data '{"disabled_checks": ["memory_check", "disk_check"]}'
Run Diagnostics Checks
Trigger diagnostic checks manually:
# Run all diagnostics
redisctl enterprise diagnostics run
# Run with specific parameters
redisctl enterprise diagnostics run --data '{"checks": ["connectivity", "resources"]}'
List Available Checks
View all available diagnostic checks:
# List all checks
redisctl enterprise diagnostics list-checks
# Output as table
redisctl enterprise diagnostics list-checks -o table
Get Latest Report
Retrieve the most recent diagnostics report:
# Get latest report
redisctl enterprise diagnostics last-report
# Get specific sections
redisctl enterprise diagnostics last-report -q "cluster_health"
Get Specific Report
Retrieve a diagnostics report by ID:
# Get report by ID
redisctl enterprise diagnostics get-report <report_id>
# Get report summary only
redisctl enterprise diagnostics get-report <report_id> -q "summary"
List All Reports
View all available diagnostics reports:
# List all reports
redisctl enterprise diagnostics list-reports
# List recent reports only
redisctl enterprise diagnostics list-reports --data '{"limit": 10}'
# Filter by date range
redisctl enterprise diagnostics list-reports --data '{"start_date": "2024-01-01", "end_date": "2024-01-31"}'
Diagnostic Check Types
Common diagnostic checks include:
- Resource Checks
  - Memory utilization
  - CPU usage
  - Disk space
  - Network bandwidth
- Cluster Health
  - Node connectivity
  - Replication status
  - Shard distribution
  - Quorum status
- Database Health
  - Endpoint availability
  - Persistence status
  - Backup status
  - Module functionality
- Security Checks
  - Certificate expiration
  - Authentication status
  - Encryption settings
  - ACL configuration
Configuration Examples
Enable Automatic Diagnostics
{
"enabled": true,
"auto_run": true,
"check_interval": 3600,
"retention_days": 30,
"email_alerts": true,
"alert_recipients": ["ops@example.com"]
}
Configure Check Thresholds
{
"thresholds": {
"memory_usage_percent": 80,
"disk_usage_percent": 85,
"cpu_usage_percent": 75,
"certificate_expiry_days": 30
}
}
Disable Specific Checks
{
"disabled_checks": [
"backup_validation",
"module_check"
],
"check_timeout": 30
}
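Any of these fragments can be sanity-checked locally before sending them to the cluster. A minimal sketch (the live update call is shown in a comment):

```shell
# Validate a config fragment locally before applying it (jq exits non-zero on bad JSON)
CONFIG='{"thresholds": {"memory_usage_percent": 80, "disk_usage_percent": 85}}'
if echo "$CONFIG" | jq empty 2>/dev/null; then
  echo "config is valid JSON"
  # Live call: redisctl enterprise diagnostics update --data "$CONFIG"
fi
```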
Practical Examples
Daily Health Check Script
#!/bin/bash
# Run daily diagnostics and email report
# Run diagnostics
redisctl enterprise diagnostics run
# Get latest report
REPORT=$(redisctl enterprise diagnostics last-report)
# Check for critical issues
CRITICAL=$(echo "$REPORT" | jq '.issues | map(select(.severity == "critical")) | length')
if [ "$CRITICAL" -gt 0 ]; then
# Send alert for critical issues
echo "$REPORT" | mail -s "Redis Enterprise: Critical Issues Found" ops@example.com
fi
Monitor Cluster Health
# Continuous health monitoring
watch -n 60 'redisctl enterprise diagnostics last-report -q "summary" -o table'
Generate Monthly Report
# Get all reports for the month
redisctl enterprise diagnostics list-reports \
--data '{"start_date": "2024-01-01", "end_date": "2024-01-31"}' \
-o json > monthly-diagnostics.json
# Extract key metrics
jq '.[] | {date: .timestamp, health_score: .summary.health_score}' monthly-diagnostics.json
Pre-Maintenance Check
# Run comprehensive diagnostics before maintenance
redisctl enterprise diagnostics run --data '{
"comprehensive": true,
"include_logs": true,
"validate_backups": true
}'
# Wait for completion and check results
sleep 30
redisctl enterprise diagnostics last-report -q "ready_for_maintenance"
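A fixed sleep can miss slow diagnostic runs; polling is more robust. This sketch stubs the status lookup (the status field name is an assumption) so it runs standalone:

```shell
# Poll until the diagnostics report is ready instead of sleeping a fixed interval.
# Live lookup would be: redisctl enterprise diagnostics last-report -q "status"
report_status() {
  echo "completed"   # stub standing in for the live call
}
ATTEMPTS=0
until [ "$(report_status)" = "completed" ] || [ "$ATTEMPTS" -ge 30 ]; do
  sleep 10
  ATTEMPTS=$((ATTEMPTS + 1))
done
echo "report ready after $ATTEMPTS polls"
```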
Report Structure
Diagnostics reports typically include:
{
"report_id": "diag-12345",
"timestamp": "2024-01-15T10:30:00Z",
"cluster_id": "cluster-1",
"summary": {
"health_score": 95,
"total_checks": 50,
"passed": 48,
"warnings": 1,
"failures": 1
},
"cluster_health": {
"nodes": [...],
"databases": [...],
"replication": {...}
},
"resource_usage": {
"memory": {...},
"cpu": {...},
"disk": {...}
},
"issues": [
{
"severity": "warning",
"component": "node-2",
"message": "Disk usage at 82%",
"recommendation": "Consider adding storage"
}
],
"recommendations": [...]
}
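Given that structure, key fields can be pulled out with jq. The sample data below mirrors the report above; the live call is shown in a comment:

```shell
# Live: REPORT=$(redisctl enterprise diagnostics last-report)
REPORT='{"summary":{"health_score":95,"failures":1},"issues":[{"severity":"warning","component":"node-2","message":"Disk usage at 82%"}]}'
SCORE=$(echo "$REPORT" | jq '.summary.health_score')
WARNINGS=$(echo "$REPORT" | jq '[.issues[] | select(.severity == "warning")] | length')
echo "health=$SCORE warnings=$WARNINGS"
```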
Best Practices
- Schedule Regular Checks - Run diagnostics daily or weekly
- Monitor Trends - Track health scores over time
- Set Up Alerts - Configure email alerts for critical issues
- Archive Reports - Keep historical reports for trend analysis
- Pre-Maintenance Checks - Always run diagnostics before maintenance
- Custom Thresholds - Adjust thresholds based on your environment
Integration with Monitoring
The diagnostics system can be integrated with external monitoring tools:
# Export to Prometheus format
redisctl enterprise diagnostics last-report -q "metrics" | \
prometheus-push-gateway
# Send to logging system
redisctl enterprise diagnostics last-report | \
logger -t redis-diagnostics
# Create JIRA ticket for issues
ISSUES=$(redisctl enterprise diagnostics last-report -q "issues")
if [ -n "$ISSUES" ]; then
create-jira-ticket --project OPS --summary "Redis Diagnostics Issues" --description "$ISSUES"
fi
Troubleshooting
Diagnostics Not Running
# Check if diagnostics are enabled
redisctl enterprise diagnostics get -q "enabled"
# Enable diagnostics
redisctl enterprise diagnostics update --data '{"enabled": true}'
Reports Not Generated
# Check last run time
redisctl enterprise diagnostics get -q "last_run"
# Trigger manual run
redisctl enterprise diagnostics run
Missing Checks
# List disabled checks
redisctl enterprise diagnostics get -q "disabled_checks"
# Re-enable all checks
redisctl enterprise diagnostics update --data '{"disabled_checks": []}'
Related Commands
- enterprise cluster - Cluster management and health
- enterprise stats - Performance statistics
- enterprise logs - System logs and events
- enterprise action - Monitor diagnostic task progress
Endpoint Management
The endpoint commands provide access to Redis Enterprise database endpoint statistics and availability monitoring.
Note: Redis Enterprise manages most endpoint configurations through database commands. These commands provide monitoring and statistics capabilities.
Available Commands
Get Endpoint Statistics
Get aggregate statistics for all database endpoints in the cluster:
# Get all endpoint statistics
redisctl enterprise endpoint stats
# Get statistics as YAML
redisctl enterprise endpoint stats -o yaml
# Filter to specific metrics
redisctl enterprise endpoint stats -q '[].{name: endpoint_name, connections: current_connections}'
# Get statistics for endpoints with high connection counts
redisctl enterprise endpoint stats -q "[?current_connections > `100`]"
The statistics include:
- Connection metrics (current, total, failed)
- Request/response rates
- Latency information
- Error counts
- Bandwidth usage
Check Endpoint Availability
Check the availability status of a specific database endpoint:
# Check endpoint availability for database 1
redisctl enterprise endpoint availability 1
# Get availability as table
redisctl enterprise endpoint availability 1 -o table
# Extract specific availability information
redisctl enterprise endpoint availability 1 -q 'available'
Availability information includes:
- Current availability status
- Node availability
- Shard distribution
- Failover status
- Connection health
Output Examples
Endpoint Statistics
[
{
"endpoint_name": "redis-12345.cluster.local:16379",
"bdb_uid": 1,
"current_connections": 45,
"total_connections": 12543,
"failed_connections": 2,
"requests_per_sec": 5432,
"responses_per_sec": 5430,
"avg_latency_ms": 0.8,
"bandwidth_in_mbps": 12.5,
"bandwidth_out_mbps": 8.3,
"errors_per_sec": 0.1
}
]
Endpoint Availability
{
"bdb_uid": 1,
"available": true,
"endpoints": [
{
"addr": "redis-12345.cluster.local:16379",
"node": 1,
"role": "master",
"status": "active"
}
],
"shards_placement": "optimal",
"last_failover": null
}
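The available flag shown above can gate automation. A minimal sketch (the live lookup is commented so it runs standalone):

```shell
# Live: AVAILABLE=$(redisctl enterprise endpoint availability 1 -q 'available')
AVAILABLE=true
if [ "$AVAILABLE" = "true" ]; then
  RESULT="proceed"
else
  RESULT="abort"
fi
echo "maintenance decision: $RESULT"
```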
Common Use Cases
Monitoring Endpoint Health
Monitor endpoint statistics and set up alerts:
# Check endpoints with high error rates
redisctl enterprise endpoint stats -q "[?errors_per_sec > `10`]"
# Monitor endpoints with connection issues
redisctl enterprise endpoint stats -q "[?failed_connections > `0`].{name: endpoint_name, failed: failed_connections}"
# Check latency across all endpoints
redisctl enterprise endpoint stats -q "[].{endpoint: endpoint_name, latency: avg_latency_ms}" -o table
Availability Monitoring
Check database endpoint availability during maintenance:
# Check availability for critical databases
for db in 1 2 3; do
echo "Database $db:"
redisctl enterprise endpoint availability $db -q 'available'
done
# Get detailed availability for troubleshooting
redisctl enterprise endpoint availability 1 -o yaml
Performance Analysis
Analyze endpoint performance metrics:
# Get top endpoints by connection count
redisctl enterprise endpoint stats -q "reverse(sort_by([], &current_connections))[:5]" -o table
# Find endpoints with bandwidth issues
redisctl enterprise endpoint stats -q "[?bandwidth_in_mbps > `100` || bandwidth_out_mbps > `100`]"
# Compare request/response rates
redisctl enterprise endpoint stats -q "[].{endpoint: endpoint_name, req_rate: requests_per_sec, resp_rate: responses_per_sec, diff: requests_per_sec - responses_per_sec}"
Integration with Monitoring
Export endpoint metrics for monitoring systems:
# Export to monitoring format
redisctl enterprise endpoint stats -o json > endpoint_metrics.json
# Create CSV for analysis
redisctl enterprise endpoint stats -q "[].{endpoint: endpoint_name, connections: current_connections, latency: avg_latency_ms, errors: errors_per_sec}" | jq -r '["endpoint","connections","latency","errors"], (.[] | [.endpoint, .connections, .latency, .errors]) | @csv'
# Stream to monitoring pipeline
while true; do
redisctl enterprise endpoint stats -o json | \
jq --arg ts "$(date -u +%FT%TZ)" '{timestamp: $ts, metrics: .}' | \
curl -X POST http://metrics-collector/ingest -d @-
sleep 60
done
Troubleshooting
High Connection Counts
If endpoints show high connection counts:
# Identify affected endpoints
redisctl enterprise endpoint stats -q "[?current_connections > `1000`]"
# Check database configuration
redisctl enterprise database get <bdb_uid> -q '{max_connections: max_connections, current: @ | current_connections}'
# Monitor connection trends
for i in {1..10}; do
redisctl enterprise endpoint stats -q "[].{endpoint: endpoint_name, connections: current_connections}" -o table
sleep 30
done
Availability Issues
When endpoints report availability problems:
# Check specific database endpoint
redisctl enterprise endpoint availability <bdb_uid>
# Verify node status
redisctl enterprise node list -q "[?status != 'active']"
# Check shard distribution
redisctl enterprise database get <bdb_uid> -q 'shards_placement'
Best Practices
- Regular Monitoring: Set up regular checks of endpoint statistics to catch issues early
- Baseline Metrics: Establish baseline performance metrics for comparison
- Alert Thresholds: Configure alerts based on your specific workload patterns
- Correlation: Correlate endpoint metrics with database and node statistics
- Capacity Planning: Use connection and bandwidth metrics for capacity planning
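The baseline comparison can be as simple as an awk threshold check. The latency values here are sample numbers, with the live query commented:

```shell
# Live: CURRENT=$(redisctl enterprise endpoint stats -q '[0].avg_latency_ms')
BASELINE=0.8
CURRENT=1.9
# Flag a regression when latency more than doubles the baseline (awk does the float math)
if awk -v b="$BASELINE" -v c="$CURRENT" 'BEGIN { exit !(c > 2 * b) }'; then
  echo "latency regression: ${CURRENT}ms vs baseline ${BASELINE}ms"
fi
```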
Related Commands
- redisctl enterprise database - Manage databases and their endpoints
- redisctl enterprise stats - View detailed statistics
- redisctl enterprise node - Check node status affecting endpoints
- redisctl enterprise cluster - View cluster-wide endpoint configuration
Job Scheduler
The job scheduler commands allow you to manage and configure scheduled background jobs in Redis Enterprise. These jobs handle critical maintenance tasks like backups, log rotation, certificate renewal, and health checks.
Overview
Redis Enterprise runs several scheduled jobs automatically to maintain cluster health and perform routine maintenance. The job scheduler commands let you view and customize the schedule and configuration of these jobs.
Available Commands
Get Configuration
Retrieve the current job scheduler configuration:
# Get all job scheduler settings
redisctl enterprise job-scheduler get
# Get specific job configuration using JMESPath
redisctl enterprise job-scheduler get -q "backup_job_settings"
# Output as table
redisctl enterprise job-scheduler get -o table
Update Configuration
Modify job scheduler settings:
# Update from JSON file
redisctl enterprise job-scheduler update --data @scheduler-config.json
# Update from stdin
echo '{"backup_job_settings": {"cron_expression": "*/10 * * * *"}}' | \
redisctl enterprise job-scheduler update --data -
# Update inline
redisctl enterprise job-scheduler update --data '{
"log_rotation_job_settings": {
"cron_expression": "0 */6 * * *",
"enabled": true
}
}'
Scheduled Job Types
Backup Job
Manages automatic database backups:
{
"backup_job_settings": {
"cron_expression": "*/5 * * * *",
"enabled": true
}
}
Database Usage Report
Generates usage statistics for databases:
{
"bdb_usage_report_job_settings": {
"cron_expression": "0 */1 * * *",
"enabled": true,
"file_retention_days": 365
}
}
Certificate Rotation
Handles automatic certificate renewal:
{
"cert_rotation_job_settings": {
"cron_expression": "0 * * * *",
"enabled": true,
"expiry_days_before_rotation": 60
}
}
Log Rotation
Manages log file rotation and cleanup:
{
"log_rotation_job_settings": {
"cron_expression": "*/5 * * * *",
"enabled": true
}
}
Node Health Checks
Performs periodic node health validation:
{
"node_checks_job_settings": {
"cron_expression": "0 * * * *",
"enabled": true
}
}
Redis Cleanup
Cleans up temporary Redis data:
{
"redis_cleanup_job_settings": {
"cron_expression": "0 * * * *"
}
}
CCS Log Rotation
Rotates cluster configuration service logs:
{
"rotate_ccs_job_settings": {
"cron_expression": "*/5 * * * *",
"enabled": true,
"file_suffix": "5min",
"rotate_max_num": 24
}
}
Cron Expression Format
Job schedules use standard cron expression format:
┌───────────── minute (0 - 59)
│ ┌───────────── hour (0 - 23)
│ │ ┌───────────── day of month (1 - 31)
│ │ │ ┌───────────── month (1 - 12)
│ │ │ │ ┌───────────── day of week (0 - 6) (Sunday to Saturday)
│ │ │ │ │
│ │ │ │ │
* * * * *
Common Patterns
- */5 * * * * - Every 5 minutes
- 0 * * * * - Every hour
- 0 0 * * * - Daily at midnight
- 0 2 * * 0 - Weekly on Sunday at 2 AM
- 0 0 1 * * - Monthly on the 1st at midnight
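Since an invalid cron expression makes the update fail, a quick field count before submitting saves a round trip:

```shell
# A standard cron expression has exactly five fields
CRON='0 2 * * 0'
set -- $CRON
if [ $# -eq 5 ]; then
  echo "expression has 5 fields - submitting"
  # redisctl enterprise job-scheduler update --data "{\"backup_job_settings\": {\"cron_expression\": \"$CRON\"}}"
else
  echo "invalid cron expression: $CRON" >&2
fi
```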
Examples
Adjust Backup Frequency
Change backups from every 5 minutes to every 30 minutes:
redisctl enterprise job-scheduler update --data '{
"backup_job_settings": {
"cron_expression": "*/30 * * * *"
}
}'
Configure Aggressive Log Rotation
Rotate logs every hour and keep fewer files:
redisctl enterprise job-scheduler update --data '{
"log_rotation_job_settings": {
"cron_expression": "0 * * * *",
"enabled": true
},
"rotate_ccs_job_settings": {
"cron_expression": "0 * * * *",
"file_suffix": "hourly",
"rotate_max_num": 12
}
}'
Extend Certificate Renewal Window
Check certificates 90 days before expiry:
redisctl enterprise job-scheduler update --data '{
"cert_rotation_job_settings": {
"expiry_days_before_rotation": 90
}
}'
Reduce Database Report Retention
Keep usage reports for only 30 days:
redisctl enterprise job-scheduler update --data '{
"bdb_usage_report_job_settings": {
"file_retention_days": 30
}
}'
Configuration Templates
Production Environment
High-frequency backups with extended retention:
{
"backup_job_settings": {
"cron_expression": "*/15 * * * *",
"enabled": true
},
"bdb_usage_report_job_settings": {
"cron_expression": "0 0 * * *",
"enabled": true,
"file_retention_days": 730
},
"cert_rotation_job_settings": {
"cron_expression": "0 0 * * *",
"enabled": true,
"expiry_days_before_rotation": 90
},
"log_rotation_job_settings": {
"cron_expression": "0 */4 * * *",
"enabled": true
}
}
Development Environment
Less frequent operations to reduce overhead:
{
"backup_job_settings": {
"cron_expression": "0 */6 * * *",
"enabled": true
},
"bdb_usage_report_job_settings": {
"cron_expression": "0 0 * * 0",
"enabled": true,
"file_retention_days": 7
},
"node_checks_job_settings": {
"cron_expression": "0 */12 * * *",
"enabled": true
}
}
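Either template can be checked locally before applying it (field names as in the templates above); the live calls are commented:

```shell
TEMPLATE='{"backup_job_settings": {"cron_expression": "0 */6 * * *", "enabled": true}}'
# jq -e exits non-zero if the path is missing or false
if echo "$TEMPLATE" | jq -e '.backup_job_settings.enabled' >/dev/null; then
  echo "template enables backups"
  # redisctl enterprise job-scheduler update --data "$TEMPLATE"
  # redisctl enterprise job-scheduler get -q "backup_job_settings"
fi
```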
Monitoring Job Execution
Jobs create actions that can be monitored:
# Check recent backup jobs
redisctl enterprise action list --type backup_job
# Monitor job execution
watch -n 60 'redisctl enterprise action list --status running -o table'
Best Practices
- Balance Frequency vs Load - More frequent jobs provide better protection but increase system load
- Align with Maintenance Windows - Schedule intensive jobs during low-traffic periods
- Monitor Job Success - Regularly check that scheduled jobs complete successfully
- Test Configuration Changes - Verify new schedules work as expected before production deployment
- Document Custom Schedules - Keep notes on why default schedules were modified
Limitations
- Some jobs cannot be disabled (marked as internal scheduled jobs)
- Cron expressions must be valid or the update will fail
- Changes take effect at the next scheduled run
- Job execution history is available through the actions API
Troubleshooting
Jobs Not Running
# Check if job is enabled
redisctl enterprise job-scheduler get -q "backup_job_settings.enabled"
# Verify cron expression
redisctl enterprise job-scheduler get -q "backup_job_settings.cron_expression"
Failed Job Updates
# Check current configuration
redisctl enterprise job-scheduler get
# Validate JSON before updating
echo '{"backup_job_settings": {"enabled": true}}' | jq .
# Try update with valid configuration
redisctl enterprise job-scheduler update --data '{"backup_job_settings": {"enabled": true}}'
Related Commands
- enterprise action - Monitor job execution status
- enterprise cluster - Cluster configuration that affects jobs
- enterprise database - Database backup operations
- enterprise logs - View logs generated by scheduled jobs
JSON Schema
The JSON schema command provides access to the Redis Enterprise API schema definitions, useful for validation, documentation, and code generation.
Available Commands
Get JSON Schema
Retrieve the complete JSON schema for the Redis Enterprise API:
# Get full JSON schema
redisctl enterprise jsonschema get
# Get schema as YAML
redisctl enterprise jsonschema get -o yaml
# Extract specific schema definitions
redisctl enterprise jsonschema get -q 'definitions'
# Get schema for a specific resource
redisctl enterprise jsonschema get -q 'definitions.bdb'
# List all available definitions
redisctl enterprise jsonschema get -q 'definitions | keys(@)'
Output Examples
Schema Structure
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Redis Enterprise API Schema",
"version": "1.0.0",
"definitions": {
"bdb": {
"type": "object",
"properties": {
"uid": {
"type": "integer",
"description": "Database unique ID"
},
"name": {
"type": "string",
"description": "Database name"
},
"memory_size": {
"type": "integer",
"description": "Memory limit in bytes"
},
"shards_count": {
"type": "integer",
"minimum": 1,
"description": "Number of database shards"
}
},
"required": ["name", "memory_size"]
},
"node": {
"type": "object",
"properties": {
"uid": {
"type": "integer",
"description": "Node unique ID"
},
"addr": {
"type": "string",
"format": "ipv4",
"description": "Node IP address"
}
}
}
},
"paths": {
"/v1/bdbs": {
"post": {
"requestBody": {
"$ref": "#/definitions/bdb"
}
}
}
}
}
Common Use Cases
API Validation
Validate request payloads against the schema:
# Get schema for database creation
redisctl enterprise jsonschema get -q 'definitions.bdb'
# Extract required fields
redisctl enterprise jsonschema get -q 'definitions.bdb.required'
# Get property types (to_entries is jq syntax, so pipe the JSON through jq)
redisctl enterprise jsonschema get | jq '.definitions.bdb.properties | to_entries[] | {property: .key, type: .value.type}'
Code Generation
Generate TypeScript or other language definitions:
# Export schema for code generation
redisctl enterprise jsonschema get -o json > redis-enterprise-schema.json
# Extract definitions for specific resources
redisctl enterprise jsonschema get -q 'definitions.{database: bdb, cluster: cluster, node: node}' > resources.json
# Generate TypeScript interfaces (using external tool)
redisctl enterprise jsonschema get | npx json-schema-to-typescript > redis-enterprise.d.ts
Documentation
Extract schema information for documentation:
# Get all resource definitions
redisctl enterprise jsonschema get -q 'definitions | keys(@)' -o json
# Get descriptions for properties (jq handles the to_entries transform)
redisctl enterprise jsonschema get | jq '.definitions.bdb.properties | to_entries[] | {property: .key, description: .value.description}'
# List all API paths
redisctl enterprise jsonschema get -q 'paths | keys(@)'
# Get operations for a path
redisctl enterprise jsonschema get -q 'paths."/v1/bdbs" | keys(@)'
Schema Discovery
Explore available schemas and their structures:
# List all top-level schema properties
redisctl enterprise jsonschema get -q 'keys(@)'
# Find schemas with specific properties (these transforms use jq, not JMESPath)
redisctl enterprise jsonschema get | jq -r '.definitions | to_entries[] | select(.value.properties.memory_size) | .key'
# Get enum values for properties
redisctl enterprise jsonschema get | jq '.definitions[].properties // {} | to_entries[] | select(.value.enum) | {property: .key, values: .value.enum}'
# Find required properties across all schemas
redisctl enterprise jsonschema get | jq '.definitions | to_entries[] | {schema: .key, required: .value.required}'
Integration Examples
Validation Script
Create a validation script using the schema:
#!/bin/bash
# validate-payload.sh
SCHEMA=$(redisctl enterprise jsonschema get -q 'definitions.bdb')
PAYLOAD=$1
echo "$PAYLOAD" | jq --argjson schema "$SCHEMA" '
# Simple validation example
if .name == null then
error("name is required")
elif .memory_size == null then
error("memory_size is required")
else
.
end
'
OpenAPI Generation
Convert to OpenAPI specification:
# Extract and format for OpenAPI
redisctl enterprise jsonschema get -o json | jq '{
openapi: "3.0.0",
info: {
title: "Redis Enterprise API",
version: .version
},
components: {
schemas: .definitions
},
paths: .paths
}' > openapi.json
Schema Comparison
Compare schemas across versions:
# Save current schema
redisctl enterprise jsonschema get -o json > schema-current.json
# Later, compare with new version
redisctl enterprise jsonschema get -o json > schema-new.json
diff <(jq -S . schema-current.json) <(jq -S . schema-new.json)
# Find new properties
jq -r '.definitions | keys[]' schema-new.json | sort | \
comm -13 <(jq -r '.definitions | keys[]' schema-current.json | sort) -
Best Practices
- Cache Schema: The schema doesn't change frequently, so cache it locally
- Version Control: Store schema snapshots in version control for tracking changes
- Validation: Use the schema to validate payloads before API calls
- Code Generation: Generate client code from schema for type safety
- Documentation: Keep schema-based documentation up to date
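A simple cache wrapper for the first practice (the cache path and 7-day refresh window are assumptions); the live fetch is commented out so the sketch runs standalone:

```shell
CACHE="${HOME}/.cache/redisctl/schema.json"
mkdir -p "$(dirname "$CACHE")"
# Refresh when the cache is missing, empty, or older than 7 days
if [ ! -s "$CACHE" ] || [ -n "$(find "$CACHE" -mtime +7 2>/dev/null)" ]; then
  # Live: redisctl enterprise jsonschema get -o json > "$CACHE"
  echo '{"definitions": {}}' > "$CACHE"   # placeholder standing in for the live fetch
fi
echo "using schema cache at $CACHE"
```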
Troubleshooting
Schema Retrieval Issues
If schema retrieval fails:
# Check API connectivity
redisctl enterprise cluster get -q 'name'
# Try raw API access
redisctl api enterprise get /v1/jsonschema
# Check with curl
curl -k -u "$REDIS_ENTERPRISE_USER:$REDIS_ENTERPRISE_PASSWORD" \
https://$REDIS_ENTERPRISE_URL/v1/jsonschema
Schema Validation
Validate that the schema is well-formed:
# Check if valid JSON
redisctl enterprise jsonschema get | jq empty && echo "Valid JSON"
# Validate schema structure
redisctl enterprise jsonschema get | jq 'has("definitions") and has("$schema")'
# Check for required sections
redisctl enterprise jsonschema get -q '[has("definitions"), has("properties"), has("paths")] | all'
Related Commands
- redisctl api enterprise - Direct API access for testing
- redisctl enterprise database create - Use schema for creating resources
- redisctl enterprise cluster - Cluster configuration that follows schema
LDAP Integration Commands
Configure and manage LDAP authentication for Redis Enterprise.
Overview
LDAP commands enable integration with Active Directory and other LDAP services for centralized authentication and authorization in Redis Enterprise clusters.
LDAP Configuration Commands
Get LDAP Configuration
redisctl enterprise ldap get
Retrieves the current LDAP configuration for the cluster.
Update LDAP Configuration
redisctl enterprise ldap update --data '{
"server_url": "ldaps://ldap.company.com:636",
"bind_dn": "CN=redis,OU=ServiceAccounts,DC=company,DC=com",
"bind_password": "password",
"user_search_base": "OU=Users,DC=company,DC=com",
"user_search_filter": "(sAMAccountName={0})",
"group_search_base": "OU=Groups,DC=company,DC=com",
"group_search_filter": "(member={0})"
}'
Delete LDAP Configuration
redisctl enterprise ldap delete
Removes LDAP configuration, reverting to local authentication only.
Test LDAP Connection
redisctl enterprise ldap test
Tests the LDAP connection and configuration.
LDAP Mappings Commands
LDAP mappings define how LDAP groups map to Redis Enterprise roles.
List LDAP Mappings
redisctl enterprise ldap-mappings list
Shows all configured LDAP group-to-role mappings.
Get Specific Mapping
redisctl enterprise ldap-mappings get <uid>
Create LDAP Mapping
redisctl enterprise ldap-mappings create --data '{
"ldap_group": "CN=Redis-Admins,OU=Groups,DC=company,DC=com",
"role": "admin",
"email": "redis-admins@company.com"
}'
Maps an LDAP group to a Redis Enterprise role.
Update LDAP Mapping
redisctl enterprise ldap-mappings update <uid> --data '{
"role": "db_admin"
}'
Delete LDAP Mapping
redisctl enterprise ldap-mappings delete <uid>
Common Use Cases
Setting Up Active Directory Integration
# 1. Configure LDAP connection
redisctl enterprise ldap update --data '{
"server_url": "ldaps://dc01.company.com:636",
"bind_dn": "CN=Redis Service,OU=Services,DC=company,DC=com",
"bind_password": "${LDAP_BIND_PASSWORD}",
"user_search_base": "OU=Users,DC=company,DC=com",
"user_search_filter": "(sAMAccountName={0})",
"certificate": "-----BEGIN CERTIFICATE-----..."
}'
# 2. Test the connection
redisctl enterprise ldap test
# 3. Create role mappings
redisctl enterprise ldap-mappings create --data '{
"ldap_group": "CN=Redis-Admins,OU=Groups,DC=company,DC=com",
"role": "admin"
}'
redisctl enterprise ldap-mappings create --data '{
"ldap_group": "CN=Redis-Users,OU=Groups,DC=company,DC=com",
"role": "db_viewer"
}'
Troubleshooting LDAP Authentication
# Check current configuration
redisctl enterprise ldap get
# Test with specific user (requires additional test data)
redisctl enterprise ldap test --data '{
"username": "testuser",
"password": "testpass"
}'
# View all mappings
redisctl enterprise ldap-mappings list -o table
Migrating from Local to LDAP Authentication
# 1. Keep local admin account active
redisctl enterprise user update admin@redis.local --data '{
"auth_method": "local"
}'
# 2. Configure LDAP
redisctl enterprise ldap update --data @ldap-config.json
# 3. Create mappings for existing roles
redisctl enterprise role list -q '[].name' | while read role; do
echo "Map LDAP group for role: $role"
done
# 4. Test LDAP authentication before disabling local auth
redisctl enterprise ldap test
Configuration Examples
Basic Active Directory
{
"server_url": "ldaps://ad.company.com:636",
"bind_dn": "redis-service@company.com",
"bind_password": "password",
"user_search_base": "DC=company,DC=com",
"user_search_filter": "(sAMAccountName={0})",
"group_search_base": "DC=company,DC=com",
"group_search_filter": "(member={0})"
}
OpenLDAP
{
"server_url": "ldap://openldap.company.com:389",
"bind_dn": "cn=admin,dc=company,dc=com",
"bind_password": "password",
"user_search_base": "ou=people,dc=company,dc=com",
"user_search_filter": "(uid={0})",
"group_search_base": "ou=groups,dc=company,dc=com",
"group_search_filter": "(memberUid={0})"
}
Security Considerations
- Always use LDAPS (LDAP over SSL) for production
- Store bind passwords in environment variables or secrets management
- Use service accounts with minimal privileges for bind DN
- Regularly rotate bind account passwords
- Test configuration changes in non-production first
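To keep the bind password out of shell history and config files, build the payload with jq from an environment variable. A sketch (DN and URL values follow the earlier examples; the password here is a placeholder):

```shell
# Assume LDAP_BIND_PASSWORD is set by your secrets manager
LDAP_BIND_PASSWORD='example-only'
CONFIG=$(jq -n --arg pw "$LDAP_BIND_PASSWORD" '{
  server_url: "ldaps://ldap.company.com:636",
  bind_dn: "CN=redis,OU=ServiceAccounts,DC=company,DC=com",
  bind_password: $pw
}')
# Live: redisctl enterprise ldap update --data "$CONFIG"
echo "$CONFIG" | jq -r '.server_url'
```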
Troubleshooting
Connection Issues
# Check network connectivity
nc -zv ldap.company.com 636
# Verify certificate
openssl s_client -connect ldap.company.com:636 -showcerts
# Test with ldapsearch
ldapsearch -H ldaps://ldap.company.com:636 \
-D "CN=redis,OU=Services,DC=company,DC=com" \
-w password \
-b "DC=company,DC=com" \
"(sAMAccountName=testuser)"
Authentication Failures
- Verify bind DN and password
- Check user search base and filter
- Ensure group memberships are correct
- Review Redis Enterprise logs
- Test with the ldap test command
Related Commands
- User Commands - Manage local users
- Role Commands - Configure roles and permissions
- Auth Commands - Authentication settings
License Management Commands
Manage Redis Enterprise licenses with comprehensive tools for compliance monitoring, multi-instance management, and automated workflows.
Overview
The license commands provide powerful capabilities for managing Redis Enterprise licenses:
- View and update license information
- Monitor expiration across multiple instances
- Generate compliance reports
- Bulk license updates across deployments
- Automated monitoring and alerting
Core License Commands
Get License Information
# Get full license details
redisctl enterprise license get
# Get specific fields with JMESPath
redisctl enterprise license get -q 'expiration_date'
redisctl enterprise license get -q '{name: cluster_name, expires: expiration_date}'
Update License
# Update with JSON data
redisctl enterprise license update --data '{
"license": "YOUR_LICENSE_KEY_HERE"
}'
# Update from file
redisctl enterprise license update --data @new-license.json
# Update from stdin
echo '{"license": "..."}' | redisctl enterprise license update --data -
Upload License File
# Upload a license file directly
redisctl enterprise license upload --file /path/to/license.txt
# Supports both raw license text and JSON format
redisctl enterprise license upload --file license.json
Validate License
# Validate license before applying
redisctl enterprise license validate --data @license.json
# Validate from stdin
cat license.txt | redisctl enterprise license validate --data -
Check License Expiration
# Get expiration information
redisctl enterprise license expiry
# Check if expiring soon
redisctl enterprise license expiry -q 'warning'
# Get days remaining
redisctl enterprise license expiry -q 'days_remaining'
View Licensed Features
# List all licensed features
redisctl enterprise license features
# Check specific features
redisctl enterprise license features -q 'flash_enabled'
redisctl enterprise license features -q 'modules'
License Usage Report
# Get current usage vs limits
redisctl enterprise license usage
# Get RAM usage
redisctl enterprise license usage -q 'ram'
# Check shard availability
redisctl enterprise license usage -q 'shards.available'
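The usage numbers feed naturally into a capacity alert. The sample below assumes shards.used and shards.limit fields alongside the documented shards.available, with the live call commented:

```shell
# Live: USAGE=$(redisctl enterprise license usage)
USAGE='{"shards": {"used": 45, "limit": 50, "available": 5}}'
PCT=$(echo "$USAGE" | jq '.shards.used * 100 / .shards.limit')
if [ "$PCT" -ge 80 ]; then
  echo "shard usage at ${PCT}% of license limit"
fi
```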
Multi-Instance License Workflows
License Audit Across All Profiles
# Audit all configured Redis Enterprise instances
redisctl enterprise workflow license audit
# Show only expiring licenses (within 30 days)
redisctl enterprise workflow license audit --expiring
# Show only expired licenses
redisctl enterprise workflow license audit --expired
# Export as JSON for processing
redisctl enterprise workflow license audit -o json > license-audit.json
Bulk License Updates
# Update license across all enterprise profiles
redisctl enterprise workflow license bulk-update \
--profiles all \
--data @new-license.json
# Update specific profiles
redisctl enterprise workflow license bulk-update \
--profiles "prod-east,prod-west,staging" \
--data @new-license.json
# Dry run to see what would be updated
redisctl enterprise workflow license bulk-update \
--profiles all \
--data @new-license.json \
--dry-run
License Compliance Report
# Generate comprehensive compliance report
redisctl enterprise workflow license report
# Export as CSV for spreadsheets
redisctl enterprise workflow license report --format csv > compliance-report.csv
# Generate JSON report for automation
redisctl enterprise workflow license report -o json
License Monitoring
# Monitor all profiles for expiring licenses
redisctl enterprise workflow license monitor
# Custom warning threshold (default 30 days)
redisctl enterprise workflow license monitor --warning-days 60
# Exit with error code if any licenses are expiring (for CI/CD)
redisctl enterprise workflow license monitor --fail-on-warning
Automation Examples
CI/CD License Check
#!/bin/bash
# Check license status in CI/CD pipeline
if ! redisctl enterprise workflow license monitor --warning-days 14 --fail-on-warning; then
echo "ERROR: License issues detected!"
exit 1
fi
License Expiration Script
#!/bin/bash
# Email alert for expiring licenses
AUDIT=$(redisctl enterprise workflow license audit --expiring -o json)
COUNT=$(echo "$AUDIT" | jq 'length')
if [ "$COUNT" -gt 0 ]; then
echo "Warning: $COUNT licenses expiring soon!" | \
mail -s "Redis Enterprise License Alert" admin@company.com
echo "$AUDIT" | jq -r '.[] |
"Profile: \(.profile) - Expires: \(.expiration_date) (\(.days_remaining) days)"'
fi
Monthly Compliance Report
#!/bin/bash
# Generate monthly compliance report
REPORT_DATE=$(date +%Y-%m)
REPORT_FILE="license-compliance-${REPORT_DATE}.csv"
# Generate CSV report
redisctl enterprise workflow license report --format csv > "$REPORT_FILE"
# Email the report
echo "Please find attached the monthly license compliance report." | \
mail -s "Redis License Report - $REPORT_DATE" \
-a "$REPORT_FILE" \
compliance@company.com
Automated License Renewal
#!/bin/bash
# Automatically apply new license when available
LICENSE_FILE="/secure/path/new-license.json"
if [ -f "$LICENSE_FILE" ]; then
# Validate the license first
if redisctl enterprise license validate --data @"$LICENSE_FILE"; then
# Apply to all production instances
redisctl enterprise workflow license bulk-update \
--profiles "prod-east,prod-west" \
--data @"$LICENSE_FILE"
# Archive the applied license
mv "$LICENSE_FILE" "/secure/path/applied/$(date +%Y%m%d)-license.json"
else
echo "ERROR: Invalid license file!"
exit 1
fi
fi
Profile Management for Multi-Instance
Set Up Multiple Profiles
# Add production profiles
redisctl profile set prod-east \
--deployment-type enterprise \
--url https://redis-east.company.com:9443 \
--username admin@redis.local \
--password $REDIS_PASS_EAST
redisctl profile set prod-west \
--deployment-type enterprise \
--url https://redis-west.company.com:9443 \
--username admin@redis.local \
--password $REDIS_PASS_WEST
# Add staging profile
redisctl profile set staging \
--deployment-type enterprise \
--url https://redis-staging.company.com:9443 \
--username admin@redis.local \
--password $REDIS_PASS_STAGING
Check License Per Profile
# Check specific profile
redisctl -p prod-east enterprise license expiry
redisctl -p prod-west enterprise license usage
redisctl -p staging enterprise license features
Common Use Cases
Pre-Renewal Planning
# Get usage across all instances for capacity planning
for profile in $(redisctl profile list -q '[].name'); do
echo "=== Profile: $profile ==="
redisctl -p "$profile" enterprise license usage -o yaml
done
License Synchronization
# Ensure all instances have the same license
MASTER_LICENSE=$(redisctl -p prod-east enterprise license get -o json)
echo "$MASTER_LICENSE" | \
redisctl enterprise workflow license bulk-update \
--profiles "prod-west,staging,dev" \
--data -
Compliance Dashboard Data
# Generate JSON data for dashboard
{
echo '{"timestamp": "'$(date -Iseconds)'",'
echo '"instances": '
redisctl enterprise workflow license audit -o json
echo '}'
} > dashboard-data.json
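Hand-assembling JSON with echo is fragile (a missing comma or quote breaks the document); jq can build the same structure safely. A sketch, using a literal stand-in for the audit output:

```shell
# Build the dashboard document with jq instead of string concatenation
AUDIT='[{"profile":"prod-east","days_remaining":45}]'   # stand-in for the real audit output
echo "$AUDIT" | jq --arg ts "$(date -Iseconds)" \
  '{timestamp: $ts, instances: .}' > dashboard-data.json
jq -r '.instances[0].profile' dashboard-data.json
```

In the real pipeline, pipe `redisctl enterprise workflow license audit -o json` in place of the stand-in variable.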
Output Formats
All commands support multiple output formats:
# JSON output (default)
redisctl enterprise license get -o json
# YAML output
redisctl enterprise license get -o yaml
# Table output
redisctl enterprise license get -o table
JMESPath Filtering
Use JMESPath queries to extract specific information:
# Get expiration dates for all profiles
redisctl enterprise workflow license audit -q '[].{profile: profile, expires: expiration_date}'
# Filter only expiring licenses
redisctl enterprise workflow license audit -q "[?expiring_soon==`true`]"
# Get usage percentages
redisctl enterprise license usage -q '{
ram_used_pct: (ram.used_gb / ram.limit_gb * `100`),
shards_used_pct: (shards.used / shards.limit * `100`)
}'
Troubleshooting
Common Issues
License validation fails
# Check license format
redisctl enterprise license validate --data @license.json
Bulk update fails for some profiles
# Use dry-run to identify issues
redisctl enterprise workflow license bulk-update \
  --profiles all \
  --data @license.json \
  --dry-run
Monitoring shows unexpected results
# Verify profile configurations
redisctl profile list
# Test connection to each profile
for p in $(redisctl profile list -q '[].name'); do
  echo "Testing $p..."
  redisctl -p "$p" enterprise cluster get -q 'name' || echo "Failed: $p"
done
Notes
- License files can be in JSON format or raw license text
- Workflow commands operate on all configured enterprise profiles
- Use --dry-run for bulk operations to preview changes
- Monitor commands can integrate with CI/CD pipelines using exit codes
- CSV export format is ideal for spreadsheet analysis and reporting
- All sensitive license data should be handled securely
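The last note can be enforced at the filesystem level; a minimal sketch (the path is illustrative):

```shell
# Keep license files readable only by their owner before applying them
LICENSE_FILE=license.json            # illustrative path
touch "$LICENSE_FILE"                # stand-in; in practice the file already exists
chmod 600 "$LICENSE_FILE"
ls -l "$LICENSE_FILE"                # should show -rw-------
```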
OCSP Certificate Validation Commands
Manage Online Certificate Status Protocol (OCSP) for certificate validation in Redis Enterprise.
Overview
OCSP commands configure and manage certificate revocation checking for enhanced security in Redis Enterprise clusters. OCSP provides real-time certificate validation without requiring Certificate Revocation Lists (CRLs).
Available Commands
Get OCSP Configuration
redisctl enterprise ocsp get
Retrieves current OCSP settings including:
- OCSP functionality status (enabled/disabled)
- Responder URL
- Query frequency
- Recovery settings
- Response timeout
Update OCSP Configuration
redisctl enterprise ocsp update --data '{
"ocsp_functionality": true,
"responder_url": "http://ocsp.company.com",
"query_frequency": 3600,
"response_timeout": 5,
"recovery_frequency": 60,
"recovery_max_tries": 5
}'
Get OCSP Status
redisctl enterprise ocsp status
Shows the current operational status of OCSP validation.
Test OCSP Validation
redisctl enterprise ocsp test
Tests OCSP configuration and certificate validation.
Enable OCSP
redisctl enterprise ocsp enable
Quick command to enable OCSP validation with current settings.
Disable OCSP
redisctl enterprise ocsp disable
Quick command to disable OCSP validation.
Configuration Examples
Basic OCSP Setup
{
"ocsp_functionality": true,
"responder_url": "http://ocsp.digicert.com",
"query_frequency": 3600,
"response_timeout": 5
}
High-Security Configuration
{
"ocsp_functionality": true,
"responder_url": "https://ocsp.internal.company.com",
"query_frequency": 900,
"response_timeout": 3,
"recovery_frequency": 30,
"recovery_max_tries": 10,
"require_ocsp_response": true,
"cache_response": true,
"cache_duration": 3600
}
Common Use Cases
Initial OCSP Setup
# 1. Check current configuration
redisctl enterprise ocsp get
# 2. Configure OCSP responder
redisctl enterprise ocsp update --data '{
"responder_url": "http://ocsp.company.com",
"query_frequency": 3600
}'
# 3. Test configuration
redisctl enterprise ocsp test
# 4. Enable OCSP
redisctl enterprise ocsp enable
# 5. Verify status
redisctl enterprise ocsp status
Troubleshooting Certificate Issues
# Check if OCSP is causing connection issues
redisctl enterprise ocsp status
# Temporarily disable for testing
redisctl enterprise ocsp disable
# Test certificates manually
openssl ocsp -issuer issuer.crt \
-cert server.crt \
-url http://ocsp.company.com \
-resp_text
# Re-enable after fixing
redisctl enterprise ocsp enable
Monitoring OCSP Health
#!/bin/bash
# Monitor OCSP status and alert on failures
while true; do
STATUS=$(redisctl enterprise ocsp status -q 'validation_status')
if [[ "$STATUS" != "healthy" ]]; then
echo "OCSP validation unhealthy: $STATUS"
# Send alert
fi
sleep 300
done
Configuration Parameters
Parameter | Description | Default |
---|---|---|
ocsp_functionality | Enable/disable OCSP | false |
responder_url | OCSP responder endpoint | "" |
query_frequency | Seconds between OCSP queries | 3600 |
response_timeout | Timeout for OCSP responses | 1 |
recovery_frequency | Retry interval on failure | 60 |
recovery_max_tries | Maximum retry attempts | 5 |
Security Best Practices
Use HTTPS for OCSP Responder
redisctl enterprise ocsp update --data '{"responder_url": "https://ocsp.company.com"}'
Configure Appropriate Timeouts
- Balance between security and availability
- Consider network latency to responder
Monitor OCSP Health
- Set up alerts for OCSP failures
- Track response times and success rates
Test Before Production
- Verify responder connectivity
- Test with actual certificates
- Check failover behavior
Troubleshooting
OCSP Responder Unreachable
# Check network connectivity
curl -I http://ocsp.company.com
# Verify DNS resolution
nslookup ocsp.company.com
# Test with OpenSSL
openssl ocsp -url http://ocsp.company.com -timeout 5
Certificate Validation Failures
# Get detailed status
redisctl enterprise ocsp status -o json
# Check logs for OCSP errors
redisctl enterprise logs list --filter "OCSP"
# Test specific certificate
redisctl enterprise ocsp test --data '{
"certificate": "-----BEGIN CERTIFICATE-----..."
}'
Performance Impact
# Monitor query times
redisctl enterprise ocsp status -q 'average_response_time'
# Adjust query frequency if needed
redisctl enterprise ocsp update --data '{
"query_frequency": 7200
}'
Output Examples
Configuration Output
{
"ocsp_functionality": true,
"responder_url": "http://ocsp.company.com",
"query_frequency": 3600,
"response_timeout": 5,
"recovery_frequency": 60,
"recovery_max_tries": 5,
"last_check": "2025-09-15T10:30:00Z",
"next_check": "2025-09-15T11:30:00Z"
}
Status Output
{
"enabled": true,
"validation_status": "healthy",
"certificates_checked": 12,
"certificates_valid": 12,
"certificates_revoked": 0,
"last_success": "2025-09-15T10:30:00Z",
"failures_count": 0
}
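The status document above is easy to check in scripts with jq; a sketch, using a literal stand-in for the command output:

```shell
# Flag revoked certificates from the status shape shown above
STATUS='{"validation_status":"healthy","certificates_checked":12,"certificates_revoked":0}'
REVOKED=$(echo "$STATUS" | jq '.certificates_revoked')
if [ "$REVOKED" -gt 0 ]; then
  echo "OCSP: $REVOKED revoked certificate(s)!"
else
  echo "OCSP: all $(echo "$STATUS" | jq '.certificates_checked') certificates valid"
fi
```

In practice, replace the stand-in with `redisctl enterprise ocsp status -o json`.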
Related Commands
- Cluster Commands - Cluster security settings
- Auth Commands - Authentication configuration
- Certificate Commands - Certificate management
Service Management Commands
Configure and manage internal Redis Enterprise services.
Overview
Service commands provide control over Redis Enterprise's internal services including the cluster manager, proxy, statistics collector, and other system components.
Available Commands
List Services
redisctl enterprise services list
Shows all available services and their current status.
Get Service Configuration
redisctl enterprise services get <service_name>
Retrieves configuration for a specific service.
Update Service Configuration
redisctl enterprise services update <service_name> --data '{
"enabled": true,
"port": 8080,
"log_level": "info"
}'
Restart Service
redisctl enterprise services restart <service_name>
Restarts a specific service across the cluster.
Get Service Status
redisctl enterprise services status <service_name>
Shows detailed status information for a service.
Enable Service
redisctl enterprise services enable <service_name>
Enables a previously disabled service.
Disable Service
redisctl enterprise services disable <service_name>
Disables a service (use with caution).
Common Services
Service | Description | Critical |
---|---|---|
cm_server | Cluster Manager Server | Yes |
crdb_coordinator | Active-Active Coordinator | For CRDB |
crdb_worker | Active-Active Worker | For CRDB |
mdns_server | Multicast DNS Server | No |
pdns_server | PowerDNS Server | Yes |
saslauthd | SASL Authentication | For LDAP |
stats_archiver | Statistics Archiver | No |
cnm_http | Cluster Node Manager | Yes |
cnm_https | Secure CNM | Yes |
Common Use Cases
Checking Service Health
# List all services with status
redisctl enterprise services list -o table
# Check specific critical service
redisctl enterprise services status cm_server
# Get services in JSON for monitoring
redisctl enterprise services list -o json | jq '.[] | select(.status != "running")'
Troubleshooting Service Issues
# 1. Check service status
redisctl enterprise services status pdns_server
# 2. Review service configuration
redisctl enterprise services get pdns_server
# 3. Restart if needed
redisctl enterprise services restart pdns_server
# 4. Verify after restart
sleep 10
redisctl enterprise services status pdns_server
Managing Statistics Collection
# Check stats archiver
redisctl enterprise services get stats_archiver
# Adjust retention settings
redisctl enterprise services update stats_archiver --data '{
"retention_days": 30,
"collection_interval": 60
}'
# Restart to apply changes
redisctl enterprise services restart stats_archiver
LDAP Service Management
# Enable SASL for LDAP authentication
redisctl enterprise services enable saslauthd
# Configure SASL service
redisctl enterprise services update saslauthd --data '{
"mechanisms": ["ldap"],
"ldap_servers": "ldap://ldap.company.com",
"ldap_search_base": "dc=company,dc=com"
}'
# Restart SASL service
redisctl enterprise services restart saslauthd
Service Configuration Examples
Cluster Manager Configuration
{
"enabled": true,
"port": 9443,
"bind_address": "0.0.0.0",
"log_level": "info",
"max_connections": 1000,
"timeout": 30
}
DNS Service Configuration
{
"enabled": true,
"port": 53,
"cache_size": 10000,
"negative_ttl": 60,
"query_timeout": 2,
"recursion": false
}
Monitoring Scripts
Service Health Check
#!/bin/bash
# Monitor critical services
CRITICAL_SERVICES="cm_server pdns_server cnm_https"
for service in $CRITICAL_SERVICES; do
STATUS=$(redisctl enterprise services status $service -q 'status')
if [[ "$STATUS" != "running" ]]; then
echo "ALERT: Service $service is $STATUS"
# Send notification
fi
done
Service Performance Monitoring
# Track service resource usage
redisctl enterprise services list -o json | jq -r '.[] |
"\(.name): CPU=\(.cpu_usage)% MEM=\(.memory_mb)MB"'
Safety Considerations
Critical Services
Never disable these services:
- cm_server - Cluster manager
- cnm_http/https - Node management
- pdns_server - DNS resolution
Pre-Restart Checks
# Before restarting a service
# 1. Check cluster health
redisctl enterprise cluster status
# 2. Verify no ongoing operations
redisctl enterprise action list
# 3. Consider maintenance window
echo "Current load:"
redisctl enterprise stats cluster -q 'operations_per_second'
Service Dependencies
Some services depend on others:
- saslauthd requires LDAP configuration
- crdb_* services require Active-Active setup
- stats_archiver requires sufficient disk space
Troubleshooting
Service Won't Start
# Check logs
redisctl enterprise logs list --filter "service_name=$SERVICE"
# Verify configuration
redisctl enterprise services get $SERVICE
# Check system resources
df -h # Disk space
free -m # Memory
Service Consuming High Resources
# Get detailed status
redisctl enterprise services status $SERVICE -o json
# Check configuration limits
redisctl enterprise services get $SERVICE -q 'resource_limits'
# Adjust if needed
redisctl enterprise services update $SERVICE --data '{
"max_memory": "2G",
"max_cpu": 2
}'
Output Examples
Service List Output
[
{
"name": "cm_server",
"status": "running",
"enabled": true,
"pid": 1234,
"uptime": "7d 2h 15m",
"cpu_usage": 2.5,
"memory_mb": 512
},
{
"name": "pdns_server",
"status": "running",
"enabled": true,
"pid": 1235,
"uptime": "7d 2h 15m",
"cpu_usage": 0.5,
"memory_mb": 128
}
]
Service Status Output
{
"name": "cm_server",
"status": "running",
"enabled": true,
"configuration": {
"port": 9443,
"log_level": "info"
},
"statistics": {
"requests_processed": 1000000,
"errors": 0,
"average_response_ms": 50
},
"health": {
"status": "healthy",
"last_check": "2025-09-15T10:30:00Z"
}
}
Related Commands
- Cluster Commands - Cluster-wide operations
- Node Commands - Node-specific management
- Logs Commands - Service log viewing
DNS Suffix Management
The suffix commands allow you to manage DNS suffixes for Redis Enterprise database endpoints. DNS suffixes provide custom domain names for database endpoints, useful for multi-tenant deployments and organizing databases by environment or purpose.
Available Commands
List DNS Suffixes
List all configured DNS suffixes in the cluster:
# List all DNS suffixes
redisctl enterprise suffix list
# List suffixes as a table
redisctl enterprise suffix list -o table
# Filter to active suffixes
redisctl enterprise suffix list -q "[?status == 'active']"
# Get suffix names only
redisctl enterprise suffix list -q "[].name"
Get DNS Suffix Details
Get detailed information about a specific DNS suffix:
# Get suffix details
redisctl enterprise suffix get example.redis.local
# Get suffix in YAML format
redisctl enterprise suffix get example.redis.local -o yaml
# Extract specific fields
redisctl enterprise suffix get example.redis.local -q '{name: name, dns_servers: dns_servers}'
# Check if suffix is in use
redisctl enterprise suffix get example.redis.local -q 'in_use'
Output Examples
DNS Suffix List
[
{
"name": "prod.redis.local",
"status": "active",
"dns_servers": ["10.0.1.53", "10.0.2.53"],
"databases": 5,
"created": "2024-01-15T10:30:00Z"
},
{
"name": "dev.redis.local",
"status": "active",
"dns_servers": ["10.0.3.53"],
"databases": 12,
"created": "2024-02-20T14:15:00Z"
}
]
DNS Suffix Details
{
"name": "prod.redis.local",
"status": "active",
"dns_servers": ["10.0.1.53", "10.0.2.53"],
"dns_zone": "redis.local",
"ttl": 60,
"databases": [
{
"bdb_uid": 1,
"name": "cache-db",
"endpoint": "cache-db.prod.redis.local:16379"
},
{
"bdb_uid": 2,
"name": "session-db",
"endpoint": "session-db.prod.redis.local:16380"
}
],
"in_use": true,
"created": "2024-01-15T10:30:00Z",
"modified": "2024-03-01T09:45:00Z"
}
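The endpoint list in the detail document nests one level down; jq reaches it directly. A sketch, with a literal stand-in for the command output:

```shell
# List database endpoints from the suffix detail shown above
DETAIL='{"databases":[{"endpoint":"cache-db.prod.redis.local:16379"},
                      {"endpoint":"session-db.prod.redis.local:16380"}]}'
echo "$DETAIL" | jq -r '.databases[].endpoint'
```

In practice, pipe `redisctl enterprise suffix get <suffix_name> -o json` in place of the stand-in.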
Common Use Cases
Environment-Based Suffixes
Organize databases by environment using DNS suffixes:
# List production suffixes
redisctl enterprise suffix list -q "[?contains(name, 'prod')]"
# List development suffixes
redisctl enterprise suffix list -q "[?contains(name, 'dev')]"
# Check staging suffix configuration
redisctl enterprise suffix get staging.redis.local
Multi-Tenant Deployments
Manage suffixes for multi-tenant scenarios:
# List suffixes by tenant
redisctl enterprise suffix list -q "[?contains(name, 'tenant')]" -o table
# Get tenant-specific suffix
redisctl enterprise suffix get tenant-a.redis.local
# Count databases per suffix
redisctl enterprise suffix list -q "[].{suffix: name, database_count: databases}"
DNS Configuration Verification
Verify DNS suffix configurations:
# Check DNS servers for all suffixes
redisctl enterprise suffix list -q "[].{name: name, servers: dns_servers}"
# Find suffixes with specific DNS server
redisctl enterprise suffix list -q "[?contains(dns_servers, '10.0.1.53')]"
# Verify TTL settings
redisctl enterprise suffix list -q "[].{name: name, ttl: ttl}" -o table
Integration Examples
Database Creation with Suffix
When creating databases, specify the DNS suffix:
# Create database with specific suffix
cat <<EOF | redisctl enterprise database create --data -
{
"name": "app-cache",
"memory_size": 1073741824,
"dns_suffix_name": "prod.redis.local"
}
EOF
# Verify database endpoint
redisctl enterprise database get <bdb_uid> -q 'endpoint'
Monitoring Suffix Usage
Monitor DNS suffix utilization:
# Check suffix usage
for suffix in $(redisctl enterprise suffix list -q "[].name" -o json | jq -r '.[]'); do
echo "Suffix: $suffix"
redisctl enterprise suffix get "$suffix" -q 'databases' | jq length
done
# Find unused suffixes
redisctl enterprise suffix list -q "[?databases == \`0\`].name"
# Get suffix with most databases
redisctl enterprise suffix list -q "max_by(@, &databases).{name: name, count: databases}"
DNS Server Management
Manage DNS server configurations:
# List all unique DNS servers
redisctl enterprise suffix list -q "[].dns_servers[]" | jq -s 'unique'
# Find suffixes by DNS server count
redisctl enterprise suffix list -q "[?length(dns_servers) > \`1\`]"
# Check DNS server availability
for server in $(redisctl enterprise suffix list -q "[].dns_servers[]" | jq -r 'unique[]'); do
echo "Checking DNS server: $server"
dig @$server test.redis.local +short
done
Best Practices
- Naming Convention: Use consistent naming patterns for suffixes (e.g., <environment>.<domain>)
- DNS Server Redundancy: Configure multiple DNS servers for high availability
- TTL Settings: Set appropriate TTL values based on your DNS infrastructure
- Environment Separation: Use different suffixes for different environments
- Documentation: Maintain documentation of suffix assignments and purposes
Troubleshooting
Suffix Not Resolving
If DNS suffixes are not resolving:
# Check suffix configuration
redisctl enterprise suffix get <suffix_name>
# Verify DNS servers
redisctl enterprise suffix get <suffix_name> -q 'dns_servers'
# Check database endpoints using the suffix
redisctl enterprise database list -q "[?dns_suffix_name == '<suffix_name>']"
# Test DNS resolution
dig @<dns_server> <database>.<suffix_name>
Database Endpoint Issues
When databases aren't accessible via suffix:
# Check database suffix assignment
redisctl enterprise database get <bdb_uid> -q 'dns_suffix_name'
# Verify suffix is active
redisctl enterprise suffix get <suffix_name> -q 'status'
# List all endpoints for suffix
redisctl enterprise suffix get <suffix_name> -q 'databases[].endpoint'
Related Commands
- redisctl enterprise database - Create and manage databases with DNS suffixes
- redisctl enterprise cluster - View cluster-wide DNS configuration
- redisctl enterprise endpoint - Monitor endpoint availability and statistics
Enterprise Workflows
Workflows are multi-step operations that automate complex Redis Enterprise management tasks. They combine multiple API calls, handle asynchronous operations, and provide progress feedback.
Available Workflows
List Workflows
# List all available workflows
redisctl enterprise workflow list
# JSON output for scripting
redisctl enterprise workflow list --output json
Initialize Cluster
The init-cluster
workflow automates the complete setup of a new Redis Enterprise cluster, including bootstrapping and optional database creation.
# Initialize with default settings
redisctl enterprise workflow init-cluster \
--username "admin@cluster.local" \
--password "YourSecurePassword"
# Initialize with custom cluster name and database
redisctl enterprise workflow init-cluster \
--name "production-cluster" \
--username "admin@redis.local" \
--password "YourSecurePassword" \
--database-name "my-database" \
--database-memory-gb 2
# Skip database creation
redisctl enterprise workflow init-cluster \
--username "admin@cluster.local" \
--password "YourSecurePassword" \
--skip-database
Parameters
Parameter | Description | Default |
---|---|---|
--name | Cluster name | redis-cluster |
--username | Admin username | admin@redis.local |
--password | Admin password (required) | - |
--skip-database | Skip creating default database | false |
--database-name | Name for default database | default-db |
--database-memory-gb | Memory size in GB for database | 1 |
--wait | Wait for operations to complete | true |
--wait-timeout | Maximum wait time in seconds | 600 |
What it does
- Checks cluster status - Verifies if cluster needs initialization
- Bootstraps cluster - Creates cluster with specified name and credentials
- Waits for stabilization - Ensures cluster is ready for operations
- Creates database (optional) - Sets up initial database with specified configuration
- Verifies connectivity - Tests database with PING command
Output Formats
Workflows support structured output for automation:
# JSON output
redisctl enterprise workflow init-cluster \
--username "admin@cluster.local" \
--password "Redis123" \
--output json
# YAML output
redisctl enterprise workflow init-cluster \
--username "admin@cluster.local" \
--password "Redis123" \
--output yaml
Example JSON output:
{
"success": true,
"message": "Cluster initialized successfully",
"outputs": {
"cluster_name": "redis-cluster",
"username": "admin@cluster.local",
"database_created": true,
"database_name": "default-db"
}
}
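In automation, the success field of that result is the natural gate; a sketch against a literal copy of the output above:

```shell
# Gate a pipeline on the workflow result
RESULT='{"success":true,"outputs":{"database_name":"default-db"}}'
if [ "$(echo "$RESULT" | jq -r '.success')" = "true" ]; then
  echo "cluster ready, database: $(echo "$RESULT" | jq -r '.outputs.database_name')"
else
  exit 1
fi
```

In practice, capture `redisctl enterprise workflow init-cluster ... --output json` into RESULT instead.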
Docker Development
For testing workflows with Docker:
# Start Redis Enterprise container
docker compose up -d
# Wait for container to be ready
sleep 10
# Initialize cluster
redisctl enterprise workflow init-cluster \
--username "admin@cluster.local" \
--password "Redis123"
# Clean up
docker compose down -v
Environment Variables
Workflows respect standard environment variables:
export REDIS_ENTERPRISE_URL="https://localhost:9443"
export REDIS_ENTERPRISE_INSECURE="true"
# Password can be set via environment
export REDIS_ENTERPRISE_INIT_PASSWORD="Redis123"
redisctl enterprise workflow init-cluster \
--username "admin@cluster.local"
Error Handling
Workflows provide clear error messages and maintain partial progress:
- If cluster is already initialized, workflow reports success without re-bootstrapping
- If database creation fails, cluster remains initialized and can be managed manually
- Network failures include retry logic with configurable timeouts
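For transient failures, a thin retry wrapper around any redisctl call is often enough; a sketch (the retry count and delay are arbitrary choices):

```shell
# Retry a command up to 3 times with a short delay between attempts
retry() {
  local n=0 max=3
  until "$@"; do
    n=$((n + 1))
    [ "$n" -ge "$max" ] && return 1
    sleep 1
  done
}
# Usage: retry redisctl enterprise workflow init-cluster --username ... --password ...
retry true && echo "succeeded"
```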
Future Workflows
Additional workflows are planned for common operations:
- upgrade-cluster - Orchestrate cluster version upgrades
- backup-restore - Automated backup and restore operations
- migrate-database - Database migration between clusters
- security-hardening - Apply security best practices
See the Workflows Feature Guide for architectural details and information about creating custom workflows.
Raw API Access
Direct access to any Redis Enterprise REST API endpoint.
Basic Usage
redisctl api enterprise <method> <path> [options]
Methods: get, post, put, patch, delete
Examples
GET Requests
# Get cluster information
redisctl api enterprise get /v1/cluster
# Get all databases
redisctl api enterprise get /v1/bdbs
# Get specific database
redisctl api enterprise get /v1/bdbs/1
# Get all nodes
redisctl api enterprise get /v1/nodes
# Get node statistics
redisctl api enterprise get /v1/nodes/1/stats
# Get with query parameters
redisctl api enterprise get "/v1/bdbs?fields=name,port,status"
POST Requests
# Create database (with JSON file)
redisctl api enterprise post /v1/bdbs \
--data @database.json
# Create database (with inline JSON)
redisctl api enterprise post /v1/bdbs \
--data '{
"name": "my-database",
"memory_size": 10737418240,
"port": 12000,
"replication": true
}'
# Bootstrap cluster
redisctl api enterprise post /v1/bootstrap \
--data '{
"action": "create_cluster",
"cluster": {
"name": "my-cluster"
},
"credentials": {
"username": "admin@cluster.local",
"password": "admin-password"
}
}'
PUT Requests
# Update database configuration
redisctl api enterprise put /v1/bdbs/1 \
--data '{"memory_size": 21474836480}'
# Update cluster settings
redisctl api enterprise put /v1/cluster \
--data '{"name": "Production Cluster"}'
DELETE Requests
# Delete database
redisctl api enterprise delete /v1/bdbs/1
# Remove node from cluster
redisctl api enterprise delete /v1/nodes/3
Common Endpoints
Cluster Management
- /v1/cluster - Cluster information and settings
- /v1/bootstrap - Bootstrap operations
- /v1/license - License management
- /v1/ocsp - OCSP configuration
- /v1/cm_settings - Cluster manager settings
Database Operations (BDB)
- /v1/bdbs - Database list and creation
- /v1/bdbs/{id} - Database details and management
- /v1/bdbs/{id}/actions - Database actions (flush, restart)
- /v1/bdbs/{id}/stats - Database statistics
Node Management
- /v1/nodes - Node list
- /v1/nodes/{id} - Node details
- /v1/nodes/{id}/actions - Node actions
- /v1/nodes/{id}/stats - Node statistics
User & Access Control
- /v1/users - User management
- /v1/roles - Role definitions
- /v1/acl_roles - Redis ACL roles
- /v1/ldap_mappings - LDAP integration
Sharding & Replication
- /v1/shards - Shard management
- /v1/shards/{id}/actions - Shard operations
- /v1/bdbs/{id}/endpoints - Database endpoints
Active-Active (CRDB)
- /v1/crdbs - CRDB list and creation
- /v1/crdbs/{id} - CRDB management
- /v1/crdbs/{id}/participating_clusters - Participating clusters
Monitoring & Alerts
- /v1/bdbs/{id}/alerts - Database alerts
- /v1/nodes/{id}/alerts - Node alerts
- /v1/cluster/alerts - Cluster alerts
- /v1/logs - System logs
Modules
- /v1/modules - Module management
- /v1/modules/{id} - Module details
Working with Certificates
For self-signed certificates:
# Allow insecure connections
export REDIS_ENTERPRISE_INSECURE=true
# Or use system certificate store
redisctl api enterprise get /v1/cluster \
--ca-cert /path/to/ca.crt
Pagination
Many endpoints support pagination:
# Get databases with pagination
redisctl api enterprise get "/v1/bdbs?offset=0&limit=10"
# Get next page
redisctl api enterprise get "/v1/bdbs?offset=10&limit=10"
Filtering Results
# Get only specific fields
redisctl api enterprise get "/v1/bdbs?fields=name,port,status"
# Filter with JMESPath after retrieval
redisctl api enterprise get /v1/bdbs \
-q "[?status=='active'].{name:name,port:port}"
Async Operations
Some operations return task IDs:
# Create database (returns task)
TASK_ID=$(redisctl api enterprise post /v1/bdbs \
--data @database.json \
-q "task_id")
# Check task status
redisctl api enterprise get /v1/tasks/$TASK_ID
# Wait for completion
while [ "$(redisctl api enterprise get /v1/tasks/$TASK_ID -q status)" != "completed" ]; do
sleep 2
done
Error Handling
# Check HTTP status
if ! redisctl api enterprise get /v1/bdbs/999; then
echo "Database not found"
fi
# Get detailed error information
redisctl api enterprise get /v1/bdbs/999 --verbose
Tips
- Use /v1/swagger to get API documentation
- Most IDs in Enterprise are integers (unlike Cloud's UUIDs)
- Memory sizes are in bytes
- Use --insecure for development clusters with self-signed certs
- Check the Redis Enterprise API docs for detailed endpoint information
Async Operations
The redisctl
CLI provides comprehensive support for asynchronous operations across both Redis Cloud and Redis Enterprise APIs. All create, update, and delete operations support the --wait
flag family for tracking long-running operations.
Overview
Many Redis Cloud API operations are asynchronous, returning immediately with a task ID while the operation continues in the background. The --wait
flags allow you to:
- Wait for operations to complete before returning
- Track progress with visual indicators
- Set custom timeouts for long operations
- Configure polling intervals
Wait Flag Options
Flag | Description | Default |
---|---|---|
--wait | Wait for operation to complete | Timeout: 600s |
--wait-timeout <seconds> | Custom timeout duration | 600 |
--wait-interval <seconds> | Polling interval | 10 |
Basic Usage
# Create database and wait for completion
redisctl cloud database create --subscription-id 12345 \
--data @database.json --wait
# With custom timeout for large operations
redisctl cloud database create --subscription-id 12345 \
--data @large-db.json --wait --wait-timeout 1800
# With faster polling for quick operations
redisctl cloud database update --subscription-id 12345 \
--database-id 67890 --data @updates.json \
--wait --wait-interval 2
Progress Tracking
When using the --wait
flag, redisctl provides real-time progress tracking:
Creating database...
⠋ Waiting for task 12345 to complete... (10s)
⠙ Status: processing (20s)
⠹ Status: processing (30s)
✓ Database creation completed successfully
Supported Operations
Async operations are supported across all major command categories:
- Database Operations - Create, update, delete, import, backup, migrate
- Subscription Management - Regular and fixed subscriptions
- Network Connectivity - VPC Peering, PSC, Transit Gateway
- ACL Management - Rules, roles, and users
- User & Account Management - Users and provider accounts
Error Handling
Timeout Behavior
If an operation exceeds the timeout:
- The CLI exits with an error
- The task continues running in the background
- You can check status using the task ID
# Operation times out
Error: Operation timed out after 600 seconds. Task 12345 is still running.
# Check task status manually
redisctl cloud task get 12345
Recovery Options
# Retry with longer timeout
redisctl cloud database create --data @database.json \
--wait --wait-timeout 1800
# Check task status without waiting
redisctl cloud task list --status pending
Best Practices
Choosing Timeouts
- Small operations: Default 600s is usually sufficient
- Large databases: Increase to 1800s (30 min) or more
- Bulk operations: Consider 3600s (1 hour) for very large datasets
- Network operations: May need longer timeouts in some regions
Polling Intervals
- Default (10s): Good balance for most operations
- Fast operations (2-5s): For operations you expect to complete quickly
- Long operations (30-60s): Reduce API calls for very long operations
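The tradeoff is simple arithmetic: timeout divided by interval gives the number of status calls a waiting operation can make. A quick sketch:

```shell
# Number of polls for a few timeout/interval combinations
for pair in "600 10" "1800 30" "3600 60"; do
  set -- $pair
  echo "timeout ${1}s, interval ${2}s -> $(( $1 / $2 )) polls"
done
```

Longer intervals keep the API call count flat even as timeouts grow.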
Automation
The --wait
flags are designed for automation:
#!/bin/bash
# CI/CD pipeline example
set -e # Exit on error
# Create infrastructure
redisctl cloud subscription create --data @prod-sub.json \
--wait --wait-timeout 1800
SUB_ID=$(redisctl cloud subscription list -q "[0].id" -o json)
redisctl cloud database create --subscription-id $SUB_ID \
--data @prod-db.json --wait --wait-timeout 900
echo "Infrastructure ready!"
Parallel Operations
You can run multiple async operations in parallel:
#!/bin/bash
# Create multiple databases in parallel
for i in {1..5}; do
redisctl cloud database create --subscription-id 12345 \
--data @db-$i.json --wait &
done
# Wait for all background jobs
wait
echo "All databases created!"
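A plain `wait` discards the individual exit codes, so the script above reports success even if some jobs failed. To detect failures, collect the PIDs and wait on each one; a sketch (the `create_db` helper is a stand-in for the command above):

```shell
# Track each background job so individual failures are detected
create_db() {
  redisctl cloud database create --subscription-id 12345 \
    --data "@db-$1.json" --wait
}

pids=""
for i in 1 2 3 4 5; do
  create_db "$i" &
  pids="$pids $!"
done

failed=0
for pid in $pids; do
  wait "$pid" || failed=$((failed + 1))
done
echo "$failed of 5 jobs failed"
```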
Implementation Details
All async operations use the centralized `handle_async_response` function, which:
- Extracts task IDs from API responses
- Polls for task completion
- Provides consistent progress indicators
- Handles timeouts and errors uniformly
The system automatically detects task IDs from various response formats:
- `taskId` field in the response
- `links` array with task references
- Nested task objects
Output Formats
redisctl supports multiple output formats to suit different use cases.
Available Formats
Auto (Default)
Automatically selects the best format based on context:
# Auto-detects format
redisctl cloud database list
JSON
Machine-readable JSON output:
redisctl cloud database list -o json
YAML
Human-readable structured format:
redisctl cloud database list -o yaml
Table
Formatted tables for human consumption:
redisctl cloud database list -o table
JMESPath Filtering
Use the `-q` flag for powerful JMESPath queries:
# Get all database names
redisctl cloud database list -q "[].name"
# Filter by status
redisctl cloud database list -q "[?status=='active']"
# Custom projections
redisctl cloud database list -q "[].{name: name, memory: planMemoryLimit}"
Advanced Queries
# Sort by memory
redisctl cloud database list -q "sort_by(@, &planMemoryLimit)"
# Find databases with specific modules
redisctl cloud database list -q "[?modules[?name=='RediSearch']]"
# Complex filtering (memory > 250MB)
redisctl cloud database list -q "[?planMemoryLimit > `250`].{name: name, region: region, memory: planMemoryLimit}"
Working with Other Tools
jq Integration
# Filter with jq
redisctl cloud database list -o json | jq '.[] | select(.name | contains("prod"))'
# Extract IDs
redisctl cloud database list -o json | jq -r '.[].databaseId'
yq for YAML
redisctl cloud database list -o yaml | yq '.[] | select(.status == "active")'
Scripting Examples
Batch Operations
# Get all database IDs
IDS=($(redisctl cloud database list -q "[].databaseId" -o json | jq -r '.[]'))
# Process each database
for ID in "${IDS[@]}"; do
redisctl cloud database get $ID
done
Output Redirection
# Save to file
redisctl cloud database list -o json > databases.json
# Append to log
redisctl cloud database list >> operations.log
# Error handling
redisctl cloud database list 2> errors.log || echo "Failed"
Environment Detection
redisctl automatically detects the output environment:
- Terminal: Defaults to table format for readability
- Pipe: Defaults to JSON for processing
- Redirect: Defaults to JSON for storage
Override with the `-o` flag when needed.
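Under the hood this is the classic is-stdout-a-terminal check; a shell illustration of the same decision (not redisctl's actual code):

```shell
# Mimic the auto-format decision: table for an interactive terminal,
# JSON when output goes to a pipe or file
if [ -t 1 ]; then
  FORMAT=table
else
  FORMAT=json
fi
echo "auto-selected format: $FORMAT"
```

Run it directly and you get `table`; pipe it through `cat` and the same script reports `json`.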
Format-Specific Features
Table Features
- Automatic column width adjustment
- Row highlighting for important data
- Pagination for large datasets
- Color support when terminal supports it
JSON Features
- Pretty-printed by default
- Compact mode available with `--compact`
- Proper escaping for special characters
- Null values handled correctly
YAML Features
- Comments for clarity
- Multi-line string support
- Proper indentation
- Type preservation
Error Handling
Different formats handle errors differently:
JSON Errors
{
"error": "Authentication failed",
"details": "Invalid API key"
}
Table Errors
Error: Authentication failed
Details: Invalid API key
YAML Errors
error: Authentication failed
details: Invalid API key
Performance Considerations
- JSON: Fastest parsing, smallest size
- YAML: Human-readable, larger size
- Table: Terminal rendering overhead
Examples
Save Configuration
redisctl cloud database get 12345 -o yaml > database-config.yaml
Generate Reports
# CSV-like output for spreadsheets
redisctl cloud database list -o json | \
jq -r '.[] | [.name, .status, .memory] | @csv'
Monitor Changes
# Watch for inactive databases
watch -n 10 'redisctl cloud database list -o table -q "[?status!='"'"'active'"'"']"'
Tips and Tricks
- Default Format: Set the `REDISCTL_OUTPUT` environment variable
  export REDISCTL_OUTPUT=json
- Raw Output: Use `-r` or `--raw` for unformatted output
  redisctl cloud database list -q "[].id" -r
- Silent Mode: Suppress non-essential output
  redisctl cloud database create --data @db.json 2> errors.log
- Pretty Print: JSON is pretty-printed by default; use `--compact` for compact output
  redisctl cloud database list -o json --compact
Complex Workflows
Health Dashboard
#!/bin/bash
while true; do
clear
echo "=== Database Health ==="
redisctl cloud database list -o table -q "[?status!='active']"
echo ""
echo "=== Resource Usage ==="
redisctl cloud database list -o json | \
jq -r '.[] | "\(.name): \(.usedMemoryInMb)MB / \(.memoryLimitInGb)GB"'
sleep 60
done
Automated Reporting
#!/bin/bash
REPORT_DATE=$(date +%Y-%m-%d)
REPORT_FILE="database-report-${REPORT_DATE}.json"
# Collect all database information
{
echo "{"
echo " \"report_date\": \"${REPORT_DATE}\","
echo " \"databases\": "
redisctl cloud database list -o json | jq -r '
map({
name: .name,
status: .status,
region: .region,
memory_gb: .memoryLimitInGb,
throughput: .throughputMeasurement
})
'
echo "}"
} > "$REPORT_FILE"
echo "Report saved to $REPORT_FILE"
Best Practices
- Use JSON for automation - Most reliable for parsing
- Use Table for human review - Easiest to read
- Use YAML for configuration - Best for config files
- Use JMESPath for filtering - More powerful than jq for simple queries
- Combine tools - Use redisctl with jq, yq, awk for complex processing
Profile Management
Profiles allow you to manage multiple Redis Cloud and Enterprise environments with different credentials and settings.
Profile Configuration
Profiles are stored in `~/.config/redisctl/config.toml`:
default_profile = "cloud-prod"
[profiles.cloud-dev]
deployment_type = "cloud"
api_key = "dev-key-123"
api_secret = "dev-secret-456"
api_url = "https://api.redislabs.com/v1"
[profiles.cloud-prod]
deployment_type = "cloud"
api_key = "prod-key-789"
api_secret = "prod-secret-012"
[profiles.enterprise-local]
deployment_type = "enterprise"
url = "https://localhost:9443"
username = "admin@cluster.local"
password = "localpass"
insecure = true
[profiles.enterprise-prod]
deployment_type = "enterprise"
url = "https://redis-cluster.company.com:9443"
username = "admin@company.com"
password = "prodpass"
Profile Commands
List Profiles
# Show all configured profiles
redisctl profile list
# Example output:
# Available profiles:
# cloud-dev (cloud)
# * cloud-prod (cloud) [default]
# enterprise-local (enterprise)
# enterprise-prod (enterprise)
Set Default Profile
# Set default profile
redisctl profile default cloud-prod
# Verify
redisctl profile list
Get Profile Details
# Show profile configuration (passwords are masked)
redisctl profile get cloud-dev
# Example output:
# Profile: cloud-dev
# Type: cloud
# API Key: dev-key-123
# API Secret: ****
# API URL: https://api.redislabs.com/v1
Set Profile Values
# Update API key
redisctl profile set cloud-dev api_key "new-key-123"
# Update API secret
redisctl profile set cloud-dev api_secret "new-secret-456"
# Update Enterprise URL
redisctl profile set enterprise-prod url "https://new-cluster:9443"
Remove Profile
# Remove a profile
redisctl profile remove old-profile
Using Profiles
Explicit Profile Selection
# Use specific profile for a command
redisctl cloud database list --profile cloud-dev
# Override default profile
redisctl --profile enterprise-prod cluster info
Profile Resolution Order
redisctl resolves profiles in this order:
1. `--profile` command-line flag
2. `REDISCTL_PROFILE` environment variable
3. `default_profile` in config file
4. Error if no profile can be determined
Environment Variable Support
Variable Expansion in Profiles
Profiles support environment variable expansion:
[profiles.cloud-dynamic]
deployment_type = "cloud"
api_key = "${REDIS_CLOUD_KEY}"
api_secret = "${REDIS_CLOUD_SECRET}"
api_url = "${REDIS_API_URL:-https://api.redislabs.com/v1}"
Setting Variables
# Set environment variables
export REDIS_CLOUD_KEY="my-api-key"
export REDIS_CLOUD_SECRET="my-secret"
export REDIS_API_URL="https://custom-api.example.com"
# Use profile with variable expansion
redisctl cloud database list --profile cloud-dynamic
Default Values
# Use default if variable not set
api_url = "${REDIS_API_URL:-https://api.redislabs.com/v1}"
username = "${REDIS_USER:-admin@cluster.local}"
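The `${VAR:-default}` form mirrors POSIX shell parameter expansion, so the fallback behavior is easy to verify directly in a shell:

```shell
# With the variable unset, the default after ':-' is used
unset REDIS_API_URL
echo "${REDIS_API_URL:-https://api.redislabs.com/v1}"

# Once the variable is set, its value wins over the default
REDIS_API_URL="https://custom-api.example.com"
echo "${REDIS_API_URL:-https://api.redislabs.com/v1}"
```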
Advanced Profile Management
Multiple Environments
# Development environments
[profiles.dev-us]
deployment_type = "cloud"
api_key = "${DEV_US_KEY}"
api_secret = "${DEV_US_SECRET}"
[profiles.dev-eu]
deployment_type = "cloud"
api_key = "${DEV_EU_KEY}"
api_secret = "${DEV_EU_SECRET}"
# Staging environments
[profiles.staging-us]
deployment_type = "cloud"
api_key = "${STAGING_US_KEY}"
api_secret = "${STAGING_US_SECRET}"
# Production environments
[profiles.prod-us]
deployment_type = "cloud"
api_key = "${PROD_US_KEY}"
api_secret = "${PROD_US_SECRET}"
[profiles.prod-eu]
deployment_type = "cloud"
api_key = "${PROD_EU_KEY}"
api_secret = "${PROD_EU_SECRET}"
Profile Switching Script
#!/bin/bash
# Switch between environments
ENV=$1
REGION=$2
case "$ENV" in
dev|staging|prod)
redisctl profile default "${ENV}-${REGION}"
echo "Switched to ${ENV}-${REGION}"
;;
*)
echo "Usage: $0 [dev|staging|prod] [us|eu]"
exit 1
;;
esac
CI/CD Integration
# GitHub Actions example
jobs:
deploy:
steps:
- name: Configure Redis Profile
run: |
mkdir -p ~/.config/redisctl
cat > ~/.config/redisctl/config.toml <<EOF
[profiles.ci]
deployment_type = "cloud"
api_key = "${{ secrets.REDIS_API_KEY }}"
api_secret = "${{ secrets.REDIS_API_SECRET }}"
EOF
- name: Deploy Database
run: |
redisctl --profile ci database create \
--subscription-id ${{ vars.SUBSCRIPTION_ID }} \
--data @database.json --wait
Secure Credential Storage
Using OS Keyring (Recommended)
When compiled with the `secure-storage` feature, redisctl can store credentials in your operating system's secure keyring instead of plaintext in the config file.
Supported Platforms
- macOS: Keychain
- Windows: Windows Credential Store
- Linux: Secret Service (GNOME Keyring, KWallet)
Installation with Secure Storage
# Install from source with secure storage
cargo install redisctl --features secure-storage
# Or build locally
cargo build --release --features secure-storage
Creating Secure Profiles
# Create profile with keyring storage
redisctl profile set prod-secure \
--deployment cloud \
--api-key "your-api-key" \
--api-secret "your-api-secret" \
--use-keyring # Store in OS keyring
# For Enterprise profiles
redisctl profile set enterprise-secure \
--deployment enterprise \
--url "https://cluster.example.com:9443" \
--username "admin@example.com" \
--password "your-password" \
--use-keyring
How It Works
When using `--use-keyring`, credentials are:
- Stored securely in your OS keyring
- Referenced in config.toml with the `keyring:` prefix
- Retrieved automatically when needed
Example config.toml with keyring references:
[profiles.prod-secure]
deployment_type = "cloud"
api_key = "keyring:prod-secure-api-key" # Stored in keyring
api_secret = "keyring:prod-secure-api-secret" # Stored in keyring
api_url = "https://api.redislabs.com/v1" # Non-sensitive, plaintext
Storage Priority
Credentials are resolved in this order:
1. Environment variables (highest priority)
2. OS keyring (if the value starts with `keyring:`)
3. Plaintext in config file (fallback)
Managing Keyring Credentials
# Update credentials (will update keyring if already using it)
redisctl profile set prod-secure \
--api-key "new-key" \
--use-keyring
# View profile (keyring values are masked)
redisctl profile show prod-secure
# Output:
# Profile: prod-secure
# Type: cloud
# API Key: keyring:...
# API URL: https://api.redislabs.com/v1
Security Best Practices
Credential Storage Options
Choose the appropriate storage method based on your security requirements:
1. OS Keyring (Most Secure)
   - Use `--use-keyring` when creating profiles
   - Credentials encrypted by the OS
   - Requires the `secure-storage` feature
   redisctl profile set prod --use-keyring ...
2. Environment Variables (CI/CD Friendly)
   - No storage, runtime only
   - Good for automation
   export REDIS_CLOUD_API_KEY="key"
   export REDIS_CLOUD_API_SECRET="secret"
3. Plaintext Config (Development Only)
   - Simple but insecure
   - Only for development/testing
   - Protect with file permissions: chmod 600 ~/.config/redisctl/config.toml
Security Checklist
- Never commit credentials: Add config.toml to .gitignore
- Use keyring for production: Store production credentials securely
- Rotate credentials regularly: Update API keys periodically
- Audit profile usage: Monitor credential access
- Use environment variables in CI/CD: Keep secrets out of config files
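A couple of these checklist items can be enforced mechanically; a sketch (the config path assumes the default location):

```shell
# Enforce two checklist items: keep the config file out of version
# control and restrict its permissions to the owner
CONFIG="$HOME/.config/redisctl/config.toml"

# Add config.toml to .gitignore if it is not already listed
grep -qxF 'config.toml' .gitignore 2>/dev/null || echo 'config.toml' >> .gitignore

# Lock down the config file if it exists
if [ -f "$CONFIG" ]; then
  chmod 600 "$CONFIG"
fi
```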
Secure Profile Templates
Production with Keyring
# Create secure production profile
redisctl profile set production \
--deployment cloud \
--api-key "$PROD_KEY" \
--api-secret "$PROD_SECRET" \
--use-keyring
CI/CD with Environment Variables
# config.toml for CI/CD
[profiles.ci]
deployment_type = "cloud"
api_key = "${REDIS_CLOUD_API_KEY}"
api_secret = "${REDIS_CLOUD_API_SECRET}"
api_url = "${REDIS_API_URL:-https://api.redislabs.com/v1}"
Development with Mixed Storage
# Development profile with mixed storage
[profiles.dev]
deployment_type = "enterprise"
url = "https://dev-cluster:9443" # Non-sensitive
username = "dev@example.com" # Non-sensitive
password = "keyring:dev-password" # Sensitive, in keyring
insecure = true # Dev setting
Profile Audit
#!/bin/bash
# Audit profile usage
echo "Profile Audit Report"
echo "==================="
for profile in $(redisctl profile list | grep -E '^\s+' | awk '{print $1}'); do
  echo -e "\nProfile: $profile"
  echo "Last used: $(grep "profile.*$profile" ~/.bash_history | tail -1)"
done
# Flag plaintext api_key values (keyring: references are fine)
if grep 'api_key = "' ~/.config/redisctl/config.toml | grep -qv 'keyring:'; then
  echo "WARNING: Hardcoded credentials detected!"
fi
Workflows
Workflows are a powerful feature of redisctl that automate complex, multi-step operations. Instead of running multiple commands manually and managing the state between them, workflows handle the entire process with proper error handling, progress tracking, and rollback capabilities.
Overview
Workflows solve common challenges when managing Redis deployments:
- Complex operations requiring multiple API calls in sequence
- Asynchronous operations that need polling and status checking
- Error recovery with proper cleanup and state management
- Progress visibility for long-running operations
- Reproducibility through consistent execution patterns
How Workflows Work
Each workflow is a self-contained operation that:
- Validates prerequisites - Checks current state before making changes
- Executes steps sequentially - Performs operations in the correct order
- Handles async operations - Waits for tasks to complete with progress feedback
- Manages errors gracefully - Provides clear error messages and recovery options
- Returns structured results - Outputs can be consumed programmatically
Available Workflows
Redis Enterprise
- init-cluster - Complete cluster initialization with bootstrap and database setup
Redis Cloud (Future)
- provision-subscription - Create subscription with databases and networking
- setup-aa-database - Configure Active-Active database across regions
Using Workflows
Interactive Mode
Run workflows with human-readable output:
redisctl enterprise workflow init-cluster \
--username "admin@cluster.local" \
--password "SecurePass123"
Output:
Initializing Redis Enterprise cluster...
Bootstrap completed successfully
Cluster is ready
Creating default database 'default-db'...
Database created successfully (ID: 1)
Database connectivity verified (PING successful)
Cluster initialization completed successfully
Cluster name: redis-cluster
Admin user: admin@cluster.local
Database: default-db (1GB)
Access endpoints:
Web UI: https://localhost:8443
API: https://localhost:9443
Programmatic Mode
Use structured output for automation:
# Get JSON output
redisctl enterprise workflow init-cluster \
--username "admin@cluster.local" \
--password "SecurePass123" \
--output json \
--skip-database
{
"success": true,
"message": "Cluster initialized successfully",
"outputs": {
"cluster_name": "redis-cluster",
"username": "admin@cluster.local",
"database_created": false,
"database_name": "default-db"
}
}
CI/CD Integration
Workflows are ideal for CI/CD pipelines:
# GitHub Actions example
- name: Initialize Redis Enterprise
run: |
redisctl enterprise workflow init-cluster \
--username "${{ secrets.REDIS_USER }}" \
--password "${{ secrets.REDIS_PASSWORD }}" \
--output json \
--wait-timeout 300
Async Operation Handling
Workflows handle asynchronous operations transparently:
# Workflows support standard async flags
redisctl enterprise workflow init-cluster \
--username "admin@cluster.local" \
--password "SecurePass123" \
--wait \
--wait-timeout 600
The workflow will:
- Submit operations asynchronously
- Poll for completion status
- Show progress indicators
- Handle timeouts gracefully
Error Handling
Workflows provide robust error handling:
Partial Success
If a workflow partially completes (e.g., cluster initialized but database creation fails):
- The successful steps are preserved
- Clear error messages explain what failed
- Recovery instructions are provided
Idempotency
Workflows check current state before making changes:
- Running init-cluster on an initialized cluster returns success without re-bootstrapping
- Operations are safe to retry
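Because operations are safe to retry, a generic retry wrapper works well around workflow invocations; a minimal sketch:

```shell
# Retry a command up to N times with a fixed delay between attempts
retry() {
  attempts=$1
  delay=$2
  shift 2
  n=1
  until "$@"; do
    if [ "$n" -ge "$attempts" ]; then
      return 1  # exhausted all attempts
    fi
    n=$((n + 1))
    sleep "$delay"
  done
}

# Example: retry the idempotent init-cluster workflow up to 3 times
# retry 3 10 redisctl enterprise workflow init-cluster \
#     --username "admin@cluster.local" --password "SecurePass123"
```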
Validation
Prerequisites are checked before execution:
- Required permissions are verified
- Resource availability is confirmed
- Configuration validity is checked
Workflow Architecture
Trait-Based Design
Workflows implement a common trait for consistency:
pub trait Workflow: Send + Sync {
    fn name(&self) -> &str;
    fn description(&self) -> &str;
    fn execute(
        &self,
        context: WorkflowContext,
        args: WorkflowArgs,
    ) -> Pin<Box<dyn Future<Output = Result<WorkflowResult>> + Send>>;
}
Registry Pattern
Workflows are registered at startup:
let mut registry = WorkflowRegistry::new();
registry.register(InitClusterWorkflow::new());
registry.register(UpgradeClusterWorkflow::new());
Context and Arguments
Each workflow receives:
- Context: Connection manager, profile, output format, timeouts
- Arguments: User-provided parameters as key-value pairs
Results
Workflows return structured results:
- Success/failure status
- Human-readable message
- Structured outputs for programmatic consumption
Best Practices
When to Use Workflows
Use workflows for:
- Initial setup - Bootstrapping new environments
- Complex migrations - Multi-step data or configuration changes
- Disaster recovery - Automated failover and recovery procedures
- Routine maintenance - Standardized update and backup procedures
When to Use Direct Commands
Use direct commands for:
- Simple queries - Getting status or configuration
- Single operations - Creating one resource
- Debugging - Investigating specific issues
- Custom scripts - Operations not covered by workflows
Creating Custom Workflows
While redisctl provides built-in workflows, you can create custom workflows by:
- Scripting existing commands - Combine redisctl commands in bash/python
- Using the libraries - Build Rust applications with redis-cloud/redis-enterprise crates
- Contributing workflows - Submit PRs for commonly needed workflows
Example custom workflow script:
#!/bin/bash
# Custom workflow: setup-monitoring.sh
# Create monitoring database
DB_ID=$(redisctl enterprise database create \
--name "monitoring" \
--memory-gb 1 \
--output json | jq -r '.uid')
# Configure alerts
redisctl enterprise database update $DB_ID \
--alert-settings '{"memory_threshold": 80}'
# Setup metrics export
redisctl enterprise stats config \
--database $DB_ID \
--export-interval 60
echo "Monitoring setup complete for database $DB_ID"
Future Enhancements
Planned workflow improvements:
- Workflow templates - Parameterized workflows for common patterns
- Conditional logic - Branching based on state or user input
- Rollback support - Automatic undo for failed operations
- Workflow composition - Building complex workflows from simpler ones
- Progress streaming - Real-time updates for long operations
See Also
- Enterprise Workflows - Enterprise-specific workflow documentation
- Async Operations - Understanding async operation handling
- Output Formats - Working with structured output
Common Recipes
Note: We're developing a powerful workflow system that will automate many of these common tasks. Soon you'll be able to run pre-built workflows like `redisctl workflow run create-ha-database` instead of manually executing multiple commands. Stay tuned!
This page will contain common recipes and patterns for using redisctl effectively. In the meantime, check out the quick examples below.
Quick Examples
Create a High-Availability Database
# Coming soon as a workflow!
# redisctl workflow run create-ha-database --name prod-cache --size 2gb
# Current manual process:
redisctl cloud database create --data '{
"name": "prod-cache",
"memoryLimitInGb": 2,
"replication": true,
"dataPersistence": "aof-every-1-second"
}' --wait
Migrate Database Between Regions
# Coming soon as a workflow!
# redisctl workflow run migrate-database --source 12345 --target-region us-west-2
# Current manual process involves multiple steps...
Set Up Monitoring
# Coming soon as a workflow!
# redisctl workflow run setup-monitoring --database 12345 --prometheus-url http://prometheus:9090
Workflow System Preview
The upcoming workflow system will provide:
- Pre-built workflows for common operations
- Custom workflow definitions in YAML/JSON
- Parameterized templates for reusable patterns
- Conditional logic and error handling
- Progress tracking with detailed output
- Rollback capabilities for safety
Example workflow definition (coming soon):
name: create-ha-database
description: Create a high-availability database with best practices
parameters:
- name: database_name
required: true
- name: size_gb
default: 1
- name: region
default: us-east-1
steps:
- name: create_subscription
command: cloud subscription create
data:
name: "{{ database_name }}-subscription"
- name: create_database
command: cloud database create
data:
name: "{{ database_name }}"
memoryLimitInGb: "{{ size_gb }}"
replication: true
dataPersistence: aof-every-1-second
wait: true
- name: configure_alerts
command: cloud database alert create
data:
threshold: 80
metric: memory-usage
Current Best Practices
Until workflows are available, here are some patterns:
Use JSON Files for Complex Operations
# Save configuration in files
cat > database.json <<EOF
{
"name": "production-db",
"memoryLimitInGb": 4,
"replication": true
}
EOF
redisctl cloud database create --data @database.json --wait
Chain Commands with Shell Scripts
#!/bin/bash
# Create database and wait for completion
DB_ID=$(redisctl cloud database create --data @config.json --wait -o json | jq -r '.resourceId')
# Configure ACL
redisctl cloud acl create --database $DB_ID --data @acl.json
# Set up monitoring
redisctl cloud metrics enable --database $DB_ID
Use Profiles for Different Environments
# Development
redisctl --profile dev database list
# Staging
redisctl --profile staging database list
# Production
redisctl --profile prod database list
Managing Production Databases
This tutorial covers best practices for managing Redis databases in production using redisctl.
Prerequisites
- redisctl installed and configured
- Appropriate API credentials with production access
- Understanding of Redis concepts (memory, persistence, replication)
Setting Up Production Profiles
First, create separate profiles for different environments:
# Development environment
redisctl profile set dev-cloud \
--deployment cloud \
--api-key "$DEV_API_KEY" \
--api-secret "$DEV_API_SECRET"
# Staging environment
redisctl profile set staging-cloud \
--deployment cloud \
--api-key "$STAGING_API_KEY" \
--api-secret "$STAGING_API_SECRET"
# Production environment
redisctl profile set prod-cloud \
--deployment cloud \
--api-key "$PROD_API_KEY" \
--api-secret "$PROD_API_SECRET"
# Set production as default
redisctl profile default prod-cloud
Creating a Production Database
Step 1: Prepare Database Configuration
Create a production database configuration file:
{
"name": "prod-cache-01",
"memoryLimitInGb": 16,
"protocol": "redis",
"port": 10000,
"throughputMeasurement": {
"by": "operations-per-second",
"value": 100000
},
"replication": true,
"dataPersistence": "aof-every-1-second",
"dataEvictionPolicy": "allkeys-lru",
"modules": [
{
"name": "RedisJSON"
},
{
"name": "RediSearch"
}
],
"alerts": [
{
"name": "dataset-size",
"value": 80
},
{
"name": "throughput-higher-than",
"value": 90000
},
{
"name": "throughput-lower-than",
"value": 1000
},
{
"name": "latency",
"value": 5
}
],
"backup": {
"interval": 6,
"enabled": true
},
"clustering": {
"enabled": true,
"shardCount": 3
}
}
Step 2: Create the Database
# Create database and wait for completion
redisctl cloud database create \
--subscription-id 123456 \
--data @prod-database.json \
--wait \
--wait-timeout 900
# Verify creation
redisctl cloud database list --subscription-id 123456 -o table
Step 3: Configure Network Access
Set up VPC peering for secure access:
# Create VPC peering
redisctl cloud connectivity create-vpc \
--subscription-id 123456 \
--data @vpc-peering.json \
--wait
# Verify connection
redisctl cloud connectivity list-vpc --subscription-id 123456
Monitoring Production Databases
Health Checks
Create a monitoring script:
#!/bin/bash
# monitor-redis.sh
PROFILE="prod-cloud"
SUBSCRIPTION_ID="123456"
# Check all databases
DATABASES=$(redisctl --profile $PROFILE cloud database list \
--subscription-id $SUBSCRIPTION_ID \
-q "[].{id: databaseId, name: name, status: status}")
echo "$DATABASES" | jq -c '.[]' | while read db; do
ID=$(echo $db | jq -r .id)
NAME=$(echo $db | jq -r .name)
STATUS=$(echo $db | jq -r .status)
if [ "$STATUS" != "active" ]; then
echo "ALERT: Database $NAME ($ID) is not active: $STATUS"
# Send alert (PagerDuty, Slack, etc.)
fi
done
# Check memory usage
for db_id in $(echo "$DATABASES" | jq -r '.[].id'); do
DB_INFO=$(redisctl --profile $PROFILE cloud database get \
--subscription-id $SUBSCRIPTION_ID \
--database-id $db_id)
MEMORY_USED=$(echo $DB_INFO | jq -r .memoryUsageInMB)
MEMORY_LIMIT=$(echo $DB_INFO | jq -r .memoryLimitInGB)
MEMORY_LIMIT_MB=$((MEMORY_LIMIT * 1024))
USAGE_PERCENT=$((MEMORY_USED * 100 / MEMORY_LIMIT_MB))
if [ $USAGE_PERCENT -gt 80 ]; then
echo "WARNING: Database $db_id memory usage at ${USAGE_PERCENT}%"
fi
done
Performance Metrics
Track key performance indicators:
# Get database metrics
redisctl cloud database get \
--subscription-id 123456 \
--database-id 789 \
-q "{
name: name,
ops: throughputMeasurement.value,
connections: connectionsUsed,
memory: memoryUsageInMB,
evicted: evictedObjects
}"
# Monitor over time
while true; do
redisctl cloud database get \
--subscription-id 123456 \
--database-id 789 \
-q "throughputMeasurement.value" >> ops.log
sleep 60
done
Scaling Operations
Vertical Scaling (Resize)
# Increase memory limit
redisctl cloud database update \
--subscription-id 123456 \
--database-id 789 \
--data '{"memoryLimitInGb": 32}' \
--wait
# Increase throughput
redisctl cloud database update \
--subscription-id 123456 \
--database-id 789 \
--data '{
"throughputMeasurement": {
"by": "operations-per-second",
"value": 200000
}
}' \
--wait
Horizontal Scaling (Sharding)
For Redis Enterprise:
# Add shards
redisctl enterprise database update \
--database-id 1 \
--data '{"shardCount": 5}' \
--wait
Backup and Recovery
Automated Backups
Configure backup schedule:
# Enable backups every 4 hours
redisctl cloud database update \
--subscription-id 123456 \
--database-id 789 \
--data '{
"backup": {
"enabled": true,
"interval": 4
}
}'
Manual Backups
# Create manual backup before maintenance
redisctl cloud database backup \
--subscription-id 123456 \
--database-id 789 \
--wait
# List available backups
redisctl cloud database list-backups \
--subscription-id 123456 \
--database-id 789
Restore from Backup
# Prepare import configuration
cat > import.json <<EOF
{
"sourceType": "s3",
"importFromUri": ["s3://backup-bucket/backup-2024-01-15.rdb"],
"s3Credentials": {
"accessKey": "$AWS_ACCESS_KEY",
"secretKey": "$AWS_SECRET_KEY"
}
}
EOF
# Import data
redisctl cloud database import \
--subscription-id 123456 \
--database-id 789 \
--data @import.json \
--wait
Maintenance Operations
Rolling Updates
Update databases with zero downtime:
#!/bin/bash
# rolling-update.sh
DATABASES=(789 790 791)
UPDATE='{"dataEvictionPolicy": "volatile-lru"}'
for db_id in "${DATABASES[@]}"; do
echo "Updating database $db_id..."
# Remove from load balancer
remove_from_lb $db_id
# Update database
redisctl cloud database update \
--subscription-id 123456 \
--database-id $db_id \
--data "$UPDATE" \
--wait
# Health check
while true; do
STATUS=$(redisctl cloud database get \
--subscription-id 123456 \
--database-id $db_id \
-q "status")
if [ "$STATUS" = "active" ]; then
break
fi
sleep 10
done
# Add back to load balancer
add_to_lb $db_id
echo "Database $db_id updated successfully"
sleep 30 # Wait before next update
done
Module Management
Add or update modules:
# Add RedisTimeSeries module
redisctl cloud database update \
--subscription-id 123456 \
--database-id 789 \
--data '{
"modules": [
{"name": "RedisJSON"},
{"name": "RediSearch"},
{"name": "RedisTimeSeries"}
]
}' \
--wait
Security Best Practices
Access Control
Configure ACL rules:
# Create ACL rule
redisctl cloud acl create-rule \
--subscription-id 123456 \
--database-id 789 \
--data '{
"name": "read-only-user",
"rule": "+@read ~* -@dangerous"
}'
# Create user with ACL (generate the password first; command
# substitution does not expand inside single quotes)
APP_PASSWORD=$(openssl rand -base64 32)
redisctl cloud acl create-user \
  --subscription-id 123456 \
  --database-id 789 \
  --data "{
    \"username\": \"app-reader\",
    \"password\": \"$APP_PASSWORD\",
    \"aclRule\": \"read-only-user\"
  }"
Password Rotation
#!/bin/bash
# rotate-passwords.sh
# Generate new password
NEW_PASSWORD=$(openssl rand -base64 32)
# Update database password
redisctl cloud database update \
--subscription-id 123456 \
--database-id 789 \
--data "{\"password\": \"$NEW_PASSWORD\"}" \
--wait
# Store in secret manager
aws secretsmanager update-secret \
--secret-id redis-prod-password \
--secret-string "$NEW_PASSWORD"
# Update application configuration
kubectl create secret generic redis-secret \
  --from-literal=password="$NEW_PASSWORD" \
  --dry-run=client -o yaml | kubectl apply -f -
Troubleshooting Common Issues
High Memory Usage
# Check memory stats
redisctl cloud database get \
--subscription-id 123456 \
--database-id 789 \
-q "{
used: memoryUsageInMB,
limit: memoryLimitInGB,
evicted: evictedObjects
}"
# If evictions are happening, increase memory or adjust policy
redisctl cloud database update \
--subscription-id 123456 \
--database-id 789 \
--data '{"memoryLimitInGb": 24}'
Connection Issues
# Check connection limit
DB_INFO=$(redisctl cloud database get \
--subscription-id 123456 \
--database-id 789)
CONNECTIONS_USED=$(echo $DB_INFO | jq -r .connectionsUsed)
CONNECTIONS_LIMIT=$(echo $DB_INFO | jq -r .connectionsLimit)
if [ $CONNECTIONS_USED -gt $((CONNECTIONS_LIMIT * 80 / 100)) ]; then
echo "Warning: Using $CONNECTIONS_USED of $CONNECTIONS_LIMIT connections"
# Increase connection limit or investigate connection leaks
fi
Performance Degradation
# Check slow log equivalent (through metrics)
redisctl cloud database get \
--subscription-id 123456 \
--database-id 789 \
-q "{
latency: latency,
ops: throughputMeasurement.value,
cpu: cpuUsagePercentage
}"
# If CPU is high, consider sharding or upgrading
Best Practices Summary
- Always use profiles for different environments
- Enable replication for production databases
- Configure appropriate persistence (AOF or RDB)
- Set up monitoring and alerts before issues occur
- Automate backups and test restore procedures
- Use VPC peering for secure network access
- Implement proper ACLs for security
- Plan for scaling before you need it
- Document your database configurations
- Test changes in staging before production
Setting Up Monitoring
Learn how to monitor Redis Cloud and Enterprise deployments using redisctl with various monitoring stacks.
Overview
Effective monitoring requires:
- Regular health checks
- Metric collection
- Alert configuration
- Dashboard visualization
- Log aggregation
Monitoring Architecture
┌─────────────┐ ┌──────────────┐ ┌─────────────┐
│ redisctl │────▶│ Redis APIs │────▶│ Metrics │
│ Scripts │ │ Cloud/Ent. │ │ Exporters │
└─────────────┘ └──────────────┘ └─────────────┘
│
▼
┌─────────────┐ ┌──────────────┐ ┌─────────────┐
│ Grafana │◀────│ Prometheus │◀────│ Format │
│ Dashboards │ │ Storage │ │ Conversion │
└─────────────┘ └──────────────┘ └─────────────┘
Basic Health Monitoring
Health Check Script
Create a basic health monitor:
#!/bin/bash
# health-check.sh
set -euo pipefail
# Configuration
PROFILE="${REDIS_PROFILE:-prod-cloud}"
CHECK_INTERVAL="${CHECK_INTERVAL:-60}"
ALERT_WEBHOOK="${ALERT_WEBHOOK:-}"  # optional; default to empty so set -u does not abort
# Color codes for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m'
log() {
echo "[$(date +'%Y-%m-%d %H:%M:%S')] $1"
}
send_alert() {
local level=$1
local message=$2
if [ -n "$ALERT_WEBHOOK" ]; then
curl -X POST "$ALERT_WEBHOOK" \
-H 'Content-Type: application/json' \
-d "{\"level\": \"$level\", \"message\": \"$message\"}"
fi
case $level in
ERROR) echo -e "${RED}[ERROR]${NC} $message" ;;
WARNING) echo -e "${YELLOW}[WARN]${NC} $message" ;;
INFO) echo -e "${GREEN}[INFO]${NC} $message" ;;
esac
}
check_databases() {
local subscription_id=$1
# Get all databases
local databases=$(redisctl --profile "$PROFILE" cloud database list \
--subscription-id "$subscription_id" \
-q "[].{id: databaseId, name: name, status: status}" 2>/dev/null)
if [ -z "$databases" ]; then
send_alert "ERROR" "Failed to fetch databases for subscription $subscription_id"
return 1
fi
echo "$databases" | jq -c '.[]' | while read -r db; do
local id=$(echo "$db" | jq -r .id)
local name=$(echo "$db" | jq -r .name)
local status=$(echo "$db" | jq -r .status)
if [ "$status" != "active" ]; then
send_alert "ERROR" "Database $name ($id) is not active: $status"
else
log "Database $name ($id) is healthy"
fi
done
}
# Main monitoring loop
while true; do
log "Starting health check..."
# Get all subscriptions
SUBSCRIPTIONS=$(redisctl --profile "$PROFILE" cloud subscription list \
-q "[].id" 2>/dev/null | jq -r '.[]')
for sub_id in $SUBSCRIPTIONS; do
check_databases $sub_id
done
log "Health check complete. Sleeping for ${CHECK_INTERVAL}s..."
sleep $CHECK_INTERVAL
done
Prometheus Integration
Metrics Exporter
Create a Prometheus exporter for Redis metrics:
#!/usr/bin/env python3
# redis_exporter.py
import json
import subprocess
import time
from prometheus_client import start_http_server, Gauge
import os
# Prometheus metrics
db_memory_used = Gauge('redis_memory_used_mb', 'Memory used in MB', ['database', 'subscription'])
db_memory_limit = Gauge('redis_memory_limit_gb', 'Memory limit in GB', ['database', 'subscription'])
db_connections = Gauge('redis_connections_used', 'Connections used', ['database', 'subscription'])
db_ops = Gauge('redis_operations_per_second', 'Operations per second', ['database', 'subscription'])
db_status = Gauge('redis_database_status', 'Database status (1=active, 0=inactive)', ['database', 'subscription'])
def get_databases(profile, subscription_id):
"""Fetch database list using redisctl"""
cmd = [
'redisctl', '--profile', profile, 'cloud', 'database', 'list',
'--subscription-id', str(subscription_id), '-o', 'json'
]
try:
result = subprocess.run(cmd, capture_output=True, text=True, check=True)
return json.loads(result.stdout)
except Exception as e:
print(f"Error fetching databases: {e}")
return []
def get_database_details(profile, subscription_id, database_id):
"""Fetch detailed database metrics"""
cmd = [
'redisctl', '--profile', profile, 'cloud', 'database', 'get',
'--subscription-id', str(subscription_id),
'--database-id', str(database_id),
'-o', 'json'
]
try:
result = subprocess.run(cmd, capture_output=True, text=True, check=True)
return json.loads(result.stdout)
except Exception as e:
print(f"Error fetching database {database_id}: {e}")
return None
def collect_metrics():
"""Collect metrics from all databases"""
profile = os.getenv('REDIS_PROFILE', 'prod-cloud')
subscriptions = os.getenv('REDIS_SUBSCRIPTIONS', '').split(',')
for sub_id in subscriptions:
if not sub_id:
continue
databases = get_databases(profile, sub_id)
for db in databases:
db_id = db.get('databaseId')
db_name = db.get('name', f'db-{db_id}')
# Get detailed metrics
details = get_database_details(profile, sub_id, db_id)
if not details:
continue
# Update Prometheus metrics
labels = {'database': db_name, 'subscription': sub_id}
db_memory_used.labels(**labels).set(details.get('memoryUsageInMB', 0))
db_memory_limit.labels(**labels).set(details.get('memoryLimitInGB', 0))
db_connections.labels(**labels).set(details.get('connectionsUsed', 0))
throughput = details.get('throughputMeasurement', {})
db_ops.labels(**labels).set(throughput.get('value', 0))
status_value = 1 if details.get('status') == 'active' else 0
db_status.labels(**labels).set(status_value)
print(f"Updated metrics for {db_name}")
def main():
"""Main exporter loop"""
port = int(os.getenv('EXPORTER_PORT', '9090'))
interval = int(os.getenv('SCRAPE_INTERVAL', '30'))
# Start Prometheus HTTP server
start_http_server(port)
print(f"Exporter listening on port {port}")
while True:
try:
collect_metrics()
except Exception as e:
print(f"Error collecting metrics: {e}")
time.sleep(interval)
if __name__ == '__main__':
main()
Prometheus Configuration
Configure Prometheus to scrape the exporter:
# prometheus.yml
global:
scrape_interval: 30s
evaluation_interval: 30s
scrape_configs:
- job_name: 'redis-metrics'
static_configs:
- targets: ['localhost:9090']
labels:
environment: 'production'
service: 'redis'
# Alert rules
rule_files:
- 'redis_alerts.yml'
Alert Rules
Define Prometheus alert rules:
# redis_alerts.yml
groups:
- name: redis_alerts
interval: 30s
rules:
- alert: RedisHighMemoryUsage
expr: |
(redis_memory_used_mb / (redis_memory_limit_gb * 1024)) > 0.8
for: 5m
labels:
severity: warning
annotations:
summary: "High memory usage on {{ $labels.database }}"
description: "Database {{ $labels.database }} is using {{ $value | humanizePercentage }} of available memory"
- alert: RedisDatabaseDown
expr: redis_database_status == 0
for: 2m
labels:
severity: critical
annotations:
summary: "Database {{ $labels.database }} is down"
description: "Database {{ $labels.database }} has been inactive for more than 2 minutes"
- alert: RedisHighConnections
expr: redis_connections_used > 900
for: 5m
labels:
severity: warning
annotations:
summary: "High connection count on {{ $labels.database }}"
description: "Database {{ $labels.database }} has {{ $value }} active connections"
- alert: RedisLowThroughput
expr: redis_operations_per_second < 100
for: 10m
labels:
severity: info
annotations:
summary: "Low throughput on {{ $labels.database }}"
description: "Database {{ $labels.database }} has only {{ $value }} ops/sec"
Grafana Dashboards
Dashboard Configuration
Create a comprehensive Grafana dashboard:
{
"dashboard": {
"title": "Redis Production Monitoring",
"panels": [
{
"title": "Database Status",
"type": "stat",
"targets": [
{
"expr": "sum(redis_database_status)",
"legendFormat": "Active Databases"
}
]
},
{
"title": "Memory Usage",
"type": "graph",
"targets": [
{
"expr": "redis_memory_used_mb",
"legendFormat": "{{ database }}"
}
]
},
{
"title": "Operations/Second",
"type": "graph",
"targets": [
{
"expr": "redis_operations_per_second",
"legendFormat": "{{ database }}"
}
]
},
{
"title": "Connection Count",
"type": "graph",
"targets": [
{
"expr": "redis_connections_used",
"legendFormat": "{{ database }}"
}
]
}
]
}
}
Log Monitoring
Centralized Logging with ELK
Ship Redis logs to Elasticsearch:
#!/bin/bash
# ship-logs.sh
# For Redis Enterprise
redisctl enterprise logs list \
--profile prod-enterprise \
--output json | \
jq -c '.[] | {
"@timestamp": .time,
"level": .severity,
"message": .message,
"node": .node_uid,
"component": .component
}' | \
while read log; do
curl -X POST "http://elasticsearch:9200/redis-logs/_doc" \
-H 'Content-Type: application/json' \
-d "$log"
done
Logstash Configuration
Process logs with Logstash:
# logstash.conf
input {
exec {
command => "redisctl enterprise logs list --output json"
interval => 60
codec => "json"
}
}
filter {
date {
match => [ "time", "ISO8601" ]
target => "@timestamp"
}
mutate {
add_field => { "environment" => "production" }
}
if [severity] == "error" {
mutate {
add_tag => [ "alert" ]
}
}
}
output {
elasticsearch {
hosts => ["localhost:9200"]
index => "redis-logs-%{+YYYY.MM.dd}"
}
if "alert" in [tags] {
email {
to => "ops-team@example.com"
subject => "Redis Error Alert"
body => "Error detected: %{message}"
}
}
}
Alerting Integration
Slack Notifications
Send alerts to Slack:
#!/bin/bash
# slack-alert.sh
send_slack_alert() {
local level=$1
local message=$2
local webhook_url="${SLACK_WEBHOOK_URL}"
local color="good"
case $level in
ERROR) color="danger" ;;
WARNING) color="warning" ;;
esac
curl -X POST "$webhook_url" \
-H 'Content-Type: application/json' \
-d "{
\"attachments\": [{
\"color\": \"$color\",
\"title\": \"Redis Alert: $level\",
\"text\": \"$message\",
\"footer\": \"redisctl monitoring\",
\"ts\": $(date +%s)
}]
}"
}
# Monitor and alert
while true; do
STATUS=$(redisctl cloud database get \
--subscription-id 123456 \
--database-id 789 \
-q "status")
if [ "$STATUS" != "active" ]; then
send_slack_alert "ERROR" "Database 789 is $STATUS"
fi
sleep 60
done
PagerDuty Integration
Integrate with PagerDuty for critical alerts:
#!/usr/bin/env python3
# pagerduty_alert.py
import pdpyras
import subprocess
import json
import os
def check_redis_health():
"""Check Redis database health"""
cmd = [
'redisctl', 'cloud', 'database', 'list',
'--subscription-id', os.getenv('SUBSCRIPTION_ID'),
'-o', 'json'
]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
databases = json.loads(result.stdout)
alerts = []
for db in databases:
if db['status'] != 'active':
alerts.append({
'database': db['name'],
'status': db['status'],
'id': db['databaseId']
})
return alerts
def send_pagerduty_alert(session, alerts):
    """Send alerts through the PagerDuty Events API"""
    for alert in alerts:
        session.trigger(
            summary=f"Redis database {alert['database']} is {alert['status']}",
            source="redisctl-monitoring",
            severity="error",
            custom_details=alert
        )

def main():
    # The Events API uses an integration routing key, not a REST API key
    routing_key = os.getenv('PAGERDUTY_ROUTING_KEY')
    session = pdpyras.EventsAPISession(routing_key)
    alerts = check_redis_health()
    if alerts:
        send_pagerduty_alert(session, alerts)
if __name__ == '__main__':
main()
Custom Metrics Collection
Performance Baseline
Establish performance baselines:
#!/bin/bash
# baseline.sh
# Collect baseline metrics for 24 hours
DURATION=86400
INTERVAL=60
OUTPUT="baseline_$(date +%Y%m%d).csv"
echo "timestamp,database,ops,latency,memory,cpu" > $OUTPUT
END=$(($(date +%s) + DURATION))
while [ $(date +%s) -lt $END ]; do
TIMESTAMP=$(date +%s)
redisctl cloud database get \
--subscription-id 123456 \
--database-id 789 \
-o json | \
jq -r "\"$TIMESTAMP,prod-db,\(.throughputMeasurement.value),\(.latency),\(.memoryUsageInMB),\(.cpuUsagePercentage)\"" \
>> $OUTPUT
sleep $INTERVAL
done
# Analyze baseline
echo "Baseline collection complete. Analyzing..."
python3 analyze_baseline.py $OUTPUT
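The `analyze_baseline.py` script referenced above is not shown in this tutorial. A minimal sketch of what it might look like, assuming the CSV column names written by `baseline.sh`:

```python
#!/usr/bin/env python3
# analyze_baseline.py -- hypothetical companion to baseline.sh;
# column names match the CSV header baseline.sh writes
import csv
import statistics
import sys

def summarize(path):
    """Compute mean and p95 for each numeric column in the baseline CSV."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    summary = {}
    for metric in ("ops", "latency", "memory", "cpu"):
        # Skip rows where the API returned no value for this field
        values = sorted(
            float(r[metric]) for r in rows
            if r.get(metric) not in (None, "", "null")
        )
        if not values:
            continue
        p95 = values[round(0.95 * (len(values) - 1))]
        summary[metric] = {"mean": statistics.mean(values), "p95": p95}
    return summary

if __name__ == "__main__":
    for metric, stats in summarize(sys.argv[1]).items():
        print(f"{metric}: mean={stats['mean']:.2f} p95={stats['p95']:.2f}")
```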
Automation with Cron
Schedule monitoring tasks:
# crontab -e
# Health check every 5 minutes
*/5 * * * * /opt/monitoring/health-check.sh
# Collect metrics every minute
* * * * * /opt/monitoring/collect-metrics.sh
# Daily report
0 8 * * * /opt/monitoring/daily-report.sh
# Weekly capacity planning
0 0 * * 0 /opt/monitoring/capacity-planning.sh
# Backup monitoring config
0 2 * * * /opt/monitoring/backup-monitoring.sh
Best Practices
- Monitor proactively - Set up alerts before issues occur
- Use multiple data sources - Combine metrics, logs, and traces
- Set appropriate thresholds - Avoid alert fatigue
- Automate responses - Use runbooks for common issues
- Track trends - Look for patterns over time
- Test alert paths - Ensure alerts reach the right people
- Document procedures - Have clear escalation paths
- Review regularly - Update monitoring as systems evolve
Next Steps
Disaster Recovery
Learn how to prepare for and recover from disasters using redisctl.
Topics Covered
- Backup strategies
- Recovery procedures
- Failover processes
- Data validation
- RTO/RPO planning
Full tutorial coming soon - see Managing Production Databases for backup examples in the meantime.
Network Security
Secure your Redis deployments with proper network configuration.
Topics Covered
- VPC peering setup
- Private endpoints
- Security groups
- SSL/TLS configuration
- Access control
Full tutorial coming soon - see Network Connectivity for basic setup.
CI/CD Integration
Integrate redisctl into your continuous integration and deployment pipelines.
Overview
This tutorial covers:
- Automated database provisioning
- Environment promotion
- Testing with Redis
- GitOps workflows
- Blue-green deployments
GitHub Actions
Database Provisioning Workflow
# .github/workflows/provision-redis.yml
name: Provision Redis Database
on:
workflow_dispatch:
inputs:
environment:
description: 'Environment to deploy to'
required: true
type: choice
options:
- development
- staging
- production
memory_gb:
description: 'Memory limit in GB'
required: true
default: '4'
jobs:
provision:
runs-on: ubuntu-latest
environment: ${{ github.event.inputs.environment }}
steps:
- uses: actions/checkout@v3
- name: Install redisctl
run: |
curl -L https://github.com/joshrotenberg/redisctl/releases/latest/download/redisctl-linux-amd64.tar.gz | tar xz
sudo mv redisctl /usr/local/bin/
redisctl --version
- name: Configure credentials
env:
REDIS_CLOUD_API_KEY: ${{ secrets.REDIS_CLOUD_API_KEY }}
REDIS_CLOUD_API_SECRET: ${{ secrets.REDIS_CLOUD_API_SECRET }}
run: |
redisctl profile set ci-${{ github.event.inputs.environment }} \
--deployment cloud \
--api-key "$REDIS_CLOUD_API_KEY" \
--api-secret "$REDIS_CLOUD_API_SECRET"
- name: Create database configuration
run: |
cat > database.json <<EOF
{
"name": "${{ github.event.inputs.environment }}-${{ github.sha }}",
"memoryLimitInGb": ${{ github.event.inputs.memory_gb }},
"protocol": "redis",
"replication": true,
"dataPersistence": "aof-every-1-second",
"dataEvictionPolicy": "allkeys-lru"
}
EOF
- name: Provision database
id: provision
run: |
OUTPUT=$(redisctl cloud database create \
--subscription-id ${{ vars.SUBSCRIPTION_ID }} \
--data @database.json \
--wait \
--output json)
DB_ID=$(echo "$OUTPUT" | jq -r .databaseId)
ENDPOINT=$(echo "$OUTPUT" | jq -r .publicEndpoint)
PASSWORD=$(echo "$OUTPUT" | jq -r .password)
echo "database_id=$DB_ID" >> $GITHUB_OUTPUT
echo "endpoint=$ENDPOINT" >> $GITHUB_OUTPUT
echo "::add-mask::$PASSWORD"
echo "password=$PASSWORD" >> $GITHUB_OUTPUT
- name: Update deployment configuration
run: |
# Update Kubernetes secret
kubectl create secret generic redis-${{ github.event.inputs.environment }} \
--from-literal=endpoint=${{ steps.provision.outputs.endpoint }} \
--from-literal=password=${{ steps.provision.outputs.password }} \
--dry-run=client -o yaml | kubectl apply -f -
- name: Run smoke tests
run: |
# publicEndpoint is "host:port"; split it for redis-cli
ENDPOINT="${{ steps.provision.outputs.endpoint }}"
redis-cli -h "${ENDPOINT%:*}" -p "${ENDPOINT##*:}" \
  -a "${{ steps.provision.outputs.password }}" \
  PING
Environment Promotion
# .github/workflows/promote-environment.yml
name: Promote Database Configuration
on:
workflow_dispatch:
inputs:
from_env:
description: 'Source environment'
required: true
type: choice
options:
- development
- staging
to_env:
description: 'Target environment'
required: true
type: choice
options:
- staging
- production
jobs:
promote:
runs-on: ubuntu-latest
steps:
- name: Export source configuration
run: |
SOURCE_DB=$(redisctl cloud database get \
--subscription-id ${{ vars.SUBSCRIPTION_ID }} \
--database-id ${{ vars[format('{0}_DATABASE_ID', inputs.from_env)] }} \
--output json)
# Extract configuration
echo "$SOURCE_DB" | jq '{
memoryLimitInGb: .memoryLimitInGb,
throughputMeasurement: .throughputMeasurement,
modules: .modules,
alerts: .alerts,
dataEvictionPolicy: .dataEvictionPolicy,
dataPersistence: .dataPersistence
}' > config.json
- name: Apply to target environment
run: |
redisctl cloud database update \
--subscription-id ${{ vars.SUBSCRIPTION_ID }} \
--database-id ${{ vars[format('{0}_DATABASE_ID', inputs.to_env)] }} \
--data @config.json \
--wait
- name: Verify promotion
run: |
TARGET_CONFIG=$(redisctl cloud database get \
--subscription-id ${{ vars.SUBSCRIPTION_ID }} \
--database-id ${{ vars[format('{0}_DATABASE_ID', inputs.to_env)] }} \
--output json)
# Compare configurations
SOURCE_MEMORY=$(cat config.json | jq .memoryLimitInGb)
TARGET_MEMORY=$(echo "$TARGET_CONFIG" | jq .memoryLimitInGb)
if [ "$SOURCE_MEMORY" != "$TARGET_MEMORY" ]; then
echo "Configuration mismatch!"
exit 1
fi
GitLab CI/CD
Pipeline Configuration
# .gitlab-ci.yml
stages:
- validate
- deploy
- test
- promote
variables:
REDISCTL_VERSION: "latest"
before_script:
- curl -L https://github.com/joshrotenberg/redisctl/releases/${REDISCTL_VERSION}/download/redisctl-linux-amd64.tar.gz | tar xz
- mv redisctl /usr/local/bin/
- redisctl profile set gitlab --deployment cloud --api-key "$REDIS_API_KEY" --api-secret "$REDIS_API_SECRET"
validate:config:
stage: validate
script:
- |
for file in configs/*.json; do
echo "Validating $file"
jq empty "$file" || exit 1
done
deploy:development:
stage: deploy
environment: development
script:
- |
redisctl cloud database update \
--subscription-id "$DEV_SUBSCRIPTION_ID" \
--database-id "$DEV_DATABASE_ID" \
--data @configs/development.json \
--wait
only:
- develop
deploy:staging:
stage: deploy
environment: staging
script:
- |
redisctl cloud database update \
--subscription-id "$STAGING_SUBSCRIPTION_ID" \
--database-id "$STAGING_DATABASE_ID" \
--data @configs/staging.json \
--wait
only:
- main
test:integration:
stage: test
script:
- |
# Get database endpoint
ENDPOINT=$(redisctl cloud database get \
--subscription-id "$DEV_SUBSCRIPTION_ID" \
--database-id "$DEV_DATABASE_ID" \
-q "publicEndpoint")
# Run tests
npm test -- --redis-endpoint="$ENDPOINT"
dependencies:
- deploy:development
promote:to:production:
stage: promote
environment: production
when: manual
script:
- |
# Export staging config
redisctl cloud database get \
--subscription-id "$STAGING_SUBSCRIPTION_ID" \
--database-id "$STAGING_DATABASE_ID" \
-o json > staging-config.json
# Apply to production
redisctl cloud database update \
--subscription-id "$PROD_SUBSCRIPTION_ID" \
--database-id "$PROD_DATABASE_ID" \
--data @staging-config.json \
--wait
only:
- main
Jenkins Pipeline
Jenkinsfile
// Jenkinsfile
pipeline {
agent any
environment {
REDIS_CLOUD_API_KEY = credentials('redis-cloud-api-key')
REDIS_CLOUD_API_SECRET = credentials('redis-cloud-api-secret')
}
stages {
stage('Setup') {
steps {
sh '''
curl -L https://github.com/joshrotenberg/redisctl/releases/latest/download/redisctl-linux-amd64.tar.gz | tar xz
chmod +x redisctl
./redisctl profile set jenkins \
--deployment cloud \
--api-key "$REDIS_CLOUD_API_KEY" \
--api-secret "$REDIS_CLOUD_API_SECRET"
'''
}
}
stage('Provision Database') {
when {
expression { params.PROVISION_NEW == true }
}
steps {
script {
def dbConfig = readJSON file: 'database-config.json'
dbConfig.name = "${env.JOB_NAME}-${env.BUILD_NUMBER}"
writeJSON file: 'temp-config.json', json: dbConfig
def result = sh(
script: """
./redisctl cloud database create \
--subscription-id ${params.SUBSCRIPTION_ID} \
--data @temp-config.json \
--wait \
--output json
""",
returnStdout: true
).trim()
def db = readJSON text: result
env.DATABASE_ID = db.databaseId
env.DATABASE_ENDPOINT = db.publicEndpoint
}
}
}
stage('Run Tests') {
steps {
sh '''
export REDIS_ENDPOINT="${DATABASE_ENDPOINT}"
npm test
'''
}
}
stage('Cleanup') {
when {
expression { params.CLEANUP == true }
}
steps {
sh """
./redisctl cloud database delete \
--subscription-id ${params.SUBSCRIPTION_ID} \
--database-id ${env.DATABASE_ID} \
--wait
"""
}
}
}
post {
always {
cleanWs()
}
}
}
ArgoCD GitOps
Application Manifest
# argocd/redis-app.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: redis-databases
namespace: argocd
spec:
project: default
source:
repoURL: https://github.com/your-org/redis-config
targetRevision: HEAD
path: environments/production
destination:
server: https://kubernetes.default.svc
syncPolicy:
automated:
prune: false
selfHeal: true
syncOptions:
- CreateNamespace=true
---
# Stored in the synced repo alongside the other manifests; the
# argocd.argoproj.io/hook annotation tells Argo CD to run this Job
# before each sync (the Application spec itself has no "hooks" field)
apiVersion: batch/v1
kind: Job
metadata:
  name: provision-redis
  annotations:
    argocd.argoproj.io/hook: PreSync
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: redisctl
        image: redisctl/redisctl:latest
        command:
        - /bin/sh
        - -c
        - |
          redisctl cloud database update \
            --subscription-id $SUBSCRIPTION_ID \
            --database-id $DATABASE_ID \
            --data @/config/database.json \
            --wait
        volumeMounts:
        - name: config
          mountPath: /config
      volumes:
      - name: config
        configMap:
          name: redis-config
Terraform Integration
Redis Resource Management
# terraform/redis.tf
terraform {
required_providers {
shell = {
source = "scottwinkler/shell"
version = "~> 1.7"
}
}
}
resource "shell_script" "redis_database" {
lifecycle_commands {
create = <<-EOT
redisctl cloud database create \
--subscription-id ${var.subscription_id} \
--data '${jsonencode(var.database_config)}' \
--wait \
--output json
EOT
read = <<-EOT
redisctl cloud database get \
--subscription-id ${var.subscription_id} \
--database-id $(cat database_id.txt) \
--output json
EOT
update = <<-EOT
redisctl cloud database update \
--subscription-id ${var.subscription_id} \
--database-id $(cat database_id.txt) \
--data '${jsonencode(var.database_config)}' \
--wait \
--output json
EOT
delete = <<-EOT
redisctl cloud database delete \
--subscription-id ${var.subscription_id} \
--database-id $(cat database_id.txt) \
--wait
EOT
}
environment = {
REDIS_CLOUD_API_KEY = var.redis_api_key
REDIS_CLOUD_API_SECRET = var.redis_api_secret
}
}
output "redis_endpoint" {
value = jsondecode(shell_script.redis_database.output)["publicEndpoint"]
}
output "redis_password" {
value = jsondecode(shell_script.redis_database.output)["password"]
sensitive = true
}
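The configuration above references several input variables without declaring them. A sketch of the corresponding `variables.tf` (names and types are illustrative):

```hcl
# terraform/variables.tf -- declarations assumed by redis.tf
variable "subscription_id" {
  type        = string
  description = "Redis Cloud subscription ID"
}

variable "database_config" {
  type        = any
  description = "Database payload passed to redisctl as JSON"
}

variable "redis_api_key" {
  type      = string
  sensitive = true
}

variable "redis_api_secret" {
  type      = string
  sensitive = true
}
```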
Blue-Green Deployments
Deployment Script
#!/bin/bash
# blue-green-deploy.sh
set -euo pipefail
# Configuration
SUBSCRIPTION_ID="${SUBSCRIPTION_ID}"
BLUE_DB_ID="${BLUE_DATABASE_ID}"
GREEN_DB_ID="${GREEN_DATABASE_ID}"
LOAD_BALANCER="${LOAD_BALANCER_NAME}"
# Determine current active environment
CURRENT_ACTIVE=$(kubectl get service redis-active -o jsonpath='{.spec.selector.version}')
echo "Current active: $CURRENT_ACTIVE"
if [ "$CURRENT_ACTIVE" = "blue" ]; then
TARGET="green"
TARGET_DB_ID="$GREEN_DB_ID"
else
TARGET="blue"
TARGET_DB_ID="$BLUE_DB_ID"
fi
echo "Deploying to $TARGET environment (Database: $TARGET_DB_ID)"
# Update target database
echo "Updating $TARGET database configuration..."
redisctl cloud database update \
--subscription-id "$SUBSCRIPTION_ID" \
--database-id "$TARGET_DB_ID" \
--data @new-config.json \
--wait
# Run health checks
echo "Running health checks on $TARGET..."
ENDPOINT=$(redisctl cloud database get \
--subscription-id "$SUBSCRIPTION_ID" \
--database-id "$TARGET_DB_ID" \
-q "publicEndpoint")
HEALTHY=false
for i in {1..10}; do
  # ENDPOINT is "host:port"; split it for redis-cli
  if redis-cli -h "${ENDPOINT%:*}" -p "${ENDPOINT##*:}" PING | grep -q PONG; then
    echo "Health check passed"
    HEALTHY=true
    break
  fi
  sleep 5
done
if [ "$HEALTHY" != "true" ]; then
  echo "Health checks failed on $TARGET; aborting before traffic switch"
  exit 1
fi
# Run smoke tests
echo "Running smoke tests..."
./run-smoke-tests.sh "$ENDPOINT"
# Switch traffic
echo "Switching traffic to $TARGET..."
kubectl patch service redis-active \
-p '{"spec":{"selector":{"version":"'$TARGET'"}}}'
# Monitor for errors
echo "Monitoring for errors..."
sleep 30
ERROR_COUNT=$(kubectl logs -l app=redis,version=$TARGET --tail=100 | grep -c ERROR || true)
if [ "$ERROR_COUNT" -gt 0 ]; then
echo "Errors detected! Rolling back..."
kubectl patch service redis-active \
-p '{"spec":{"selector":{"version":"'$CURRENT_ACTIVE'"}}}'
exit 1
fi
echo "Deployment successful! $TARGET is now active"
Best Practices
- Store credentials securely - Use secrets management systems
- Use dedicated CI/CD profiles - Don't reuse production credentials
- Implement rollback mechanisms - Always have a way to revert
- Test in staging first - Never deploy directly to production
- Monitor deployments - Watch for errors during and after deployment
- Use infrastructure as code - Version control your Redis configurations
- Implement approval gates - Require manual approval for production
- Audit all changes - Log who changed what and when
- Use immutable deployments - Create new resources rather than updating
- Automate validation - Test configurations before applying
Next Steps
Environment Variables
Complete reference of environment variables supported by redisctl.
Redis Cloud
Variable | Description | Example |
---|---|---|
REDIS_CLOUD_API_KEY | API account key | A3qcymrvqpn9rr... |
REDIS_CLOUD_API_SECRET | API secret key | S3s8ecrrnaguqk... |
REDIS_CLOUD_API_URL | API endpoint (optional) | https://api.redislabs.com/v1 |
Redis Enterprise
Variable | Description | Example |
---|---|---|
REDIS_ENTERPRISE_URL | Cluster API URL | https://cluster:9443 |
REDIS_ENTERPRISE_USER | Username | admin@cluster.local |
REDIS_ENTERPRISE_PASSWORD | Password | your-password |
REDIS_ENTERPRISE_INSECURE | Allow self-signed certs | true or false |
General
Variable | Description | Example |
---|---|---|
REDISCTL_PROFILE | Default profile name | production |
REDISCTL_OUTPUT | Default output format | json , yaml , table |
RUST_LOG | Logging level | error , warn , info , debug |
NO_COLOR | Disable colored output | 1 or any value |
Usage Examples
Basic Setup
# Redis Cloud
export REDIS_CLOUD_API_KEY="your-key"
export REDIS_CLOUD_API_SECRET="your-secret"
# Redis Enterprise
export REDIS_ENTERPRISE_URL="https://localhost:9443"
export REDIS_ENTERPRISE_USER="admin@cluster.local"
export REDIS_ENTERPRISE_PASSWORD="password"
export REDIS_ENTERPRISE_INSECURE="true"
Debugging
# Enable debug logging
export RUST_LOG=debug
redisctl api cloud get /
# Trace specific modules
export RUST_LOG=redisctl=debug,redis_cloud=trace
CI/CD
# GitHub Actions
env:
REDIS_CLOUD_API_KEY: ${{ secrets.REDIS_API_KEY }}
REDIS_CLOUD_API_SECRET: ${{ secrets.REDIS_API_SECRET }}
Precedence
Environment variables are overridden by:
- Command-line flags (highest priority)
- Configuration file settings
But override:
- Default values (lowest priority)
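The resolution order above can be sketched as a small function (purely illustrative; this is not redisctl's actual implementation):

```python
import os

# Illustrative sketch of the precedence described above:
# command-line flag > config file setting > environment variable > default
def resolve(flag_value, config_value, env_var, default):
    """Return the first defined value in precedence order."""
    if flag_value is not None:
        return flag_value
    if config_value is not None:
        return config_value
    env = os.environ.get(env_var)
    if env is not None:
        return env
    return default
```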
Configuration File
Complete reference for the redisctl configuration file format and options.
File Location
The configuration file is stored at:
- Linux/macOS:
~/.config/redisctl/config.toml
- Windows:
%APPDATA%\redis\redisctl\config.toml
View the exact path:
redisctl profile path
File Format
The configuration file uses TOML format:
# Default profile to use when none specified
default_profile = "production"
# Profile definitions
[profiles.production]
deployment_type = "cloud"
api_key = "your-api-key"
api_secret = "your-api-secret"
api_url = "https://api.redislabs.com/v1"
[profiles.enterprise-local]
deployment_type = "enterprise"
url = "https://localhost:9443"
username = "admin@cluster.local"
password = "your-password"
insecure = true
Profile Configuration
Cloud Profile
All available options for Redis Cloud profiles:
[profiles.cloud-example]
# Required: Deployment type
deployment_type = "cloud"
# Required: API credentials
api_key = "A3qcymrvqpn9rrgdt40sv5f9yfxob26vx64hwddh8vminqnkgfq"
api_secret = "S3s8ecrrnaguqkvwfvealoe3sn25zqs4wc4lwgo4rb0ud3qm77c"
# Optional: API endpoint (defaults to production)
api_url = "https://api.redislabs.com/v1"
# Optional: Custom timeout (seconds)
timeout = 30
# Optional: Retry configuration
max_retries = 3
retry_delay = 1
Enterprise Profile
All available options for Redis Enterprise profiles:
[profiles.enterprise-example]
# Required: Deployment type
deployment_type = "enterprise"
# Required: Cluster URL
url = "https://cluster.example.com:9443"
# Required: Authentication
username = "admin@example.com"
password = "secure-password"
# Optional: Allow self-signed certificates
insecure = false
# Optional: Custom timeout (seconds)
timeout = 60
# Optional: Client certificate authentication
client_cert = "/path/to/client.crt"
client_key = "/path/to/client.key"
# Optional: Custom CA certificate
ca_cert = "/path/to/ca.crt"
Environment Variable Expansion
The configuration file supports environment variable expansion using ${VAR}
syntax:
Basic Expansion
[profiles.production]
deployment_type = "cloud"
api_key = "${REDIS_CLOUD_API_KEY}"
api_secret = "${REDIS_CLOUD_API_SECRET}"
With Default Values
[profiles.staging]
deployment_type = "cloud"
api_key = "${STAGING_API_KEY}"
api_secret = "${STAGING_API_SECRET}"
# Use production URL if STAGING_API_URL not set
api_url = "${STAGING_API_URL:-https://api.redislabs.com/v1}"
Complex Example
default_profile = "${REDISCTL_DEFAULT_PROFILE:-development}"
[profiles.development]
deployment_type = "cloud"
api_key = "${DEV_API_KEY}"
api_secret = "${DEV_API_SECRET}"
api_url = "${DEV_API_URL:-https://api.redislabs.com/v1}"
[profiles.production]
deployment_type = "cloud"
api_key = "${PROD_API_KEY}"
api_secret = "${PROD_API_SECRET}"
api_url = "${PROD_API_URL:-https://api.redislabs.com/v1}"
[profiles."${DYNAMIC_PROFILE_NAME:-custom}"]
deployment_type = "${DYNAMIC_DEPLOYMENT:-cloud}"
api_key = "${DYNAMIC_API_KEY}"
api_secret = "${DYNAMIC_API_SECRET}"
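The `${VAR}` and `${VAR:-default}` forms follow shell parameter-expansion conventions. A minimal sketch of the substitution logic (illustrative only, not redisctl's actual parser):

```python
import os
import re

# Matches ${VAR} and ${VAR:-default}; illustrative only
_VAR = re.compile(r"\$\{(\w+)(?::-([^}]*))?\}")

def expand(text, env=None):
    """Expand ${VAR} and ${VAR:-default} references in a config string."""
    env = os.environ if env is None else env

    def repl(match):
        name, default = match.group(1), match.group(2)
        value = env.get(name)
        if value:                  # set and non-empty
            return value
        if default is not None:    # ${VAR:-default} covers unset and empty
            return default
        if value is None:          # bare ${VAR} that is unset: treat as an error
            raise KeyError(f"environment variable {name} is not set")
        return ""                  # set but empty, no default

    return _VAR.sub(repl, text)
```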
Multiple Profiles
Organizing by Environment
# Development environments
[profiles.dev-cloud]
deployment_type = "cloud"
api_key = "${DEV_CLOUD_KEY}"
api_secret = "${DEV_CLOUD_SECRET}"
[profiles.dev-enterprise]
deployment_type = "enterprise"
url = "https://dev-cluster:9443"
username = "dev-admin"
password = "${DEV_ENTERPRISE_PASSWORD}"
insecure = true
# Staging environments
[profiles.staging-cloud]
deployment_type = "cloud"
api_key = "${STAGING_CLOUD_KEY}"
api_secret = "${STAGING_CLOUD_SECRET}"
[profiles.staging-enterprise]
deployment_type = "enterprise"
url = "https://staging-cluster:9443"
username = "staging-admin"
password = "${STAGING_ENTERPRISE_PASSWORD}"
# Production environments
[profiles.prod-cloud]
deployment_type = "cloud"
api_key = "${PROD_CLOUD_KEY}"
api_secret = "${PROD_CLOUD_SECRET}"
[profiles.prod-enterprise]
deployment_type = "enterprise"
url = "https://prod-cluster:9443"
username = "prod-admin"
password = "${PROD_ENTERPRISE_PASSWORD}"
Organizing by Region
[profiles.us-east-1]
deployment_type = "cloud"
api_key = "${US_EAST_API_KEY}"
api_secret = "${US_EAST_SECRET}"
[profiles.eu-west-1]
deployment_type = "cloud"
api_key = "${EU_WEST_API_KEY}"
api_secret = "${EU_WEST_SECRET}"
[profiles.ap-southeast-1]
deployment_type = "cloud"
api_key = "${APAC_API_KEY}"
api_secret = "${APAC_SECRET}"
Advanced Configuration
Team Shared Configuration
Create a shared base configuration:
# team-config.toml (checked into git)
[profiles.team-base]
deployment_type = "cloud"
api_url = "https://api.redislabs.com/v1"
# Local overrides (not in git)
# ~/.config/redisctl/config.toml
[profiles.team]
deployment_type = "cloud"
api_url = "https://api.redislabs.com/v1"
api_key = "${MY_API_KEY}"
api_secret = "${MY_API_SECRET}"
CI/CD Configuration
# CI/CD specific profiles
[profiles.ci-test]
deployment_type = "cloud"
api_key = "${CI_TEST_API_KEY}"
api_secret = "${CI_TEST_API_SECRET}"
api_url = "${CI_API_URL:-https://api.redislabs.com/v1}"
[profiles.ci-deploy]
deployment_type = "enterprise"
url = "${CI_CLUSTER_URL}"
username = "${CI_USERNAME}"
password = "${CI_PASSWORD}"
insecure = true # CI uses self-signed certs
Security Considerations
File Permissions
Set restrictive permissions on the configuration file:
# Linux/macOS
chmod 600 ~/.config/redisctl/config.toml
# Verify permissions
ls -la ~/.config/redisctl/config.toml
# Should show: -rw-------
Credential Storage Best Practices
- Never commit credentials to version control
  # .gitignore
  config.toml
  *.secret
- Use environment variables for sensitive data
  [profiles.secure]
  deployment_type = "cloud"
  api_key = "${REDIS_API_KEY}"       # Set in environment
  api_secret = "${REDIS_API_SECRET}" # Set in environment
- Integrate with secret managers
  # Set environment variables from secret manager
  export REDIS_API_KEY=$(vault kv get -field=api_key secret/redis)
  export REDIS_API_SECRET=$(vault kv get -field=api_secret secret/redis)
Migration from Other Formats
From Environment Variables Only
If currently using only environment variables:
# Create profile from environment
redisctl profile set migrated \
--deployment cloud \
--api-key "$REDIS_CLOUD_API_KEY" \
--api-secret "$REDIS_CLOUD_API_SECRET"
From JSON Configuration
Convert JSON to TOML:
# old-config.json
{
"profiles": {
"production": {
"type": "cloud",
"apiKey": "key",
"apiSecret": "secret"
}
}
}
# Convert to config.toml
[profiles.production]
deployment_type = "cloud"
api_key = "key"
api_secret = "secret"
Validation
Check Configuration
# Validate profile configuration
redisctl profile show production
# Test authentication
redisctl auth test --profile production
# List all profiles
redisctl profile list
Common Issues
Invalid TOML syntax
# Wrong - missing quotes
[profiles.prod]
deployment_type = cloud # Should be "cloud"
# Correct
[profiles.prod]
deployment_type = "cloud"
Environment variable not found
# This will fail if MY_VAR is not set
api_key = "${MY_VAR}"
# Use default value to prevent failure
api_key = "${MY_VAR:-default-key}"
Profile name with special characters
# Use quotes for profile names with special characters
[profiles."prod-us-east-1"]
deployment_type = "cloud"
Backup and Recovery
Backup Configuration
# Backup current configuration
cp ~/.config/redisctl/config.toml ~/.config/redisctl/config.toml.backup
# Backup with timestamp
cp ~/.config/redisctl/config.toml \
~/.config/redisctl/config.toml.$(date +%Y%m%d_%H%M%S)
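The two commands above can be wrapped in a helper that also prunes old snapshots. A minimal sketch, assuming a keep-the-newest-N policy; the function name and defaults are illustrative, not redisctl features:

```shell
#!/bin/bash
# Timestamped backup with pruning (sketch; keep-count policy is illustrative)
backup_config() {
    local config="${1:-$HOME/.config/redisctl/config.toml}"
    local keep="${2:-5}"
    local backup="${config}.$(date +%Y%m%d_%H%M%S)"

    cp "$config" "$backup" || return 1

    # Remove backups older than the newest $keep
    ls -1t "${config}".* 2>/dev/null | tail -n +$((keep + 1)) | \
        while read -r old; do rm -- "$old"; done

    echo "$backup"
}
```

Run from cron (e.g. daily) so a bad edit never costs more than one day of profile changes.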
Restore Configuration
# Restore from backup
cp ~/.config/redisctl/config.toml.backup ~/.config/redisctl/config.toml
# Verify restoration
redisctl profile list
Example Configurations
Minimal Configuration
# Minimal working configuration
[profiles.default]
deployment_type = "cloud"
api_key = "your-key"
api_secret = "your-secret"
Full-Featured Configuration
# Complete example with all features
default_profile = "production"
# Production Cloud
[profiles.production]
deployment_type = "cloud"
api_key = "${PROD_API_KEY}"
api_secret = "${PROD_API_SECRET}"
api_url = "${PROD_API_URL:-https://api.redislabs.com/v1}"
# Staging Cloud with defaults
[profiles.staging]
deployment_type = "cloud"
api_key = "${STAGING_API_KEY}"
api_secret = "${STAGING_API_SECRET}"
api_url = "https://api.redislabs.com/v1"
# Development Enterprise
[profiles.dev-enterprise]
deployment_type = "enterprise"
url = "https://dev-cluster:9443"
username = "admin@dev.local"
password = "${DEV_PASSWORD}"
insecure = true
# DR Enterprise with client certs
[profiles.dr-enterprise]
deployment_type = "enterprise"
url = "https://dr-cluster:9443"
username = "admin@dr.local"
password = "${DR_PASSWORD}"
client_cert = "/etc/ssl/client.crt"
client_key = "/etc/ssl/client.key"
ca_cert = "/etc/ssl/ca.crt"
# Local testing
[profiles.local]
deployment_type = "enterprise"
url = "https://localhost:9443"
username = "admin@cluster.local"
password = "test123"
insecure = true
Security Best Practices
This guide covers security best practices for using redisctl in production environments.
Credential Storage
Storage Methods Comparison
Method | Security Level | Use Case | Pros | Cons |
---|---|---|---|---|
OS Keyring | ⭐⭐⭐⭐⭐ High | Production | Encrypted by OS, most secure | Requires secure-storage feature |
Environment Variables | ⭐⭐⭐⭐ Good | CI/CD, containers | No file storage, easy rotation | Must be set each session |
Config File (Plaintext) | ⭐⭐ Low | Development only | Simple setup | Credentials visible in file |
Using OS Keyring (Recommended for Production)
The most secure way to store credentials is using your operating system's keyring:
# Install with secure storage support
cargo install redisctl --features secure-storage
# Create secure profile
redisctl profile set production \
--deployment cloud \
--api-key "your-api-key" \
--api-secret "your-api-secret" \
--use-keyring
Platform Support
- macOS: Uses Keychain (automatic)
- Windows: Uses Credential Manager (automatic)
- Linux: Uses Secret Service (requires GNOME Keyring or KWallet)
How Keyring Storage Works
- Initial Setup: When you use --use-keyring, credentials are stored in the OS keyring
- Config Reference: The config file stores references like keyring:production-api-key
- Automatic Retrieval: redisctl retrieves credentials from the keyring automatically when needed
- Secure Updates: Credentials can be updated without exposing them in files
Example config with keyring references:
[profiles.production]
deployment_type = "cloud"
api_key = "keyring:production-api-key" # Actual value in keyring
api_secret = "keyring:production-api-secret" # Actual value in keyring
api_url = "https://api.redislabs.com/v1" # Non-sensitive, plaintext
Environment Variables (CI/CD)
For automated environments, use environment variables:
# Set credentials
export REDIS_CLOUD_API_KEY="your-key"
export REDIS_CLOUD_API_SECRET="your-secret"
# Use in commands (overrides config)
redisctl cloud database list
# Or reference in config
cat > config.toml <<EOF
[profiles.ci]
deployment_type = "cloud"
api_key = "\${REDIS_CLOUD_API_KEY}"
api_secret = "\${REDIS_CLOUD_API_SECRET}"
EOF
GitHub Actions Example
- name: Deploy Database
env:
REDIS_CLOUD_API_KEY: ${{ secrets.REDIS_API_KEY }}
REDIS_CLOUD_API_SECRET: ${{ secrets.REDIS_API_SECRET }}
run: |
redisctl cloud database create \
--subscription-id 12345 \
--data @database.json \
--wait
File Permissions
Protect configuration files containing credentials:
# Restrict to owner only
chmod 600 ~/.config/redisctl/config.toml
# Verify permissions
ls -la ~/.config/redisctl/config.toml
# -rw------- 1 user user 1234 Jan 15 10:00 config.toml
Credential Rotation
Regular Rotation Schedule
- Generate new credentials in Redis Cloud/Enterprise console
- Update keyring with new credentials:
redisctl profile set production \
  --api-key "new-key" \
  --api-secret "new-secret" \
  --use-keyring
- Test access with new credentials
- Revoke old credentials in console
Automated Rotation Script
#!/bin/bash
# rotate-credentials.sh
PROFILE="production"
NEW_KEY=$(generate-api-key) # Your key generation method
NEW_SECRET=$(generate-api-secret)
# Update credentials
redisctl profile set "$PROFILE" \
--api-key "$NEW_KEY" \
--api-secret "$NEW_SECRET" \
--use-keyring
# Test new credentials
if redisctl --profile "$PROFILE" cloud subscription list > /dev/null; then
echo "Credential rotation successful"
# Notify old credentials can be revoked
else
echo "Credential rotation failed"
exit 1
fi
Secure Development Practices
Never Commit Credentials
Add to .gitignore:
# Redis configuration
config.toml
.redisctl/
*.secret
*_credentials.toml
Use Git Hooks
Pre-commit hook to detect credentials:
#!/bin/bash
# .git/hooks/pre-commit
# Check for API keys
if git diff --cached | grep -E "api_key|api_secret|password" | grep -v "keyring:"; then
echo "ERROR: Potential credentials detected in commit"
echo "Use --use-keyring or environment variables instead"
exit 1
fi
Separate Development and Production
Use different profiles for each environment:
# Development (with keyring for safety)
[profiles.dev]
deployment_type = "cloud"
api_key = "keyring:dev-api-key"
api_secret = "keyring:dev-api-secret"
# Staging
[profiles.staging]
deployment_type = "cloud"
api_key = "keyring:staging-api-key"
api_secret = "keyring:staging-api-secret"
# Production
[profiles.production]
deployment_type = "cloud"
api_key = "keyring:production-api-key"
api_secret = "keyring:production-api-secret"
Audit and Monitoring
Profile Usage Audit
Monitor which profiles are being used:
# Enable debug logging
export RUST_LOG=debug
# Commands will log profile usage
redisctl --profile production cloud database list
# [DEBUG] Using Redis Cloud profile: production
Access Logging
Create wrapper script for audit logging:
#!/bin/bash
# /usr/local/bin/redisctl-audit
# Log command execution
echo "[$(date)] User: $USER, Command: redisctl $*" >> /var/log/redisctl-audit.log
# Execute actual command
exec /usr/local/bin/redisctl "$@"
Credential Access Monitoring
Monitor keyring access (macOS example):
# View keychain access logs
log show --predicate 'subsystem == "com.apple.securityd"' --last 1h
Network Security
TLS/SSL Verification
Always verify SSL certificates in production:
[profiles.production]
deployment_type = "enterprise"
url = "https://cluster.example.com:9443"
username = "admin@example.com"
password = "keyring:production-password"
insecure = false # Never true in production
IP Whitelisting
Configure API access from specific IPs only:
- In Redis Cloud console, set IP whitelist
- In Redis Enterprise, configure firewall rules
- Document allowed IPs in team runbook
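The third step can be made checkable in scripts. A sketch that tests an address against a plain-text allowlist; the file format (one IP per line, # for comments) and function name are assumptions for this example:

```shell
#!/bin/bash
# Check whether an IP is in the documented allowlist (sketch)
# Assumed allowlist format: one IP per line, lines starting with '#' are comments
ip_allowed() {
    local ip="$1" allowlist="$2"
    grep -v '^#' "$allowlist" | grep -qxF "$ip"
}
```

Paired with an egress-IP lookup, this can gate CI jobs before they hit the API from an unapproved address.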
Incident Response
Compromised Credentials
If credentials are compromised:
- Immediately revoke compromised credentials in console
- Generate new credentials
- Update all systems using the credentials:
# Update all profiles using compromised credentials
for profile in $(redisctl profile list | grep production); do
  redisctl profile set "$profile" \
    --api-key "new-key" \
    --api-secret "new-secret" \
    --use-keyring
done
- Audit access logs for unauthorized usage
- Document incident and update security procedures
Security Checklist
- Using OS keyring for production credentials
- Config files have restricted permissions (600)
- Credentials not committed to version control
- Environment variables used in CI/CD
- Regular credential rotation scheduled
- Audit logging enabled
- SSL verification enabled
- IP whitelisting configured
- Incident response plan documented
- Team trained on security procedures
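Two of the items above (restricted permissions, no plaintext credentials) can be audited automatically. A sketch, assuming secrets appear only as keyring: references or ${ENV_VAR} expansions; the function name and policy are illustrative, not redisctl features:

```shell
#!/bin/bash
# Audit a config file for the two mechanical checklist items (sketch)
audit_config() {
    local config="$1" ok=0
    local mode

    # 1. Permissions must be 600 (GNU stat, with BSD fallback)
    mode=$(stat -c %a "$config" 2>/dev/null || stat -f %Lp "$config")
    if [ "$mode" != "600" ]; then
        echo "WARN: $config has mode $mode (expected 600)"
        ok=1
    fi

    # 2. Secrets must be keyring references or ${ENV_VAR} expansions
    if grep -E '^(api_key|api_secret|password) *= *"' "$config" \
        | grep -vE '"(keyring:|\$\{)' >/dev/null; then
        echo "WARN: plaintext credentials found in $config"
        ok=1
    fi
    return $ok
}
```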
Troubleshooting
Solutions for common issues when using redisctl.
Installation Issues
Binary Not Found
Problem: command not found: redisctl
Solutions:
# Check if binary is in PATH
which redisctl
# Add to PATH (Linux/macOS)
export PATH="$PATH:/path/to/redisctl"
echo 'export PATH="$PATH:/path/to/redisctl"' >> ~/.bashrc
# Make executable
chmod +x /path/to/redisctl
# Verify installation
redisctl --version
Permission Denied
Problem: permission denied: redisctl
Solutions:
# Make executable
chmod +x redisctl
# If installed system-wide
sudo chmod +x /usr/local/bin/redisctl
# Check ownership
ls -la $(which redisctl)
SSL Certificate Errors
Problem: Certificate verification failed
Solutions:
# For self-signed certificates (Enterprise)
export REDIS_ENTERPRISE_INSECURE=true
# Update CA certificates (Linux)
sudo update-ca-certificates
# macOS
brew install ca-certificates
Authentication Issues
Invalid Credentials
Problem: 401 Unauthorized or Authentication failed
Diagnosis:
# Test credentials directly
redisctl auth test --profile prod
# Check environment variables
env | grep REDIS
# Verify profile configuration
redisctl profile show prod
Solutions:
# Re-set credentials
redisctl profile set prod \
--deployment cloud \
--api-key "correct-key" \
--api-secret "correct-secret"
# For Enterprise with special characters in password
redisctl profile set enterprise \
--deployment enterprise \
--url "https://cluster:9443" \
--username "admin@domain.com" \
--password 'p@$$w0rd!' # Use single quotes
Profile Not Found
Problem: Profile 'name' not found
Solutions:
# List available profiles
redisctl profile list
# Check config file location
redisctl profile path
# Create missing profile
redisctl profile set missing-profile \
--deployment cloud \
--api-key "$API_KEY" \
--api-secret "$SECRET"
# Set default profile
redisctl profile default prod
Environment Variable Issues
Problem: Environment variables not being read
Solutions:
# Export variables properly
export REDIS_CLOUD_API_KEY="key"
export REDIS_CLOUD_API_SECRET="secret"
# Check if set
echo $REDIS_CLOUD_API_KEY
# Use in same shell or source
source ~/.bashrc
# Debug with trace logging
RUST_LOG=trace redisctl cloud subscription list 2>&1 | grep -i env
Connection Issues
Network Timeout
Problem: Connection timeout or Failed to connect
Diagnosis:
# Test connectivity
curl -I https://api.redislabs.com/v1/
ping api.redislabs.com
# For Enterprise
curl -k https://your-cluster:9443/v1/bootstrap
# Check DNS
nslookup api.redislabs.com
Solutions:
# Increase timeout (if supported in future versions)
export REDISCTL_TIMEOUT=60
# Check proxy settings
export HTTP_PROXY=http://proxy:8080
export HTTPS_PROXY=http://proxy:8080
# Bypass proxy for local
export NO_PROXY=localhost,127.0.0.1
# Test with curl first
curl -x $HTTPS_PROXY https://api.redislabs.com/v1/
SSL/TLS Errors
Problem: SSL certificate problem or Certificate verify failed
Solutions for Enterprise:
# Allow self-signed certificates
export REDIS_ENTERPRISE_INSECURE=true
# Or in profile
redisctl profile set enterprise \
--deployment enterprise \
--url "https://cluster:9443" \
--username "admin" \
--password "pass" \
--insecure
# Import certificate
# Linux
sudo cp cluster-cert.pem /usr/local/share/ca-certificates/
sudo update-ca-certificates
# macOS
sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain cluster-cert.pem
Port Blocked
Problem: Connection refused
Solutions:
# Check if port is open
nc -zv api.redislabs.com 443
nc -zv your-cluster 9443
# Check firewall rules
# Linux
sudo iptables -L -n | grep 9443
# macOS
sudo pfctl -s rules
# Windows
netsh advfirewall firewall show rule name=all
API Errors
Rate Limiting
Problem: 429 Too Many Requests
Solutions:
# Add delay between requests
for sub in $(cat subscriptions.txt); do
redisctl cloud subscription get $sub
sleep 2 # Wait 2 seconds
done
# Implement exponential backoff
retry_with_backoff() {
local max_attempts=5
local attempt=0
local delay=1
while [ $attempt -lt $max_attempts ]; do
if "$@"; then
return 0
fi
echo "Rate limited, waiting ${delay}s..."
sleep $delay
attempt=$((attempt + 1))
delay=$((delay * 2))
done
return 1
}
retry_with_backoff redisctl cloud subscription list
Resource Not Found
Problem: 404 Not Found
Diagnosis:
# Verify resource exists
redisctl cloud subscription list
redisctl cloud database list --subscription-id 123456
# Check ID format
# Cloud: subscription_id:database_id
# Enterprise: numeric
Solutions:
# Use correct ID format
# Cloud
redisctl cloud database get \
--subscription-id 123456 \
--database-id 789
# Enterprise
redisctl enterprise database get 1
# List to find correct ID
redisctl cloud subscription list -q "[].{id: id, name: name}"
Invalid Request
Problem: 400 Bad Request
Solutions:
# Validate JSON
cat payload.json | jq .
# Check required fields
# Example: database creation requires name and memoryLimitInGb
# (JSON does not allow inline comments, so annotate outside the payload)
cat > database.json <<EOF
{
  "name": "my-database",
  "memoryLimitInGb": 1
}
EOF
# Use schema validation (if available)
redisctl validate database.json
# Test with minimal payload first
echo '{"name": "test", "memoryLimitInGb": 1}' | \
redisctl api cloud post /subscriptions/123/databases --data @-
Command Issues
Command Not Recognized
Problem: Unknown command
Solutions:
# Check available commands
redisctl --help
redisctl cloud --help
redisctl enterprise --help
# Update to latest version
# Download latest from GitHub releases
# Check command syntax
redisctl cloud database list --subscription-id 123 # Correct
redisctl cloud database list 123 # Incorrect
Missing Required Arguments
Problem: Missing required argument
Solutions:
# Check command requirements
redisctl cloud database get --help
# Provide all required arguments (--subscription-id and --database-id are both required)
redisctl cloud database get \
  --subscription-id 123456 \
  --database-id 789
# Use environment variables for defaults
export REDIS_SUBSCRIPTION_ID=123456
Output Parsing Errors
Problem: JMESPath query errors or unexpected output
Solutions:
# Test query separately
redisctl cloud subscription list -o json | jq .
redisctl cloud subscription list -q "[].name"
# Escape special characters
redisctl cloud database list -q "[?name=='my-db']" # Correct
redisctl cloud database list -q '[?name==`my-db`]' # Also correct
# Debug output format
redisctl cloud subscription list -o json > output.json
cat output.json | jq '.[] | keys'
Async Operation Issues
Operation Timeout
Problem: Operation timeout when using --wait
Solutions:
# Increase timeout
redisctl cloud database create \
--subscription-id 123 \
--data @db.json \
--wait \
--wait-timeout 1200 # 20 minutes
# Check operation status manually
TASK_ID=$(redisctl cloud database create \
--subscription-id 123 \
--data @db.json \
-q "taskId")
# Poll manually
while true; do
STATUS=$(redisctl api cloud get /tasks/$TASK_ID -q "status")
echo "Status: $STATUS"
if [ "$STATUS" = "completed" ] || [ "$STATUS" = "failed" ]; then
break
fi
sleep 30
done
Task Not Found
Problem: Cannot find task ID for async operation
Solutions:
# Check if operation returns task ID
redisctl cloud database create \
--subscription-id 123 \
--data @db.json \
-o json | jq .
# Some operations might not be async
# Check API documentation
# List recent tasks
redisctl api cloud get /tasks --query-params "limit=10"
Configuration Issues
Config File Not Found
Problem: Configuration file not loading
Solutions:
# Check file location
redisctl profile path
# Create config directory
mkdir -p ~/.config/redisctl
# Initialize config
redisctl profile set default \
--deployment cloud \
--api-key "key" \
--api-secret "secret"
# Check permissions
chmod 600 ~/.config/redisctl/config.toml
Environment Variable Expansion
Problem: Variables in config not expanding
Solutions:
# config.toml
[profiles.prod]
deployment_type = "cloud"
api_key = "${REDIS_API_KEY}" # Will expand
api_secret = "$REDIS_SECRET" # Won't expand - needs braces
# With defaults
api_url = "${REDIS_API_URL:-https://api.redislabs.com/v1}"
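The ${VAR} and ${VAR:-default} forms mirror POSIX shell parameter expansion, so the behavior is easy to verify in a shell before relying on it in config.toml (variable names here are illustrative):

```shell
#!/bin/sh
# ${VAR:-default} semantics, as in POSIX parameter expansion
unset MY_VAR
echo "${MY_VAR:-default-key}"   # unset -> prints default-key

MY_VAR="real-key"
echo "${MY_VAR:-default-key}"   # set -> prints real-key
```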
Performance Issues
Slow Response Times
Solutions:
# Enable caching (if implemented)
export REDISCTL_CACHE=true
# Reduce response size
redisctl cloud subscription list --query-params "fields=id,name"
# Use specific queries
redisctl cloud database list -q "[0:5]" # First 5 only
# Parallel processing
for id in $(cat database-ids.txt); do
redisctl cloud database get --subscription-id 123 --database-id $id &
done
wait
Large Output Handling
Solutions:
# Paginate results
LIMIT=50
OFFSET=0
while true; do
  RESULTS=$(redisctl api cloud get /subscriptions \
    --query-params "limit=$LIMIT&offset=$OFFSET")
  # Stop when the API returns an empty page
  [ "$(echo "$RESULTS" | jq 'length')" = "0" ] && break
  # Process results here
  OFFSET=$((OFFSET + LIMIT))
done
# Stream to file
redisctl cloud database list --subscription-id 123 > databases.json
# Process with streaming tools
redisctl cloud database list --subscription-id 123 | jq -c '.[]' | while read db; do
echo "Processing: $(echo $db | jq -r .name)"
done
Debug Techniques
Enable Debug Logging
# Basic debug
export RUST_LOG=debug
redisctl cloud subscription list
# Trace everything
export RUST_LOG=trace
# Specific modules
export RUST_LOG=redisctl=debug,redis_cloud=trace
# Save debug output
RUST_LOG=trace redisctl cloud subscription list 2> debug.log
Inspect HTTP Traffic
# Use proxy for inspection
export HTTP_PROXY=http://localhost:8888
# Run Charles Proxy or similar
# Or use trace logging
RUST_LOG=trace redisctl api cloud get /subscriptions 2>&1 | grep -i "http"
Test with Curl
# Replicate redisctl request with curl
# Cloud
curl -H "x-api-key: $API_KEY" \
-H "x-api-secret-key: $SECRET" \
https://api.redislabs.com/v1/subscriptions
# Enterprise
curl -k -u "admin:password" \
https://cluster:9443/v1/cluster
Getting Help
Resources
- Check documentation:
redisctl --help
redisctl <command> --help
- View debug information:
redisctl --version
RUST_LOG=debug redisctl profile list
- Report issues
  - GitHub Issues: https://github.com/joshrotenberg/redisctl/issues
  - Include: version, command, error message, debug output
- Community support
  - Redis Discord
  - Stack Overflow with tag redisctl
Information to Provide
When reporting issues, include:
# Version
redisctl --version
# Command that failed
redisctl cloud database list --subscription-id 123
# Error message
# Full error output
# Debug output
RUST_LOG=debug redisctl cloud database list --subscription-id 123 2>&1
# Environment
uname -a
echo $SHELL
# Config (sanitized)
redisctl profile show prod | sed 's/api_key=.*/api_key=REDACTED/'
Best Practices
Recommended patterns and practices for using redisctl effectively.
Profile Management
Use Separate Profiles for Each Environment
# Development
redisctl profile set dev \
--deployment cloud \
--api-key "$DEV_API_KEY" \
--api-secret "$DEV_SECRET"
# Staging
redisctl profile set staging \
--deployment cloud \
--api-key "$STAGING_API_KEY" \
--api-secret "$STAGING_SECRET"
# Production
redisctl profile set prod \
--deployment cloud \
--api-key "$PROD_API_KEY" \
--api-secret "$PROD_SECRET"
Naming Conventions
Use consistent, descriptive profile names:
# Good
cloud-prod
cloud-staging
enterprise-prod
enterprise-dr
# Avoid
prod1
test
my-profile
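A convention only sticks if scripts can enforce it. A sketch that validates names against the deployment-environment pattern above; the accepted environment list is an assumption for this example:

```shell
#!/bin/bash
# Validate profile names against the <deployment>-<environment> convention
# (the environment list is illustrative)
valid_profile_name() {
    [[ "$1" =~ ^(cloud|enterprise)-(prod|staging|dev|dr)$ ]]
}
```

Piping `redisctl profile list` through this in CI flags nonconforming names before they spread.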
Secure Credential Storage
# Use environment variables in config
# ~/.config/redisctl/config.toml
[profiles.prod]
deployment_type = "cloud"
api_key = "${REDIS_PROD_API_KEY}"
api_secret = "${REDIS_PROD_SECRET}"
# Set restrictive permissions
chmod 600 ~/.config/redisctl/config.toml
# Use secret management tools
export REDIS_PROD_API_KEY=$(vault kv get -field=api_key secret/redis/prod)
Command Usage
Always Specify Profile for Production
# Explicit is better than implicit
redisctl --profile prod cloud database list --subscription-id 123
# Avoid relying on default profile for production
redisctl cloud database delete --subscription-id 123 --database-id 456 # Dangerous!
Use Output Formats Appropriately
# Human reading: table
redisctl cloud subscription list -o table
# Scripting: json with jq
redisctl cloud subscription list -o json | jq -r '.[].id'
# Quick checks: query
redisctl cloud database get --subscription-id 123 --database-id 456 -q "status"
Implement Idempotent Operations
# Check before create
check_database_exists() {
local name=$1
redisctl cloud database list --subscription-id 123 \
-q "[?name=='$name'].databaseId" | jq -r '.[]'
}
# Only create if doesn't exist
DB_ID=$(check_database_exists "my-database")
if [ -z "$DB_ID" ]; then
redisctl cloud database create --subscription-id 123 --data @db.json --wait
fi
Error Handling
Always Check Exit Codes
#!/bin/bash
set -euo pipefail # Exit on error, undefined variables, pipe failures
# Check individual commands
if ! redisctl cloud subscription list > /dev/null 2>&1; then
echo "Failed to list subscriptions"
exit 1
fi
# Or use && and ||
redisctl cloud database create --subscription-id 123 --data @db.json --wait \
&& echo "Database created successfully" \
|| { echo "Database creation failed"; exit 1; }
Implement Retry Logic
retry_command() {
local max_attempts=${MAX_ATTEMPTS:-3}
local delay=${RETRY_DELAY:-5}
local attempt=1
while [ $attempt -le $max_attempts ]; do
if "$@"; then
return 0
fi
echo "Attempt $attempt failed. Retrying in ${delay}s..." >&2
sleep $delay
attempt=$((attempt + 1))
delay=$((delay * 2)) # Exponential backoff
done
echo "Command failed after $max_attempts attempts" >&2
return 1
}
# Usage
retry_command redisctl cloud database list --subscription-id 123
Log Operations
# Create audit log
log_operation() {
local timestamp=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
local user=$(whoami)
local command="$*"
echo "[$timestamp] User: $user, Command: $command" >> ~/.redisctl/audit.log
# Execute and log result
if "$@"; then
echo "[$timestamp] Result: SUCCESS" >> ~/.redisctl/audit.log
return 0
else
local exit_code=$?
echo "[$timestamp] Result: FAILED (exit: $exit_code)" >> ~/.redisctl/audit.log
return $exit_code
fi
}
# Usage
log_operation redisctl --profile prod cloud database delete \
--subscription-id 123 --database-id 456 --wait
Async Operations
Always Use --wait for Critical Operations
# Good: Wait for completion
redisctl cloud database create \
--subscription-id 123 \
--data @database.json \
--wait \
--wait-timeout 900
# Risky: Fire and forget
redisctl cloud database create \
--subscription-id 123 \
--data @database.json # Might fail silently
Handle Timeouts Gracefully
# Create with timeout handling
create_database_with_retry() {
local attempt=1
local max_attempts=3
while [ $attempt -le $max_attempts ]; do
echo "Creating database (attempt $attempt)..."
if redisctl cloud database create \
--subscription-id 123 \
--data @database.json \
--wait \
--wait-timeout 600; then
echo "Database created successfully"
return 0
fi
echo "Creation failed or timed out"
attempt=$((attempt + 1))
# Check if partially created
DB_ID=$(redisctl cloud database list --subscription-id 123 \
-q "[?name=='my-database'].databaseId" | jq -r '.[]')
if [ -n "$DB_ID" ]; then
echo "Database partially created with ID: $DB_ID"
# Clean up or continue based on state
return 1
fi
done
return 1
}
Security
Never Hardcode Credentials
# Bad
redisctl profile set prod \
--api-key "abc123def456" \
--api-secret "secret789xyz"
# Good
redisctl profile set prod \
--api-key "$REDIS_API_KEY" \
--api-secret "$REDIS_API_SECRET"
# Better
redisctl profile set prod \
--api-key "$(vault kv get -field=api_key secret/redis)" \
--api-secret "$(vault kv get -field=api_secret secret/redis)"
Rotate Credentials Regularly
#!/bin/bash
# rotate-credentials.sh
# Generate new API key (via Redis Cloud UI or API)
NEW_API_KEY=$(generate_new_api_key)
NEW_API_SECRET=$(generate_new_api_secret)
# Update profile
redisctl profile set prod \
--api-key "$NEW_API_KEY" \
--api-secret "$NEW_API_SECRET"
# Test new credentials
if redisctl --profile prod cloud subscription list > /dev/null 2>&1; then
echo "New credentials working"
# Revoke old credentials
revoke_old_credentials
else
echo "New credentials failed, keeping old ones"
exit 1
fi
Audit Access
# Track who uses production credentials
alias redisctl-prod='log_operation redisctl --profile prod'
# Review audit logs regularly
grep "profile prod" ~/.redisctl/audit.log | tail -20
Performance
Cache Frequently Used Data
# Cache subscription list for 5 minutes
get_subscriptions() {
local cache_file="/tmp/redisctl-subs-cache.json"
local cache_age=$((5 * 60)) # 5 minutes
# Check cache age
if [ -f "$cache_file" ]; then
local file_age=$(($(date +%s) - $(stat -f %m "$cache_file" 2>/dev/null || stat -c %Y "$cache_file")))
if [ $file_age -lt $cache_age ]; then
cat "$cache_file"
return 0
fi
fi
# Refresh cache
redisctl cloud subscription list -o json | tee "$cache_file"
}
Batch Operations
# Good: Single command with multiple operations
redisctl cloud database update \
--subscription-id 123 \
--database-id 456 \
--data '{
"memoryLimitInGb": 16,
"throughputMeasurement": {"by": "operations-per-second", "value": 50000},
"alerts": [{"name": "dataset-size", "value": 90}]
}'
# Avoid: Multiple separate updates
redisctl cloud database update --subscription-id 123 --database-id 456 \
--data '{"memoryLimitInGb": 16}'
redisctl cloud database update --subscription-id 123 --database-id 456 \
--data '{"throughputMeasurement": {"by": "operations-per-second", "value": 50000}}'
Use Appropriate Query Filters
# Efficient: Filter at API level
redisctl api cloud get /subscriptions --query-params "status=active"
# Less efficient: Filter after fetching
redisctl cloud subscription list -o json | jq '.[] | select(.status == "active")'
Automation
Create Reusable Scripts
#!/bin/bash
# provision-database.sh
set -euo pipefail
# Required parameters
ENVIRONMENT=${1:?Environment required (dev/staging/prod)}
DATABASE_NAME=${2:?Database name required}
MEMORY_GB=${3:-4}
# Load environment config
source "config/${ENVIRONMENT}.env"
# Create database config
cat > /tmp/database.json <<EOF
{
"name": "${DATABASE_NAME}-${ENVIRONMENT}",
"memoryLimitInGb": ${MEMORY_GB},
"replication": $([ "$ENVIRONMENT" = "prod" ] && echo "true" || echo "false"),
"dataPersistence": "$([ "$ENVIRONMENT" = "prod" ] && echo "aof-every-1-second" || echo "none")"
}
EOF
# Create database
redisctl --profile "${ENVIRONMENT}-cloud" cloud database create \
--subscription-id "${SUBSCRIPTION_ID}" \
--data @/tmp/database.json \
--wait
# Clean up
rm /tmp/database.json
Use Configuration Files
# config/environments.yaml
environments:
development:
profile: dev-cloud
subscription_id: 12345
defaults:
memory_gb: 2
replication: false
persistence: none
production:
profile: prod-cloud
subscription_id: 67890
defaults:
memory_gb: 16
replication: true
persistence: aof-every-1-second
Implement GitOps
# .github/workflows/redis-sync.yml
name: Sync Redis Configuration
on:
push:
paths:
- 'redis-config/*.json'
jobs:
sync:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Apply configurations
run: |
for config in redis-config/*.json; do
db_name=$(basename "$config" .json)
redisctl cloud database update \
--subscription-id ${{ secrets.SUBSCRIPTION_ID }} \
--database-id $(cat "redis-config/${db_name}.id") \
--data @"$config" \
--wait
done
Monitoring and Alerting
Regular Health Checks
#!/bin/bash
# health-check.sh
check_database_health() {
local sub_id=$1
local db_id=$2
local status=$(redisctl cloud database get \
--subscription-id "$sub_id" \
--database-id "$db_id" \
-q "status")
if [ "$status" != "active" ]; then
alert "Database $db_id is $status"
return 1
fi
return 0
}
# Run checks
while read -r sub_id db_id; do
check_database_health "$sub_id" "$db_id"
done < databases.txt
Track Changes
# Before making changes
backup_configuration() {
local timestamp=$(date +%Y%m%d_%H%M%S)
local backup_dir="backups/${timestamp}"
mkdir -p "$backup_dir"
# Backup all database configs
while read -r sub_id; do
redisctl cloud database list --subscription-id "$sub_id" \
-o json > "${backup_dir}/sub_${sub_id}_databases.json"
done < subscriptions.txt
echo "Configuration backed up to $backup_dir"
}
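With snapshots in place, reviewing drift is a diff between the two most recent backup directories. A sketch assuming the backups/<timestamp>/ layout produced above; the function name and latest-two selection are illustrative:

```shell
#!/bin/bash
# Diff the two most recent backup snapshots for one subscription (sketch)
diff_latest_backups() {
    local sub_id="$1"
    local old new
    # Timestamped directory names sort lexically in chronological order
    old=$(ls -1d backups/*/ | sort | tail -2 | head -1)
    new=$(ls -1d backups/*/ | sort | tail -1)
    diff -u "${old}sub_${sub_id}_databases.json" \
            "${new}sub_${sub_id}_databases.json"
}
```

A non-zero exit means the configuration drifted between snapshots, which makes this easy to wire into a change-review step.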
Documentation
Document Your Setup
# Create README for your Redis setup
cat > Redis-Setup.md <<'EOF'
# Redis Infrastructure
## Profiles
- `prod-cloud`: Production Cloud environment
- `prod-enterprise`: Production Enterprise cluster
- `dr-enterprise`: Disaster recovery cluster
## Key Databases
- `user-sessions`: Session storage (16GB, 100k ops/sec)
- `product-cache`: Product catalog cache (8GB, 50k ops/sec)
- `analytics-stream`: Analytics event stream (32GB, 200k ops/sec)
## Maintenance Windows
- Production: Sunday 2-4 AM UTC
- Staging: Any time
## Runbooks
- [Database Creation](./runbooks/create-database.md)
- [Scaling Operations](./runbooks/scaling.md)
- [Disaster Recovery](./runbooks/dr.md)
EOF
Maintain Runbooks
# Runbook: Database Scaling

## When to Scale
- Memory usage > 80% for 30 minutes
- Throughput > 90% of limit
- Latency > 5ms p99

## How to Scale
1. Check current metrics:
   ./scripts/check-metrics.sh prod-database
2. Calculate new size:
   - Memory: current usage * 1.5
   - Throughput: current peak * 2
3. Apply scaling:
   ./scripts/scale-database.sh prod-database --memory 32 --throughput 200000
4. Verify:
   ./scripts/verify-scaling.sh prod-database
Summary Checklist
✅ **Profiles**: Use separate profiles for each environment
✅ **Security**: Never hardcode credentials
✅ **Error Handling**: Check exit codes and implement retries
✅ **Async Ops**: Always use --wait for critical operations
✅ **Logging**: Audit all production operations
✅ **Automation**: Create reusable, parameterized scripts
✅ **Monitoring**: Implement regular health checks
✅ **Documentation**: Maintain runbooks and setup documentation
✅ **Testing**: Test changes in non-production first
✅ **Backups**: Backup configurations before changes
API Reference
Complete reference for direct API access using redisctl.
Overview
The api command provides direct access to REST endpoints for both the Redis Cloud and Redis Enterprise APIs.
redisctl api <deployment> <method> <path> [OPTIONS]
Command Structure
Deployment Types
- cloud - Redis Cloud API
- enterprise - Redis Enterprise API
HTTP Methods
- get - HTTP GET request
- post - HTTP POST request
- put - HTTP PUT request
- patch - HTTP PATCH request
- delete - HTTP DELETE request
Path Format
- Must start with /
- Can include path parameters
- Query parameters via --query-params
Options
Option | Description | Example |
---|---|---|
--data <JSON> | Request body (inline or @file) | --data @payload.json |
--query-params <PARAMS> | URL query parameters | --query-params "limit=10&offset=0" |
--headers <HEADERS> | Additional HTTP headers | --headers "X-Custom: value" |
-o, --output <FORMAT> | Output format | -o json |
-q, --query <JMESPATH> | JMESPath filter | -q "[].name" |
--profile <NAME> | Use specific profile | --profile prod |
Redis Cloud API
Base URL
https://api.redislabs.com/v1
Authentication
- Header: x-api-key: <api-key>
- Header: x-api-secret-key: <secret-key>
Common Endpoints
Account
# Get account info
redisctl api cloud get /
# Get payment methods
redisctl api cloud get /payment-methods
# Get regions
redisctl api cloud get /regions
Subscriptions
# List subscriptions
redisctl api cloud get /subscriptions
# Get specific subscription
redisctl api cloud get /subscriptions/123456
# Create subscription
redisctl api cloud post /subscriptions --data @subscription.json
# Update subscription
redisctl api cloud put /subscriptions/123456 --data '{"name": "New Name"}'
# Delete subscription
redisctl api cloud delete /subscriptions/123456
Databases
# List databases
redisctl api cloud get /subscriptions/123456/databases
# Get database
redisctl api cloud get /subscriptions/123456/databases/789
# Create database
redisctl api cloud post /subscriptions/123456/databases --data @database.json
# Update database
redisctl api cloud put /subscriptions/123456/databases/789 --data '{"memoryLimitInGb": 8}'
# Delete database
redisctl api cloud delete /subscriptions/123456/databases/789
VPC Peering
# List VPC peerings
redisctl api cloud get /subscriptions/123456/peerings
# Create VPC peering
redisctl api cloud post /subscriptions/123456/peerings --data @vpc.json
# Get peering status
redisctl api cloud get /subscriptions/123456/peerings/abc123
# Delete peering
redisctl api cloud delete /subscriptions/123456/peerings/abc123
Tasks
# Get task status
redisctl api cloud get /tasks/task-123
# List tasks
redisctl api cloud get /tasks --query-params "status=processing"
ACL
# List ACL rules
redisctl api cloud get /subscriptions/123456/databases/789/acl/rules
# Create ACL rule
redisctl api cloud post /subscriptions/123456/databases/789/acl/rules --data @rule.json
# List ACL users
redisctl api cloud get /subscriptions/123456/databases/789/acl/users
# Create ACL user
redisctl api cloud post /subscriptions/123456/databases/789/acl/users --data @user.json
Response Codes
Code | Meaning | Action |
---|---|---|
200 | Success | Request completed |
201 | Created | Resource created |
202 | Accepted | Async operation started |
400 | Bad Request | Check request format |
401 | Unauthorized | Check API credentials |
403 | Forbidden | Check permissions |
404 | Not Found | Verify resource exists |
409 | Conflict | Resource state conflict |
429 | Rate Limited | Retry after delay |
500 | Server Error | Contact support |
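The table above translates directly into retry logic. This sketch classifies a status code into an action; the function name and action labels are illustrative, not part of redisctl:

```shell
#!/bin/bash
# Map an HTTP status code from the table above to a next action.
classify_status() {
  case "$1" in
    200|201) echo "done" ;;
    202)     echo "poll-task" ;;          # async operation started
    429|500) echo "retry" ;;              # transient: back off and retry
    400|401|403|404|409) echo "fail" ;;   # fix the request, don't retry
    *)       echo "unknown" ;;
  esac
}

classify_status 202   # an async create should be followed by task polling
classify_status 429
```

A wrapper script can branch on this result instead of scattering status checks through every call site.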
Redis Enterprise API
Base URL
https://<cluster-address>:9443
Authentication
- Basic Auth: `username:password`
- Header: `Authorization: Basic <base64>`
Common Endpoints
Cluster
# Get cluster info
redisctl api enterprise get /v1/cluster
# Update cluster
redisctl api enterprise put /v1/cluster --data '{"name": "Production"}'
# Get cluster policy
redisctl api enterprise get /v1/cluster/policy
# Update policy
redisctl api enterprise put /v1/cluster/policy --data @policy.json
Databases (BDB)
# List databases
redisctl api enterprise get /v1/bdbs
# Get database
redisctl api enterprise get /v1/bdbs/1
# Create database
redisctl api enterprise post /v1/bdbs --data @bdb.json
# Update database
redisctl api enterprise put /v1/bdbs/1 --data '{"memory_size": 10737418240}'
# Delete database
redisctl api enterprise delete /v1/bdbs/1
Nodes
# List nodes
redisctl api enterprise get /v1/nodes
# Get node
redisctl api enterprise get /v1/nodes/1
# Update node
redisctl api enterprise put /v1/nodes/1 --data '{"rack_id": "rack-1"}'
# Node actions
redisctl api enterprise post /v1/nodes/1/actions/check
Users & RBAC
# List users
redisctl api enterprise get /v1/users
# Create user
redisctl api enterprise post /v1/users --data @user.json
# Get user
redisctl api enterprise get /v1/users/1
# Update user
redisctl api enterprise put /v1/users/1 --data '{"name": "Updated Name"}'
# Delete user
redisctl api enterprise delete /v1/users/1
# List roles
redisctl api enterprise get /v1/roles
Statistics
# Cluster stats
redisctl api enterprise get /v1/cluster/stats/last
# Database stats
redisctl api enterprise get /v1/bdbs/stats/last
# Node stats
redisctl api enterprise get /v1/nodes/stats/last
# Shard stats
redisctl api enterprise get /v1/shards/stats/last
Modules
# List modules
redisctl api enterprise get /v1/modules
# Upload module (requires multipart)
# Use module command instead: redisctl enterprise module upload --file module.zip
# Get module
redisctl api enterprise get /v1/modules/1
# Delete module
redisctl api enterprise delete /v1/modules/1
Logs
# Get cluster logs
redisctl api enterprise get /v1/logs --query-params "limit=100"
# Filter logs by time
redisctl api enterprise get /v1/logs --query-params "stime=2024-01-01T00:00:00Z&etime=2024-01-02T00:00:00Z"
API Versions
Redis Enterprise supports both v1 and v2 endpoints:
Version | Status | Usage |
---|---|---|
v1 | Stable | Most operations |
v2 | Preview | New features, async operations |
# v1 endpoint
redisctl api enterprise get /v1/bdbs
# v2 endpoint (if available)
redisctl api enterprise get /v2/bdbs
Query Parameters
Common query parameters across APIs:
Parameter | Description | Example |
---|---|---|
limit | Max results | limit=50 |
offset | Skip results | offset=100 |
sort | Sort field | sort=name |
order | Sort order | order=desc |
fields | Select fields | fields=name,status |
filter | Filter results | filter=status:active |
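When several of these parameters are combined, it can help to assemble the string in one place. A minimal sketch (variable names are illustrative):

```shell
#!/bin/bash
# Build a --query-params string from individual settings.
limit=50
offset=100
sort=name
order=desc

params="limit=${limit}&offset=${offset}&sort=${sort}&order=${order}"
echo "$params"
# Would be passed as:
#   redisctl api cloud get /subscriptions --query-params "$params"
```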
Request Body Formats
JSON Payload
# Inline JSON
redisctl api cloud post /path --data '{"key": "value"}'
# From file
redisctl api cloud post /path --data @payload.json
# From stdin
echo '{"key": "value"}' | redisctl api cloud post /path --data @-
Complex Examples
Create Database with Full Configuration
{
"name": "production-cache",
"memoryLimitInGb": 16,
"protocol": "redis",
"port": 10000,
"throughputMeasurement": {
"by": "operations-per-second",
"value": 100000
},
"replication": true,
"dataPersistence": "aof-every-1-second",
"dataEvictionPolicy": "allkeys-lru",
"modules": [
{"name": "RedisJSON"},
{"name": "RediSearch"}
],
"alerts": [
{"name": "dataset-size", "value": 80}
],
"backup": {
"interval": 6,
"enabled": true
}
}
Update Multiple Properties
redisctl api cloud put /subscriptions/123/databases/456 --data '{
"memoryLimitInGb": 32,
"throughputMeasurement": {
"by": "operations-per-second",
"value": 200000
},
"alerts": [
{"name": "dataset-size", "value": 90},
{"name": "throughput-higher-than", "value": 180000}
]
}'
Response Handling
Success Response
# Pretty print JSON
redisctl api cloud get /subscriptions -o json | jq .
# Extract specific fields
redisctl api cloud get /subscriptions -q "[].{id: id, name: name}"
# Table format
redisctl api cloud get /subscriptions -o table
Error Response
{
"error": {
"code": "INVALID_REQUEST",
"message": "Missing required field: name",
"details": {
"field": "name",
"constraint": "required"
}
}
}
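An error body in this shape can be unpacked with jq before deciding how to react. A sketch using a sample payload in place of a live response:

```shell
#!/bin/bash
# Extract the code and message from an error response like the one above.
response='{"error":{"code":"INVALID_REQUEST","message":"Missing required field: name"}}'

code=$(echo "$response" | jq -r '.error.code')
message=$(echo "$response" | jq -r '.error.message')

echo "API error $code: $message"
```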
Async Operations
# Create returns task ID
TASK_ID=$(redisctl api cloud post /subscriptions/123/databases \
--data @database.json \
-q "taskId")
# Poll task status
while true; do
STATUS=$(redisctl api cloud get /tasks/$TASK_ID -q "status")
if [ "$STATUS" = "completed" ]; then
break
elif [ "$STATUS" = "failed" ]; then
echo "Task failed!"
exit 1
fi
sleep 10
done
Rate Limiting
Both APIs implement rate limiting:
Redis Cloud
- Default: 100 requests per minute
- Burst: 150 requests
- Headers: `X-RateLimit-Limit`, `X-RateLimit-Remaining`
Redis Enterprise
- Configurable per cluster
- Default: No rate limiting
- Can be enabled in cluster settings
Handling Rate Limits
# Retry with exponential backoff
retry_with_backoff() {
local max_attempts=5
local attempt=0
local delay=1
while [ $attempt -lt $max_attempts ]; do
if redisctl api cloud get /subscriptions; then
return 0
fi
echo "Rate limited, waiting ${delay}s..."
sleep $delay
attempt=$((attempt + 1))
delay=$((delay * 2))
done
return 1
}
Pagination
Handle paginated results:
#!/bin/bash
# Fetch all pages
LIMIT=100
OFFSET=0
ALL_RESULTS=()
while true; do
RESULTS=$(redisctl api cloud get /subscriptions \
--query-params "limit=$LIMIT&offset=$OFFSET" \
-o json)
COUNT=$(echo "$RESULTS" | jq '. | length')
if [ "$COUNT" -eq 0 ]; then
break
fi
ALL_RESULTS+=("$RESULTS")
OFFSET=$((OFFSET + LIMIT))
done
# Combine results
echo "${ALL_RESULTS[@]}" | jq -s 'flatten'
Best Practices
- Use profiles for credential management
- Handle errors gracefully with proper error checking
- Implement retries for transient failures
- Respect rate limits with backoff strategies
- Use pagination for large result sets
- Cache responses when appropriate
- Log API calls for audit trails
- Validate JSON before sending
- Use query filters to reduce response size
- Monitor API usage to stay within limits
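For the "validate JSON before sending" point, jq can act as a cheap linter: `jq empty` exits non-zero on a parse error. A sketch:

```shell
#!/bin/bash
# Check that a payload parses as JSON before passing it to --data.
payload='{"name": "cache", "memoryLimitInGb": 4}'

if echo "$payload" | jq empty >/dev/null 2>&1; then
  result="valid"
else
  result="invalid"
fi
echo "$result"
```

Running this check in scripts before a `post` or `put` turns a 400 from the API into an immediate local failure.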
Troubleshooting
Debug API Calls
# Enable debug logging
RUST_LOG=debug redisctl api cloud get /subscriptions
# View request headers
RUST_LOG=trace redisctl api cloud get /subscriptions 2>&1 | grep -i header
# Test with curl
curl -H "x-api-key: $API_KEY" \
-H "x-api-secret-key: $SECRET" \
https://api.redislabs.com/v1/subscriptions
Common Issues
401 Unauthorized
- Check API credentials
- Verify profile configuration
- Ensure credentials have necessary permissions
404 Not Found
- Verify endpoint path
- Check resource IDs
- Ensure API version is correct
429 Rate Limited
- Implement retry logic
- Add delays between requests
- Consider caching responses
500 Server Error
- Check API status page
- Retry with exponential backoff
- Contact support if persistent
Architecture
Coming soon - this section will cover:
- Overall project structure
- Command routing system
- API client design
- Profile management
- Error handling strategy
Using the Libraries
The `redis-cloud` and `redis-enterprise` crates can be used independently in your Rust projects.
Installation
[dependencies]
redis-cloud = "0.2"
redis-enterprise = "0.2"
Basic Usage
Redis Cloud Client
```rust
use redis_cloud::Client;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::new(
        "your-api-key",
        "your-api-secret",
    )?;

    // Get account info
    let account = client.get_raw("/account").await?;
    println!("{}", account);

    Ok(())
}
```
Redis Enterprise Client
```rust
use redis_enterprise::Client;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::new(
        "https://cluster:9443",
        "admin@cluster.local",
        "password",
        true, // insecure: accept self-signed cluster certificates
    )?;

    // Get cluster info
    let cluster = client.get_raw("/v1/cluster").await?;
    println!("{}", cluster);

    Ok(())
}
```
More documentation coming soon.
Contributing
Contributions are welcome! Please see the GitHub repository for guidelines.