
CubeShell & Cluster Utilities Guide

Overview

CubeShell is an enhanced interactive SQL shell for managing Cube database clusters with full support for:

  • Multi-node cluster connections
  • Consistency level management
  • Cluster topology visualization
  • Replication monitoring
  • Health checking
  • Token ring management

Features

✅ Cluster Management

  • Connect to multiple nodes simultaneously
  • View cluster topology and node states
  • Switch between nodes
  • Monitor node health
  • View datacenter/rack distribution

✅ Consistency Control

  • Set default consistency levels
  • Choose from: ANY, ONE, TWO, THREE, QUORUM, ALL, LOCAL_ONE, LOCAL_QUORUM
  • View consistency requirements per operation

✅ Data Operations

  • PUT, GET, DELETE with replication
  • SCAN with prefix search
  • Automatic consistency level application

✅ Monitoring & Stats

  • Node status and health
  • Replication statistics
  • Storage statistics per node
  • Cluster-wide aggregated stats

Quick Start

Starting CubeShell

# Connect to default localhost:8080
./cubesh

# Connect to specific node
./cubesh --host 192.168.1.100 --port 8080
./cubesh -h dbserver.local -p 9000

Starting Java Directly

java -cp target/cube-db-1.0.0.jar com.cube.shell.CubeShell --host localhost --port 8080

Shell Commands

Cluster Management Commands

CONNECT - Add Node to Cluster

cube> CONNECT <host> <port>

Examples:
cube> CONNECT localhost 8080
cube> CONNECT 192.168.1.101 8080
cube> CONNECT node2.cluster.local 8080

DISCONNECT - Remove Node

cube> DISCONNECT <node-id>

Example:
cube> DISCONNECT node-192.168.1.101-8080

NODES / CLUSTER - View All Nodes

cube> NODES
cube> CLUSTER

Output:
╔════════════════════════════════════════════════════════════╗
║                    Cluster Nodes                           ║
╠════════════════════════════════════════════════════════════╣
║ ➜ ✓ node-localhost-8080  localhost:8080     DC:dc1         ║
║   ✓ node-192.168.1.101-8080  192.168.1.101:8080 DC:dc1    ║
║   ✗ node-192.168.1.102-8080  192.168.1.102:8080 DC:dc2    ║
╠════════════════════════════════════════════════════════════╣
║ Total Nodes: 3    Alive: 2    Current: node-localhost-8080 ║
╚════════════════════════════════════════════════════════════╝

Legend:

  • ➜ = Current active node
  • ✓ = Node is alive
  • ✗ = Node is down/unreachable

USE - Switch Active Node

cube> USE <node-id>

Example:
cube> USE node-192.168.1.101-8080
✓ Switched to node-192.168.1.101-8080

STATUS - View Current Node Status

cube> STATUS

Output:
╔════════════════════════════════════════════════════════════╗
║                    Node Status                             ║
╠════════════════════════════════════════════════════════════╣
║ Node:        node-localhost-8080                           ║
║ Endpoint:    localhost:8080                                ║
║ Status:      ✓ ALIVE                                       ║
╠════════════════════════════════════════════════════════════╣
║ Storage Statistics:                                        ║
║   Total Keys:     1250                                     ║
║   Total Size:     524288 bytes                             ║
║   MemTable Size:  65536 bytes                              ║
║   SSTable Count:  3                                        ║
╚════════════════════════════════════════════════════════════╝

STATS - View Replication Statistics

cube> STATS

Output:
╔════════════════════════════════════════════════════════════╗
║              Replication Statistics                        ║
╠════════════════════════════════════════════════════════════╣
║ Cluster Nodes:           3                                 ║
║ Alive Nodes:             2                                 ║
║ Default Consistency:     QUORUM                            ║
╠════════════════════════════════════════════════════════════╣
║ Datacenter Distribution:                                   ║
║   dc1:                   2 nodes                           ║
║   dc2:                   1 node                            ║
╚════════════════════════════════════════════════════════════╝

Consistency Level Commands

CONSISTENCY / CL - Set Consistency Level

cube> CONSISTENCY <level>
cube> CL <level>

Examples:
cube> CONSISTENCY QUORUM
✓ Consistency level set to QUORUM

cube> CL ONE
✓ Consistency level set to ONE

cube> CONSISTENCY
Current consistency level: QUORUM

Available levels:
  ANY - Requires response from any node (including hints)
  ONE - Requires response from 1 replica
  TWO - Requires response from 2 replicas
  THREE - Requires response from 3 replicas
  QUORUM - Requires response from majority of replicas
  ALL - Requires response from all replicas
  LOCAL_ONE - Requires response from 1 local replica
  LOCAL_QUORUM - Requires response from local quorum
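The acknowledgement counts behind these levels reduce to a simple mapping from level name to required replica responses. A minimal sketch (the class and method names here are hypothetical, not CubeShell's actual API):

```java
// Illustrative mapping from consistency level to required replica acks.
// Not the real CubeShell implementation -- a sketch of the semantics above.
public class ConsistencyLevels {
    public static int requiredAcks(String level, int replicationFactor) {
        switch (level) {
            case "ANY":
            case "ONE":
            case "LOCAL_ONE":
                return 1;
            case "TWO":
                return 2;
            case "THREE":
                return 3;
            case "QUORUM":
            case "LOCAL_QUORUM":
                return replicationFactor / 2 + 1;  // strict majority
            case "ALL":
                return replicationFactor;
            default:
                throw new IllegalArgumentException("Unknown level: " + level);
        }
    }
}
```

With a replication factor of 3, QUORUM needs 2 acks; with 5, it needs 3 — which is why a QUORUM cluster tolerates a minority of nodes being down.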

Data Operation Commands

PUT - Write Data

cube> PUT <key> <value>

Examples:
cube> PUT user:1 Alice
✓ PUT successful
  Key: user:1
  Value: Alice
  CL: QUORUM

cube> PUT product:laptop "MacBook Pro"
✓ PUT successful
  Key: product:laptop
  Value: MacBook Pro
  CL: QUORUM

GET - Read Data

cube> GET <key>

Examples:
cube> GET user:1
✓ Found
  Key: user:1
  Value: Alice
  CL: QUORUM

cube> GET nonexistent
✗ Not found: nonexistent

DELETE - Remove Data

cube> DELETE <key>

Example:
cube> DELETE user:1
✓ DELETE successful
  Key: user:1
  CL: QUORUM

SCAN - Prefix Search

cube> SCAN <prefix>

Example:
cube> SCAN user:
✓ Found 3 result(s)

┌────────────────────────────┬────────────────────────────┐
│ Key                        │ Value                      │
├────────────────────────────┼────────────────────────────┤
│ user:1                     │ Alice                      │
│ user:2                     │ Bob                        │
│ user:3                     │ Charlie                    │
└────────────────────────────┴────────────────────────────┘
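Over a lexicographically sorted key space (such as an LSM memtable), a prefix SCAN like the one above reduces to a range query. A minimal sketch using Java's `TreeMap` (hypothetical helper, not the engine's actual storage code):

```java
import java.util.NavigableMap;
import java.util.SortedMap;

// Illustrative prefix scan over a sorted key space.
public class PrefixScan {
    public static SortedMap<String, String> scan(NavigableMap<String, String> store,
                                                 String prefix) {
        // In lexicographic order, every key with this prefix falls in
        // the half-open range [prefix, prefix + '\uffff').
        return store.subMap(prefix, true, prefix + '\uffff', false);
    }
}
```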

Shell Utility Commands

HISTORY - View Command History

cube> HISTORY

Output:
╔════════════════════════════════════════════════════════════╗
║                   Command History                          ║
╠════════════════════════════════════════════════════════════╣
║   1: CONNECT localhost 8080                                ║
║   2: CONNECT 192.168.1.101 8080                            ║
║   3: NODES                                                 ║
║   4: CONSISTENCY QUORUM                                    ║
║   5: PUT user:1 Alice                                      ║
╚════════════════════════════════════════════════════════════╝

CLEAR - Clear Screen

cube> CLEAR

HELP / ? - Show Help

cube> HELP
cube> ?

EXIT / QUIT - Exit Shell

cube> EXIT
cube> QUIT
Goodbye!

Cluster Utilities API

ClusterUtils.HealthChecker

Monitors node health automatically:

import com.cube.cluster.ClusterUtils;

Map<String, ClusterNode> nodes = new HashMap<>();
nodes.put("node1", node1);
nodes.put("node2", node2);

ClusterUtils.HealthChecker healthChecker = new ClusterUtils.HealthChecker(
    nodes,
    5000,   // Check every 5 seconds
    15000   // 15 second timeout
);

healthChecker.start();

// Automatically marks nodes as SUSPECTED or DEAD if no heartbeat
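The SUSPECTED/DEAD transition amounts to comparing heartbeat silence against two thresholds. A minimal sketch of that logic (names are illustrative; the real `HealthChecker` internals may differ):

```java
// Sketch of heartbeat-based state classification, as described above.
public class HeartbeatCheck {
    public enum State { ALIVE, SUSPECTED, DEAD }

    public static State classify(long lastHeartbeatMillis, long nowMillis,
                                 long suspectAfterMillis, long deadAfterMillis) {
        long silence = nowMillis - lastHeartbeatMillis;
        if (silence >= deadAfterMillis) return State.DEAD;
        if (silence >= suspectAfterMillis) return State.SUSPECTED;
        return State.ALIVE;
    }
}
```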

ClusterUtils.Topology

Visualize cluster topology:

import com.cube.cluster.ClusterUtils;

List<ClusterNode> nodes = getAllClusterNodes();

ClusterUtils.Topology topology = new ClusterUtils.Topology(nodes);

// Get nodes by datacenter
List<ClusterNode> dc1Nodes = topology.getNodesByDatacenter("dc1");

// Get nodes by rack
List<ClusterNode> rackNodes = topology.getNodesByRack("dc1", "rack1");

// Print topology
topology.printTopology();

Output:

╔════════════════════════════════════════════════════════════╗
║                   Cluster Topology                         ║
╠════════════════════════════════════════════════════════════╣
║ Total Nodes:  5                                            ║
║ Alive Nodes:  4                                            ║
║ Datacenters:  2                                            ║
╠════════════════════════════════════════════════════════════╣
║ Datacenter: dc1                                            ║
║   Rack rack1:      2 nodes                                 ║
║     ✓ node-1               10.0.0.1:8080                   ║
║     ✓ node-2               10.0.0.2:8080                   ║
║   Rack rack2:      1 node                                  ║
║     ✓ node-3               10.0.0.3:8080                   ║
║ Datacenter: dc2                                            ║
║   Rack rack1:      2 nodes                                 ║
║     ✓ node-4               10.0.1.1:8080                   ║
║     ✗ node-5               10.0.1.2:8080                   ║
╚════════════════════════════════════════════════════════════╝

ClusterUtils.TokenRing

Consistent hashing for key distribution:

import com.cube.cluster.ClusterUtils;

List<ClusterNode> nodes = getAllClusterNodes();

ClusterUtils.TokenRing ring = new ClusterUtils.TokenRing(
    nodes,
    256  // 256 virtual nodes per physical node
);

// Find node responsible for a key
ClusterNode node = ring.getNodeForKey("user:123");

// Find N nodes for replication
List<ClusterNode> replicas = ring.getNodesForKey("user:123", 3);

// Print ring distribution
ring.printRing();
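To illustrate what a token ring with virtual nodes does, here is a minimal consistent-hash ring (hypothetical `MiniRing` class, not the real `TokenRing`): each node contributes many tokens on a 64-bit ring, and a key is owned by the first token at or clockwise after the key's hash.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Minimal consistent-hash ring with virtual nodes. Illustrative only.
public class MiniRing {
    private final TreeMap<Long, String> ring = new TreeMap<>();

    public MiniRing(List<String> nodes, int vnodesPerNode) {
        for (String node : nodes)
            for (int i = 0; i < vnodesPerNode; i++)
                ring.put(hash(node + "#" + i), node);  // one token per virtual node
    }

    public String nodeFor(String key) {
        // First token clockwise from the key's hash; wrap to the start if none.
        Map.Entry<Long, String> e = ring.ceilingEntry(hash(key));
        return (e != null ? e : ring.firstEntry()).getValue();
    }

    private static long hash(String s) {
        try {
            byte[] d = MessageDigest.getInstance("MD5")
                    .digest(s.getBytes(StandardCharsets.UTF_8));
            long h = 0;
            for (int i = 0; i < 8; i++) h = (h << 8) | (d[i] & 0xff);
            return h;
        } catch (Exception ex) {
            throw new RuntimeException(ex);
        }
    }
}
```

More virtual nodes smooth out the distribution, and adding or removing a physical node only moves the keys adjacent to its tokens.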

ClusterUtils.StatsAggregator

Aggregate cluster statistics:

import com.cube.cluster.ClusterUtils;

List<ClusterNode> nodes = getAllClusterNodes();

Map<String, Object> stats = ClusterUtils.StatsAggregator
    .aggregateClusterStats(nodes);

ClusterUtils.StatsAggregator.printClusterStats(stats);

ClusterUtils.NodeDiscovery

Discover nodes from seed list:

import com.cube.cluster.ClusterUtils;

List<String> seeds = Arrays.asList(
    "10.0.0.1:8080",
    "10.0.0.2:8080",
    "10.0.0.3:8080"
);

List<ClusterNode> discovered = ClusterUtils.NodeDiscovery
    .discoverFromSeeds(seeds);

// Generate seed list from nodes
List<String> seedList = ClusterUtils.NodeDiscovery
    .generateSeedList(discovered);
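Seed entries are plain "host:port" strings, so discovery starts by splitting each one. A hypothetical parsing helper (not the actual ClusterUtils API):

```java
// Sketch: split a "host:port" seed string into its two parts.
public class SeedParser {
    public static String[] splitSeed(String seed) {
        int sep = seed.lastIndexOf(':');  // last colon, so hostnames stay intact
        if (sep < 0)
            throw new IllegalArgumentException("Expected host:port, got " + seed);
        return new String[]{ seed.substring(0, sep), seed.substring(sep + 1) };
    }
}
```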

Usage Scenarios

Scenario 1: Connect to 3-Node Cluster

# Start shell
./cubesh

# Connect to all nodes
cube> CONNECT node1.cluster.local 8080
✓ Connected to node1.cluster.local:8080

cube> CONNECT node2.cluster.local 8080
✓ Connected to node2.cluster.local:8080

cube> CONNECT node3.cluster.local 8080
✓ Connected to node3.cluster.local:8080

# View cluster
cube> NODES
[Shows all 3 nodes]

# Set strong consistency
cube> CL QUORUM

# Write data (succeeds once 2 of 3 replicas acknowledge)
cube> PUT user:alice "Alice Johnson"
✓ PUT successful

Scenario 2: Monitor Cluster Health

cube> NODES
[Check which nodes are alive]

cube> USE node-2
[Switch to node 2]

cube> STATUS
[Check node 2 status]

cube> STATS
[View replication stats]

Scenario 3: Handle Node Failure

# Initial state: 3 nodes alive
cube> NODES
║ ➜ ✓ node-1  10.0.0.1:8080  DC:dc1 ║
║   ✓ node-2  10.0.0.2:8080  DC:dc1 ║
║   ✓ node-3  10.0.0.3:8080  DC:dc1 ║

# Node 3 goes down
cube> NODES
║ ➜ ✓ node-1  10.0.0.1:8080  DC:dc1 ║
║   ✓ node-2  10.0.0.2:8080  DC:dc1 ║
║   ✗ node-3  10.0.0.3:8080  DC:dc1 ║  [DEAD]

# Continue operating with CL=QUORUM (2 of 3)
cube> PUT user:bob Bob
✓ PUT successful  [Writes to node-1 and node-2]

# Node 3 recovers
cube> NODES
║ ➜ ✓ node-1  10.0.0.1:8080  DC:dc1 ║
║   ✓ node-2  10.0.0.2:8080  DC:dc1 ║
║   ✓ node-3  10.0.0.3:8080  DC:dc1 ║  [ALIVE]

# Hinted handoff replays missed writes automatically
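The hinted-handoff behaviour in this scenario can be sketched as a per-node queue of missed writes that is replayed once the node returns (hypothetical `HintedHandoff` class, not the actual implementation):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of hinted handoff: buffer writes a down node missed, replay on recovery.
public class HintedHandoff {
    private final Map<String, List<String[]>> hints = new HashMap<>();  // node -> [key, value]

    public void storeHint(String nodeId, String key, String value) {
        hints.computeIfAbsent(nodeId, n -> new ArrayList<>())
             .add(new String[]{key, value});
    }

    // Replay buffered writes into the recovered node's store; returns replay count.
    public int replay(String nodeId, Map<String, String> nodeStore) {
        List<String[]> pending = hints.remove(nodeId);
        if (pending == null) return 0;
        for (String[] kv : pending) nodeStore.put(kv[0], kv[1]);
        return pending.size();
    }
}
```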

Configuration

Environment Variables

export CUBE_HOST=localhost
export CUBE_PORT=8080
export CUBE_CONSISTENCY=QUORUM

Consistency Level Guidelines

Scenario              Write CL  Read CL  Description
High Availability     ONE       ONE      Fastest; eventual consistency
Balanced              QUORUM    QUORUM   Strong consistency, good performance
Strong Consistency    QUORUM    ALL      Ensures reads see the latest write
Maximum Consistency   ALL       ALL      Slowest; strongest guarantees
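The strong-consistency rows follow from the quorum-overlap rule: a read is guaranteed to intersect the latest write whenever read replicas plus write replicas exceed the replication factor. As a one-line check (illustrative sketch, hypothetical names):

```java
// R + W > N guarantees read/write replica sets overlap.
public class ConsistencyMath {
    public static boolean readsSeeLatestWrite(int readReplicas, int writeReplicas,
                                              int replicationFactor) {
        return readReplicas + writeReplicas > replicationFactor;
    }
}
```

QUORUM/QUORUM on 3 replicas gives 2 + 2 > 3, so it is strong; ONE/ONE gives 1 + 1 ≤ 3, so it is only eventually consistent.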

Troubleshooting

Cannot Connect to Node

✗ Failed to connect: Connection refused

Solutions:
1. Check node is running: curl http://host:port/api/v1/health
2. Check firewall rules
3. Verify correct host and port

Node Marked as DEAD

Cause: No heartbeat received within timeout

Solutions:
1. Check network connectivity
2. Check node is actually running
3. Increase timeout if network is slow

Consistency Level Errors

✗ Not enough replicas available

Solutions:
1. Reduce consistency level (e.g., ALL -> QUORUM -> ONE)
2. Add more nodes to cluster
3. Check node health

Advanced Features

Custom Health Checking

ClusterUtils.HealthChecker checker = new ClusterUtils.HealthChecker(
    nodes,
    3000,   // Check every 3 seconds
    10000   // 10 second timeout
);
checker.start();

Token Ring with Virtual Nodes

// More virtual nodes = better distribution
ClusterUtils.TokenRing ring = new ClusterUtils.TokenRing(nodes, 512);

Topology-Aware Operations

ClusterUtils.Topology topo = new ClusterUtils.Topology(nodes);

// Get local nodes
List<ClusterNode> localNodes = topo.getNodesByDatacenter("dc1");

// Prefer local reads
for (ClusterNode node : localNodes) {
    if (node.isAlive()) {
        readFrom(node);
        break;
    }
}

See Also

  • PHASE2_README.md - Replication and consistency details
  • README.md - Main project documentation
  • QUICKSTART.md - Quick setup guide

CubeShell - Manage your distributed database cluster with ease! 🚀