CubeShell is an enhanced interactive SQL shell for managing Cube database clusters, with full support for multi-node connections, tunable consistency levels, and key-value operations.
```bash
# Connect to default localhost:8080
./cubesh

# Connect to a specific node
./cubesh --host 192.168.1.100 --port 8080
./cubesh -h dbserver.local -p 9000
```
```bash
java -cp target/cube-db-1.0.0.jar com.cube.shell.CubeShell --host localhost --port 8080
```
```
cube> CONNECT <host> <port>
```

Examples:

```
cube> CONNECT localhost 8080
cube> CONNECT 192.168.1.101 8080
cube> CONNECT node2.cluster.local 8080
```
```
cube> DISCONNECT <node-id>
```

Example:

```
cube> DISCONNECT node-192.168.1.101-8080
```
```
cube> NODES
cube> CLUSTER
```

Output:

```
╔════════════════════════════════════════════════════════════╗
║                       Cluster Nodes                        ║
╠════════════════════════════════════════════════════════════╣
║ ➜ ✓ node-localhost-8080   localhost:8080       DC:dc1      ║
║   ✓ node-192.168.1.101    192.168.1.101:8080   DC:dc1      ║
║   ✗ node-192.168.1.102    192.168.1.102:8080   DC:dc2      ║
╠════════════════════════════════════════════════════════════╣
║ Total Nodes: 3   Alive: 2   Current: node-localhost-8080   ║
╚════════════════════════════════════════════════════════════╝
```
Legend:

- ➜ = Current active node
- ✓ = Node is alive
- ✗ = Node is down/unreachable

```
cube> USE <node-id>
```

Example:

```
cube> USE node-192.168.1.101-8080
✓ Switched to node-192.168.1.101-8080
```
```
cube> STATUS
```

Output:

```
╔════════════════════════════════════════════════════════════╗
║                        Node Status                         ║
╠════════════════════════════════════════════════════════════╣
║ Node:     node-localhost-8080                              ║
║ Endpoint: localhost:8080                                   ║
║ Status:   ✓ ALIVE                                          ║
╠════════════════════════════════════════════════════════════╣
║ Storage Statistics:                                        ║
║   Total Keys:    1250                                      ║
║   Total Size:    524288 bytes                              ║
║   MemTable Size: 65536 bytes                               ║
║   SSTable Count: 3                                         ║
╚════════════════════════════════════════════════════════════╝
```
```
cube> STATS
```

Output:

```
╔════════════════════════════════════════════════════════════╗
║                   Replication Statistics                   ║
╠════════════════════════════════════════════════════════════╣
║ Cluster Nodes:       3                                     ║
║ Alive Nodes:         2                                     ║
║ Default Consistency: QUORUM                                ║
╠════════════════════════════════════════════════════════════╣
║ Datacenter Distribution:                                   ║
║   dc1: 2 nodes                                             ║
║   dc2: 1 nodes                                             ║
╚════════════════════════════════════════════════════════════╝
```
```
cube> CONSISTENCY <level>
cube> CL <level>
```

Examples:

```
cube> CONSISTENCY QUORUM
✓ Consistency level set to QUORUM

cube> CL ONE
✓ Consistency level set to ONE

cube> CONSISTENCY
Current consistency level: QUORUM
```

Available levels:

- ANY - Requires a response from any node (including hints)
- ONE - Requires a response from 1 replica
- TWO - Requires responses from 2 replicas
- THREE - Requires responses from 3 replicas
- QUORUM - Requires responses from a majority of replicas
- ALL - Requires responses from all replicas
- LOCAL_ONE - Requires a response from 1 local replica
- LOCAL_QUORUM - Requires responses from a local quorum
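As a mental model, the acknowledgement counts behind these levels can be sketched as follows. This is a minimal illustration under stated assumptions; names like `requiredAcks` are not CubeShell's actual API:

```java
// Illustrative sketch: how many replica acknowledgements each
// consistency level would require for a given replication factor.
public class ConsistencyMath {
    enum Level { ANY, ONE, TWO, THREE, QUORUM, ALL }

    static int requiredAcks(Level level, int replicationFactor) {
        switch (level) {
            case ANY:    return 1;                          // a hint also counts
            case ONE:    return 1;
            case TWO:    return 2;
            case THREE:  return 3;
            case QUORUM: return replicationFactor / 2 + 1;  // strict majority
            case ALL:    return replicationFactor;
            default:     throw new IllegalArgumentException("unknown level");
        }
    }

    public static void main(String[] args) {
        // With RF=3, QUORUM needs 2 acks; with RF=5, it needs 3.
        System.out.println(requiredAcks(Level.QUORUM, 3)); // 2
        System.out.println(requiredAcks(Level.QUORUM, 5)); // 3
        System.out.println(requiredAcks(Level.ALL, 3));    // 3
    }
}
```

Note that QUORUM is a strict majority, so a 3-node cluster keeps accepting QUORUM reads and writes with one node down.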
```
cube> PUT <key> <value>
```

Examples:

```
cube> PUT user:1 Alice
✓ PUT successful
  Key: user:1
  Value: Alice
  CL: QUORUM

cube> PUT product:laptop "MacBook Pro"
✓ PUT successful
  Key: product:laptop
  Value: MacBook Pro
  CL: QUORUM
```
```
cube> GET <key>
```

Examples:

```
cube> GET user:1
✓ Found
  Key: user:1
  Value: Alice
  CL: QUORUM

cube> GET nonexistent
✗ Not found: nonexistent
```
```
cube> DELETE <key>
```

Example:

```
cube> DELETE user:1
✓ DELETE successful
  Key: user:1
  CL: QUORUM
```
```
cube> SCAN <prefix>
```

Example:

```
cube> SCAN user:
✓ Found 3 result(s)
┌────────────────────────────┬────────────────────────────┐
│ Key                        │ Value                      │
├────────────────────────────┼────────────────────────────┤
│ user:1                     │ Alice                      │
│ user:2                     │ Bob                        │
│ user:3                     │ Charlie                    │
└────────────────────────────┴────────────────────────────┘
```
```
cube> HISTORY
```

Output:

```
╔════════════════════════════════════════════════════════════╗
║                      Command History                       ║
╠════════════════════════════════════════════════════════════╣
║   1: CONNECT localhost 8080                                ║
║   2: CONNECT 192.168.1.101 8080                            ║
║   3: NODES                                                 ║
║   4: CONSISTENCY QUORUM                                    ║
║   5: PUT user:1 Alice                                      ║
╚════════════════════════════════════════════════════════════╝
```
```
cube> CLEAR
```
```
cube> HELP
cube> ?
```
```
cube> EXIT
cube> QUIT
Goodbye!
```
Monitors node health automatically:
```java
import java.util.HashMap;
import java.util.Map;

import com.cube.cluster.ClusterNode;
import com.cube.cluster.ClusterUtils;

Map<String, ClusterNode> nodes = new HashMap<>();
nodes.put("node1", node1);
nodes.put("node2", node2);

ClusterUtils.HealthChecker healthChecker = new ClusterUtils.HealthChecker(
    nodes,
    5000,  // Check every 5 seconds
    15000  // 15-second timeout
);
healthChecker.start();
// Automatically marks nodes as SUSPECTED or DEAD if no heartbeat arrives
```
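Internally, a checker like this typically compares each node's last heartbeat timestamp against the timeout. A simplified, self-contained sketch follows; the class, state names, and thresholds are illustrative assumptions, not CubeShell's actual implementation:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Simplified failure-detector sketch: a node is SUSPECTED after one
// missed timeout window and DEAD after two.
public class FailureDetector {
    enum State { ALIVE, SUSPECTED, DEAD }

    private final Map<String, Long> lastHeartbeat = new ConcurrentHashMap<>();
    private final long timeoutMs;

    FailureDetector(long timeoutMs) { this.timeoutMs = timeoutMs; }

    void recordHeartbeat(String nodeId, long nowMs) {
        lastHeartbeat.put(nodeId, nowMs);
    }

    State stateOf(String nodeId, long nowMs) {
        Long last = lastHeartbeat.get(nodeId);
        if (last == null) return State.DEAD;       // never heard from
        long silence = nowMs - last;
        if (silence > 2 * timeoutMs) return State.DEAD;
        if (silence > timeoutMs) return State.SUSPECTED;
        return State.ALIVE;
    }
}
```

The two-stage ALIVE → SUSPECTED → DEAD transition avoids declaring a node dead on a single delayed heartbeat.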
Visualize cluster topology:
```java
import java.util.List;

import com.cube.cluster.ClusterNode;
import com.cube.cluster.ClusterUtils;

List<ClusterNode> nodes = getAllClusterNodes();
ClusterUtils.Topology topology = new ClusterUtils.Topology(nodes);

// Get nodes by datacenter
List<ClusterNode> dc1Nodes = topology.getNodesByDatacenter("dc1");

// Get nodes by rack
List<ClusterNode> rackNodes = topology.getNodesByRack("dc1", "rack1");

// Print topology
topology.printTopology();
```
Output:
```
╔════════════════════════════════════════════════════════════╗
║                      Cluster Topology                      ║
╠════════════════════════════════════════════════════════════╣
║ Total Nodes: 5                                             ║
║ Alive Nodes: 4                                             ║
║ Datacenters: 2                                             ║
╠════════════════════════════════════════════════════════════╣
║ Datacenter: dc1                                            ║
║   Rack rack1: 2 nodes                                      ║
║     ✓ node-1  10.0.0.1:8080                                ║
║     ✓ node-2  10.0.0.2:8080                                ║
║   Rack rack2: 1 nodes                                      ║
║     ✓ node-3  10.0.0.3:8080                                ║
║ Datacenter: dc2                                            ║
║   Rack rack1: 2 nodes                                      ║
║     ✓ node-4  10.0.1.1:8080                                ║
║     ✗ node-5  10.0.1.2:8080                                ║
╚════════════════════════════════════════════════════════════╝
```
Consistent hashing for key distribution:
```java
import java.util.List;

import com.cube.cluster.ClusterNode;
import com.cube.cluster.ClusterUtils;

List<ClusterNode> nodes = getAllClusterNodes();
ClusterUtils.TokenRing ring = new ClusterUtils.TokenRing(
    nodes,
    256  // 256 virtual nodes per physical node
);

// Find the node responsible for a key
ClusterNode node = ring.getNodeForKey("user:123");

// Find N nodes for replication
List<ClusterNode> replicas = ring.getNodesForKey("user:123", 3);

// Print ring distribution
ring.printRing();
```
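For readers unfamiliar with consistent hashing, here is a minimal, self-contained ring sketch using MD5 tokens and virtual nodes. It is illustrative only; `ClusterUtils.TokenRing` may use a different hash or layout:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.List;
import java.util.SortedMap;
import java.util.TreeMap;

// Minimal consistent-hash ring: each node owns many small token
// ranges (virtual nodes), so keys spread evenly and only a small
// slice of keys moves when a node joins or leaves.
public class SimpleTokenRing {
    private final TreeMap<Long, String> ring = new TreeMap<>();

    SimpleTokenRing(List<String> nodeIds, int vnodesPerNode) {
        for (String id : nodeIds)
            for (int v = 0; v < vnodesPerNode; v++)
                ring.put(token(id + "#" + v), id);
    }

    // The first virtual node clockwise from the key's token owns the key.
    String nodeForKey(String key) {
        SortedMap<Long, String> tail = ring.tailMap(token(key));
        return tail.isEmpty() ? ring.firstEntry().getValue()
                              : tail.get(tail.firstKey());
    }

    private static long token(String s) {
        try {
            byte[] d = MessageDigest.getInstance("MD5")
                    .digest(s.getBytes(StandardCharsets.UTF_8));
            long t = 0;
            for (int i = 0; i < 8; i++) t = (t << 8) | (d[i] & 0xFF);
            return t;
        } catch (Exception e) { throw new RuntimeException(e); }
    }
}
```

More virtual nodes smooth out the distribution at the cost of a larger ring index, which is why the README later suggests raising the count to 512.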
Aggregate cluster statistics:
```java
import java.util.List;
import java.util.Map;

import com.cube.cluster.ClusterNode;
import com.cube.cluster.ClusterUtils;

List<ClusterNode> nodes = getAllClusterNodes();
Map<String, Object> stats = ClusterUtils.StatsAggregator
    .aggregateClusterStats(nodes);
ClusterUtils.StatsAggregator.printClusterStats(stats);
```
Discover nodes from seed list:
```java
import java.util.Arrays;
import java.util.List;

import com.cube.cluster.ClusterNode;
import com.cube.cluster.ClusterUtils;

List<String> seeds = Arrays.asList(
    "10.0.0.1:8080",
    "10.0.0.2:8080",
    "10.0.0.3:8080"
);
List<ClusterNode> discovered = ClusterUtils.NodeDiscovery
    .discoverFromSeeds(seeds);

// Generate a seed list from nodes
List<String> seedList = ClusterUtils.NodeDiscovery
    .generateSeedList(discovered);
```
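Before contacting any seed, discovery presumably splits each `host:port` string into an endpoint and drops duplicates. A minimal parsing sketch follows; `SeedList` is a hypothetical helper, not part of `ClusterUtils`:

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;

// Illustrative seed-list parsing: "host:port" strings become
// {host, port} pairs, with duplicates removed in order.
public class SeedList {
    static List<String[]> parse(List<String> seeds) {
        List<String[]> endpoints = new ArrayList<>();
        for (String seed : new LinkedHashSet<>(seeds)) {  // de-duplicate
            int idx = seed.lastIndexOf(':');
            if (idx <= 0)
                throw new IllegalArgumentException("expected host:port: " + seed);
            endpoints.add(new String[] {
                seed.substring(0, idx), seed.substring(idx + 1) });
        }
        return endpoints;
    }
}
```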
```
# Start shell
./cubesh

# Connect to all nodes
cube> CONNECT node1.cluster.local 8080
✓ Connected to node1.cluster.local:8080
cube> CONNECT node2.cluster.local 8080
✓ Connected to node2.cluster.local:8080
cube> CONNECT node3.cluster.local 8080
✓ Connected to node3.cluster.local:8080

# View cluster
cube> NODES
[Shows all 3 nodes]

# Set strong consistency
cube> CL QUORUM

# Write data (goes to 2 of 3 nodes)
cube> PUT user:alice "Alice Johnson"
✓ PUT successful
```
```
cube> NODES        [Check which nodes are alive]
cube> USE node-2   [Switch to node 2]
cube> STATUS       [Check node 2 status]
cube> STATS        [View replication stats]
```
```
# Initial state: 3 nodes alive
cube> NODES
║ ➜ ✓ node-1  10.0.0.1:8080  DC:dc1 ║
║   ✓ node-2  10.0.0.2:8080  DC:dc1 ║
║   ✓ node-3  10.0.0.3:8080  DC:dc1 ║

# Node 3 goes down
cube> NODES
║ ➜ ✓ node-1  10.0.0.1:8080  DC:dc1 ║
║   ✓ node-2  10.0.0.2:8080  DC:dc1 ║
║   ✗ node-3  10.0.0.3:8080  DC:dc1 ║  [DEAD]

# Continue operating with CL=QUORUM (2 of 3)
cube> PUT user:bob Bob
✓ PUT successful   [Writes to node-1 and node-2]

# Node 3 recovers
cube> NODES
║ ➜ ✓ node-1  10.0.0.1:8080  DC:dc1 ║
║   ✓ node-2  10.0.0.2:8080  DC:dc1 ║
║   ✓ node-3  10.0.0.3:8080  DC:dc1 ║  [ALIVE]

# Hinted handoff replays missed writes automatically
```
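The hinted-handoff step at the end of this scenario can be sketched as a per-node buffer of missed writes that is replayed on recovery. Class and method names below are hypothetical; the real logic lives inside Cube's replication layer:

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;
import java.util.function.BiConsumer;

// Simplified hinted-handoff sketch: writes that miss a dead replica
// are stored as hints keyed by the dead node, then replayed once
// that node is marked alive again.
public class HintStore {
    static final class Hint {
        final String key, value;
        Hint(String key, String value) { this.key = key; this.value = value; }
    }

    private final Map<String, Queue<Hint>> hintsByNode = new HashMap<>();

    void storeHint(String deadNodeId, String key, String value) {
        hintsByNode.computeIfAbsent(deadNodeId, id -> new ArrayDeque<>())
                   .add(new Hint(key, value));
    }

    // Replay all buffered hints to a recovered node; returns the count.
    int replay(String nodeId, BiConsumer<String, String> writer) {
        Queue<Hint> hints = hintsByNode.remove(nodeId);
        if (hints == null) return 0;
        int n = 0;
        for (Hint h : hints) { writer.accept(h.key, h.value); n++; }
        return n;
    }
}
```

This is why the QUORUM write to `user:bob` eventually reaches node-3 even though node-3 was down when the write was accepted.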
```bash
export CUBE_HOST=localhost
export CUBE_PORT=8080
export CUBE_CONSISTENCY=QUORUM
```
| Scenario | Write CL | Read CL | Description |
|---|---|---|---|
| High Availability | ONE | ONE | Fastest, eventual consistency |
| Balanced | QUORUM | QUORUM | Strong consistency, good performance |
| Strong Consistency | QUORUM | ALL | Ensure reads see latest |
| Maximum Consistency | ALL | ALL | Slowest, strongest |
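These pairings follow the usual rule that reads are guaranteed to intersect the latest write whenever read replicas plus write replicas exceed the replication factor (R + W > N). A quick sanity-check sketch, with illustrative names that are not part of the CubeShell API:

```java
// R + W > N guarantees every read set overlaps every write set,
// so at least one contacted replica holds the newest value.
public class ConsistencyCheck {
    static boolean readsSeeLatestWrite(int readReplicas, int writeReplicas,
                                       int replicationFactor) {
        return readReplicas + writeReplicas > replicationFactor;
    }

    public static void main(String[] args) {
        int n = 3, quorum = n / 2 + 1;                    // quorum = 2
        System.out.println(readsSeeLatestWrite(quorum, quorum, n)); // true
        System.out.println(readsSeeLatestWrite(1, 1, n));           // false
    }
}
```

This is why QUORUM/QUORUM (2 + 2 > 3) is "Balanced" in the table, while ONE/ONE (1 + 1 = 2) only offers eventual consistency.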
```
✗ Failed to connect: Connection refused
```

Solutions:

1. Check the node is running: `curl http://host:port/api/v1/health`
2. Check firewall rules
3. Verify the correct host and port
Cause: no heartbeat received within the timeout.

Solutions:

1. Check network connectivity
2. Check the node is actually running
3. Increase the timeout if the network is slow
```
✗ Not enough replicas available
```

Solutions:

1. Reduce the consistency level (e.g., ALL -> QUORUM -> ONE)
2. Add more nodes to the cluster
3. Check node health
```java
ClusterUtils.HealthChecker checker = new ClusterUtils.HealthChecker(
    nodes,
    3000,  // Check every 3 seconds
    10000  // 10-second timeout
);
checker.start();
```
```java
// More virtual nodes = better distribution
ClusterUtils.TokenRing ring = new ClusterUtils.TokenRing(nodes, 512);
```
```java
ClusterUtils.Topology topo = new ClusterUtils.Topology(nodes);

// Get local nodes
List<ClusterNode> localNodes = topo.getNodesByDatacenter("dc1");

// Prefer local reads
for (ClusterNode node : localNodes) {
    if (node.isAlive()) {
        readFrom(node);
        break;
    }
}
```
- PHASE2_README.md - Replication and consistency details
- README.md - Main project documentation
- QUICKSTART.md - Quick setup guide

CubeShell - Manage your distributed database cluster with ease! 🚀