ClickHouse cluster with 2 shards and 2 replicas, built with docker-compose.
Not for production use.
docker compose up -d
Default user: default, no password.
Log in to the clickhouse01 console (the first node's ports are mapped to localhost)
clickhouse-client -h localhost
Or open clickhouse-client inside any container
docker exec -it clickhouse01 clickhouse-client -h localhost
Check the nodes in the cluster
SELECT hostName(), getMacro('replica'), getMacro('shard'), currentUser() FROM clusterAllReplicas('test_cluster', system.one);
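You can also look at the full cluster topology (2 shards, 2 replicas each) via the system.clusters table on any node; the exact output depends on your remote_servers config:
SELECT cluster, shard_num, replica_num, host_name, port FROM system.clusters WHERE cluster = 'test_cluster';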
Create a test database and table (sharded and replicated)
CREATE DATABASE company_db ON CLUSTER 'test_cluster';
CREATE TABLE company_db.events ON CLUSTER 'test_cluster' (
    time DateTime,
    uid Int64,
    type LowCardinality(String)
)
ENGINE = ReplicatedMergeTree
PARTITION BY toDate(time)
ORDER BY (uid);
CREATE TABLE company_db.events_distr ON CLUSTER 'test_cluster'
AS company_db.events
ENGINE = Distributed('test_cluster', company_db, events, rand());
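To confirm both tables were created on every node, one option is to query system.tables across all replicas (same clusterAllReplicas table function as above):
SELECT hostName() AS host, name FROM clusterAllReplicas('test_cluster', system.tables) WHERE database = 'company_db';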
Load some data
INSERT INTO company_db.events_distr VALUES
('2020-01-01 10:00:00', 100, 'view'),
('2020-01-01 10:05:00', 101, 'view'),
('2020-01-01 11:00:00', 100, 'contact'),
('2020-01-01 12:10:00', 101, 'view'),
('2020-01-02 08:10:00', 100, 'view'),
('2020-01-02 13:00:00', 103, 'view'),
('2020-01-02 15:00:00', 99, 'view'),
('2020-01-02 16:00:00', 66, 'view');
Check data from the current shard
SELECT * FROM company_db.events;
Check data from the whole cluster
SELECT _shard_num, * FROM company_db.events_distr;
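The distributed table also works for regular aggregations across all shards, e.g. counting rows per shard (the exact split depends on the rand() sharding key):
SELECT _shard_num, count() AS cnt FROM company_db.events_distr GROUP BY _shard_num ORDER BY _shard_num;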
If you need more ClickHouse nodes, add them like this, then re-check the cluster as shown below:
- Add replicas/shards to the remote_servers block in config.d/remote_servers.xml (included from config.xml).
- Add nodes to docker-compose.yml.
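After the new containers are up (docker compose up -d again), the added nodes should appear in the cluster definition; a quick check using the same system.clusters view, grouped per shard:
SELECT shard_num, groupArray(host_name) AS replicas FROM system.clusters WHERE cluster = 'test_cluster' GROUP BY shard_num ORDER BY shard_num;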
Start/stop the cluster without removing containers
docker compose start
docker compose stop
Stop and remove containers and volumes
docker compose down -v