Kafka GUI for topics, topic data, consumer groups, schema registry and more...
- General
- Works with modern Kafka clusters (1.0+)
- Connects to standard, SSL, or SASL clusters
- Multi-cluster support
- Topics
- List
- Configurations view
- Partitions view
- Consumer group assignments view
- Node leader & assignments view
- Create a topic
- Configure a topic
- Delete a topic
- Browse topic data
- View data, offset, key, timestamp & headers
- Automatic deserialization of Avro messages encoded with the Schema Registry
- Configurations view
- Logs view
- Delete a record
- Sort view
- Filter by partition
- Filter from a starting time
- Filter data with a search string
- Consumer Groups (only with Kafka internal storage, not with legacy Zookeeper storage)
- List with lag and topic assignments
- Partitions view & lag
- Node leader & assignments view
- Display active and pending consumer groups
- Delete a consumer group
- Update consumer group offsets to start / end / timestamp
- Schema Registry
- List schemas
- Create a schema
- Update a schema
- Delete a schema
- View and delete individual schema versions
- Nodes
- List
- Configurations view
- Logs view
- Configure a node
- Download the docker-compose.yml file
- Run `docker-compose up`
- Go to http://localhost:8080
It will start a Kafka node, a Zookeeper node, and a Schema Registry, load some sample data, start a consumer group, and start KafkaHQ.
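For reference, the stack this docker-compose file brings up could be sketched roughly as below; the service names, images, and wiring are illustrative assumptions, not the project's actual file:

```yaml
version: '3'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper        # illustrative image choice
  kafka:
    image: confluentinc/cp-kafka            # illustrative image choice
    depends_on:
      - zookeeper
  schema-registry:
    image: confluentinc/cp-schema-registry  # illustrative image choice
    depends_on:
      - kafka
  kafkahq:
    image: tchiotludo/kafkahq
    ports:
      - "8080:8080"                         # UI at http://localhost:8080
    depends_on:
      - kafka
      - schema-registry
```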
First, you need a configuration file in order to configure KafkaHQ's connections to the Kafka brokers.
```shell
docker run -d \
    -p 8080:8080 \
    -v /tmp/application.conf:/app/application.conf \
    tchiotludo/kafkahq
```

Note that the source path passed with `-v` (`/tmp/application.conf` here) must be an absolute path to the configuration file.

- Go to http://localhost:8080
- Install Java 8
- Download the latest jar from the release page
- Create an `application.conf` in the same directory
- Launch the application with `java -jar kafkahq.jar prod`
- Go to http://localhost:8080
The configuration file is a HOCON file; an example is shown below:
```hocon
{
  kafka {
    connections {
      my-cluster-1 {
        properties {
          bootstrap.servers: "kafka:9092"
        }
        registry: "http://schema-registry:8085"
      }
      my-cluster-2 {
        properties {
          bootstrap.servers: "kafka:9093"
          security.protocol: SSL
          ssl.truststore.location: /app/truststore.jks
          ssl.truststore.password: password
          ssl.keystore.location: /app/keystore.jks
          ssl.keystore.password: password
          ssl.key.password: password
        }
      }
    }
  }
}
```
`kafka.connections` is a key-value configuration with:

- `key`: must be a URL-friendly string that identifies your cluster (`my-cluster-1` and `my-cluster-2` in the example above)
- `properties`: all the configuration options found in the Kafka consumer documentation. The most important is `bootstrap.servers`, a list of host:port pairs for your Kafka brokers.
- `registry`: the Schema Registry URL (optional)
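Putting those pieces together, a minimal single-cluster configuration (the cluster name and broker address here are placeholders) could be as small as:

```hocon
{
  kafka {
    connections {
      # "local" is the url-friendly key identifying this cluster
      local {
        properties {
          bootstrap.servers: "localhost:9092"
        }
        # registry is optional and can be omitted entirely
      }
    }
  }
}
```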
The KafkaHQ docker image supports one environment variable to handle configuration:

- `KAFKAHQ_CONFIGURATION`: a string containing the full configuration, which will be written to /app/configuration.conf in the container.
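For example, one way to use this variable (assuming the configuration already exists at /tmp/application.conf on the host) is to inline the file's contents when starting the container:

```shell
# Pass the whole HOCON configuration through the environment
# instead of mounting a file into the container.
docker run -d \
    -p 8080:8080 \
    -e KAFKAHQ_CONFIGURATION="$(cat /tmp/application.conf)" \
    tchiotludo/kafkahq
```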
A docker-compose file is provided to start a development environment.
Just install docker & docker-compose, clone the repository, and issue a simple `docker-compose -f docker-compose-dev.yml up`
to start a dev server.
The dev server is a Java server plus webpack-dev-server with live reload.
Apache 2.0 © tchiotludo