Debezium – Transaction log miner

In the previous post the services used Eventuate to communicate. But how did it know when to create events? How does the CDC module work? Behind the scenes it mines the database transaction log and creates events from it. Enter Debezium. It is built on top of Kafka and records the changes made to the database in Kafka topics, from where they are consumed.

It has four connectors available to date:

  • MySQL
  • MongoDB
  • Oracle
  • Postgres

This means you need to use one of these databases in order to use it. You can follow the instructions described here to set it up. You can use the Docker images to build your components, or you can use the embedded version. Let’s break down the example from the previous post and see Debezium in action.

The eventuateio-local-cdc-service uses an embedded Debezium engine, which is configured in the EventTableChangesToAggregateTopicRelay class. On the command side of the CQRS application, commands are passed to the AggregateRepository, which processes them into events, and those events are persisted in the database.

 public CompletableFuture<Object> startCapturingChanges() throws InterruptedException {

    logger.debug("Starting to capture changes");

    producer = new EventuateKafkaProducer(kafkaBootstrapServers);

    String connectorName = "my-sql-connector";
    Configuration config = Configuration.create()
                                    /* begin engine properties */
            .with("connector.class", "io.debezium.connector.mysql.MySqlConnector")
            .with("offset.storage", KafkaOffsetBackingStore.class.getName())
            .with("bootstrap.servers", kafkaBootstrapServers)
            .with("offset.storage.topic", "eventuate.local.cdc." + connectorName + ".offset.storage")

            .with("poll.interval.ms", 50)
            .with("offset.flush.interval.ms", 6000)
                                    /* begin connector properties */
            .with("name", connectorName)
            .with("database.hostname", jdbcUrl.getHost())
            .with("database.port", jdbcUrl.getPort())
            .with("database.user", dbUser)
            .with("database.password", dbPassword)
            .with("database.server.id", 85744)
            .with("database.server.name", "my-app-connector")
            //.with("database.whitelist", "eventuate")
            .with("database.history", KafkaDatabaseHistory.class.getName())
            .with("database.history.kafka.bootstrap.servers", kafkaBootstrapServers)
            .with("database.history.kafka.topic",
                    "eventuate.local.cdc." + connectorName + ".history.kafka.topic")
            .build();

    CompletableFuture<Object> completion = new CompletableFuture<>();
    engine = EmbeddedEngine.create()
            // completion callback: fail the future if the engine could not start
            .using((success, message, throwable) -> {
              if (!success)
                completion.completeExceptionally(new RuntimeException("Engine failed to start: " + message, throwable));
            })
            .using(config)
            // each change record read from the binlog is handed to the relay,
            // which republishes it to the aggregate's Kafka topic
            .notifying(this::handleEvent)
            .build();

    // the engine blocks while it runs, so start it on its own thread
    Executor executor = Executors.newCachedThreadPool();
    executor.execute(() -> {
      try {
        engine.run();
      } catch (Throwable t) {
        logger.error("Engine failed", t);
      }
    });

    logger.debug("Started engine");
    return completion;
  }
Now the CDC service comes in: using the Debezium engine in async mode, it reads the transaction log and pushes the events to Kafka topics. This is the producer side. In order to see the topics you need to get onto the Kafka docker machine

 docker run -i -t eventuateio/eventuateio-local-kafka bash

and run (where $ZOOKEEPER_SERVERS stands for the ZooKeeper host:port used by this Kafka instance)

./bin/kafka-topics.sh --zookeeper $ZOOKEEPER_SERVERS --list

The listing should include the CDC bookkeeping topics configured above (the offset storage and database history topics) as well as one topic per aggregate. We are interested in the aggregate topics, since that is where our events will appear. Let’s start listening on the ItemAggregate topic

./bin/kafka-console-consumer.sh --zookeeper $ZOOKEEPER_SERVERS --topic com.so.aggregate.ItemAggregate

and insert an item.


We will see the event printed by the console consumer.


The same applies to the BidAggregate topic

./bin/kafka-console-consumer.sh --zookeeper $ZOOKEEPER_SERVERS --topic com.so.aggregate.BidAggregate


and, once a bid is placed, the event will show up there as well.


Cool. This is the way Debezium contributes to event-sourcing microservices.

The consumer is the query side of the CQRS application, which listens on these topics and reacts by updating the materialized views.
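
A minimal sketch of such a query-side handler, using the Eventuate client subscriber annotations (ItemCreatedEvent, ItemView and ItemViewRepository are assumed names based on the previous post’s example, not the actual code):

 // Hypothetical query-side subscriber that keeps a materialized view up to date.
 @EventSubscriber(id = "itemQuerySideEventHandlers")
 public class ItemQuerySideEventHandler {

   private final ItemViewRepository itemViewRepository;

   public ItemQuerySideEventHandler(ItemViewRepository itemViewRepository) {
     this.itemViewRepository = itemViewRepository;
   }

   @EventHandlerMethod
   public void create(DispatchedEvent<ItemCreatedEvent> de) {
     // update the materialized view that the query side serves from
     itemViewRepository.save(new ItemView(de.getEntityId(), de.getEvent().getDescription()));
   }
 }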

