Hi Mathieu,
Great to hear! By the way, if you are interested in how we've been writing automated system tests, consider checking out ducktape, a command-line tool and library for system testing.
On to your question:
There's no built-in way to clear out schemas, but since you're in a development environment where it's OK to break things, there are a couple of ways to do this, which all boil down to the same two steps:
(1) get rid of the Kafka data in the "_schemas" topic
(2) get rid of the persistent ZooKeeper data storing the upper bound of the current schema ID batch
Note that both Option A and Option B below clear out *all* schemas (not just the schemas for a particular topic).
Option A
This is the more careful option. It's best to shut your schema registry instances down first.
(1) Bounce your brokers with delete.topic.enable=true set in the server.properties file (example below)
(2) Delete the _schemas topic: kafka/bin/kafka-topics.sh --zookeeper <ZOOKEEPER_CONNECT> --topic _schemas --delete
(3) Delete the schema ID counter node in ZooKeeper: bin/kafka-run-class.sh kafka.tools.ZooKeeperMainWrapper -server <ZOOKEEPER_CONNECT> delete /schema_registry/schema_id_counter
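In case it helps, here's roughly what that looks like end to end on my machine. The kafka/bin paths and the localhost:2181 ZooKeeper address are just placeholders for a local setup, so adjust them for your environment:

    # in each broker's server.properties, then restart the broker
    delete.topic.enable=true

    # delete the topic backing the schema registry
    kafka/bin/kafka-topics.sh --zookeeper localhost:2181 --topic _schemas --delete

    # confirm _schemas is gone (deletion can take a moment to complete)
    kafka/bin/kafka-topics.sh --zookeeper localhost:2181 --list

    # remove the schema id counter node, then list what's left under
    # /schema_registry to confirm the counter node is gone
    kafka/bin/kafka-run-class.sh kafka.tools.ZooKeeperMainWrapper -server localhost:2181 delete /schema_registry/schema_id_counter
    kafka/bin/kafka-run-class.sh kafka.tools.ZooKeeperMainWrapper -server localhost:2181 ls /schema_registry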
Option B (the nuclear option - beware, this will destroy all of your persistent Kafka and ZooKeeper data):
If you are just messing with a dev cluster on your local machine, this is fine.
(1) Remove the Kafka log directories for each broker (you can move these instead of deleting them to be slightly more careful) - this is the "log.dirs" property in your server.properties file.
(2) Remove the ZooKeeper data directory - this is "dataDir" in your zookeeper.properties file (see the example below).
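For example, if you're using the configs that ship with Kafka, both directories default to paths under /tmp - but double-check your own server.properties and zookeeper.properties before running anything like this:

    # stop the schema registry, brokers, and zookeeper first
    rm -rf /tmp/kafka-logs     # log.dirs from server.properties
    rm -rf /tmp/zookeeper      # dataDir from zookeeper.properties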
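Whichever option you go with, once you bring the schema registry back up you can sanity-check that it's empty by hitting its REST API (assuming the default listener on localhost:8081):

    curl http://localhost:8081/subjects
    # should return an empty list: []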
Hope this helps,
Geoff