If raspimjpeg is put into a capture mode (described below), the flow of preview images is maintained but an extra recording is made of either a single image, a time lapse sequence of single images, or a full video, which can be in any format the camera supports, including HD at normal frame rates. This data is stored in the media folder. Single images and time lapse images are stored directly in jpg format. Video is initially stored as a raw h264 stream from the camera but can be automatically formatted into mp4 when the recording ends. The boxing mode (MP4Box) controls this. Three options are provided: false leaves the files in raw h264; true converts them inline, but no further recording is possible until the conversion completes; background spawns a separate process to do the boxing operation. Normally the raw h264 capture and the boxing operation take place within the same folder. This can cause quite a lot of network traffic, and potential problems, if the media folder has been remotely mounted. A config option (boxing_path) may be defined as a separate local folder. If it is set, the capture goes to the boxing_path folder and the boxing operation runs from boxing_path to the final destination. This limits the network traffic to one pass and also takes it out of the real-time capture operation.
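For illustration only, the two settings might look like this in the raspimjpeg configuration file. The option names and the false/true/background values come from the description above; the file layout, comments, and the example path are assumptions, not the definitive configuration:

    # box recordings in a separate spawned process so new captures can start immediately
    MP4Box background
    # capture raw h264 to this local folder, then box from here to the final media folder
    boxing_path /var/tmp/boxing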
The Java rule CloseResource (java-errorprone) has a new property closeNotInFinally. With this property set to true the rule will also find calls to close a resource which are not in a finally block of a try statement. If a resource is not closed within a finally block, it might not be closed at all in case of exceptions.
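A minimal Java sketch of the pattern the property targets (the JDBC URL handling and method names are illustrative, not taken from the rule's documentation). With closeNotInFinally enabled, the first method is reported because close() is reachable only if nothing throws; the second form keeps the close in a finally block:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    public class CloseResourceExample {

        // Flagged when closeNotInFinally = true: close() is called, but not from a
        // finally block, so it is skipped if the work before it throws.
        void closeOutsideFinally(String url) throws SQLException {
            Connection con = DriverManager.getConnection(url);
            con.setAutoCommit(false);   // may throw, leaking the connection
            con.close();
        }

        // Closing in a finally block guarantees the connection is released even
        // when an exception is thrown.
        void closeInFinally(String url) throws SQLException {
            Connection con = DriverManager.getConnection(url);
            try {
                con.setAutoCommit(false);
            } finally {
                con.close();            // runs even if the work above throws
            }
        }
    }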
Topics are added and modified using the topic tool:

  > bin/kafka-topics.sh --bootstrap-server broker_host:port --create --topic my_topic_name \
        --partitions 20 --replication-factor 3 --config x=y

The replication factor controls how many servers will replicate each message that is written. If you have a replication factor of 3, then up to 2 servers can fail before you lose access to your data. We recommend a replication factor of 2 or 3 so that you can transparently bounce machines without interrupting data consumption.

The partition count controls how many logs the topic will be sharded into. There are several impacts of the partition count. First, each partition must fit entirely on a single server, so if you have 20 partitions the full data set (and read and write load) will be handled by no more than 20 servers (not counting replicas). Second, the partition count limits the maximum parallelism of your consumers. This is discussed in greater detail in the concepts section.

Each sharded partition log is placed into its own folder under the Kafka log directory. The name of such folders consists of the topic name, appended by a dash (-) and the partition id. Since a typical folder name cannot be over 255 characters long, there is a limitation on the length of topic names. We assume the number of partitions will never be above 100,000. Therefore, topic names cannot be longer than 249 characters. This leaves just enough room in the folder name for a dash and a potentially 5 digit long partition id.

The configurations added on the command line override the default settings the server has for things like the length of time data should be retained. The complete set of per-topic configurations is documented here.

Modifying topics

You can change the configuration or partitioning of a topic using the same topic tool. To add partitions you can do

  > bin/kafka-topics.sh --bootstrap-server broker_host:port --alter --topic my_topic_name \
        --partitions 40

Be aware that one use case for partitions is to semantically partition data, and adding partitions doesn't change the partitioning of existing data, so this may disturb consumers if they rely on that partitioning. That is, if data is partitioned by hash(key) % number_of_partitions, then this partitioning will potentially be shuffled by adding partitions, but Kafka will not attempt to automatically redistribute data in any way.

To add configs:

  > bin/kafka-configs.sh --bootstrap-server broker_host:port --entity-type topics --entity-name my_topic_name --alter --add-config x=y

To remove a config:

  > bin/kafka-configs.sh --bootstrap-server broker_host:port --entity-type topics --entity-name my_topic_name --alter --delete-config x

And finally to delete a topic:

  > bin/kafka-topics.sh --bootstrap-server broker_host:port --delete --topic my_topic_name

Kafka does not currently support reducing the number of partitions for a topic. Instructions for changing the replication factor of a topic can be found here.

Graceful shutdown

The Kafka cluster will automatically detect any broker shutdown or failure and elect new leaders for the partitions on that machine. This will occur whether a server fails or it is brought down intentionally for maintenance or configuration changes. For the latter case Kafka supports a more graceful mechanism for stopping a server than just killing it. When a server is stopped gracefully it has two optimizations it will take advantage of: