When I downloaded the dataset, I found 1 machine_events.csv file, 1 machine_attributes.csv file, 9 collection_events-*.csv files, 56 instance_events-*.csv files, and 1555 instance_usage-*.csv files, for a total of 1622 files.
What is the criterion for splitting the collection_events, instance_events, and instance_usage data into multiple files?
Are they split simply to keep individual file sizes manageable?
Or are they partitioned by machine, switch, or collection?
Because the full dataset is very large, I would like to work with only a subset of the files. For example, if I want to analyze a specific switch, are the instances belonging to that switch randomly distributed across the 1555 instance_usage-*.csv files, or are they confined to a predictable subset of them?