Satish,
The number of brokers does not affect the parallelism of the Connectors; that parallelism is controlled by the Connector's own configuration and the capacity of the Connect Workers. It would be fairly simple to define a Connector such that each S3 object is handled by a separate Connector Task (effectively a thread within the Worker processes).
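To give a rough idea, here is a minimal sketch of that approach. The class and property names (S3ObjectSourceConnector, "s3.objects", "s3.object.key") are hypothetical, not from any existing connector; the connector's taskConfigs() hands each configured S3 object key to its own Task. The Task class itself is sketched a little further down.

import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.connect.connector.Task;
import org.apache.kafka.connect.source.SourceConnector;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class S3ObjectSourceConnector extends SourceConnector {
    // Hypothetical config: comma-separated list of S3 object keys to read.
    private static final String S3_OBJECTS_CONFIG = "s3.objects";
    private Map<String, String> configProps;

    @Override
    public void start(Map<String, String> props) {
        configProps = props;
    }

    @Override
    public Class<? extends Task> taskClass() {
        return S3ObjectSourceTask.class; // sketched below
    }

    @Override
    public List<Map<String, String>> taskConfigs(int maxTasks) {
        // One task per S3 object key, capped by maxTasks. Set tasks.max at least as
        // high as the number of objects and Connect will spread the tasks across the Workers.
        List<String> keys = Arrays.asList(configProps.get(S3_OBJECTS_CONFIG).split(","));
        List<Map<String, String>> taskConfigs = new ArrayList<>();
        for (String key : keys.subList(0, Math.min(maxTasks, keys.size()))) {
            Map<String, String> taskConfig = new HashMap<>(configProps);
            taskConfig.put("s3.object.key", key); // each task reads exactly one object
            taskConfigs.add(taskConfig);
        }
        return taskConfigs;
    }

    @Override
    public void stop() { }

    @Override
    public ConfigDef config() {
        return new ConfigDef().define(S3_OBJECTS_CONFIG, ConfigDef.Type.LIST,
                ConfigDef.Importance.HIGH, "S3 object keys to read");
    }

    @Override
    public String version() {
        return "0.1.0";
    }
}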
The Workers support failover: Connector Tasks from a failed Worker node are redistributed to the other Worker nodes, picking up right where they left off.
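The "picking up where they left off" part works because a Source Task attaches a source partition and source offset to every record it emits, and Connect persists those in an internal topic. A minimal, hypothetical sketch of the Task side (readNextLineFromS3 is a placeholder, not a real API):

import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.source.SourceRecord;
import org.apache.kafka.connect.source.SourceTask;

import java.util.Collections;
import java.util.List;
import java.util.Map;

public class S3ObjectSourceTask extends SourceTask {
    private String objectKey;
    private long nextByteOffset; // position within the S3 object

    @Override
    public void start(Map<String, String> props) {
        objectKey = props.get("s3.object.key");
        // A restarted or reassigned task reads back the last committed position,
        // so it resumes from wherever the previous Worker got to.
        Map<String, Object> stored = context.offsetStorageReader()
                .offset(Collections.singletonMap("object", objectKey));
        nextByteOffset = stored == null ? 0L : (Long) stored.get("position");
    }

    @Override
    public List<SourceRecord> poll() throws InterruptedException {
        String line = readNextLineFromS3(objectKey, nextByteOffset); // hypothetical helper
        if (line == null) {
            return null; // nothing new to emit right now
        }
        nextByteOffset += line.getBytes().length;
        SourceRecord record = new SourceRecord(
                Collections.singletonMap("object", objectKey),        // source partition
                Collections.singletonMap("position", nextByteOffset), // source offset
                "s3-topic", Schema.STRING_SCHEMA, line);
        return Collections.singletonList(record);
    }

    private String readNextLineFromS3(String key, long fromByte) {
        return null; // placeholder: a real implementation would issue a ranged S3 GET
    }

    @Override
    public void stop() { }

    @Override
    public String version() {
        return "0.1.0";
    }
}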
It's probably worth taking a look at some other Source connectors. For example, the JDBC Source connector has a bulk mode that imports an entire database table on each pass; that would be a good model for transferring a complete S3 object.
Regards,
David
NOTE: It's worth paying particular attention to the consistency semantics of S3 if you expect the Kafka Connector to be reading from the S3 objects while they are being updated by some external process. You don't want to read half the data from "the first version" of the object and the remainder from "a later version".
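One way to guard against that, assuming the AWS SDK for Java v2 and placeholder bucket/object names, is to pin every ranged GET to the ETag you observed up front, so S3 rejects the request (HTTP 412 Precondition Failed) if the object has been replaced mid-transfer. A rough sketch:

import software.amazon.awssdk.core.ResponseInputStream;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;
import software.amazon.awssdk.services.s3.model.GetObjectResponse;
import software.amazon.awssdk.services.s3.model.HeadObjectRequest;
import software.amazon.awssdk.services.s3.model.S3Exception;

import java.io.IOException;

public class ConsistentS3Read {
    public static void main(String[] args) {
        try (S3Client s3 = S3Client.create()) {
            String bucket = "example-bucket"; // placeholder
            String key = "example-object";    // placeholder

            // Capture the version we intend to read.
            String etag = s3.headObject(
                    HeadObjectRequest.builder().bucket(bucket).key(key).build()).eTag();

            GetObjectRequest get = GetObjectRequest.builder()
                    .bucket(bucket)
                    .key(key)
                    .ifMatch(etag)            // fail the read if the object has changed
                    .range("bytes=0-1048575") // e.g. the first 1 MiB chunk
                    .build();

            try (ResponseInputStream<GetObjectResponse> body = s3.getObject(get)) {
                // Read the chunk; repeat with further ranges, reusing the same ETag.
            } catch (S3Exception e) {
                if (e.statusCode() == 412) {
                    // The object was replaced mid-read: discard what was read so far
                    // and restart the transfer against the new version.
                } else {
                    throw e;
                }
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
        }
    }
}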