These days I would recommend using the Confluent Go client: https://github.com/confluentinc/confluent-kafka-go. It's just a wrapper around librdkafka. Sarama works fine too, though.
With either of these you can configure the behavior you're looking for. For example, request.required.acks lets you customize how many broker acknowledgements are required before a write is considered successful. If the producer doesn't receive enough acks and retries the send, you might write a message twice, but you're increasing the likelihood that you won't lose it.
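To make that concrete, here's a rough sketch of a producer configured that way with confluent-kafka-go (the broker address, topic, retry count, and the v1 import path are assumptions on my part; adjust them for your setup):

```go
package main

import (
	"fmt"

	"github.com/confluentinc/confluent-kafka-go/kafka"
)

func main() {
	p, err := kafka.NewProducer(&kafka.ConfigMap{
		"bootstrap.servers": "localhost:9092", // placeholder broker
		// -1 = wait for acks from all in-sync replicas before the broker
		// confirms the write.
		"request.required.acks": -1,
		// Retrying failed sends is what can produce the occasional duplicate.
		"message.send.max.retries": 5,
	})
	if err != nil {
		panic(err)
	}
	defer p.Close()

	topic := "events" // placeholder topic
	err = p.Produce(&kafka.Message{
		TopicPartition: kafka.TopicPartition{Topic: &topic, Partition: kafka.PartitionAny},
		Key:            []byte("order-1234"),
		Value:          []byte("hello"),
	}, nil)
	if err != nil {
		fmt.Println("enqueue failed:", err)
	}

	// Wait (up to 15s) for delivery reports before exiting.
	p.Flush(15 * 1000)
}
```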
For most problems you can tolerate duplicate messages without having to do anything special. Even operations like sending an email are tolerable if the duplication is relatively rare (and in my experience Kafka issues are relatively rare). But if you can't tolerate duplicate messages, you will need to maintain a hashtable (or similar) of message IDs that you've already processed. Something like Redis would work well for this.
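A minimal sketch of that de-dupe check, assuming the go-redis client and that each message carries a unique ID (the key prefix and TTL here are arbitrary):

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/redis/go-redis/v9"
)

// alreadyProcessed atomically records a message ID in Redis and reports
// whether it had been seen before. SETNX guarantees only one consumer
// "claims" a given ID, even with concurrent workers.
func alreadyProcessed(ctx context.Context, rdb *redis.Client, msgID string) (bool, error) {
	// The TTL should exceed your worst-case redelivery window.
	claimed, err := rdb.SetNX(ctx, "processed:"+msgID, 1, 24*time.Hour).Result()
	if err != nil {
		return false, err
	}
	return !claimed, nil
}

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"}) // placeholder address

	dup, err := alreadyProcessed(ctx, rdb, "msg-1234")
	if err != nil {
		// If Redis is down you have to choose: process anyway (risk a
		// duplicate) or stop (give up availability). That's the trade-off
		// described below.
		panic(err)
	}
	if dup {
		fmt.Println("duplicate, skipping")
		return
	}
	fmt.Println("first delivery, processing")
}
```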
As an example, consider payment processing systems, which include an idempotency key as part of each request:
https://stripe.com/docs/api?lang=curl#idempotent_requests. With this approach you're trading availability for consistency (if the centralized database you depend on to de-dupe is down, your whole pipeline is down).
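On the client side it's just a header you reuse when you retry the same logical request; here's a hypothetical sketch against Stripe's charges endpoint (the key value, the amount, and the omitted auth are all illustrative):

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
)

func main() {
	body := strings.NewReader("amount=2000&currency=usd&source=tok_visa")
	req, err := http.NewRequest("POST", "https://api.stripe.com/v1/charges", body)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
	// Reuse the same key when retrying the same logical operation; the server
	// performs the charge once and replays the original response.
	req.Header.Set("Idempotency-Key", "order-1234-charge")
	// req.SetBasicAuth(apiKey, "") // real requests need your Stripe secret key

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```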
I would aim to just tolerate the duplicates if you can though, since systems that can live with duplicates are much easier to build than ones that can't.