Closed
Description
The defaults in this plugin set retries => 0. Further, the KafkaProducer has no mechanism for infinite retry, and recommends this:
Note that the above example may drop records if the produce request fails. If we want to ensure that this does not occur we need to set retries=<large_number> in our config.
"Large number" is probably not a great default because it still allows for data loss.
The defaults in this plugin are not what we want Logstash to do out of the box. This default (zero retry) means Logstash loses data through the Kafka Producer during any network fault or Kafka fault.
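Until a default changes, a user can opt into a large bounded retry count explicitly. A sketch of such an output config, assuming the plugin's `retries` option is passed through to the producer (the topic name and value are illustrative, not from this issue):

```
output {
  kafka {
    topic_id => "example_topic"   # illustrative topic name
    retries  => 2147483647        # "large number": retry for a very long time
  }
}
```

This still only bounds, rather than eliminates, the data-loss window, which is the point made above.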
Proposal:
Change the default behavior (breaking change) to retry until successful.
- Change the default `retries` to be `nil`, with the implication of infinite retry (breaking change). `KafkaProducer.send()` returns a Future that we can inspect for success.
- If `@retries` is `nil` (proposed default), any failed `send()` must be retried.
- If `@retries` is a number, retry a failed `send()` that number of times.
- Do not use `send()` asynchronously. Always check the Future's result.
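A minimal Ruby sketch of the proposed behavior, assuming a producer whose `send()` returns a future with a blocking `get` that raises on failure (as the Java `KafkaProducer`'s Future does under JRuby). The names `send_with_retries`, `FakeFuture`, and `FlakyProducer` are illustrative stand-ins, not part of the plugin:

```ruby
# Synchronous send with retry: block on the Future, retry on failure.
# retries == nil (proposed default) => retry forever;
# retries == N                      => retry a failed send N times, then give up.
def send_with_retries(producer, record, retries)
  attempts = 0
  begin
    producer.send(record).get   # do not fire-and-forget; check the Future's result
  rescue => e
    attempts += 1
    retry if retries.nil? || attempts <= retries
    raise e
  end
end

# Stand-in future: get() raises on failure, like the Java Future under JRuby.
class FakeFuture
  def initialize(ok)
    @ok = ok
  end

  def get
    raise "produce request failed" unless @ok
    :ack
  end
end

# Stand-in producer that fails the first `fail_count` sends, then succeeds.
class FlakyProducer
  def initialize(fail_count)
    @fail_count = fail_count
  end

  def send(_record)
    if @fail_count > 0
      @fail_count -= 1
      FakeFuture.new(false)
    else
      FakeFuture.new(true)
    end
  end
end
```

With `retries` of `nil`, `send_with_retries(FlakyProducer.new(50), rec, nil)` keeps retrying until the produce succeeds; with a numeric `retries`, it raises once the budget is exhausted, surfacing the failure instead of silently dropping the record.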