Retry indefinitely #149

@jordansissel

Description

The defaults in this plugin set retries => 0. Further, the KafkaProducer has no mechanism for infinite retry; its documentation recommends this:

Note that the above example may drop records if the produce request fails. If we want to ensure that this does not occur we need to set retries=<large_number> in our config.

"Large number" is not a great default either, because it still allows for data loss once the retries are exhausted.

The defaults in this plugin are not what we want Logstash to do out of the box. The current default (zero retries) means Logstash loses data through the Kafka producer during any network fault or Kafka fault.

Proposal:

Change the default behavior (breaking change) to retry until successful.

  • Change the default retries to be nil, with the implication of infinite retry
  • KafkaProducer.send() returns a Future that we can inspect for success.
  • If @retries is nil (the proposed default), any failed send() must be retried indefinitely.
  • If @retries is a number, retry a failed send() at most that many times.
  • Do not use send() asynchronously. Always check the Future's result.
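The steps above could be sketched roughly as follows. This is a minimal illustration, not the plugin's actual code: `StubFuture` and `FlakyProducer` are stand-ins for the `java.util.concurrent.Future` returned by `KafkaProducer.send()` and for the producer itself, and `send_with_retries` is a hypothetical name for the proposed logic.

```ruby
# Stand-in for the java.util.concurrent.Future returned by
# KafkaProducer.send(); #get blocks and raises if the send failed.
class StubFuture
  def initialize(succeed)
    @succeed = succeed
  end

  def get
    raise "send failed" unless @succeed
    :record_metadata
  end
end

# Stand-in producer that fails a fixed number of times, then succeeds.
class FlakyProducer
  def initialize(failures_before_success)
    @failures = failures_before_success
  end

  def send(record)
    future = StubFuture.new(@failures <= 0)
    @failures -= 1
    future
  end
end

# Proposed behavior: retry forever when retries is nil, otherwise
# retry at most `retries` times, always inspecting the Future.
def send_with_retries(producer, record, retries)
  attempt = 0
  begin
    producer.send(record).get # synchronous: block on the Future
  rescue => e
    attempt += 1
    retry if retries.nil? || attempt <= retries
    raise e # retries exhausted; surface the failure
  end
end

# With retries = nil, two transient failures are absorbed:
send_with_retries(FlakyProducer.new(2), :event, nil)
```

The key point is the last bullet: calling `.get` on every Future makes the send effectively synchronous, which is what allows failures to be detected and retried at all.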
