Closed
Versions:
- logstash-input-kafka 8.0.4
- logstash 6.2.3
Configuration:
input {
  kafka {
    bootstrap_servers => "XXXXX:9092"
    topics => ["logs"]
    consumer_threads => 1
    group_id => "xxx_group"
    heartbeat_interval_ms => "10000"
    session_timeout_ms => "30000"
    codec => "json"
    client_id => "xxx-id"
    max_poll_records => "100"
    auto_offset_reset => "earliest"
  }
}
I am wondering if the following scenario is possible with offset auto-commit ON (the default):
- Poll from Kafka
- Some exception is triggered BEFORE adding the event(s) to the Logstash queue
- The consumer is closed and the offset is committed (default behaviour)
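The scenario above can be sketched with a toy model. This is not the real Kafka client API, just a minimal simulation of the suspected behaviour: with auto-commit enabled, close() commits the position that poll() advanced, even though the polled records never reached the pipeline.

```python
# Toy model of consumer offset handling -- NOT the real Kafka client API.
# It illustrates the suspected failure mode: poll() advances the position,
# an exception fires before the events are enqueued, and close() commits
# the advanced position because auto-commit is on.

class ToyConsumer:
    def __init__(self, auto_commit=True):
        self.auto_commit = auto_commit
        self.position = 0   # next offset to fetch
        self.committed = 0  # last committed offset

    def poll(self, max_records):
        # Fetch a batch and advance the consumer's position past it.
        records = list(range(self.position, self.position + max_records))
        self.position += max_records
        return records

    def commit(self):
        self.committed = self.position

    def close(self):
        # Mimics a client that commits on close when auto-commit is enabled.
        if self.auto_commit:
            self.commit()


consumer = ToyConsumer(auto_commit=True)
batch = consumer.poll(100)
try:
    raise RuntimeError("exception before events reach the Logstash queue")
except RuntimeError:
    consumer.close()  # e.g. the external tool restarts Logstash

print(consumer.committed)  # 100 -> those 100 records are skipped on restart
```

With auto_commit=False the same sequence leaves committed at 0, so a restart would re-poll the batch; that is the property manual commit would be relied on for.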
I am trying to debug an issue in which the Logstash instance was restarted several times by an external tool/script due to an exception. When the issue was finally solved, one day later, Logstash started consuming from the end of the partitions, which means the offsets were committed during that time even though no messages were processed. I am sure about this because we have the metrics pack enabled and no messages flowed through, and of course no messages reached the output during that time either.
If this is a possible scenario, I think I can move to manual commits, which should prevent the offset commit in consumer.close.
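For reference, switching to manual commits might look like the sketch below; it assumes the plugin's enable_auto_commit option (defaulting to true) is what controls this, so treat it as a starting point rather than a verified fix:

input {
  kafka {
    bootstrap_servers => "XXXXX:9092"
    topics => ["logs"]
    group_id => "xxx_group"
    # Disable auto-commit so the consumer does not commit offsets on close;
    # offsets are then committed only after events are handed to the queue.
    enable_auto_commit => false
  }
}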