* New default retry behavior: retry until successful
* Now makes sure the data is in Kafka before completion.

Previously, the default was `retries => 0`, which meant never retry.
The implication was that any fault (network failure, Kafka restart,
etc.) could cause data loss.
This commit makes the following changes:
* `retries` now has no default value (i.e., nil)
* Any value >= 0 for `retries` behaves the same as it did before (see the
  configuration sketch below).
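For illustration, a minimal output configuration under the new semantics might look like this (the topic name is hypothetical; the comments summarize the behavior described above):

```
output {
  kafka {
    topic_id => "my_topic"   # hypothetical topic name
    # retries unset (new default): block and retry until the send succeeds
    # retries => 0            : never retry -- the old default, which can lose data
    # retries => 5            : give up (and lose the batch) once retries are exhausted
  }
}
```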
There is a slight difference in internal behavior in this patch: we no
longer ignore the `Future<RecordMetadata>` returned by `KafkaProducer.send()`.
We send the whole batch of events, then wait for all of those operations
to complete. If any fail, we retry only the failed transmissions.
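As a rough sketch of that flow (assuming a JRuby environment where `@producer` is the Java `KafkaProducer` and `to_record` builds a `ProducerRecord`; names and retry bookkeeping are illustrative, not the plugin's exact code):

```ruby
# Illustrative sketch only: send the whole batch asynchronously, wait on
# every Future<RecordMetadata>, and re-send just the failures.
def transmit_batch(events)
  batch = events.dup
  until batch.empty?
    # Pair each event with the Future returned by the async send.
    futures = batch.map { |event| [event, @producer.send(to_record(event))] }

    failed = []
    futures.each do |event, future|
      begin
        future.get          # blocks until Kafka acks the record or the send fails
      rescue => e
        failed << event     # remember only the events that did not make it
      end
    end

    batch = failed          # retry only the failed transmissions
    # (decrementing a finite @retries budget is omitted for brevity)
  end
end
```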
Prior to this patch, we would call `send()`, which is asynchronous, and
then acknowledge in the pipeline. This could cause data loss, even with
the PQ enabled, under the following circumstances:

1) Logstash calls `send()` to Kafka and it returns immediately -- implying
   that the data is in Kafka, which was not true. We would then ack the
   transmission to the PQ even though Kafka might not have the data yet!
2) Logstash crashes before the KafkaProducer client actually sends the
   data to Kafka.
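In code terms, the old flow was effectively fire-and-forget (a paraphrase, not the exact old code):

```ruby
# Paraphrase of the old, lossy pattern: send() is asynchronous, so reaching
# the next line does not mean the broker has the record.
@producer.send(record)   # the returned Future<RecordMetadata> was ignored
# The pipeline acked the batch to the PQ here, even though the record could
# still be sitting in the producer's in-memory buffer if Logstash crashed
# at this point.
```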
Fixes #149
Test Coverage:
* Move specs to call the newly implemented `multi_receive`
This also required a few important changes to the specs:
* Mocks (`expect(...).to receive(...)`) were not using `.and_call_original`, so
  method expectations were returning nil [1]
* Old `ssl` setting is now `security_protocol => "SSL"`
[1] For example, `ProducerRecord.new` was returning `nil` due to a missing
`.and_call_original`.
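As a hypothetical spec fragment (not the exact one in the suite), the difference looks like this:

```ruby
# Without .and_call_original, this expectation stubs ProducerRecord.new and
# the stub returns nil, so a nil record is handed to the producer:
expect(org.apache.kafka.clients.producer.ProducerRecord).to receive(:new)
  .and_call_original   # forward to the real constructor so a real record is built
```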
```diff
@@ -170,6 +176,17 @@ class LogStash::Outputs::Kafka < LogStash::Outputs::Base
   # The size of the TCP send buffer to use when sending data.
 
   public
   def register
+    @thread_batch_map = Concurrent::Hash.new
+
+    if !@retries.nil?
+      if @retries < 0
+        raise ConfigurationError, "A negative retry count (#{@retries}) is not valid. Must be a value >= 0"
+      end
+
+      @logger.warn("Kafka output is configured with finite retry. This instructs Logstash to LOSE DATA after a set number of send attempts fails. If you do not want to lose data if Kafka is down, then you must remove the retry setting.", :retries => @retries)
```