
kafka: Failed to deliver 1 messages. #802

Closed
ghost opened this issue Dec 22, 2016 · 3 comments


ghost commented Dec 22, 2016

Versions

Please specify real version numbers or git SHAs, not just "Latest" since that changes fairly regularly.
Sarama Version: 353cc46
Kafka Version: 0.8.2
Go Version: go1.7.3 linux/amd64

Configuration

What configuration values are you using for Sarama and Kafka?

Sarama config:

broker_list = ["172.20.43.10:9092", "172.20.43.11:9092", "172.20.43.12:9092"]
required_acks = 0
retry_max = 3

Producer config:

config.Producer.RequiredAcks = 0
config.Producer.Retry.Max = 3
config.Producer.Return.Errors = true
config.Producer.Return.Successes = true

kafka producer config:

metadata.broker.list=localhost:9092
producer.type=sync
compression.codec=none
serializer.class=kafka.serializer.DefaultEncoder

Logs

[sarama] 2016/12/22 10:37:44 producer/broker/1 state change to [open] on collector-test/2
[sarama] 2016/12/22 10:37:44 ClientID is the default of 'sarama', you should consider setting it to something application-specific.
[sarama] 2016/12/22 10:37:44 producer/broker/2 starting up
[sarama] 2016/12/22 10:37:44 producer/broker/2 state change to [open] on collector-test/0
[sarama] 2016/12/22 10:37:44 ClientID is the default of 'sarama', you should consider setting it to something application-specific.
[sarama] 2016/12/22 10:37:44 producer/broker/0 starting up
[sarama] 2016/12/22 10:37:44 producer/broker/0 state change to [open] on collector-test/1
[sarama] 2016/12/22 10:37:44 Connected to broker at sz-pg-dc-smalldisk-011:9092 (registered as #1)
[sarama] 2016/12/22 10:37:44 Connected to broker at sz-pg-dc-smalldisk-012:9092 (registered as #2)
[sarama] 2016/12/22 10:37:44 Connected to broker at sz-pg-dc-smalldisk-010:9092 (registered as #0)

Problem Description

I use SyncProducer and call SendMessages to write to Kafka, then check the returned error. I get the error kafka: Failed to deliver 1 messages. The messages slice length is greater than zero.

eapache (Contributor) commented Dec 22, 2016

I use SyncProducer, call SendMessages to write kafka, then check error, got error kafka: Failed to deliver 1 messages. The messages array length is bigger than zero.

The error that is returned from SendMessages is a sarama.ProducerErrors which you can inspect to determine the individual errors that happened for each message. That should give a hint as to the root cause.

ghost (Author) commented Dec 23, 2016

Thank you @eapache

	errs := p.SendMessages(msgs)
	if errs != nil {
		for _, err := range errs.(sarama.ProducerErrors) {
			log.Println("Write to kafka failed: ", err)
		}
	}

I ranged over the sarama.ProducerErrors to determine the individual error:

kafka server: Message was too large, server rejected it to avoid allocation error.

I printed the msgs payload size: 1985343 bytes, while server.properties has socket.request.max.bytes=104857600.

Could Sarama override the server.properties config if the topic was created by Sarama?

server.properties:

broker.id=0
port=9092
num.network.threads=12
num.io.threads=24
socket.send.buffer.bytes=1048576
socket.receive.buffer.bytes=1048576
socket.request.max.bytes=104857600
log.dirs=/home/hadoop/apache/kafka/var/kafka-logs/1
num.partitions=3
num.recovery.threads.per.data.dir=1
log.retention.hours=360
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
log.cleaner.enable=false
zookeeper.connect=172.20.43.10:2181,172.20.43.11:2181,172.20.43.12:2181/kafka/product/kafka821
zookeeper.connection.timeout.ms=6000
default.replication.factor=2
delete.topic.enable=true

ghost closed this as completed Dec 29, 2016
eapache (Contributor) commented Dec 29, 2016

socket.request.max.bytes (the max size of a network request to handle) is different from message.max.bytes (the max size of a single message); the latter defaults to 1000012 if not otherwise specified, which is smaller than the message you are trying to produce.
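To make the two limits concrete, here is a minimal stdlib Go sketch; the constants mirror the payload size and broker setting quoted in this thread, and the 1000012-byte figure is Kafka's default message.max.bytes:

```go
package main

import "fmt"

func main() {
	const (
		payloadBytes     = 1985343   // size of the rejected message, from the comment above
		messageMaxBytes  = 1000012   // Kafka broker default for message.max.bytes
		socketRequestMax = 104857600 // socket.request.max.bytes from server.properties
	)

	// The broker validates each message against message.max.bytes, so a
	// payload can be rejected long before the whole produce request comes
	// anywhere near socket.request.max.bytes.
	fmt.Println("within socket.request.max.bytes:", payloadBytes <= socketRequestMax) // true
	fmt.Println("within message.max.bytes:", payloadBytes <= messageMaxBytes)         // false
}
```

Raising the broker's message.max.bytes (or the per-topic max.message.bytes override) would allow this payload; note that Sarama's client-side config.Producer.MaxMessageBytes imposes a similar pre-flight limit.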
