How to set max.poll.records in the Kafka Connect API of Confluent Platform


I am using the Confluent 3.0.1 platform to build a Kafka-to-Elasticsearch connector, extending SinkConnector and SinkTask (the Kafka Connect API) to read data from Kafka.

As part of this, I override the taskConfigs method of SinkConnector and set "max.poll.records" to fetch 100 records at a time. This is not working: I still get all the records at the same time, and the task fails to commit offsets within the stipulated time. Can anyone please help me configure "max.poll.records"?

 public List<Map<String, String>> taskConfigs(int maxTasks) {
   ArrayList<Map<String, String>> configs = new ArrayList<Map<String, String>>();
   for (int i = 0; i < maxTasks; i++) {
     Map<String, String> config = new HashMap<String, String>();
     config.put(ConfigurationConstants.CLUSTER_NAME, clusterName);
     config.put(ConfigurationConstants.HOSTS, hosts);
     config.put(ConfigurationConstants.BULK_SIZE, bulkSize);
     config.put(ConfigurationConstants.IDS, elasticSearchIds);
     config.put(ConfigurationConstants.TOPICS_SATELLITE_DATA, topics);
     config.put(ConfigurationConstants.PUBLISH_TOPIC, topicToPublish);
     config.put(ConfigurationConstants.TYPES, elasticSearchTypes);
     config.put("max.poll.records", "100");
     configs.add(config);
   }
   return configs;
 }

You can't override Kafka consumer configs such as max.poll.records in the connector configuration. You can set them in the Connect worker configuration, though, using the consumer. prefix.
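As a sketch, the worker properties file (the filename connect-standalone.properties is just an example; use whichever worker config your deployment loads) would carry the prefixed setting:

```properties
# Connect worker configuration (e.g. connect-standalone.properties)
# Keys with the "consumer." prefix are stripped and passed through
# to the consumers that Connect creates for sink tasks.
consumer.max.poll.records=100
```

After adding this, restart the Connect worker so the consumers are recreated with the new setting; it applies to all sink connectors running on that worker.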

