1、Producer
1) Add the dependency
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.10.0.0</version>
</dependency>
2) A simple send example
Documentation reference: http://kafka.apache.org/0100/javadoc/index.html?org/apache/kafka/clients/producer/KafkaProducer.html
Synchronous send:
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("acks", "all");
props.put("retries", 0);
props.put("batch.size", 16384);
props.put("linger.ms", 1);
props.put("buffer.memory", 33554432);
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

Producer<String, String> producer = new KafkaProducer<>(props);
for (int i = 0; i < 100; i++)
    producer.send(new ProducerRecord<String, String>("my-topic", Integer.toString(i), Integer.toString(i)));
producer.close();
Asynchronous send:
public Future<RecordMetadata> send(ProducerRecord<K, V> record, Callback callback)
Comparison:
If you want to simulate a simple blocking call you can call the get() method immediately:

byte[] key = "key".getBytes();
byte[] value = "value".getBytes();
ProducerRecord<byte[], byte[]> record = new ProducerRecord<byte[], byte[]>("topic1", key, value);
producer.send(record).get();

Fully non-blocking usage can make use of the Callback parameter to provide a callback that will be invoked when the request is complete:

ProducerRecord<byte[], byte[]> record = new ProducerRecord<byte[], byte[]>("topic1", key, value);
producer.send(record, new Callback() {
    public void onCompletion(RecordMetadata metadata, Exception e) {
        if (e != null)
            e.printStackTrace();
        else
            System.out.println("The offset of the record we just sent is: " + metadata.offset());
    }
});
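The blocking-vs-callback distinction above is just the standard Java Future pattern and can be tried without a broker. A minimal JDK-only sketch (the executor and the literal offsets are stand-ins, not Kafka API; `send(record).get()` corresponds to `future.get()` here, and the Kafka Callback corresponds to `whenComplete`):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SendPatterns {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();

        // "Synchronous": the task returns a Future, and we block on get()
        // immediately, just like producer.send(record).get().
        Future<Long> future = pool.submit(() -> 42L); // pretend 42 is the record offset
        long offset = future.get();
        System.out.println("offset = " + offset);

        // "Asynchronous": attach a callback that runs when the result is ready,
        // mirroring producer.send(record, callback).
        CompletableFuture.supplyAsync(() -> 43L, pool)
            .whenComplete((off, e) -> {
                if (e != null)
                    e.printStackTrace();
                else
                    System.out.println("callback offset = " + off);
            })
            .join(); // wait here only so the demo prints before exiting

        pool.shutdown();
    }
}
```

The trade-off is the same as with Kafka: get() gives you per-record latency and immediate error handling, while the callback keeps the sending thread free at the cost of deferred error handling.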
2、Consumer
1) Add the dependency
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.10.0.0</version>
</dependency>
2) A simple poll example
For more details see: http://kafka.apache.org/0100/javadoc/index.html?org/apache/kafka/clients/consumer/KafkaConsumer.html
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("group.id", "test");
props.put("enable.auto.commit", "true");
props.put("auto.commit.interval.ms", "1000");
props.put("session.timeout.ms", "30000");
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Arrays.asList("foo", "bar"));
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(100);
    for (ConsumerRecord<String, String> record : records)
        System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
}
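The core shape of the loop above is "block for up to a timeout, take whatever batch arrived, process it, repeat". That shape can be sketched broker-free with a JDK BlockingQueue standing in for the topic (the queue, the timeout, and the fake offsets are illustrative assumptions, not Kafka API):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class PollLoopSketch {
    public static void main(String[] args) throws Exception {
        // Stand-in for a topic partition with three records already in it.
        BlockingQueue<String> topic = new LinkedBlockingQueue<>(Arrays.asList("a", "b", "c"));

        // Like consumer.poll(100): wait up to 100 ms for at least one record,
        // then grab everything that is immediately available as one batch.
        List<String> batch = new ArrayList<>();
        String first = topic.poll(100, TimeUnit.MILLISECONDS);
        if (first != null) {
            batch.add(first);
            topic.drainTo(batch);
        }

        for (int i = 0; i < batch.size(); i++)
            System.out.printf("offset = %d, value = %s%n", i, batch.get(i));
    }
}
```

As with the real consumer, an empty batch is a normal outcome of the timeout, not an error, which is why the loop simply polls again.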
Below is producer send code that has been verified to work:
public RecordMetadata sendSyncKafkaRequest(String topic, KeyModel keyModel, Object message) {
    logger.info("=== send service start: sendSyncKafkaRequest ===");
    logger.info("=== topic: " + topic + " ===");
    logger.info("=== keyModel: " + JSON.toJSONString(keyModel) + " ===");
    logger.info("=== message: " + JSON.toJSONString(message) + " ===");
    Properties props = kafkaProducerProperties.getProperties();
    KafkaProducer<KeyModel, Object> producer = null;
    RecordMetadata recordMetadata = null;
    try {
        producer = new KafkaProducer<KeyModel, Object>(props);
        // get() blocks until the broker acknowledges, making the send synchronous
        recordMetadata = producer.send(new ProducerRecord<KeyModel, Object>(topic, keyModel, message)).get();
    } catch (InterruptedException e) {
        e.printStackTrace();
    } catch (ExecutionException e) {
        e.printStackTrace();
    } finally {
        // close the producer to release its network resources; without this it leaks
        if (producer != null) {
            producer.close();
        }
    }
    if (recordMetadata != null) {
        logger.info("=== Kafka send succeeded! topic: " + recordMetadata.topic() + "; partition: " + recordMetadata.partition() + " ===");
    } else {
        logger.info("=== recordMetadata is null! This Kafka write request did not complete! ===");
    }
    return recordMetadata;
}
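Note that the method above constructs a new producer on every call; KafkaProducer is heavyweight and thread-safe, so in practice one long-lived instance is shared and closed on shutdown, or at minimum released with try-with-resources since it implements Closeable. A JDK-only sketch of that release pattern (the Resource class is a hypothetical stand-in for KafkaProducer, not Kafka API):

```java
public class ResourceDemo {
    // Hypothetical stand-in for a heavyweight client such as KafkaProducer.
    static class Resource implements AutoCloseable {
        String send(String msg) {
            return "sent:" + msg;
        }
        @Override
        public void close() {
            System.out.println("closed");
        }
    }

    public static void main(String[] args) {
        // try-with-resources guarantees close() runs even if send() throws,
        // replacing the manual null-check-and-close in a finally block.
        try (Resource producer = new Resource()) {
            System.out.println(producer.send("hello"));
        }
    }
}
```

With the real client the same shape is `try (KafkaProducer<K, V> producer = new KafkaProducer<>(props)) { ... }`, which makes the finally-block cleanup above unnecessary.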