Struct rdkafka::consumer::StreamConsumer
pub struct StreamConsumer<C = DefaultConsumerContext, R = DefaultRuntime> where
C: ConsumerContext, { /* private fields */ }
A high-level consumer with a Stream interface.

This consumer doesn’t need to be polled explicitly. Extracting an item from the stream returned by stream will implicitly poll the underlying Kafka consumer.
If you activate the consumer group protocol by calling subscribe, the stream consumer will integrate with librdkafka’s liveness detection as described in KIP-62. You must be sure that you attempt to extract a message from the stream consumer at least every max.poll.interval.ms milliseconds, or librdkafka will assume that the processing thread is wedged and leave the consumer group.
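For context, a minimal construction-and-subscribe sketch might look like the following; the broker address, group id, and topic name are placeholders:

use rdkafka::config::ClientConfig;
use rdkafka::consumer::{Consumer, StreamConsumer};

// Placeholder configuration; tune max.poll.interval.ms to your processing
// latency so the liveness check described above is satisfied.
let consumer: StreamConsumer = ClientConfig::new()
    .set("bootstrap.servers", "localhost:9092")
    .set("group.id", "example-group")
    .create()
    .expect("consumer creation failed");
consumer.subscribe(&["example-topic"]).expect("subscribe failed");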
Implementations
Constructs a stream that yields messages from this consumer.
It is legal to have multiple live message streams for the same consumer, and to move those message streams across threads. Note, however, that the message streams share the same underlying state. A message received by the consumer will be delivered to only one of the live message streams. If you seek the underlying consumer, all message streams created from the consumer will begin to draw messages from the new position of the consumer.
If you want multiple independent views of a Kafka topic, create multiple consumers, not multiple message streams.
Receives the next message from the stream.

This method will block until the next message is available or an error occurs. It is legal to call recv from multiple threads simultaneously.
Note that this method is exactly as efficient as constructing a single-use message stream and extracting one message from it:
use futures::stream::StreamExt;
consumer.stream().next().await.expect("MessageStream never returns None");
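For continuous consumption, a simple receive loop works; this sketch assumes consumer is a subscribed StreamConsumer and runs inside an async context:

loop {
    match consumer.recv().await {
        Ok(message) => {
            // Process the borrowed message here.
        }
        Err(e) => eprintln!("Kafka error: {}", e),
    }
}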
pub fn split_partition_queue(
self: &Arc<Self>,
topic: &str,
partition: i32
) -> Option<StreamPartitionQueue<C, R>>
Splits messages for the specified partition into their own stream.

If the topic or partition is invalid, returns None.
After calling this method, newly-fetched messages for the specified partition will be returned via StreamPartitionQueue::recv rather than StreamConsumer::recv. Note that there may be buffered messages for the specified partition that will continue to be returned by StreamConsumer::recv. For best results, call split_partition_queue before the first call to StreamConsumer::recv.
You must periodically await StreamConsumer::recv, even if no messages are expected, to serve callbacks. Consider using a background task like:
tokio::spawn(async move {
let message = stream_consumer.recv().await;
panic!("main stream consumer queue unexpectedly received message: {:?}", message);
})
Note that calling Consumer::assign will deactivate any existing partition queues. You will need to call this method for every partition that should be split after every call to assign.
Beware that this method is implemented for &Arc<Self>, not &self. You will need to wrap your consumer in an Arc in order to call this method. This design permits moving the partition queue to another thread while ensuring the partition queue does not outlive the consumer.
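Putting these pieces together, a sketch of splitting one partition; the topic name and partition number are placeholders:

use std::sync::Arc;

let consumer = Arc::new(consumer);
// None here means the topic or partition was invalid.
let partition_queue = consumer
    .split_partition_queue("example-topic", 0)
    .expect("invalid topic or partition");

// Keep serving callbacks on the main stream, as described above.
let main_consumer = Arc::clone(&consumer);
tokio::spawn(async move {
    let message = main_consumer.recv().await;
    panic!("main stream consumer queue unexpectedly received message: {:?}", message);
});

// Messages for the split partition now arrive on the dedicated queue.
let message = partition_queue.recv().await;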
Trait Implementations
Returns the current consumer group metadata associated with the consumer.
Subscribes the consumer to a list of topics.
Unsubscribes the current subscription list.
Manually assigns topics and partitions to the consumer. If used, automatic consumer rebalance won’t be activated.
Seeks to offset for the specified topic and partition. After a successful call to seek, the next poll of the consumer will return the message with offset.
fn commit(
&self,
topic_partition_list: &TopicPartitionList,
mode: CommitMode
) -> KafkaResult<()>
Commits the offset of the specified message. The commit can be sync (blocking), or async. Notice that when a specific offset is committed, all the previous offsets are considered committed as well. Use this method only if you are processing messages in order.
Commits the current consumer state. Notice that if the consumer fails after a message has been received, but before the message has been processed by the user code, this might lead to data loss. Check the “at-least-once delivery” section in the readme for more information.
Commits the provided message. Note that this will also automatically commit every message with lower offset within the same partition.
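A sketch of the commit-after-processing pattern this implies, assuming an async context:

use rdkafka::consumer::CommitMode;

let message = consumer.recv().await.expect("recv failed");
// ... process the message in order ...
consumer
    .commit_message(&message, CommitMode::Async)
    .expect("commit failed");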
Stores offset to be used on the next (auto)commit. When using this, enable.auto.offset.store should be set to false in the config.
Like Consumer::store_offset, but the offset to store is derived from the provided message.
Stores offsets to be used on the next (auto)commit. When using this, enable.auto.offset.store should be set to false in the config.
Returns the current topic subscription.
Returns the current partition assignment.
fn committed<T>(&self, timeout: T) -> KafkaResult<TopicPartitionList> where
T: Into<Timeout>,
Self: Sized,
Retrieves the committed offsets for topics and partitions.
fn committed_offsets<T>(
&self,
tpl: TopicPartitionList,
timeout: T
) -> KafkaResult<TopicPartitionList> where
T: Into<Timeout>,
Retrieves the committed offsets for specified topics and partitions.
fn offsets_for_timestamp<T>(
&self,
timestamp: i64,
timeout: T
) -> KafkaResult<TopicPartitionList> where
T: Into<Timeout>,
Self: Sized,
Looks up the offsets for this consumer’s partitions by timestamp.
fn offsets_for_times<T>(
&self,
timestamps: TopicPartitionList,
timeout: T
) -> KafkaResult<TopicPartitionList> where
T: Into<Timeout>,
Self: Sized,
Looks up the offsets for the specified partitions by timestamp.
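A sketch of a timestamp lookup for one partition; per librdkafka semantics, the offset field of each entry carries the timestamp (in milliseconds) to look up. Topic, partition, and timestamp are placeholders:

use std::time::Duration;
use rdkafka::{Offset, TopicPartitionList};

let mut timestamps = TopicPartitionList::new();
timestamps
    .add_partition_offset("example-topic", 0, Offset::Offset(1_600_000_000_000))
    .expect("invalid offset");
let offsets = consumer
    .offsets_for_times(timestamps, Duration::from_secs(5))
    .expect("offsets_for_times failed");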
Retrieves current positions (offsets) for topics and partitions.
fn fetch_metadata<T>(
&self,
topic: Option<&str>,
timeout: T
) -> KafkaResult<Metadata> where
T: Into<Timeout>,
Self: Sized,
Returns the metadata information for the specified topic, or for all topics in the cluster if no topic is specified.
Returns the low and high watermarks for a specific topic and partition.
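For example, a sketch querying the watermarks of a single partition (placeholder names):

use std::time::Duration;

let (low, high) = consumer
    .fetch_watermarks("example-topic", 0, Duration::from_secs(1))
    .expect("failed to fetch watermarks");
println!("partition 0 spans offsets {}..{}", low, high);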
fn fetch_group_list<T>(
&self,
group: Option<&str>,
timeout: T
) -> KafkaResult<GroupList> where
T: Into<Timeout>,
Self: Sized,
Returns the group membership information for the given group. If no group is specified, all groups will be returned.
Pauses consumption for the provided list of partitions.
Resumes consumption for the provided list of partitions.
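A sketch that pauses and later resumes everything currently assigned to the consumer:

let assignment = consumer.assignment().expect("failed to fetch assignment");
consumer.pause(&assignment).expect("pause failed");
// ... later ...
consumer.resume(&assignment).expect("resume failed");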
Reports the rebalance protocol in use.
Returns a reference to the ConsumerContext used to create this consumer.
Creates a client from a client configuration. The default client context will be used.
impl<C, R> FromClientConfigAndContext<C> for StreamConsumer<C, R> where
C: ConsumerContext + 'static,
R: AsyncRuntime,
Creates a new StreamConsumer starting from a ClientConfig.
Creates a client from a client configuration and a client context.
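A sketch of explicit-context construction; DefaultConsumerContext stands in here for a custom ConsumerContext implementation:

use rdkafka::config::ClientConfig;
use rdkafka::consumer::{DefaultConsumerContext, StreamConsumer};

let consumer: StreamConsumer<DefaultConsumerContext> = ClientConfig::new()
    .set("bootstrap.servers", "localhost:9092")
    .set("group.id", "example-group")
    .create_with_context(DefaultConsumerContext)
    .expect("consumer creation failed");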