See also the rdkafka-sys changelog.
- Add support for transactional producers. The new methods are
  `Producer::init_transactions`, `Producer::begin_transaction`,
  `Producer::commit_transaction`, `Producer::abort_transaction`, and
  `Producer::send_offsets_to_transaction`. Thanks to @roignpar for the
  implementation.
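  As an illustration, a transactional send might look like the following
  sketch. The broker address, `transactional.id` value, and topic name are
  placeholders, and error handling is elided; this is not runnable without a
  Kafka broker.

  ```rust
  use std::time::Duration;

  use rdkafka::config::ClientConfig;
  use rdkafka::producer::{FutureProducer, FutureRecord, Producer};

  async fn produce_transactionally() -> Result<(), rdkafka::error::KafkaError> {
      // `transactional.id` must be set for the transactional methods to work.
      let producer: FutureProducer = ClientConfig::new()
          .set("bootstrap.servers", "localhost:9092")
          .set("transactional.id", "example-transactional-id")
          .create()?;

      producer.init_transactions(Duration::from_secs(10))?;
      producer.begin_transaction()?;

      // Sends within the transaction are not visible to read-committed
      // consumers until the transaction commits.
      let _ = producer
          .send(
              FutureRecord::to("example-topic").payload("payload").key("key"),
              Duration::from_secs(0),
          )
          .await;

      producer.commit_transaction(Duration::from_secs(10))?;
      Ok(())
  }
  ```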
- **Breaking change.** Rename `RDKafkaError` to `RDKafkaErrorCode`. This
  makes space for the new `RDKafkaError` type, which mirrors the
  `rd_kafka_error_t` type added to librdkafka in v1.4.0.

  This change was made to reduce long-term confusion by ensuring the types in
  rust-rdkafka map to types in librdkafka as directly as possible. The
  maintainers apologize for the difficulty in upgrading through this change.
- **Breaking change.** Rework the consumer APIs to fix several bugs and
  design warts:

  - Rename `StreamConsumer::start` to `StreamConsumer::stream`, though the
    former name will be retained as a deprecated alias for one release to
    ease the transition. The new name better reflects that the method is a
    cheap operation that can be called repeatedly and in multiple threads
    simultaneously.

  - Remove the `StreamConsumer::start_with` and
    `StreamConsumer::start_with_runtime` methods.

    There is no replacement in rust-rdkafka itself for the
    `no_message_error` parameter. If you need this message, use a downstream
    combinator like `tokio_stream::StreamExt::timeout`.

    There is no longer a need for the `poll_interval` parameter to these
    methods. Message delivery is now entirely event driven, so no time-based
    polling occurs.

    To specify an `AsyncRuntime` besides the default, specify the desired
    runtime type as the new `R` parameter of `StreamConsumer` when you
    create it.

  - Remove the `Consumer::get_base_consumer` method, as accessing the
    `BaseConsumer` that underlay a `StreamConsumer` was dangerous.

  - Return an `&Arc<C>` from `Client::context` rather than an `&C`. This is
    expected to cause very little breakage in practice.

  - Move the `BaseConsumer::context` method to the `Consumer` trait, so that
    it is available when using the `StreamConsumer` as well.
- **Breaking change.** Rework the producer APIs to fix several design warts:

  - Remove the `FutureProducer::send_with_runtime` method. Use the `send`
    method instead. The `AsyncRuntime` to use is determined by the new `R`
    type parameter to `FutureProducer`, which you can specify when you
    create the producer.

    This change makes the `FutureProducer` mirror the redesigned
    `StreamConsumer`. It should have no impact on users who use the default
    runtime.

  - Move the `producer::base_producer::{ProducerContext, DefaultProducerContext}`
    types out of the `base_producer` module and into the `producer` module
    directly, to match the `consumer` module layout.

  - Move the `client`, `in_flight_count`, and `flush` methods inherent to
    all producers to a new `Producer` trait. This trait is analogous to the
    `Consumer` trait.
- **Breaking change.** Several `TopicPartitionList`-related methods now
  return `Result<T, KafkaError>` rather than `T`:

  - `TopicPartitionListElem::set_offset`
  - `TopicPartitionList::from_topic_map`
  - `TopicPartitionList::add_partition_offset`
  - `TopicPartitionList::set_all_offsets`

  This was necessary to properly throw errors when an `Offset` passed to one
  of these methods is representable in Rust but not in C.
- Support end-relative offsets via `Offset::OffsetTail`.
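  For illustration, a minimal sketch that assigns an offset five messages
  before the end of a partition. The topic name is a placeholder; the `?`
  surfaces the new `Result` from `set_all_offsets` when an offset is not
  representable in C.

  ```rust
  use rdkafka::topic_partition_list::{Offset, TopicPartitionList};

  fn tail_assignment() -> Result<TopicPartitionList, rdkafka::error::KafkaError> {
      let mut tpl = TopicPartitionList::new();
      tpl.add_partition("example-topic", 0);
      // Offset::OffsetTail(n) means "n messages before the end of the
      // partition".
      tpl.set_all_offsets(Offset::OffsetTail(5))?;
      Ok(tpl)
  }
  ```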
- Fix stalls when using multiple `MessageStream`s simultaneously.

  Thanks to @Marwes for discovering the issue and contributing the initial
  fix.
- Add a convenience method, `StreamConsumer::recv`, to yield the next
  message from a stream.

  Thanks again to @Marwes.
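  A sketch of a consume loop using `recv`, assuming a configured
  `StreamConsumer` (constructing one requires a broker, so it is omitted
  here):

  ```rust
  use rdkafka::consumer::StreamConsumer;
  use rdkafka::message::Message;

  async fn consume_loop(consumer: &StreamConsumer) {
      // `recv` awaits the next message; it can be called repeatedly without
      // recreating a stream.
      loop {
          match consumer.recv().await {
              Ok(msg) => println!("received message at offset {}", msg.offset()),
              Err(e) => eprintln!("kafka error: {}", e),
          }
      }
  }
  ```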
- Add a new implementation of `AsyncRuntime` called `NaiveRuntime` that does
  not depend on Tokio.

  This runtime has poor performance, but is necessary to make the crate
  compile when the `tokio` feature is disabled.
- Add the `ClientConfig::get` and `ClientConfig::remove` methods to retrieve
  and remove configuration parameters that were set with `ClientConfig::set`.
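  For example, a sketch of round-tripping a parameter through the new
  accessors (assuming `get` returns the values previously stored with `set`):

  ```rust
  use rdkafka::config::ClientConfig;

  fn main() {
      let mut config = ClientConfig::new();
      config.set("bootstrap.servers", "localhost:9092");

      // `get` retrieves a parameter previously stored with `set`...
      assert_eq!(config.get("bootstrap.servers"), Some("localhost:9092"));

      // ...and `remove` deletes it again.
      config.remove("bootstrap.servers");
      assert_eq!(config.get("bootstrap.servers"), None);
  }
  ```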
- Add the `NativeClientConfig::get` method, which reflects librdkafka's view
  of a parameter value. Unlike `ClientConfig::get`, this method is capable of
  surfacing librdkafka's default value for a parameter.
- Add the missing `req` field, which counts the number of requests of each
  type that librdkafka has sent, to the `Statistics` struct. Thanks,
  @pablosichert!
- **Breaking change.** Introduce a dependency on Tokio for the
  `StreamConsumer` in its default configuration. The new implementation is
  more efficient and does not require a background thread and an extra
  futures executor.
- Introduce the `StreamConsumer::start_with_runtime` and
  `FutureProducer::send_with_runtime` methods. These methods are identical to
  their respective non-`_with_runtime` counterparts, except that they take an
  additional `AsyncRuntime` generic parameter that permits using an
  asynchronous runtime besides Tokio.

  For an example of using rdkafka with the smol runtime, see the new smol
  runtime example.
- **Breaking change.** Remove the `StreamConsumer::stop` method. To stop a
  `StreamConsumer` after calling `start`, simply drop the resulting
  `MessageStream`.
- **Breaking change.** Overhaul the `FutureProducer::send` method. The old
  implementation incorrectly blocked asynchronous tasks with
  `std::thread::sleep`, and the `block_ms` parameter did not behave as
  documented.

  The new implementation:

  - changes the `block_ms: i64` parameter to
    `queue_timeout: impl Into<Timeout>`, to better match how timeouts are
    handled elsewhere in the rust-rdkafka API,
  - depends on Tokio, in order to retry enqueuing after a time interval
    without using `std::thread::sleep`,
  - returns an opaque future that borrows its input, rather than a
    `DeliveryFuture` with no internal references,
  - simplifies the output type of the returned future from
    `Result<OwnedDeliveryResult, oneshot::Canceled>` to
    `OwnedDeliveryResult`.

  Thanks to @FSMaxB-dooshop for discovering the issue and contributing the
  initial fix.
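  The reworked signature might be used as in the following sketch. The topic
  name is a placeholder, and the producer is assumed to be already
  configured; `Timeout::After` bounds how long `send` retries enqueuing when
  the producer's queue is full.

  ```rust
  use std::time::Duration;

  use rdkafka::producer::{FutureProducer, FutureRecord};
  use rdkafka::util::Timeout;

  async fn send_one(producer: &FutureProducer) {
      // The returned future resolves to OwnedDeliveryResult: Ok carries the
      // (partition, offset) pair, Err carries the error and the message.
      let delivery = producer
          .send(
              FutureRecord::to("example-topic").payload("hello").key("k"),
              Timeout::After(Duration::from_secs(5)),
          )
          .await;

      match delivery {
          Ok((partition, offset)) => println!("delivered to {}@{}", partition, offset),
          Err((err, _msg)) => eprintln!("delivery failed: {}", err),
      }
  }
  ```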
- **Breaking change.** Remove the `util::duration_to_millis` function. This
  functionality has been available in the standard library as
  `std::time::Duration::as_millis` for over a year.
- Introduce the `BaseConsumer::split_partition_queue` method to allow
  reading messages from partitions independently of one another.
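  A rough sketch of splitting off one partition's queue. The broker address,
  group id, and topic name are placeholders, and the exact method receiver
  may differ between versions; this is not runnable without a broker.

  ```rust
  use std::sync::Arc;
  use std::time::Duration;

  use rdkafka::config::ClientConfig;
  use rdkafka::consumer::{BaseConsumer, Consumer};

  fn split_example() -> Result<(), rdkafka::error::KafkaError> {
      let consumer: Arc<BaseConsumer> = Arc::new(
          ClientConfig::new()
              .set("bootstrap.servers", "localhost:9092")
              .set("group.id", "example-group")
              .create()?,
      );
      consumer.subscribe(&["example-topic"])?;

      // Messages for partition 0 now arrive only on this queue, independent
      // of polls on the main consumer.
      if let Some(partition_queue) = consumer.split_partition_queue("example-topic", 0) {
          if let Some(message) = partition_queue.poll(Duration::from_secs(1)) {
              let _ = message;
          }
      }
      Ok(())
  }
  ```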
- Implement `Clone`, `Copy`, and `Debug` for `CommitMode`.
Decouple versioning of rdkafka-sys from rdkafka. rdkafka-sys now has its own changelog and will follow SemVer conventions. (#211)
- Fix build on docs.rs.
- Upgrade to the async/await ecosystem, including `std::future::Future`,
  v0.3 of the futures crate, and v0.2 of Tokio. The minimum supported Rust
  version is now Rust 1.39. Special thanks to @sd2k and @dbcfd. (#187)

  The main difference is that functions that previously returned
  `futures01::Future<Item = T, Error = E>` now return
  `std::future::Future<Output = Result<T, E>>`. In the special case when the
  error was `()`, the new signature is further simplified to
  `std::future::Future<Output = T>`. Functions that return `future::Stream`s
  have had the analogous transformation applied.
- Implement `Send` and `Sync` on `BorrowedMessage`, so that holding a
  reference to a `BorrowedMessage` across an await point is possible. (#190)
- Implement `Sync` on `OwnedHeaders`, which applies transitively to
  `OwnedMessage`, so that holding a reference to an `OwnedMessage` across an
  await point is possible. (#203)
- Bump librdkafka to v1.3.0. (#202)
- Change the signature of `ConsumerContext::commit_callback` so that the
  offsets are passed via a safe `TopicPartitionList` struct, and not a raw
  `*mut rdkafka_sys::RDKafkaPartitionList` pointer. Thanks, @scrogson! (#198)
- Fix CMake build on Windows when debug information is enabled. (#194)
- Add a client for Kafka's Admin API, which allows actions like creating and deleting Kafka topics and changing configuration parameters. (#122)
- Fix compilation on ARM, and ensure it stays fixed by adding an ARM builder to CI. (#134, #162)
- Stop automatically generating librdkafka bindings. Platform-independent bindings are now checked in to the repository. (#163)
- Move zstd compression support behind the `zstd` feature flag. (#163)
- Remove build-time dependency on bindgen, clang, and libclang. (#163)
- Support `Consumer::pause` and `Consumer::resume`. (#167)
- Expose the `message_queue_nonempty` callback, which allows clients to put
  their poll thread to sleep and be woken up when new data arrives. (#164)
- Implement `IntoOpaque` for `Arc<T>`. (#171)
- Add `Consumer::seek` method. (#172)
- Support building with Microsoft Visual C++ (MSVC) on Windows. (#176)
- Bump librdkafka to v1.2.2. (#177)
- Run tests against multiple Kafka versions in CI. (#182)
- Standardize feature names. All feature names now use hyphens instead of underscores, as is conventional, though the old names remain for backwards compatibility. (#183)
- Optionalize libz via a new `libz` feature. The new feature is a default
  feature for backwards compatibility. (#183)
- Better attempt to make build systems agree on what version of a dependency
  to compile and link against, and document this hazard. (#183)
- Add librdkafka 1.0 support
- Automatically generate librdkafka bindings
- Use updated tokio version in asynchronous_processing example
- Add FreeBSD support
- Add `offsets_for_times` method
- Add `committed_offsets` method
- Fix ordering of generics in `FutureProducer::send`
- Add method for storing multiple offsets
- Upgrade librdkafka to 0.11.6
- Add missing documentation warning.
- Add new experimental producer API. Instead of taking key, value, and
  timestamp directly, producers now receive them in a `ProducerRecord`,
  which allows optional arguments to be specified using the builder pattern.
- Add message headers support.
- Upgrade tokio-core to tokio in async example, remove futures-cpupool.
- `MessageStream` is now `Send` and `Sync`
- Upgrade librdkafka to 0.11.4
- Added iterator interface to the `BaseConsumer`.
- Change timeout to more Rust-idiomatic `Option<Duration>`.
- Add `external_lz4` feature to use an external lz4 library instead of the
  one built into librdkafka. Disabled by default.
- Mark all `from_ptr` methods as unsafe.
- Remove `Timestamp::from_system_time` and implement the `From` trait
  instead.
- Rename `Context` to `ClientContext`.
- Rename `Empty(...)Context` to `Default(...)Context`.
- Use default type parameters for the context of `Client`, producers, and
  consumers, with `Default(...)Context` set as the default one.
- Increase default buffer size in `StreamConsumer` from 0 to 10 to reduce
  context switching.
- Upgrade to librdkafka 0.11.3
- Add `send_copy_result` method to `FutureProducer`
- Make `PollingProducer` methods public
- Rename `PollingProducer` to `ThreadedProducer`
- Remove `TopicConfig` since librdkafka supports default topic configuration
  directly in the top-level configuration
- Rename `DeliveryContext` to `DeliveryOpaque`
- Add `IntoOpaque` trait to support different opaque types.
- Fix regression in producer error reporting (#65)
- Split producer.rs into multiple files
- Both producers now return the original message after failure
- `BaseConsumer` returns an `Option<Result>` instead of a `Result<Option>`
- Upgrade to librdkafka 0.11.1
- Enable dynamic linking via feature
- Refactor BaseConsumer, which now implements the Consumer trait directly
- A negative timestamp will now automatically be reported as a `NotAvailable` timestamp
- Point rdkafka-sys to latest librdkafka master branch
- Add producer.flush and producer.in_flight_count
- Add max block time for FutureProducer
- Fix memory leak during consumer error reporting
- Fix memory leak during producer error reporting
- Upgrade librdkafka to 0.11.0.
- `FutureProducer::send_copy` will now return a `DeliveryFuture` directly.
- TPL entries now also export errors.
- `KafkaError` is now `Clone` and `Eq`.
- Fix flaky tests.
- Support direct creation of OwnedMessages.
- The topic partition list object from librdkafka is now completely accessible from Rust.
- The test suite will now run both unit tests and integration tests in valgrind, and it will also check for memory leaks.
- rdkafka-sys will use the system librdkafka if it's already installed.
- rdkafka-sys will verify that the crate version corresponds to the librdkafka version during the build.
- Timestamp is now Copy.
- Message has been renamed to BorrowedMessage. Borrowed messages can be transformed into owned messages. Both implement the new Message trait.
- Improved error enumerations.
- Fix memory access bug in statistics callback.
- Fix memory leak in topic partition list.
- Message lifetimes are now explicit (issue #48)
- Consumer commit callback
- Add configurable poll timeout
- Add special error code for message not received within poll time
- Add topic field for messages
- Make the topic partition list optional for consumer commit
- Add `store_offset` to consumer
- Add at-least-once delivery example
- OpenSSL dependency optional
- librdkafka 0.9.5
- Fix termination sequence
- Integration tests running in docker and valgrind
- Producer topics are not needed anymore
- Implement `Clone` for `BaseProducerTopic`
- Add timestamp support
- librdkafka 0.9.4
- Add client statistics callback and parsing
- Asynchronous message processing example based on tokio
- More metadata for consumers
- Watermark API
- First iteration of integration test suite