Sink bug when using AvroConverter -> Error: Argument 'recordValue' is not valid map format. #493
Comments
@flopetegui thanks for reporting this and adding the required details. Can you provide more information about the version of the sink connector being used?
@flopetegui please disable bulk mode and see whether the issue still repros: connect.cosmos.sink.bulk.enabled=false
I am using v1.6.0 of the plugin, downloaded from Confluent Hub. @xinlian12 I added the property as you instructed. It still failed, but with a different message.
The error indicates a potential issue with the ObjectMapper in the connector. The Avro message is in JSON format, but for nullable fields it includes the type in the output. For example, the schema
results in the JSON message
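The schema and message from the original report were not captured in this thread, but the behavior described matches Avro's standard JSON encoding of unions: a nullable field declared as `["null", "string"]` is wrapped in its branch type when non-null. A hypothetical illustration (record and field names invented):

```json
{
  "type": "record",
  "name": "Company",
  "fields": [
    {"name": "companyId", "type": "string"},
    {"name": "name", "type": ["null", "string"], "default": null}
  ]
}
```

For a non-null `name`, this produces a JSON message like

```json
{"companyId": "c-123", "name": {"string": "Acme"}}
```

rather than the plain `{"companyId": "c-123", "name": "Acme"}` that an ObjectMapper-based sink might expect.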
Thanks @flopetegui, I agree with you; this seems related to the ObjectMapper used in the SDK. I will pick up this issue in the next 2-3 weeks.
Hi, we are facing the same issue. Is there any update on this? Thanks.
@flopetegui / @ilyasdresden due to some other higher-priority work items, this issue will likely be picked up sometime in March.
@xinlian12, thank you for the information and the option. Unfortunately, it would not work for us because we do not own the topic.
Same here with 1.6.0. Is there any update on this? Is it being worked on this month? Thanks. We are using Avro.
I had the same issue with the Cosmos DB sink 1.6.0, with base image confluentinc/cp-kafka-connect-base:7.3.2.
I think I found the issue. I compared version 1.5.0 with version 1.7.0. This can be seen in version 1.7.0 at lines 110-119 of the CosmosDbSinkTask.
To fix it, I suggest that record.value() be replaced with the recordValue, or that the for loop from version 1.5.0 be reintroduced. If help is required, we can be reached at marko.oljaca@sva.de, since this version is critical for one of our customers.
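The mismatch described above can be sketched as follows. This is a minimal stdlib-only sketch, not the actual connector code: the `Struct` stand-in, the field name, and the helper are assumptions illustrating why a raw Kafka Connect Struct fails an instanceof-Map precondition while the converted map value passes it.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the precondition reported from BulkWriter.getPartitionKeyValue:
// for Avro records, record.value() is a Kafka Connect Struct, not a Map,
// so passing the raw value fails the check; the converted recordValue
// (a Map built from the Struct) passes it.
public class RecordValueCheck {

    // Stand-in for org.apache.kafka.connect.data.Struct (assumption).
    static class Struct {}

    static void checkArgument(boolean expression, String message) {
        if (!expression) {
            throw new IllegalArgumentException(message);
        }
    }

    public static void main(String[] args) {
        Object rawValue = new Struct();                    // what record.value() returns
        Map<String, Object> recordValue = new HashMap<>(); // the converted value
        recordValue.put("companyId", "c-123");             // hypothetical field

        System.out.println(rawValue instanceof Map);    // prints false
        System.out.println(recordValue instanceof Map); // prints true

        // Passing the converted value satisfies the precondition:
        checkArgument(recordValue instanceof Map,
                "Argument 'recordValue' is not valid map format.");
    }
}
```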
@maoljaca thanks for the suggestion, I have tried fixing this bug in the PR below - would appreciate it if you can take a look (just in case I missed anything), thanks - #503
@kushagraThapar I will test the version ASAP. Thanks for your PR! I'm still seeing some bugs related to very nested Avro, but that is not related to this issue. I suggest closing this issue. Thanks and best regards
About my current issue with the nested Avro, I have the following idea (I tried to open a discussion but it didn't work, due to permissions I think): currently, https://github.com/microsoft/kafka-connect-cosmosdb/blob/v1.7.1/src/main/java/com/azure/cosmos/kafka/connect/sink/StructToJsonMap.java handles the transformation of a Struct into a nested structure of Lists and Maps. The recursion seems fine, but more complex objects still crash (we currently have an issue with a large nested object); some nested structs are not properly resolved. We currently use a workaround: a Converter converts the Avro Struct to plain JSON, and we write that straight to Cosmos DB via the sink connector. The Converter is readily available and can transform a Struct to plain JSON, which could then be written to Cosmos DB. Benefits:
Drawbacks:
Here is a code snippet:
The output is:
What do you think about it? Best, Marko
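The converter-based workaround described above can be sketched with Kafka Connect's built-in JsonConverter, which serializes a Struct to plain JSON without walking it by hand. This is a hedged sketch under stated assumptions: the schema and field names are invented, and it only illustrates the general approach, not the commenter's actual snippet.

```java
import java.nio.charset.StandardCharsets;
import java.util.Collections;

import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaBuilder;
import org.apache.kafka.connect.data.Struct;
import org.apache.kafka.connect.json.JsonConverter;

// Sketch: serialize a Connect Struct to plain JSON with JsonConverter
// (schemas.enable=false suppresses the schema envelope), instead of
// converting it recursively to a Map of Lists and Maps.
public class StructToPlainJson {
    public static void main(String[] args) {
        Schema schema = SchemaBuilder.struct()
                .field("companyId", Schema.STRING_SCHEMA)      // hypothetical fields
                .field("name", Schema.OPTIONAL_STRING_SCHEMA)
                .build();
        Struct value = new Struct(schema)
                .put("companyId", "c-123")
                .put("name", "Acme");

        JsonConverter converter = new JsonConverter();
        // false for the second argument: configure as a value converter
        converter.configure(Collections.singletonMap("schemas.enable", "false"), false);

        byte[] json = converter.fromConnectData("company-response-topic", schema, value);
        // prints the struct as plain JSON, e.g. {"companyId":"c-123","name":"Acme"}
        System.out.println(new String(json, StandardCharsets.UTF_8));
    }
}
```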
@maoljaca thanks for verifying the solution. Regarding the converter, I agree: your solution works, but it is kind of hacky. Can you share the complex object on which the current StructToJsonMap conversion fails? Maybe I can try to fix the issue in the Cosmos connector. I will close this issue; can you please create a new GitHub issue for this problem?
I'll create a new one, and I will try to propose a solution for it as well.
@maoljaca appreciate it, thank you!
Description
WorkerSinkTask fails to write messages to Cosmos DB, citing that the record value is not in a valid 'map' format.
The sink connector is pulling from a Kafka topic that uses the Avro serializer and Schema Registry.
Expected Behavior
The sink connector should publish messages from the topic to the Cosmos DB container.
Reproduce
Here is the Avro schema from the .avsc file.
The Java code snippet that produces the message to the 'company-response-topic' topic:
The Cosmos sink configuration
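The reporter's actual configuration was not captured in this text. For orientation, a hypothetical sink configuration of the kind described (endpoint, key, database, and container values are placeholders) might look like:

```json
{
  "name": "cosmosdb-sink",
  "config": {
    "connector.class": "com.azure.cosmos.kafka.connect.sink.CosmosDBSinkConnector",
    "topics": "company-response-topic",
    "value.converter": "io.confluent.connect.avro.AvroConverter",
    "value.converter.schema.registry.url": "http://schema-registry:8081",
    "connect.cosmos.connection.endpoint": "https://<account>.documents.azure.com:443/",
    "connect.cosmos.master.key": "<key>",
    "connect.cosmos.databasename": "<database>",
    "connect.cosmos.containers.topicmap": "company-response-topic#<container>",
    "connect.cosmos.sink.bulk.enabled": "true"
  }
}
```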
The Cosmos DB configuration includes a partition key '/companyId'.
Here are the logs indicating the error from WorkerSinkTask.
The logs acknowledge that a record needs to be written, but the task then fails at BulkWriter.getPartitionKeyValue(BulkWriter.java:103), where it trips the precondition
checkArgument(recordValue instanceof Map, "Argument 'recordValue' is not valid map format.");
Additional Context
If I use JSON instead of Avro, the sink works. This seems like an Avro-specific issue.
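For reference, the difference between the working and failing setups comes down to the value converter (the registry URL is a placeholder):

```properties
# Works: plain JSON payloads, no schema envelope
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=false

# Fails with "Argument 'recordValue' is not valid map format."
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://schema-registry:8081
```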