Investigative information
Please provide the following:
I'm not sure what the format of a CosmosDBTrigger is, and I'm unable to provide this information for my current setup at this time.
Repro steps
As a function:
Trigger a function with com.azure.core.models.CloudEvent as the type of the OutputBinding in combination with @EventGridOutput.
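For illustration, a minimal sketch of such a function (a rough reconstruction, not the exact code from my project; binding attribute names vary between azure-functions-java-library versions, and all names and app settings here are placeholders):

```java
import com.azure.core.models.CloudEvent;
import com.azure.core.models.CloudEventDataFormat;
import com.azure.core.util.BinaryData;
import com.microsoft.azure.functions.ExecutionContext;
import com.microsoft.azure.functions.OutputBinding;
import com.microsoft.azure.functions.annotation.CosmosDBTrigger;
import com.microsoft.azure.functions.annotation.EventGridOutput;
import com.microsoft.azure.functions.annotation.FunctionName;

public class CloudEventRepro {
    @FunctionName("CosmosToEventGrid")
    public void run(
            @CosmosDBTrigger(name = "items",
                    databaseName = "db",
                    collectionName = "items",
                    connectionStringSetting = "CosmosConnection",
                    createLeaseCollectionIfNotExists = true) String items,
            @EventGridOutput(name = "event",
                    topicEndpointUri = "EventGridTopicEndpoint",
                    topicKeySetting = "EventGridTopicKey") OutputBinding<CloudEvent> event,
            ExecutionContext context) {

        // Setting any CloudEvent is enough: when the worker serializes the
        // output binding value with Gson, the StackOverflowError occurs.
        event.setValue(new CloudEvent(
                "/example/source",
                "Example.Event.Type",
                BinaryData.fromObject(items),
                CloudEventDataFormat.JSON,
                "application/json"));
    }
}
```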
As a standalone issue:
More specifically, it's the serialization of com.azure.core.implementation.util.SerializableContent, as used by com.azure.core.util.BinaryData#fromObject(java.lang.Object), that causes the issue.
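A minimal sketch of the standalone reproduction (assuming Gson on the classpath, mirroring what the worker does reflectively):

```java
import com.azure.core.util.BinaryData;
import com.google.gson.Gson;

import java.util.Collections;

public class StandaloneRepro {
    public static void main(String[] args) {
        // BinaryData.fromObject wraps the payload in the internal
        // SerializableContent type, together with a Jackson-based serializer.
        BinaryData data = BinaryData.fromObject(Collections.singletonMap("hello", "world"));

        // Gson reflects over those internals instead of honouring the Jackson
        // annotations; in my setup this ends in a StackOverflowError.
        System.out.println(new Gson().toJson(data));
    }
}
```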
Expected behavior
The CloudEvent is serialized and sent to the desired output binding.
Actual behavior
The runtime is unusable after one trigger due to a (swallowed?) StackOverflowError, and as a result nothing is sent to the output.
Known workarounds
Create a custom class that implements the CloudEvent spec.
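A rough sketch of that workaround (field names follow the CloudEvents 1.0 spec; everything else here is illustrative):

```java
import java.util.Map;

// Plain POJO following the CloudEvents 1.0 attribute names, so the worker's
// Gson serialization produces a valid envelope without ever touching
// com.azure.core.models.CloudEvent.
public class SimpleCloudEvent {
    public String specversion = "1.0";
    public String id;
    public String source;
    public String type;
    public String subject;
    public String datacontenttype = "application/json";
    public String time;              // ISO-8601 timestamp kept as a plain string
    public Map<String, Object> data; // payload serialized as-is by Gson
}
```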
Related information
Provide any related information
Basically, it all boils down to Jackson vs. Gson. The CloudEvent uses Jackson annotations to define its serialization rules, while the azure-functions-java-worker uses Gson for serialization. Ironically, the last change to RpcUnspecifiedDataTarget.java was switching from Jackson to Gson.
Note that com.azure.core.models.CloudEvent#binaryData is annotated with @JsonIgnore. This property is what causes the StackOverflowError when serializing with Gson.
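The mismatch is easy to see in isolation (a minimal illustration, not taken from the actual project):

```java
import com.fasterxml.jackson.annotation.JsonIgnore;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.google.gson.Gson;

public class AnnotationMismatch {
    static class Example {
        public String visible = "keep";
        @JsonIgnore
        public String hidden = "drop";
    }

    public static void main(String[] args) throws Exception {
        // Jackson honours @JsonIgnore:       {"visible":"keep"}
        System.out.println(new ObjectMapper().writeValueAsString(new Example()));
        // Gson ignores Jackson annotations:  {"visible":"keep","hidden":"drop"}
        System.out.println(new Gson().toJson(new Example()));
    }
}
```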
I'm using Java 8 to dodge the reflection issues caused by the module system introduced in Java 9.
@CosmosDBTrigger as trigger
@EventGridOutput as output binding
Ideally, there would be a consensus on Jackson vs. Gson across the Azure Java projects to prevent such issues in the future.
Furthermore, the CloudEvent implementation doesn't even work with Jackson, because com.azure.core.models.CloudEvent#getData does not return the actual data; instead, it serializes to SerializableContent.
Serializing CloudEvent with Jackson:
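A minimal sketch of that experiment (assuming a CloudEvent built via BinaryData.fromObject and a plain Jackson ObjectMapper; the payload is illustrative):

```java
import com.azure.core.models.CloudEvent;
import com.azure.core.models.CloudEventDataFormat;
import com.azure.core.util.BinaryData;
import com.fasterxml.jackson.databind.ObjectMapper;

import java.util.Collections;

public class JacksonCloudEvent {
    public static void main(String[] args) throws Exception {
        CloudEvent event = new CloudEvent(
                "/example/source",
                "Example.Event.Type",
                BinaryData.fromObject(Collections.singletonMap("hello", "world")),
                CloudEventDataFormat.JSON,
                "application/json");

        // findAndRegisterModules picks up jackson-datatype-jsr310 (a transitive
        // dependency of azure-core) so the OffsetDateTime 'time' field serializes.
        ObjectMapper mapper = new ObjectMapper().findAndRegisterModules();

        // In my setup the data property does not contain {"hello":"world"};
        // it is the serialized SerializableContent wrapper instead.
        System.out.println(mapper.writeValueAsString(event));
    }
}
```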
Output:
Note that the data property is the serialized representation of SerializableContent, exposed through its 'replayable' and 'length' properties, rather than the actual payload.
The CloudEvent implementation for Event Grid is unusable at this point without changes to the azure-sdk-for-java.
Azure/azure-sdk-for-java#36000
Please let me know if you require additional info.