Fix incorrect serialization of Unicode characters in NewtonSoftJsonSerializer #1508
Currently Akka.NET uses `System.Text.Encoding.Default.GetBytes()` to obtain a "blob" representation of messages that need to be sent outside the local actor system. This is what the .NET documentation has to say regarding the use of `Encoding.Default`:
In the project I'm currently working on, I experienced this problem first-hand: results returned from locally deployed actors presented no problems at all, while results from remote actors had Unicode characters replaced with question marks.
Akka.NET's object serialization should have no effect on the actual content of the message, and I imagine this will cause problems in clusters that have nodes with different default encodings.
Fortunately, the bug is easy to reproduce, for example with a LINQPad script:
Script output:
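A minimal stand-alone sketch of the same failure mode (not the original script; the message string and class name here are illustrative) shows how a machine-dependent default encoding mangles characters it cannot represent. Note that on .NET Framework `Encoding.Default` is the machine's ANSI code page, so the result depends on the node's locale:

```csharp
using System;
using System.Text;

class EncodingRepro
{
    static void Main()
    {
        // A message containing characters outside a typical Western code page.
        var message = "Grüße, 世界";

        // What the serializer currently does: encode with the machine default.
        // On a Windows-1252 machine the CJK characters cannot be represented,
        // so they are silently replaced with '?' in the encoded bytes.
        byte[] viaDefault = Encoding.Default.GetBytes(message);
        Console.WriteLine(Encoding.Default.GetString(viaDefault));

        // UTF-8 round-trips every Unicode character losslessly,
        // regardless of the machine's locale settings.
        byte[] viaUtf8 = Encoding.UTF8.GetBytes(message);
        Console.WriteLine(Encoding.UTF8.GetString(viaUtf8)); // prints "Grüße, 世界"
    }
}
```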
The solution is to use `Encoding.UTF8.GetBytes()` instead. I've built the `src/core/Akka` project with this patch applied and am currently using the resulting DLL to work around this issue in my cluster.
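The change amounts to swapping the encoding used in the serializer's binary conversions. A simplified sketch of what the patched `ToBinary`/`FromBinary` pair looks like (the real `NewtonSoftJsonSerializer` also threads through its `JsonSerializerSettings`, omitted here for brevity):

```csharp
using System;
using System.Text;
using Newtonsoft.Json;

// Simplified sketch, not the full serializer class.
public class NewtonSoftJsonSerializerSketch
{
    public byte[] ToBinary(object obj)
    {
        string json = JsonConvert.SerializeObject(obj);
        // Before: Encoding.Default.GetBytes(json) — machine-dependent result.
        // After: always UTF-8, so every node produces identical bytes.
        return Encoding.UTF8.GetBytes(json);
    }

    public object FromBinary(byte[] bytes, Type type)
    {
        // Decode with the same fixed encoding used on the sending side.
        string json = Encoding.UTF8.GetString(bytes);
        return JsonConvert.DeserializeObject(json, type);
    }
}
```

Since both sides now agree on UTF-8, the bytes on the wire no longer depend on either node's default code page.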