
Add notes on bit depth for ints and floats #10028

Merged
mhilbrunner merged 2 commits into godotengine:master from BrianBHuynh:patch-1 on Oct 4, 2024

Conversation

BrianBHuynh
Contributor

Added a small note about the bit depth of integers and floats in Godot's shading language, as it is not explicitly stated anywhere.

The bit depths of integers and floats in GDScript and Godot's shading language differ, which can cause precision to be lost in calculations when values are set from GDScript, since floats/ints in GDScript are 64-bit rather than 32-bit (the standard in GLSL ES 3.0).
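
As a minimal illustration (not part of the proposed docs text), the truncation can be observed entirely in GDScript by round-tripping a value through a PackedFloat32Array, which stores 32-bit floats and therefore matches what a shader float uniform receives:

```gdscript
# Minimal sketch: GDScript floats are 64-bit doubles, but PackedFloat32Array
# stores 32-bit floats, matching what a shader float uniform receives.
var t: float = Time.get_unix_time_from_system()   # 64-bit Unix time
var t32: float = PackedFloat32Array([t])[0]       # round-trip through 32 bits
print(t - t32)  # Non-zero: near 1.7e9, adjacent 32-bit floats are ~128 s apart
```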

While most users are unlikely to run into problems due to this difference in bit depth, it can cause mathematical errors in edge cases. As stated by previous contributors, no error is thrown if types do not match when setting a shader uniform. This includes GDScript floats being set as Godot shader floats (which may not be intuitive).

Examples of problems this may cause:
When two floats are set with Time.get_unix_time_from_system() in GDScript (which returns a 64-bit float) a few seconds apart, they will compare as equal in the shader, and subtracting one from the other will yield 0.0, due to the 32-bit depth of floats in the shader language. A sketch of this failure mode follows.
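
To make the failure mode concrete, here is a minimal sketch in Godot's shading language (the uniform names are hypothetical, not from the PR):

```glsl
// Minimal sketch: both uniforms were set from 64-bit GDScript floats a few
// seconds apart, but the shader receives them as 32-bit floats.
shader_type canvas_item;

uniform float time_a;
uniform float time_b;

void fragment() {
    // At current Unix-time magnitudes (~1.7e9), adjacent 32-bit floats are
    // ~128 seconds apart, so time_b - time_a is often exactly 0.0 here.
    COLOR = vec4(vec3(time_b - time_a), 1.0);
}
```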

This is not intuitive to debug without documentation: when the values are read back with the get function in GDScript, they still subtract correctly (the second float remains greater than the first), even though they won't subtract correctly within the shader. A sketch of a common workaround follows.
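
A common workaround (a sketch with hypothetical node and uniform names, not something prescribed by this PR) is to do the high-precision subtraction in GDScript, where 64-bit floats are available, and send only the small offset to the shader:

```gdscript
# Minimal sketch (hypothetical names): keep the big 64-bit value on the
# GDScript side and pass only a small offset, which fits safely in 32 bits.
extends Sprite2D

var _start_time: float = Time.get_unix_time_from_system()

func _process(_delta: float) -> void:
    # The 64-bit subtraction happens here, before the 32-bit conversion.
    var elapsed := Time.get_unix_time_from_system() - _start_time
    material.set_shader_parameter("elapsed_seconds", elapsed)
```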

Additionally, some functions mention 32-bit floating-point numbers (see packHalf2x16, for example), yet I could not find anything stating that the default bit depth of ints/floats is 32 bits, not 64 bits as in GDScript.

BrianBHuynh added 2 commits: Added notes on bit depth for ints and floats · Fixed broken table
@AThousandShips AThousandShips changed the title Added notes on bit depth for ints and floats Add notes on bit depth for ints and floats Oct 2, 2024
@skyace65 skyace65 added the enhancement, area:manual, topic:shaders, and cherrypick:4.3 labels Oct 3, 2024
@mhilbrunner mhilbrunner merged commit 67e37b5 into godotengine:master Oct 4, 2024
1 check passed
@mhilbrunner
Member

Merged. Thanks and congrats on your first merged contribution!

mhilbrunner pushed a commit that referenced this pull request Oct 4, 2024
* Added notes on bit depth for ints and floats

@mhilbrunner
Member

Cherry-picked to 4.3 in #10038.

@BrianBHuynh
Contributor Author

LESSGOOOO FIRST COMMIT

@BrianBHuynh BrianBHuynh deleted the patch-1 branch October 4, 2024 03:39