Conversation

@LeiWang1999
Contributor

Previously, float constants in codegen were always emitted in scientific decimal format, e.g.:

bfloat16_t(3.487723e-05f);

Because the constant is printed in decimal and then re-parsed, this could introduce slight rounding differences relative to the actual binary representation. We now emit the value in hexadecimal floating-point format (std::hexfloat) to preserve the exact binary value, and additionally include the decimal form as a comment for readability:

bfloat16_t(0x1.2492492492492p-15f /*3.487723e-05*/)
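The round-trip difference is easy to reproduce outside the codegen. A minimal Python sketch, using the exact hexfloat value from the example above (the 7-significant-digit decimal format stands in for the old codegen output; `bits` is a helper defined here, not part of the PR):

```python
import struct

def bits(x):
    """Raw IEEE-754 bit pattern of a double, for exact comparison."""
    return struct.unpack("<Q", struct.pack("<d", x))[0]

# The exact binary value emitted by the new codegen.
v = float.fromhex("0x1.2492492492492p-15")

# Old behavior: print with limited decimal precision, then re-parse.
dec = "%.6e" % v                   # '3.487723e-05'
reparsed_dec = float(dec)

# New behavior: hexadecimal floating-point preserves the value exactly.
reparsed_hex = float.fromhex(v.hex())

assert bits(reparsed_hex) == bits(v)   # hexfloat round-trips exactly
assert bits(reparsed_dec) != bits(v)   # 7 decimal digits lose low bits
```

A double needs up to 17 significant decimal digits to round-trip, so the 7-digit scientific form can land on a neighboring representable value, while the hexfloat form encodes the mantissa bits directly.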

@LeiWang1999 LeiWang1999 changed the title [TIR][CUDA] Enhance Precision CUDA Codegen [TIR][CUDA] Preserve float precision in codegen with hexfloat output Sep 16, 2025
@tqchen
Member

tqchen commented Sep 16, 2025

need to update the testcase and we are good to go

@LeiWang1999
Contributor Author

need to update the testcase and we are good to go

Added a test case at tests/python/codegen/test_target_codegen_cuda.py: https://github.com/apache/tvm/pull/18320/files#diff-de74af0a4254367528294a58410a1c2488d014d3fafdbd8446f88b621358ad4dR804-R820

@tqchen
Member

tqchen commented Sep 18, 2025

I meant that we should fix https://ci.tlcpack.ai/blue/organizations/jenkins/tvm-gpu/detail/PR-18320/2/pipeline, which is likely caused by the expected code still being in the original form.

@tqchen tqchen merged commit 5ee38ea into apache:main Sep 19, 2025
10 checks passed