Open
Labels: P2 (priority: needs to be fixed at some point), enhancement (new feature or request), onnx (exporting or loading ONNX models)
Description
Currently, for ONNX export, we map the U4 data type (an unsigned 32-bit integer) to an Int64.
Should we instead be mapping U4 to Uint32 in ONNX? Or is there no support for a Uint32, which is why we are storing it in an Int64?
machinelearning/src/Microsoft.ML.Onnx/OnnxUtils.cs
Lines 329 to 331 in 9643975
```csharp
case DataKind.U4:
    dataType = TensorProto.Types.DataType.Int64;
    break;
```
In the above code, you'll notice the mapping is currently:
| DataKind | ONNX type |
| --- | --- |
| BL | Float |
| TX | String |
| I1 | Int8 |
| U1 | Uint8 |
| I2 | Int16 |
| U2 | Uint16 |
| I4 | Int32 |
| U4 | Int64 ← this one is odd |
| I8 | Int64 |
| U8 | Uint64 |
| R4 | Float |
| R8 | Double |

The BL to Float and U4 to Int64 mappings seem odd.
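For context, onnx.proto does define unsigned integer tensor element types (UINT32 = 12, UINT64 = 13), so a direct mapping appears to be available. Below is a minimal sketch of what the U4 case might look like, assuming the generated C# bindings expose a Uint32 member on TensorProto.Types.DataType; this is a proposal, not the current implementation.

```csharp
// Hypothetical sketch: map ML.NET's U4 (unsigned 32-bit) to ONNX UINT32
// instead of widening it to a signed Int64.
// Assumes TensorProto.Types.DataType.Uint32 exists in the generated bindings
// (onnx.proto defines UINT32 = 12).
case DataKind.U4:
    dataType = TensorProto.Types.DataType.Uint32;
    break;
```

Similarly, onnx.proto defines BOOL = 9, which might be a more natural target for BL than Float, though that would be a separate change.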
@wschin noted we have been mapping U4 to Int64 for the last two releases of WinML: #947 (comment)