
Revise type mappings in MLNET-to-ONNX conversion #1198

@justinormont

Description


Currently, for ONNX, we are mapping the U4 datatype (an unsigned 32-bit integer) to Int64.

Should we instead be mapping the U4 datatype to Uint32 in ONNX? Or is there no support for Uint32, so we are storing it in an Int64?

case DataKind.U4:
    dataType = TensorProto.Types.DataType.Int64;
    break;
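
If ONNX's native unsigned 32-bit tensor type is acceptable here, the case could instead read roughly as follows. This is only a sketch of the alternative being asked about, assuming the generated TensorProto enum exposes a Uint32 member (onnx-ml.proto defines UINT32 = 12); it has not been tested against the exporter:

case DataKind.U4:
    dataType = TensorProto.Types.DataType.Uint32; // instead of widening to Int64
    break;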

In the above code, you'll notice the mapping is currently:

  • BL to Float
  • TX to String
  • I1 to Int8
  • U1 to Uint8
  • I2 to Int16
  • U2 to Uint16
  • I4 to Int32
  • U4 to Int64 <- This one is odd
  • I8 to Int64
  • U8 to Uint64
  • R4 to Float
  • R8 to Double

The BL to Float and U4 to Int64 mappings seem odd.
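
Similarly, if a boolean tensor type is acceptable to downstream ONNX consumers, the BL case could map to ONNX's Bool rather than Float. Again only a sketch, assuming the generated TensorProto enum exposes a Bool member (onnx-ml.proto defines BOOL = 9):

case DataKind.BL:
    dataType = TensorProto.Types.DataType.Bool; // instead of Float
    break;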

@wschin noted we have been mapping U4 to Int64 for the last two releases of WinML: #947 (comment)


Labels

  • P2: Priority of the issue for triage purposes; needs to be fixed at some point
  • enhancement: New feature or request
  • onnx: Exporting ONNX models or loading ONNX models
