
A Sneak Peek of MLIR in TensorFlow

I saw MLIR-related code in the main TensorFlow repo at the end of June 2019, so I spent some time learning what can be used now (as of July 11th, 2019).



  1. A Sneak Peek of MLIR in TensorFlow
     Koan-Sin Tan <freedom@computer.org>
     Hsinchu Coding Serfs Meeting, July 11th, 2019
  2. Why MLIR
     https://medium.com/tensorflow/mlir-a-new-intermediate-representation-and-compiler-framework-beba999ed18d
  3. • MLIR is intended to be a hybrid IR which can support multiple different requirements in a unified infrastructure. For example, this includes:
       • The ability to represent all TensorFlow graphs, including dynamic shapes, the user-extensible op ecosystem, TensorFlow variables, etc.
       • Optimizations and transformations typically done on a TensorFlow graph, e.g. in Grappler.
       • Quantization and other graph transformations done on a TensorFlow graph or the TF Lite representation.
       • Representation of kernels for ML operations in a form suitable for optimization.
       • Ability to host high-performance-computing-style loop optimizations across kernels (fusion, loop interchange, tiling, etc.) and to transform memory layouts of data.
       • Code generation "lowering" transformations such as DMA insertion, explicit cache management, memory tiling, and vectorization for 1D and 2D register architectures.
       • Ability to represent target-specific operations, e.g. the MXU on TPUs.
     • Non-goals:
       • low-level machine code generation algorithms (like register allocation and instruction scheduling)
       • MLIR as a source language that end-users would themselves write kernels in, analogous to CUDA C++
     https://github.com/tensorflow/mlir/blob/master/README.md
  4. • Entire TensorFlow graph: nope, the "tf" dialect isn't public yet
     • Initial MLIR code appeared in the TensorFlow repo on June 28th, 2019
     • Early TF, TFLite, and XLA support: floating-point MobilenetV1 TF pb -> TFLite flatbuffer works
       • No, quantized ones don't work yet, although many components are there
     • Simple quant, fxp, affine, and vector code is there
       • So it's possible to start exploring tiling and other techniques with affine, vector, and other dialects
     • More GPU support, including Vulkan SPIR-V
     • Low-level code generation
       • MLIR relies on LLVM and other existing backends
     • Where to start
       • MLIR's git repo has links to 3 slide decks, one of them a tutorial from Euro-LLVM 2019
       • Docs for the Toy language and the linear algebra dialect
       • TensorFlow MLIR: https://github.com/tensorflow/tensorflow/tree/master/tensorflow/compiler/mlir
  5. TF .pb -> TFLite .tflite
     • build TensorFlow MLIR related binaries
       bazel build --config opt tensorflow/compiler/mlir/...
     • get your model, e.g.,
       wget http://download.tensorflow.org/models/mobilenet_v1_2018_02_22/mobilenet_v1_1.0_224.tgz
     • convert it
       ./bazel-bin/tensorflow/compiler/mlir/lite/tf_tfl_translate -tf-input-shapes=1,224,224,3 -tf-input-data-types=DT_FLOAT -tf-output-arrays=MobilenetV1/Predictions/Reshape_1 /tmp/mobilenet_v1_1.0_224_frozen.pb --tf-input-arrays=input -o /tmp/foo.tflite
     • Yes, it works like a charm. No, not for the quantized one: neither
       ./bazel-bin/tensorflow/compiler/mlir/lite/tf_tfl_translate -tf-input-shapes=1,224,224,3 -tf-input-data-types=DT_QUINT8 -tf-output-arrays=MobilenetV1/Predictions/Reshape_1 /tmp/mobilenet_v1_1.0_224_quant_frozen.pb --tf-input-arrays=input -o /tmp/bar.tflite
       nor
       ./bazel-bin/tensorflow/compiler/mlir/lite/tf_tfl_translate -tf-input-shapes=1,224,224,3 -tf-input-data-types=DT_FLOAT -tf-output-arrays=MobilenetV1/Predictions/Reshape_1 /tmp/mobilenet_v1_1.0_224_quant_frozen.pb --tf-input-arrays=input -o /tmp/bar.tflite --tf-inference-type=TF_QUINT8
       works
  6. How does the converter work?
     • Import from GraphDef, in .pb or .pbtxt format, into MLIR.
     • Raise the control-flow graph: converts the TF Control Flow dialect to the TF dialect.
     • The Canonicalization pass iteratively applies canonicalization transformations in a greedy way until no further changes occur. Canonicalization includes constant folding (see the sketch below).
     • The Legalize pass converts TensorFlow operations to TensorFlow Lite ones. The operations that cannot be mapped to the TensorFlow Lite dialect are left as TensorFlow operations. Unsupported op handling follows the proposed TFLite mechanism.
     • Optimizations are performed in both the TF and TFLite dialects, aiming for small size and high performance (among the core value propositions of TensorFlow Lite models).
     • The Export pass writes out the TensorFlow Lite FlatBuffer format. This pass operates on the MLIR TensorFlow Lite dialect and is a simple/direct translation.
     https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/mlir/lite/README.md
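     To make the canonicalization/constant-folding step concrete, here is a minimal hypothetical sketch (not taken from the deck; op attributes are simplified): a tf.Add of two constants can be folded into a single constant.

     func @fold_consts() -> tensor<f32> {
       %0 = "tf.Const"() {value = dense<1.000000e+00> : tensor<f32>} : () -> tensor<f32>
       %1 = "tf.Const"() {value = dense<2.000000e+00> : tensor<f32>} : () -> tensor<f32>
       %2 = "tf.Add"(%0, %1) : (tensor<f32>, tensor<f32>) -> tensor<f32>
       return %2 : tensor<f32>
     }
     // after canonicalization/constant folding, roughly:
     func @fold_consts() -> tensor<f32> {
       %cst = "tf.Const"() {value = dense<3.000000e+00> : tensor<f32>} : () -> tensor<f32>
       return %cst : tensor<f32>
     }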
  7. tf-mlir-translate
     • graphdef -> mlir
     $ ./bazel-bin/tensorflow/compiler/mlir/tensorflow/tf-mlir-translate --help
     OVERVIEW: MLIR translation driver
     USAGE: tf-mlir-translate [options] <input file>
     OPTIONS:
     Color Options:
       --color - Use colors in output (default=autodetect)
     General options:
       --mlir-max-pattern-match-iterations=<uint> - Max number of iterations scanning the functions for pattern match
       --mlir-pretty-debuginfo - Print pretty debug info in MLIR output
       --mlir-print-debuginfo - Print debug info in MLIR output
       -o=<filename> - Output filename
       --remarks-yaml-string-table
       Translation to perform:
         --deserialize-spirv - deserialize-spirv
         --graphdef-to-mlir - graphdef-to-mlir
         --graphdef-to-splatted-mlir - graphdef-to-splatted-mlir
         --mlir-to-graphdef - mlir-to-graphdef
         --mlir-to-llvmir - mlir-to-llvmir
         --mlir-to-nvvmir - mlir-to-nvvmir
         --serialize-spirv - serialize-spirv
         --test-only-mlir-to-tf-nodedef - test-only-mlir-to-tf-nodedef
       --tf-debug-info=<string> - Path to the debug info file of the input graph def.
       --tf-inference-type=<string> - Sets the type of real-number arrays in the output file. Only allows float and quantized types
       --tf-input-arrays=<string> - Input tensor names, separated by ','
       --tf-input-data-types=<string> - Input tensor data types, separated by ','
       --tf-input-max-values=<string> - Sets the upper bound of the input data. Separated by ','; Each entry in the list should match an entry in -tf-input-arrays. This is used when -tf-inference-type is a quantized type.
       --tf-input-min-values=<string> - Sets the lower bound of the input data. Separated by ','; Each entry in the list should match an entry in -tf-input-arrays. This is used when -tf-inference-type is a quantized type.
       --tf-input-shapes=<string> - Input tensor shapes. Shapes for different tensors are separated by ':', and dimension sizes for the same tensor are separated by ','
       --tf-output-arrays=<string> - Output tensor names, separated by ','
       --tf-prune-unused-nodes - Prune unused nodes in the input graphdef
       --time-trace-granularity=<uint> - Minimum time granularity (in microseconds) traced by time profile
  8. 8. _tf dialect ./bazel-bin/tensorflow/compiler/mlir/tensorflow/tf-mlir-translate --graphdef-to-mlir -tf-input- shapes=1,224,224,3 -tf-input-data-types=DT_FLOAT -tf-output-arrays=MobilenetV1/Predictions/Reshape_1 /tmp/ mobilenet_v1_1.0_224_quant_frozen.pb --tf-input-arrays=input |less func @main(%arg0: tensor<1x224x224x3xf32>) -> tensor<1x1001xf32> attributes {tf.entry_function = {inputs = "input", outputs = "MobilenetV1/Predictions/Reshape_1"}} { %0:2 = "_tf.Const"() {device = "", dtype = "tfdtype$DT_FLOAT", name = "MobilenetV1/Conv2d_0/BatchNorm/ beta", value = opaque<"tf", "0x746674656E736F722464747970653A2044545F464C4F41540A74656E736F725F7368617065207B0A202064696D207B0A20202020 73697A653A2033320A20207D0A7D0A74656E736F725F636F6E74656E743A20225C3234335C3335305C3233355C3237375C3234345C3 330305C303335405C323134395C3337353D685F5C3333315C323736315A5C3333303F5C3232305C3232305C303137405C3235325C33 37375C273F5C3331315C32373523405C3231315C3336325C3237335C3237365C3336345C3230365C3234315C3237375C32373054655 C3237375C3237345C3333375C30323140695C3236355C30303040795C3233375C3237373F5C3230346F393F485C3333314D40515C33 32335C3237345C3237375C3230325C3234305C303335405C3233335C3230365C3233353E5C323633525C3337373F5C3030355C32343 25C3032315C3237375C3232305C3230332A5C323734405C3331355C725C3330305C3332345C3230335040235C3336325C3030375C33 30305C3237355C6E5C303235405C323735295C32323440515C3030345C3334325C3237365C333037465C303334405C3236375C33313 15C3236343F5C3232305C3233365C3237335C323736655C3330325440220A"> : tensor<32xf32>} : () -> (tensor<32xf32>, !_tf.control) %1:2 = "_tf.Identity"(%0#0) {T = "tfdtype$DT_FLOAT", _class = ["loc:@MobilenetV1/Conv2d_0/BatchNorm/ beta"], device = "", name = "MobilenetV1/Conv2d_0/BatchNorm/beta/read"} : (tensor<32xf32>) -> (tensor<32xf32>, !_tf.control) %2:2 = "_tf.Const"() {device = "", dtype = "tfdtype$DT_FLOAT", name = "MobilenetV1/Conv2d_0/BatchNorm/ gamma", value = opaque<"tf", "0x746674656E736F722464747970653A2044545F464C4F41540A74656E736F725F7368617065207B0A202064696D207B0A20202020 73697A653A2033320A20207D0A7D0A74656E736F725F636F6E74656E743A20225C3330315C3030305C3031373F5C333332776E3F5C3 233365C3334305C3230323F5C30303445643F675C3334344D3F2E345C3031363F425C3032325C3234363F5C313737595C3332353E62 5C303137773F4B5C3334355C3233323E5C3332365C3030326F3F5C323035515C3230303F5C323431665C303236405C3232335C32313 65C3032343F5C3231355C323235753F295C3230345C3232373F3F5C3337305C3236363F5C3237365C323736213F5C3332305C333630 5C323036405C3030345C3237355C3334343E5C3337305C22743F5C3235325C3233355C6E3F5C3031305C3031375C3233323F685C333 5315C3232373F5C3233365C3235317E3F5C303337435C3234343F675C3235326A3F5C32333752733F5C3235325C3335325C3232313F 77565C3233313F5C3030355C3032326C3F5C32313053573F220A"> : tensor<32xf32>} : () -> (tensor<32xf32>, ! _tf.control)
  9. TensorFlow Dialects
     #include "tensorflow/compiler/mlir/tensorflow/ir/control_flow_ops.h"
     #include "tensorflow/compiler/mlir/tensorflow/ir/tf_executor.h"
     #include "tensorflow/compiler/mlir/tensorflow/ir/tf_ops.h"

     using namespace mlir;

     // Static initialization for TF dialect registration.
     static DialectRegistration<TFControlFlow::TFControlFlowDialect> TFControlFlowOps;
     static DialectRegistration<TF::TensorFlowDialect> TFOps;
     static DialectRegistration<tf_executor::TensorFlowExecutorDialect> TfExecutorDialect;

     tensorflow/compiler/mlir/tensorflow/ir/dialect_registration.cc
  10. TensorFlow Dialects
      • More on TensorFlow dialects:
        • tf: the main dialect, representing the regular operations in a TensorFlow graph (the ones that don't have a special contract with the executor).
        • tf_executor: a dialect that represents the execution model of the TensorFlow executor (e.g., control dependencies, deadness propagation); see the sketch after this slide.
        • _tf: the TensorFlow MLIR open source announcement mail thread, https://groups.google.com/a/tensorflow.org/forum/#!topic/mlir/xe522DD4ZYA, says the control flow dialect "_tf" is temporary.
      • "One intent of this design is that TensorFlow 2.x features can choose to target just the tf dialect, allowing us to phase out the tf_executor dialect in subsequent TensorFlow releases. The combination of the two dialects allows to represent arbitrary existing TensorFlow graphs." [1]
      [1] https://github.com/tensorflow/community/pull/115
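      As a rough, hypothetical illustration (not from the deck; names and shapes are made up, and the printed form is simplified), the tf_executor dialect wraps regular tf ops in islands inside a graph region:

      func @main(%arg0: tensor<4xf32>) -> tensor<4xf32> {
        %0 = tf_executor.graph {
          %1:2 = tf_executor.island {
            %2 = "tf.Relu"(%arg0) : (tensor<4xf32>) -> tensor<4xf32>
            tf_executor.yield %2 : tensor<4xf32>
          }
          tf_executor.fetch %1#0 : tensor<4xf32>
        }
        return %0 : tensor<4xf32>
      }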
  11. 11. tf dialect ./bazel-bin/tensorflow/compiler/mlir/tensorflow/tf-mlir-translate --graphdef-to-mlir -tf-input- shapes=1,224,224,3 -tf-input-data-types=DT_FLOAT -tf-output-arrays=MobilenetV1/Predictions/Reshape_1 /tmp/ mobilenet_v1_1.0_224_quant_frozen.pb --tf-input-arrays=input | ./bazel-bin/tensorflow/compiler/mlir/tf- opt --tf-raise-control-flow |less func @main(%arg0: tensor<1x224x224x3xf32>) -> tensor<1x1001xf32> attributes {tf.entry_function = {inputs = "input", outputs = "MobilenetV1/Predictions/Reshape_1"}} { %cst = "tf.Const"() {device = "", dtype = "tfdtype$DT_FLOAT", name = "MobilenetV1/Conv2d_0/BatchNorm/ beta", value = opaque<"tf", "0x746674656E736F722464747970653A2044545F464C4F41540A74656E736F725F7368617065207B0A202064696D207B0A20202020 73697A653A2033320A20207D0A7D0A74656E736F725F636F6E74656E743A20225C3234335C3335305C3233355C3237375C3234345C3 330305C303335405C323134395C3337353D685F5C3333315C323736315A5C3333303F5C3232305C3232305C303137405C3235325C33 37375C273F5C3331315C32373523405C3231315C3336325C3237335C3237365C3336345C3230365C3234315C3237375C32373054655 C3237375C3237345C3333375C30323140695C3236355C30303040795C3233375C3237373F5C3230346F393F485C3333314D40515C33 32335C3237345C3237375C3230325C3234305C303335405C3233335C3230365C3233353E5C323633525C3337373F5C3030355C32343 25C3032315C3237375C3232305C3230332A5C323734405C3331355C725C3330305C3332345C3230335040235C3336325C3030375C33 30305C3237355C6E5C303235405C323735295C32323440515C3030345C3334325C3237365C333037465C303334405C3236375C33313 15C3236343F5C3232305C3233365C3237335C323736655C3330325440220A"> : tensor<32xf32>} : () -> tensor<32xf32> %0 = "tf.Identity"(%cst) {T = "tfdtype$DT_FLOAT", _class = ["loc:@MobilenetV1/Conv2d_0/BatchNorm/beta"], device = "", name = "MobilenetV1/Conv2d_0/BatchNorm/beta/read"} : (tensor<32xf32>) -> tensor<32xf32> %cst_0 = "tf.Const"() {device = "", dtype = "tfdtype$DT_FLOAT", name = "MobilenetV1/Conv2d_0/BatchNorm/ gamma", value = opaque<"tf", "0x746674656E736F722464747970653A2044545F464C4F41540A74656E736F725F7368617065207B0A202064696D207B0A20202020 73697A653A2033320A20207D0A7D0A74656E736F725F636F6E74656E743A20225C3330315C3030305C3031373F5C333332776E3F5C3 233365C3334305C3230323F5C30303445643F675C3334344D3F2E345C3031363F425C3032325C3234363F5C313737595C3332353E62 5C303137773F4B5C3334355C3233323E5C3332365C3030326F3F5C323035515C3230303F5C323431665C303236405C3232335C32313 65C3032343F5C3231355C323235753F295C3230345C3232373F3F5C3337305C3236363F5C3237365C323736213F5C3332305C333630 5C323036405C3030345C3237355C3334343E5C3337305C22743F5C3235325C3233355C6E3F5C3031305C3031375C3233323F685C333 5315C3232373F5C3233365C3235317E3F5C303337435C3234343F675C3235326A3F5C32333752733F5C3235325C3335325C3232313F 77565C3233313F5C3030355C3032326C3F5C32313053573F220A"> : tensor<32xf32>} : () -> tensor<32xf32>
  12. Leaky ReLU
      • a LeakyReLU example
        func @testLeakyReLU(%1: tensor<*xf32>) -> tensor<*xf32> {
          %2 = "tf.LeakyRelu"(%1) { alpha = 0.1 : f32 } : (tensor<*xf32>) -> tensor<*xf32>
          return %2 : tensor<*xf32>
        }
      • round trip
        $ bazel-bin/tensorflow/compiler/mlir/tf-opt ~/work/mlir/test_leaky_relu.mli
        func @testLeakyReLU(%arg0: tensor<*xf32>) -> tensor<*xf32> {
          %0 = "tf.LeakyRelu"(%arg0) {alpha = 1.000000e-01 : f32} : (tensor<*xf32>) -> tensor<*xf32>
          return %0 : tensor<*xf32>
        }
  13. Leaky ReLU w/ alpha = 1.0
      • a LeakyReLU example
        func @testLeakyReLU(%1: tensor<*xf32>) -> tensor<*xf32> {
          %2 = "tf.LeakyRelu"(%1) { alpha = 1.0 : f32 } : (tensor<*xf32>) -> tensor<*xf32>
          return %2 : tensor<*xf32>
        }
      • round trip
        $ bazel-bin/tensorflow/compiler/mlir/tf-opt ~/work/mlir/test_leaky_relu.mli
        func @testLeakyReLU(%arg0: tensor<*xf32>) -> tensor<*xf32> {
          %0 = "tf.LeakyRelu"(%arg0) {alpha = 1.000000e+00 : f32} : (tensor<*xf32>) -> tensor<*xf32>
          return %0 : tensor<*xf32>
        }
      • constant folding
        $ bazel-bin/tensorflow/compiler/mlir/tf-opt --test-constant-fold ~/work/mlir/test_leaky_relu.mli
        func @testLeakyReLU(%arg0: tensor<*xf32>) -> tensor<*xf32> {
          return %arg0 : tensor<*xf32>
        }
      • canonicalization
        $ bazel-bin/tensorflow/compiler/mlir/tf-opt --canonicalize ~/work/mlir/test_leaky_relu.mli
        func @testLeakyReLU(%arg0: tensor<*xf32>) -> tensor<*xf32> {
          return %arg0 : tensor<*xf32>
        }
  14. Leaky ReLU Legalization
      • a LeakyReLU, alpha = 0.1
        func @testLeakyReLU(%1: tensor<*xf32>) -> tensor<*xf32> {
          %2 = "tf.LeakyRelu"(%1) { alpha = 0.1 : f32 } : (tensor<*xf32>) -> tensor<*xf32>
          return %2 : tensor<*xf32>
        }
      • Leaky ReLU legalization, alpha = 0.1
        $ bazel-bin/tensorflow/compiler/mlir/tf-opt --tfl-legalize-tf ~/work/mlir/test_leaky_relu.mli
        func @testLeakyReLU(%arg0: tensor<*xf32>) -> tensor<*xf32> {
          %0 = "tfl.leaky_relu"(%arg0) {alpha = 1.000000e-01 : f32} : (tensor<*xf32>) -> tensor<*xf32>
          return %0 : tensor<*xf32>
        }
      • Leaky ReLU legalization, alpha = 1.0
        $ bazel-bin/tensorflow/compiler/mlir/tf-opt --tfl-legalize-tf ~/work/mlir/test_leaky_relu.mli
        func @testLeakyReLU(%arg0: tensor<*xf32>) -> tensor<*xf32> {
          return %arg0 : tensor<*xf32>
        }
  15. tf -> tfl: Conv2D + BiasAdd + Relu -> conv_2d (see the sketch below)
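      A hypothetical sketch of this fusion (not taken from the deck; shapes, attribute values, and names are illustrative): a Conv2D followed by BiasAdd and Relu maps to a single tfl.conv_2d with the activation fused.

      func @conv_bias_relu(%input: tensor<1x8x8x3xf32>, %filter: tensor<3x3x3x16xf32>, %bias: tensor<16xf32>) -> tensor<1x8x8x16xf32> {
        %0 = "tf.Conv2D"(%input, %filter) {data_format = "NHWC", padding = "SAME", strides = [1, 1, 1, 1]} : (tensor<1x8x8x3xf32>, tensor<3x3x3x16xf32>) -> tensor<1x8x8x16xf32>
        %1 = "tf.BiasAdd"(%0, %bias) {data_format = "NHWC"} : (tensor<1x8x8x16xf32>, tensor<16xf32>) -> tensor<1x8x8x16xf32>
        %2 = "tf.Relu"(%1) : (tensor<1x8x8x16xf32>) -> tensor<1x8x8x16xf32>
        return %2 : tensor<1x8x8x16xf32>
      }
      // after legalization to TFLite, roughly (the filter is transposed to TFLite's OHWI layout;
      // %filter_t stands for that transposed constant):
      //   %0 = "tfl.conv_2d"(%input, %filter_t, %bias) {dilation_h_factor = 1 : i32, dilation_w_factor = 1 : i32,
      //        fused_activation_function = "RELU", padding = "SAME", stride_h = 1 : i32, stride_w = 1 : i32}
      //        : (tensor<1x8x8x3xf32>, tensor<16x3x3x3xf32>, tensor<16xf32>) -> tensor<1x8x8x16xf32>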
  16. tf.FakeQuant()
      • simple FakeQuant
        func @testValidFakeQuantWithMinMaxArgs(%arg0: tensor<8x8x8x8xf32>) -> tensor<8x8x8x8xf32> {
          %0 = "tf.FakeQuantWithMinMaxArgs"(%arg0) {max = 1.000000e+00 : f32, min = -1.000000e+00 : f32, num_bits = 3 : i64} : (tensor<8x8x8x8xf32>) -> tensor<8x8x8x8xf32>
          return %0 : tensor<8x8x8x8xf32>
        }
      • legalize to tfl
        $ bazel-bin/tensorflow/compiler/mlir/tf-opt ~/work/mlir/test_fake_quant.mlir --tfl-legalize-tf
        func @testValidFakeQuantWithMinMaxArgs(%arg0: tensor<8x8x8x8xf32>) -> tensor<8x8x8x8xf32> {
          %0 = "tfl.quantize"(%arg0) {qtype = tensor<8x8x8x8x!quant.uniform<u8:f32, 0.0078431372549019607:128>>} : (tensor<8x8x8x8xf32>) -> tensor<8x8x8x8x!quant.uniform<u8:f32, 0.0078431372549019607:128>>
          %1 = "tfl.dequantize"(%0) : (tensor<8x8x8x8x!quant.uniform<u8:f32, 0.0078431372549019607:128>>) -> tensor<8x8x8x8xf32>
          return %1 : tensor<8x8x8x8xf32>
        }
      • --tfl-post-quantize
        $ bazel-bin/tensorflow/compiler/mlir/tf-opt ~/work/mlir/test_fake_quant.mlir --tfl-legalize-tf --tfl-post-quantize
        func @testValidFakeQuantWithMinMaxArgs(%arg0: tensor<8x8x8x8xf32>) -> tensor<8x8x8x8x!quant.uniform<u8:f32, 0.0078431372549019607:128>> {
          %0 = "tfl.quantize"(%arg0) {qtype = tensor<8x8x8x8x!quant.uniform<u8:f32, 0.0078431372549019607:128>>} : (tensor<8x8x8x8xf32>) -> tensor<8x8x8x8x!quant.uniform<u8:f32, 0.0078431372549019607:128>>
          return %0 : tensor<8x8x8x8x!quant.uniform<u8:f32, 0.0078431372549019607:128>>
        }
  17. TFLite Native Quantization
      • Take input min/max information and set the ArrayInfo (which really is InputOrOutputArrayInfo).
      • In LegalizeTF, convert ArrayInfo min/max to tf.Quantize and tf.Dequantize nodes (or tf.FakeQuant). Convert all constant FakeQuants to (tf.FQ -> tfl.Q -> tfl.DQ).
      • Hardcode logic/propagation needs to happen here.
      • Run TF constant folding.
      • In PrepareTFL, convert all tf.FQ to (tfl.Q -> tfl.DQ).
      • Run the quantization pass that takes (tfl.DQ (for both input and weights) -> op -> tfl.Q) and replaces it with (op). Also replace (constant_float -> tfl.Q) with (constant_quant). See the sketch below.
      https://github.com/tensorflow/mlir/blob/master/g3doc/Quantization.md#tflite-native-quantization
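      A hypothetical sketch of that last rewrite (not from the deck; the scale/zero-point values are made up): a float op sandwiched between a dequantize on its input and a quantize on its output is replaced by the same op operating directly on quantized tensors.

      // before, roughly:
      %dq  = "tfl.dequantize"(%qin) : (tensor<1x8x!quant.uniform<u8:f32, 2.000000e-02:128>>) -> tensor<1x8xf32>
      %act = "tfl.relu"(%dq) : (tensor<1x8xf32>) -> tensor<1x8xf32>
      %q   = "tfl.quantize"(%act) {qtype = tensor<1x8x!quant.uniform<u8:f32, 2.000000e-02:128>>} : (tensor<1x8xf32>) -> tensor<1x8x!quant.uniform<u8:f32, 2.000000e-02:128>>
      // after the quantize pass, roughly:
      %act = "tfl.relu"(%qin) : (tensor<1x8x!quant.uniform<u8:f32, 2.000000e-02:128>>) -> tensor<1x8x!quant.uniform<u8:f32, 2.000000e-02:128>>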
  18. tfl passes
      namespace mlir {
      class FunctionPassBase;
      class ModulePassBase;

      namespace TFL {
      // Creates an instance of the TensorFlow Lite dialect LegalizeTF pass.
      FunctionPassBase *CreateLegalizeTFPass();
      // Creates an instance of the TensorFlow Lite dialect Optimize pass.
      FunctionPassBase *CreateOptimizePass();
      // Creates an instance of the TensorFlow Lite dialect PrepareTF pass.
      FunctionPassBase *CreatePrepareTFPass();
      // Creates an instance of the TensorFlow Lite dialect LowerStaticTensorList pass.
      ModulePassBase *CreateLowerStaticTensorListPass();
      // Creates an instance of the TensorFlow Lite dialect Quantize pass.
      FunctionPassBase *CreateQuantizePass();
      // Creates an instance of the TensorFlow Lite dialect PrepareQuantize pass.
      FunctionPassBase *CreatePrepareQuantizePass();
      // Creates an instance of the TensorFlow Lite dialect PostQuantize pass.
      FunctionPassBase *CreatePostQuantizePass(bool emit_quant_adaptor_ops);
      } // namespace TFL
      } // namespace mlir
  19. quantization passes
      • prepare-quantize
        • Applies prepare quantization on the model in the TFL dialect. This pass runs before the quantization pass and propagates the quantization parameters across ops. This step is necessary for post-training quantization and also makes the quantization rules for some operations in quantization-aware training simpler.
      • quantize
        • tensorflow/compiler/mlir/lite/transforms/quantize.cc
        • tensorflow/compiler/mlir/lite/transforms/quantize_patterns.td
      • post-quantize
        • Remove Quantization Adaptor Ops
  20. TFL optimization
      • activation into convolution
      • an add op adding a constant value to a convolution op with constant bias (see the sketch below)
      • a mul op multiplying a constant value to a convolution op with constant filter and bias
      • quantize/dequantize
      • fully connected with add
      tensorflow/compiler/mlir/lite/transforms/optimize.cc
      tensorflow/compiler/mlir/lite/transforms/optimize_patterns.td
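      As a hypothetical illustration of the second item (not taken from the deck; shapes, attribute values, and constants are made up), an add of a constant after a convolution with a constant bias can be folded into the bias:

      // before, roughly:
      %bias = "tfl.pseudo_const"() {value = dense<0.000000e+00> : tensor<16xf32>} : () -> tensor<16xf32>
      %conv = "tfl.conv_2d"(%input, %filter, %bias) {dilation_h_factor = 1 : i32, dilation_w_factor = 1 : i32, fused_activation_function = "NONE", padding = "SAME", stride_h = 1 : i32, stride_w = 1 : i32} : (tensor<1x8x8x3xf32>, tensor<16x3x3x3xf32>, tensor<16xf32>) -> tensor<1x8x8x16xf32>
      %cst  = "tfl.pseudo_const"() {value = dense<5.000000e-01> : tensor<16xf32>} : () -> tensor<16xf32>
      %out  = "tfl.add"(%conv, %cst) {fused_activation_function = "NONE"} : (tensor<1x8x8x16xf32>, tensor<16xf32>) -> tensor<1x8x8x16xf32>
      // after the optimize pass, roughly: the add disappears and its constant is merged into a new bias
      %new_bias = "tfl.pseudo_const"() {value = dense<5.000000e-01> : tensor<16xf32>} : () -> tensor<16xf32>
      %out = "tfl.conv_2d"(%input, %filter, %new_bias) {dilation_h_factor = 1 : i32, dilation_w_factor = 1 : i32, fused_activation_function = "NONE", padding = "SAME", stride_h = 1 : i32, stride_w = 1 : i32} : (tensor<1x8x8x3xf32>, tensor<16x3x3x3xf32>, tensor<16xf32>) -> tensor<1x8x8x16xf32>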
  21. control flow: tf.If()
      func @main(%arg0: tensor<i1>, %arg1: tensor<1xf32>, %arg2: tensor<1xf32>) -> tensor<1xf32> {
        %0 = "tf.Placeholder.input"(%arg0) : (tensor<i1>) -> tensor<i1>
        %1 = "tf.Placeholder.input"(%arg1) : (tensor<1xf32>) -> tensor<1xf32>
        %2 = "tf.Placeholder.input"(%arg2) : (tensor<1xf32>) -> tensor<1xf32>
        %3 = "tf.If"(%0, %1, %2) { else_branch = @testIfElse, then_branch = @testIfThen } : (tensor<i1>, tensor<1xf32>, tensor<1xf32>) -> tensor<1xf32>
        return %1 : tensor<1xf32>
      }
      func @testIfThen(%arg0: tensor<*xf32>, %arg1: tensor<*xf32>) -> tensor<*xf32> {
        return %arg0 : tensor<*xf32>
      }
      func @testIfElse(%arg0: tensor<*xf32>, %arg1: tensor<*xf32>) -> tensor<*xf32> {
        return %arg1 : tensor<*xf32>
      }
  22. tf.If() not legalized
      $ bazel-bin/tensorflow/compiler/mlir/tf-opt ~/work/mlir/test_tf_if_main.mlir --tfl-legalize-tf
      func @main(%arg0: tensor<i1>, %arg1: tensor<1xf32>, %arg2: tensor<1xf32>) -> tensor<1xf32> {
        %0 = "tfl.pseudo_input"(%arg0) : (tensor<i1>) -> tensor<i1>
        %1 = "tfl.pseudo_input"(%arg1) : (tensor<1xf32>) -> tensor<1xf32>
        %2 = "tfl.pseudo_input"(%arg2) : (tensor<1xf32>) -> tensor<1xf32>
        %3 = "tf.If"(%0, %1, %2) { else_branch = @testIfElse, then_branch = @testIfThen } : (tensor<i1>, tensor<1xf32>, tensor<1xf32>) -> tensor<1xf32>
        return %1 : tensor<1xf32>
      }
      func @testIfThen(%arg0: tensor<*xf32>, %arg1: tensor<*xf32>) -> tensor<*xf32> {
        return %arg0 : tensor<*xf32>
      }
      func @testIfElse(%arg0: tensor<*xf32>, %arg1: tensor<*xf32>) -> tensor<*xf32> {
        return %arg1 : tensor<*xf32>
      }
  23. no tfl.if()?
      • Yes, there is no tfl.if() or equivalent in tensorflow/compiler/mlir/lite/ir/tfl_ops.{cc, h, td}.
      • However, we can convert the MLIR on the previous page to a TFLite flatbuffer, because there is
        CustomOptionsOffset Translator::CreateIfOpCustomOptions(mlir::TF::IfOp op) {
          int then_subgraph_index = subgraph_index_map_.at(op.getThen().str());
          int else_subgraph_index = subgraph_index_map_.at(op.getElse().str());
          auto flex_builder = absl::make_unique<flexbuffers::Builder>();
          flex_builder->Map([&]() {
            flex_builder->Int("then_subgraph_index", then_subgraph_index);
            flex_builder->Int("else_subgraph_index", else_subgraph_index);
          });
          flex_builder->Finish();
          return builder_.CreateVector(flex_builder->GetBuffer());
        }
        tensorflow/compiler/mlir/lite/flatbuffer_translate.cc
  24. flatbuffer_translate --mlir-to-tflite-flatbuffer
      $ bazel-bin/tensorflow/compiler/mlir/tf-opt ~/work/mlir/test_tf_if_main.mlir --tfl-legalize-tf | bazel-bin/tensorflow/compiler/mlir/lite/flatbuffer_translate --mlir-to-tflite-flatbuffer | ./bazel-bin/tensorflow/compiler/mlir/lite/flatbuffer_to_string -
  25. XLA
      • simple div in TensorFlow
        func @div(%arg0: tensor<4xi32>, %arg1: tensor<4xi32>) -> tensor<4xi32> {
          %0 = "tf.Div"(%arg0, %arg1) : (tensor<4xi32>, tensor<4xi32>) -> tensor<4xi32>
          return %0 : tensor<4xi32>
        }
      • legalize to xla
        $ bazel-bin/tensorflow/compiler/mlir/tf-opt ~/work/mlir/div.mlir --xla-legalize-tf
        func @div(%arg0: tensor<4xi32>, %arg1: tensor<4xi32>) -> tensor<4xi32> {
          %0 = xla.div %arg0, %arg1 : tensor<4xi32>
          return %0 : tensor<4xi32>
        }
      • legalize to standard mlir
        $ bazel-bin/tensorflow/compiler/mlir/tf-opt ~/work/mlir/div.mlir --xla-legalize-tf --xla-legalize-to-std
        func @div(%arg0: tensor<4xi32>, %arg1: tensor<4xi32>) -> tensor<4xi32> {
          %0 = divis %arg0, %arg1 : tensor<4xi32>
          return %0 : tensor<4xi32>
        }
  26. Recap: MLIR for TF and TFLite
      • Conversion of floating-point models works
      • Infrastructure for quantized models is there
      • Custom ops, such as the tf.If() control flow, can be handled in the mlir -> flatbuffer translation
      • How about LSTM? It seems something like OpHint [1] is not there yet
      • XLA: some ops work
      [1] https://www.tensorflow.org/api_docs/python/tf/lite/OpHint
  27. Existing passes in the MLIR repo in early June
  28. • Affine transformations: https://github.com/tensorflow/mlir/blob/master/g3doc/Dialects/Affine.md
      • dma: https://github.com/tensorflow/mlir/blob/master/g3doc/LangRef.md#dma_start-operation, https://github.com/tensorflow/mlir/blob/master/g3doc/LangRef.md#dma_wait-operation
      • Canonicalize: converting into a canonical form, https://github.com/tensorflow/mlir/blob/master/g3doc/Canonicalization.md
      • CSE
      • Fixed point math: currently only two uniformly quantized optimizations supported
      • Quant: convert consts, convert training-time simulated quantization to quantize/dequantize casts
        • https://github.com/tensorflow/mlir/blob/master/g3doc/Quantization.md
      • Linalg dialect opts: https://github.com/tensorflow/mlir/blob/master/g3doc/Tutorials/Linalg/
      • lower-affine, lower-to-llvm
      • memref: memref is an MLIR data type: https://github.com/tensorflow/mlir/blob/master/g3doc/LangRef.md#memref-type (a small affine/memref sketch follows below)
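      To give a flavor of the affine and memref constructs listed above, here is a minimal hypothetical sketch in roughly the MLIR syntax of that time (not taken from the deck): an affine loop nest copying one memref to another, the kind of code that tiling and other loop passes operate on.

      func @copy(%src: memref<16x16xf32>, %dst: memref<16x16xf32>) {
        affine.for %i = 0 to 16 {
          affine.for %j = 0 to 16 {
            %v = load %src[%i, %j] : memref<16x16xf32>
            store %v, %dst[%i, %j] : memref<16x16xf32>
          }
        }
        return
      }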
  29. More Passes in TensorFlow
  30. Using other passes?
      • GPU: nvvmir, spirv, ...
        • for codegen and other purposes
      • linalg, affine, memref:
        • tiling, polyhedral, etc.
      • NO, not yet
        • MLIR is incremental. Things won't happen overnight.
  31. Fin
