Documentation:

- How to use the graph transformer to run a sample model app on XCORE.AI
- Usage from Python to run a sample model on host
- Graph transformer command-line options
- Transforming PyTorch models
- FAQ
- Changelog
- Advanced topics
`xmos-ai-tools` is available on PyPI. It includes:

- the MLIR-based XCore optimizer (xformer) to optimize TensorFlow Lite models for XCore
- the XCore TFLM interpreter to run the transformed models on host
Perform the following step once to install the tools; use a virtual environment of your choice:

```shell
pip3 install xmos-ai-tools --upgrade
```

Use `pip3 install xmos-ai-tools --pre --upgrade` instead if you want to install the latest development version.
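
As a quick sanity check that the installation succeeded, you can print the installed version using only the Python standard library (this snippet is illustrative and not part of xmos-ai-tools itself):

```python
# Verify the installation by printing the installed package version.
from importlib.metadata import version

print(version("xmos-ai-tools"))
```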
To transform a model, call `xf.convert`:

```python
from xmos_ai_tools import xformer as xf

xf.convert("source model path", "converted model path", params=None)
```

where `params` is a dictionary of compiler flags and parameters and their values.
For example:

```python
from xmos_ai_tools import xformer as xf

xf.convert("example_int8_model.tflite", "xcore_optimised_int8_model.tflite", {
    "xcore-thread-count": "5",
})
```
To see all available parameters, call:

```python
from xmos_ai_tools import xformer as xf

xf.print_help()
```

This prints all options that can be passed to xformer. To see hidden options as well, run `xf.print_help(show_hidden=True)`.
To create a parameters file and a TFLite model suitable for loading to flash, use the `xcore-flash-image-file` option:

```python
xf.convert("example_int8_model.tflite", "xcore_optimised_int8_flash_model.tflite", {
    "xcore-flash-image-file": "./xcore_params.params",
})
```
To run the transformed model on host, use the TFLM host interpreter:

```python
from xmos_ai_tools.xinterpreters import xcore_tflm_host_interpreter

ie = xcore_tflm_host_interpreter()
ie.set_model(model_path="path_to_xcore_model", params_path="path_to_xcore_params")

# input_data must be a numpy array matching the input tensor's shape and dtype.
ie.set_tensor(ie.get_input_details()[0]['index'], value=input_data)
ie.invoke()

xformer_outputs = []
for i in range(len(ie.get_output_details())):
    xformer_outputs.append(ie.get_tensor(ie.get_output_details()[i]['index']))
```
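
A common use of the host interpreter is checking that the optimised model still behaves like the original. The sketch below feeds the same random input to TensorFlow's reference TFLite interpreter and to the xcore host interpreter and compares the outputs. It is a minimal sketch, assuming `tensorflow` and `numpy` are installed, reusing the file names from the examples above, and assuming `set_model` may be called without a params file for a non-flash model:

```python
import numpy as np
import tensorflow as tf
from xmos_ai_tools.xinterpreters import xcore_tflm_host_interpreter

# Reference run with the original, unoptimised model.
ref = tf.lite.Interpreter(model_path="example_int8_model.tflite")
ref.allocate_tensors()
inp = ref.get_input_details()[0]
# Random input with the input tensor's shape; int8 to match the example model.
input_data = np.random.randint(-128, 128, size=inp['shape'], dtype=np.int8)
ref.set_tensor(inp['index'], input_data)
ref.invoke()
ref_output = ref.get_tensor(ref.get_output_details()[0]['index'])

# Run the xcore-optimised model on host with the same input.
ie = xcore_tflm_host_interpreter()
ie.set_model(model_path="xcore_optimised_int8_model.tflite")
ie.set_tensor(ie.get_input_details()[0]['index'], value=input_data)
ie.invoke()
xcore_output = ie.get_tensor(ie.get_output_details()[0]['index'])

# For an int8 model the outputs should agree closely, if not bit-exactly.
diff = np.abs(ref_output.astype(np.int32) - xcore_output.astype(np.int32))
print("max abs difference:", diff.max())
```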