SiMa-ai/models


Welcome to SiMa.ai's ML Model webpage!

A large set of models is supported¹ on the SiMa.ai platform as part of Palette.

This Model List:

  • Covers multiple frameworks such as PyTorch and ONNX.
  • Draws from various repositories including Torchvision, ONNX Model Zoo, and Open Model Zoo for OpenVINO.

For every supported model, links or instructions are provided for obtaining the pretrained FP32 model, along with a compilation script and a PyTorch-to-ONNX conversion script.

To run the models in this repository with the Palette ModelSDK, make sure you have successfully installed Palette from the SiMa.ai Developer Portal.

Model List

| Model | Framework | Input Shape | Pretrained Model | Compilation script |
|-------|-----------|-------------|------------------|--------------------|
| alexnet | PyTorch | 1, 3, 224, 224 | Torchvision Link | alexnet.py |
| bvlcalexnet-7 | ONNX | 1, 3, 224, 224 | ONNX Zoo Link | bvlcalexnet-7_fp32_224_224.py |
| caffenet-9 | ONNX | 1, 3, 224, 224 | ONNX Zoo Link | caffenet-9_fp32_224_224.py |
| colorization-siggraph | PyTorch | 1, 1, 256, 256 | OpenVINO Link | colorization-siggraph.py |
| colorization-v2 | PyTorch | 1, 1, 256, 256 | OpenVINO Link | colorization-v2.py |
| convnext_base | PyTorch | 1, 3, 224, 224 | Torchvision Link | convnext_base.py |
| convnext_large | PyTorch | 1, 3, 224, 224 | Torchvision Link | convnext_large.py |
| convnext_small | PyTorch | 1, 3, 224, 224 | Torchvision Link | convnext_small.py |
| convnext_tiny | PyTorch | 1, 3, 224, 224 | Torchvision Link | convnext_tiny.py |
| convnext-tiny | PyTorch | 1, 3, 224, 224 | OpenVINO Link | convnext-tiny.py |
| ctdet_coco_dlav0_512 | PyTorch | 1, 3, 512, 512 | OpenVINO Link | ctdet_coco_dlav0_512.py |
| deeplabv3_mobilenet_v3_large | PyTorch | 1, 3, 224, 224 | Torchvision Link | deeplabv3_mobilenet_v3_large.py |
| deeplabv3_resnet50 | PyTorch | 1, 3, 224, 224 | Torchvision Link | deeplabv3_resnet50.py |
| deeplabv3_resnet101 | PyTorch | 1, 3, 224, 224 | Torchvision Link | deeplabv3_resnet101.py |
| densenet-12 | ONNX | 1, 3, 224, 224 | ONNX Zoo Link | densenet-12_fp32_224_224.py |
| densenet-9 | ONNX | 1, 3, 224, 224 | ONNX Zoo Link | densenet-9_fp32_224_224.py |
| densenet121 | PyTorch | 1, 3, 224, 224 | Torchvision Link | densenet121.py |
| densenet161 | PyTorch | 1, 3, 224, 224 | Torchvision Link | densenet161.py |
| densenet169 | PyTorch | 1, 3, 224, 224 | Torchvision Link | densenet169.py |
| densenet201 | PyTorch | 1, 3, 224, 224 | Torchvision Link | densenet201.py |
| dla-34 | PyTorch | 1, 3, 224, 224 | OpenVINO Link | dla-34.py |
| drn-d-38 | PyTorch | 1, 3, 1024, 2048 | OpenVINO Link | drn-d-38.py |
| efficientnet_b0 | PyTorch | 1, 3, 224, 224 | Torchvision Link | efficientnet_b0.py |
| efficientnet_b1 | PyTorch | 1, 3, 224, 224 | Torchvision Link | efficientnet_b1.py |
| efficientnet_b2 | PyTorch | 1, 3, 224, 224 | Torchvision Link | efficientnet_b2.py |
| efficientnet_b3 | PyTorch | 1, 3, 224, 224 | Torchvision Link | efficientnet_b3.py |
| efficientnet_b4 | PyTorch | 1, 3, 224, 224 | Torchvision Link | efficientnet_b4.py |
| efficientnet_b5 | PyTorch | 1, 3, 224, 224 | Torchvision Link | efficientnet_b5.py |
| efficientnet_b6 | PyTorch | 1, 3, 224, 224 | Torchvision Link | efficientnet_b6.py |
| efficientnet_b7 | PyTorch | 1, 3, 224, 224 | Torchvision Link | efficientnet_b7.py |
| efficientnet_v2_m | PyTorch | 1, 3, 224, 224 | Torchvision Link | efficientnet_v2_m.py |
| efficientnet_v2_s | PyTorch | 1, 3, 224, 224 | Torchvision Link | efficientnet_v2_s.py |
| efficientnet_v2_l | PyTorch | 1, 3, 224, 224 | Torchvision Link | efficientnet_v2_l.py |
| efficientnet-b0 | TensorFlow | 1, 224, 224, 3 | OpenVINO Link | efficientnet-b0.py |
| efficientnet-b0-pytorch | PyTorch | 1, 3, 224, 224 | OpenVINO Link | efficientnet-b0-pytorch.py |
| efficientnet-lite4-11 | ONNX | 1, 224, 224, 3 | ONNX Zoo Link | efficientnet-lite4-11_fp32_224_224.py |
| efficientnet-v2-b0 | PyTorch | 1, 3, 224, 224 | OpenVINO Link | efficientnet-v2-b0.py |
| efficientnet-v2-s | PyTorch | 1, 224, 224, 3 | OpenVINO Link | efficientnet-v2-s.py |
| erfnet | PyTorch | 1, 3, 208, 976 | OpenVINO Link | erfnet.py |
| fcn_resnet50 | PyTorch | 1, 3, 224, 224 | Torchvision Link | fcn_resnet50.py |
| fcn_resnet101 | PyTorch | 1, 3, 224, 224 | Torchvision Link | fcn_resnet101.py |
| googlenet | PyTorch | 1, 3, 224, 224 | Torchvision Link | googlenet.py |
| googlenet-9 | ONNX | 1, 3, 224, 224 | ONNX Zoo Link | googlenet-9_fp32_224_224.py |
| googlenet-v3-pytorch | PyTorch | 1, 3, 299, 299 | OpenVINO Link | googlenet-v3-pytorch.py |
| hbonet-0.25 | PyTorch | 1, 3, 224, 224 | OpenVINO Link | hbonet-0_25.py |
| hbonet-1.0 | PyTorch | 1, 3, 224, 224 | OpenVINO Link | hbonet-1_0.py |
| higher-hrnet-w32-human-pose-estimation | PyTorch | 1, 3, 512, 512 | OpenVINO Link | higher-hrnet-w32-human-pose-estimation.py |
| human-pose-estimation-3d-0001 | PyTorch | 1, 3, 256, 448 | OpenVINO Link | human-pose-estimation-3d-0001.py |
| inception_v3 | PyTorch | 1, 3, 224, 224 | Torchvision Link | inception_v3.py |
| lraspp_mobilenet_v3_large | PyTorch | 1, 3, 224, 224 | Torchvision Link | lraspp_mobilenet_v3_large.py |
| mnasnet0_5 | PyTorch | 1, 3, 224, 224 | Torchvision Link | mnasnet0_5.py |
| mnasnet0_75 | PyTorch | 1, 3, 224, 224 | Torchvision Link | mnasnet0_75.py |
| mnasnet1_0 | PyTorch | 1, 3, 224, 224 | Torchvision Link | mnasnet1_0.py |
| mnasnet1_3 | PyTorch | 1, 3, 224, 224 | Torchvision Link | mnasnet1_3.py |
| mobilenet_v2 | PyTorch | 1, 3, 224, 224 | Torchvision Link | mobilenet_v2.py |
| mobilenet-v1-0.25-128 | TensorFlow | 1, 128, 128, 3 | OpenVINO Link | mobilenet-v1-0_25-128.py |
| mobilenet-v1-1.0-224-tf | TensorFlow | 1, 224, 224, 3 | OpenVINO Link | mobilenet-v1-1_0-224-tf.py |
| mobilenet-v2-1.0-224 | TensorFlow | 1, 224, 224, 3 | OpenVINO Link | mobilenet-v2-1_0-224.py |
| mobilenet-v2-1.4-224 | TensorFlow | 1, 224, 224, 3 | OpenVINO Link | mobilenet-v2-1_4-224.py |
| mobilenet-v2-7 | ONNX | 1, 3, 224, 224 | ONNX Zoo Link | mobilenet-v2-7_fp32_224_224.py |
| mobilenet-v2-pytorch | PyTorch | 1, 3, 224, 224 | OpenVINO Link | mobilenet-v2-pytorch.py |
| mobilenet_v3_large | PyTorch | 1, 3, 224, 224 | Torchvision Link | mobilenet_v3_large.py |
| mobilenet_v3_small | PyTorch | 1, 3, 224, 224 | Torchvision Link | mobilenet_v3_small.py |
| mobilenet-yolo-v4-syg | Keras | 1, 416, 416, 3 | OpenVINO Link | mobilenet-yolo-v4-syg.py |
| mobilenetv2-12 | ONNX | 1, 3, 224, 224 | ONNX Zoo Link | mobilenetv2-12_fp32_224_224.py |
| nfnet-f0 | PyTorch | 1, 3, 256, 256 | OpenVINO Link | nfnet-f0.py |
| open-closed-eye-0001 | PyTorch | 1, 3, 32, 32 | OpenVINO Link | open-closed-eye-0001.py |
| quantized_googlenet | PyTorch | 1, 3, 224, 224 | Torchvision Link | quantized_googlenet.py |
| quantized_inception_v3 | PyTorch | 1, 3, 224, 224 | Torchvision Link | quantized_inception_v3.py |
| quantized_mobilenet_v2 | PyTorch | 1, 3, 224, 224 | Torchvision Link | quantized_mobilenet_v2.py |
| quantized_mobilenet_v3_large | PyTorch | 1, 3, 224, 224 | Torchvision Link | quantized_mobilenet_v3_large.py |
| quantized_resnet18 | PyTorch | 1, 3, 224, 224 | Torchvision Link | quantized_resnet18.py |
| quantized_resnet50 | PyTorch | 1, 3, 224, 224 | Torchvision Link | quantized_resnet50.py |
| quantized_resnext101_32x8d | PyTorch | 1, 3, 224, 224 | Torchvision Link | quantized_resnext101_32x8d.py |
| quantized_resnext101_64x4d | PyTorch | 1, 3, 224, 224 | Torchvision Link | quantized_resnext101_64x4d.py |
| quantized_shufflenet_v2_x0_5 | PyTorch | 1, 3, 224, 224 | Torchvision Link | quantized_shufflenet_v2_x0_5.py |
| quantized_shufflenet_v2_x1_0 | PyTorch | 1, 3, 224, 224 | Torchvision Link | quantized_shufflenet_v2_x1_0.py |
| quantized_shufflenet_v2_x1_5 | PyTorch | 1, 3, 224, 224 | Torchvision Link | quantized_shufflenet_v2_x1_5.py |
| quantized_shufflenet_v2_x2_0 | PyTorch | 1, 3, 224, 224 | Torchvision Link | quantized_shufflenet_v2_x2_0.py |
| rcnn-ilsvrc13-9 | ONNX | 1, 3, 224, 224 | ONNX Zoo Link | rcnn-ilsvrc13-9_fp32_224_224.py |
| regnet_x_1_6gf | PyTorch | 1, 3, 224, 224 | Torchvision Link | regnet_x_1_6gf.py |
| regnet_x_8gf | PyTorch | 1, 3, 224, 224 | Torchvision Link | regnet_x_8gf.py |
| regnet_x_16gf | PyTorch | 1, 3, 224, 224 | Torchvision Link | regnet_x_16gf.py |
| regnet_x_32gf | PyTorch | 1, 3, 224, 224 | Torchvision Link | regnet_x_32gf.py |
| regnet_x_3_2gf | PyTorch | 1, 3, 224, 224 | Torchvision Link | regnet_x_3_2gf.py |
| regnet_x_400mf | PyTorch | 1, 3, 224, 224 | Torchvision Link | regnet_x_400mf.py |
| regnet_x_800mf | PyTorch | 1, 3, 224, 224 | Torchvision Link | regnet_x_800mf.py |
| regnet_y_1_6gf | PyTorch | 1, 3, 224, 224 | Torchvision Link | regnet_y_1_6gf.py |
| regnet_y_8gf | PyTorch | 1, 3, 224, 224 | Torchvision Link | regnet_y_8gf.py |
| regnet_y_16gf | PyTorch | 1, 3, 224, 224 | Torchvision Link | regnet_y_16gf.py |
| regnet_y_128gf | PyTorch | 1, 3, 224, 224 | Torchvision Link | regnet_y_128gf.py |
| regnet_y_32gf | PyTorch | 1, 3, 224, 224 | Torchvision Link | regnet_y_32gf.py |
| regnet_y_3_2gf | PyTorch | 1, 3, 224, 224 | Torchvision Link | regnet_y_3_2gf.py |
| regnet_y_400mf | PyTorch | 1, 3, 224, 224 | Torchvision Link | regnet_y_400mf.py |
| regnet_y_800mf | PyTorch | 1, 3, 224, 224 | Torchvision Link | regnet_y_800mf.py |
| regnetx-3.2gf | PyTorch | 1, 3, 224, 224 | OpenVINO Link | regnetx-3_2gf.py |
| repvgg-a0 | PyTorch | 1, 3, 224, 224 | OpenVINO Link | repvgg-a0.py |
| repvgg-b1 | PyTorch | 1, 3, 224, 224 | OpenVINO Link | repvgg-b1.py |
| repvgg-b3 | PyTorch | 1, 3, 224, 224 | OpenVINO Link | repvgg-b3.py |
| resnet-18-pytorch | PyTorch | 1, 3, 224, 224 | OpenVINO Link | resnet-18-pytorch.py |
| resnet-34-pytorch | PyTorch | 1, 3, 224, 224 | OpenVINO Link | resnet-34-pytorch.py |
| resnet-50-pytorch | PyTorch | 1, 3, 224, 224 | OpenVINO Link | resnet-50-pytorch.py |
| resnet-50-tf | TensorFlow | 1, 224, 224, 3 | OpenVINO Link | resnet-50-tf.py |
| resnet101 | PyTorch | 1, 3, 224, 224 | Torchvision Link | resnet101.py |
| resnet101-v1-7 | ONNX | 1, 3, 224, 224 | ONNX Zoo Link | resnet101-v1-7_fp32_224_224.py |
| resnet152 | PyTorch | 1, 3, 224, 224 | Torchvision Link | resnet152.py |
| resnet152-v1-7 | ONNX | 1, 3, 224, 224 | ONNX Zoo Link | resnet152-v1-7_fp32_224_224.py |
| resnet18 | PyTorch | 1, 3, 224, 224 | Torchvision Link | resnet18.py |
| resnet34 | PyTorch | 1, 3, 224, 224 | Torchvision Link | resnet34.py |
| resnet50 | PyTorch | 1, 3, 224, 224 | Torchvision Link | resnet50.py |
| resnet50-v1-12 | ONNX | 1, 3, 224, 224 | ONNX Zoo Link | resnet50-v1-12_fp32_224_224.py |
| resnet50-v1-7 | ONNX | 1, 3, 224, 224 | ONNX Zoo Link | resnet50-v1-7_fp32_224_224.py |
| resnet50-v2-7 | ONNX | 1, 3, 224, 224 | ONNX Zoo Link | resnet50-v2-7_fp32_224_224.py |
| resnext101_32x8d | PyTorch | 1, 3, 224, 224 | Torchvision Link | resnext101_32x8d.py |
| resnext101_64x4d | PyTorch | 1, 3, 224, 224 | Torchvision Link | resnext101_64x4d.py |
| resnext50_32x4d | PyTorch | 1, 3, 224, 224 | Torchvision Link | resnext50_32x4d.py |
| shufflenet_v2_x0_5 | PyTorch | 1, 3, 224, 224 | Torchvision Link | shufflenet_v2_x0_5.py |
| shufflenet_v2_x1_0 | PyTorch | 1, 3, 224, 224 | Torchvision Link | shufflenet_v2_x1_0.py |
| shufflenet_v2_x1_5 | PyTorch | 1, 3, 224, 224 | Torchvision Link | shufflenet_v2_x1_5.py |
| shufflenet_v2_x2_0 | PyTorch | 1, 3, 224, 224 | Torchvision Link | shufflenet_v2_x2_0.py |
| shufflenet-v2-x1.0 | PyTorch | 1, 3, 224, 224 | OpenVINO Link | shufflenet-v2-x1_0.py |
| single-human-pose-estimation-0001 | PyTorch | 1, 3, 384, 288 | OpenVINO Link | single-human-pose-estimation-0001.py |
| squeezenet1_0 | PyTorch | 1, 3, 224, 224 | Torchvision Link | squeezenet1_0.py |
| squeezenet1_1 | PyTorch | 1, 3, 224, 224 | Torchvision Link | squeezenet1_1.py |
| vgg11 | PyTorch | 1, 3, 224, 224 | Torchvision Link | vgg11.py |
| vgg11_bn | PyTorch | 1, 3, 224, 224 | Torchvision Link | vgg11_bn.py |
| vgg13 | PyTorch | 1, 3, 224, 224 | Torchvision Link | vgg13.py |
| vgg13_bn | PyTorch | 1, 3, 224, 224 | Torchvision Link | vgg13_bn.py |
| vgg16 | PyTorch | 1, 3, 224, 224 | Torchvision Link | vgg16.py |
| vgg16_bn | PyTorch | 1, 3, 224, 224 | Torchvision Link | vgg16_bn.py |
| vgg16-bn-7 | ONNX | 1, 3, 224, 224 | ONNX Zoo Link | vgg16-bn-7_fp32_224_224.py |
| vgg19 | PyTorch | 1, 3, 224, 224 | Torchvision Link | vgg19.py |
| vgg19_bn | PyTorch | 1, 3, 224, 224 | Torchvision Link | vgg19_bn.py |
| vgg19-7 | ONNX | 1, 3, 224, 224 | ONNX Zoo Link | vgg19-7_fp32_224_224.py |
| vgg19-bn-7 | ONNX | 1, 3, 224, 224 | ONNX Zoo Link | vgg19-bn-7_fp32_224_224.py |
| wide_resnet101_2 | PyTorch | 1, 3, 224, 224 | Torchvision Link | wide_resnet101_2.py |
| wide_resnet50_2 | PyTorch | 1, 3, 224, 224 | Torchvision Link | wide_resnet50_2.py |
| yolo-v2-tiny-tf | TensorFlow | 1, 416, 416, 3 | OpenVINO Link | yolo-v2-tiny-tf.py |
| yolo-v3-tf | TensorFlow | 1, 416, 416, 3 | OpenVINO Link | yolo-v3-tf.py |
| yolo-v3-tiny-tf | TensorFlow | 1, 416, 416, 3 | OpenVINO Link | yolo-v3-tiny-tf.py |
| yolo-v4-tf | TensorFlow | 1, 416, 416, 3 | OpenVINO Link | yolo-v4-tf.py |
| yolo-v4-tiny-tf | Keras | 1, 416, 416, 3 | OpenVINO Link | yolo-v4-tiny-tf.py |
| yolof | PyTorch | 1, 3, 608, 608 | OpenVINO Link | yolof.py |
| zfnet512-9 | ONNX | 1, 3, 224, 224 | ONNX Zoo Link | zfnet512-9_fp32_224_224.py |

Downloading the Models

SiMa.ai's subset of compatible models references repositories such as Torchvision, ONNX Model Zoo, and Open Model Zoo for OpenVINO. These repositories offer pretrained models in floating-point 32-bit (FP32) format, which must be quantized and compiled for SiMa.ai's MLSoC using Palette's ModelSDK. To this end, this repository provides helper scripts that fetch the models from the original pretrained-model repositories. Instructions for obtaining the models from each source are given below. For model details, refer to the original papers, datasets, and other material in the corresponding source links. Bring your data and get started on running models of interest on SiMa.ai's MLSoC.

Torchvision

Torchvision's torchvision.models subpackage offers ML model architectures along with pretrained weights. SiMa.ai's ModelSDK can consume PyTorch models that include both the topology and the weights, either as TorchScript or exported to ONNX. Given developers' familiarity with ONNX, this repository provides a helper script (torchvision_to_onnx.py) that downloads Torchvision models and converts them to ONNX automatically.
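
Under the hood, such a conversion amounts to loading the pretrained Torchvision model and exporting it with torch.onnx.export. Below is a minimal sketch of that flow, assuming torchvision >= 0.13 for the weights API; the actual torchvision_to_onnx.py in this repository may differ in its options and defaults.

import torch
from torchvision import models

# Load the pretrained FP32 model and switch it to inference mode.
model = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
model.eval()

# Dummy input matching the "Input Shape" column of the Model List (NCHW).
dummy_input = torch.randn(1, 3, 224, 224)

# Export the topology and weights to ONNX for use with the Palette ModelSDK.
torch.onnx.export(model, dummy_input, "densenet121.onnx", opset_version=13)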

  • To use the script, either clone this repository inside the Palette Docker container, or download the script and copy it into the container.
  • From inside the Palette container, the following command can be used to download and convert models:

user123@9bb247385914:/home/docker/sima-cli/models$ python torchvision_to_onnx.py --model_name densenet121
Downloading: "https://github.com/pytorch/vision/zipball/v0.16.0" to /root/.cache/torch/hub/v0.16.0.zip
/usr/local/lib/python3.10/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
  warnings.warn(
/usr/local/lib/python3.10/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=DenseNet121_Weights.IMAGENET1K_V1`. You can also use `weights=DenseNet121_Weights.DEFAULT` to get the most up-to-date weights.
  warnings.warn(msg)
Downloading: "https://download.pytorch.org/models/densenet121-a639ec97.pth" to /root/.cache/torch/hub/checkpoints/densenet121-a639ec97.pth
100%|████████████████████████████████████████████████████████████████████████████████████████████████████| 30.8M/30.8M [00:00<00:00, 52.6MB/s]
Before torch.onnx.export tensor([[[[ 1.7745,  0.7670, -0.2136,  ..., -1.5743, -0.4873,  1.0913],
          [ 0.0137, -0.9518,  0.8827,  ..., -0.1733, -0.1817,  2.1811],
          [ 0.6135, -0.9099, -2.0007,  ...,  0.3961, -0.4789, -1.5344],
          ...,
          [-1.1500, -0.1356,  0.5894,  ..., -1.2137,  0.8792,  0.6761],
          [-0.3458, -0.6029,  0.9585,  ...,  0.0141, -1.8495, -0.9339],
          [-0.4006, -1.1134, -0.3972,  ..., -0.5107, -0.8084, -1.4360]]]])
============== Diagnostic Run torch.onnx.export version 2.0.1+cpu ==============
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================

After torch.onnx.export

  • The downloaded and converted model now appears in the working directory:
user123@9bb247385914:/home/docker/sima-cli/models$ ls
densenet121.onnx  LICENSE.txt  README.md	scripts  torchvision_to_onnx.py  

  • The model is now successfully downloaded from the Torchvision repository and ready for use with Palette tools.
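
Optionally, the exported model can be sanity-checked before compilation by running a single inference with ONNX Runtime. This step is not required by Palette and assumes the onnxruntime pip package is available:

import numpy as np
import onnxruntime as ort

# Load the exported model on CPU.
session = ort.InferenceSession("densenet121.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

# Random NCHW input with the shape used during export.
x = np.random.randn(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {input_name: x})
print(outputs[0].shape)  # (1, 1000) class logits for an ImageNet classifier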

ONNX Model Zoo

ONNX Model Zoo is a repository of pretrained ML models for various tasks, including computer vision. To download a Palette-supported pretrained model from the ONNX Model Zoo, use the link provided in the Model List section above. Palette has been verified against the specific model version indicated by the model name suffix, so it is recommended to download exactly that version. For example, to download the mobilenetv2-7 model, locate the correct version at the link in the table above; the ONNX model can then be downloaded manually or with wget as shown below.

user123@9bb247385914:/home/docker/sima-cli/models$ wget https://github.com/onnx/models/blob/main/archive/vision/classification/mobilenet/model/mobilenetv2-7.onnx
--2023-12-22 01:55:36--  https://github.com/onnx/models/blob/main/archive/vision/classification/mobilenet/model/mobilenetv2-7.onnx
Resolving github.com (github.com)... 140.82.112.4
Connecting to github.com (github.com)|140.82.112.4|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 10145 (9.9K) [text/plain]
Saving to: ‘mobilenetv2-7.onnx’

mobilenetv2-7.onnx               100%[=========================================================>]   9.91K  --.-KB/s    in 0s      

2023-12-22 01:55:37 (75.9 MB/s) - ‘mobilenetv2-7.onnx’ saved [10145/10145]
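
Since the numeric suffix of an ONNX Zoo model name encodes its opset version (mobilenetv2-7 corresponds to opset 7), a quick check with the onnx Python package can confirm that the downloaded file is a valid model of the expected version. A minimal sketch, assuming the onnx package is installed:

import onnx

# Parse the downloaded file and validate the graph.
model = onnx.load("mobilenetv2-7.onnx")
onnx.checker.check_model(model)  # raises an exception on an invalid model

# Print the opset version; expect 7 for mobilenetv2-7.
print(model.opset_import[0].version)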

Open Model Zoo for OpenVINO

Intel's Open Model Zoo for OpenVINO offers a helper tool, omz_downloader, to download pretrained models to the local system. It ships as part of the openvino-dev package, installable via pip. Follow the instructions in the OpenVINO installation guide to install omz_downloader and download pretrained OpenVINO models. Where the original pretrained model is in *.pth format, it must be converted to *.onnx format using the omz_converter tool or another PyTorch-to-ONNX converter.
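
For example, downloading one of the OpenVINO models from the list above and converting it to ONNX typically looks like the following (colorization-v2 is used here purely as an illustration):

pip install openvino-dev
omz_downloader --name colorization-v2
omz_converter --name colorization-v2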

Model Calibration/Compilation

Helper scripts to compile each model are provided in this repository; their source code can be reviewed via the links in the Model List section. The compilation scripts come preconfigured with settings for input resolution, calibration scheme, quantization method, and so on. These can be adjusted as needed; full details on the available compile options are in the Palette User Guide, available through the SiMa.ai developer zone. After cloning this repository, download the model of interest and open the corresponding script for that model. Make sure the model file path referenced by the model_path variable in the helper script is correct.

  • The model can be compiled from the Palette Docker container using the corresponding helper script with the command: python [HELPER_SCRIPT]
user123@9bb247385914:/home/docker/sima-cli/models$ python torchvision_to_onnx.py --model_name densenet121
user123@9bb247385914:/home/docker/sima-cli/models$ mkdir models && mv densenet121.onnx models/
user123@9bb247385914:/home/docker/sima-cli/models$ ls ./models
densenet121.onnx
user123@9bb247385914:/home/docker/sima-cli/models$ python scripts/densenet121/densenet121.py
Model SDK version: 1.3.0

Running calibration ...DONE
...
Running quantization ...DONE
  • After successful compilation, the resulting files are generated in the result/[MODEL_NAME_CALIBRATION_OPTIONS]/mpk folder, which contains the *.yaml, *.json, and *.lm files produced as outputs of compilation. Together, these files can be used for performance estimation as described in the Palette User Guide shipped with Palette.
user123@9bb247385914:/home/docker/sima-cli/models$ ls
LICENSE.txt  models  README.md	result	scripts  torchvision_to_onnx.py
user123@9bb247385914:/home/docker/sima-cli/models$ ls result/
densenet121_asym_True_per_ch_True
user123@9bb247385914:/home/docker/sima-cli/models$ ls result/densenet121_asym_True_per_ch_True/mpk/
densenet121_mpk.tar.gz	densenet121_stage1_mla_compressed.mlc  densenet121_stage1_mla.ifm.mlc  densenet121_stage1_mla.mlc  densenet121_stage1_mla.ofm_chk.mlc

License

The primary license for the models in the SiMa.ai Model List is the BSD 3-Clause License; see LICENSE. However:

Certain models may be subject to additional restrictions and/or terms. To the extent a LICENSE.txt file is provided for a particular model, please review the LICENSE.txt file carefully as you are responsible for your compliance with such restrictions and/or terms.

Footnotes

  1. Models that compile and run fully on the SiMa.ai MLSoC Machine Learning Accelerator (MLA) engine.
