DDRNet23-Slim: Optimized for Qualcomm Devices
DDRNet23-Slim is a machine learning model that segments an image into semantic classes, with a focus on road-based scenes. It is designed for self-driving applications.
This is based on the implementation of DDRNet23-Slim found here. This repository contains pre-exported model files optimized for Qualcomm® devices. You can use the Qualcomm® AI Hub Models library to export the model with custom configurations. More details on model performance across various devices can be found here.
Qualcomm AI Hub Models uses Qualcomm AI Hub Workbench to compile, profile, and evaluate this model. Sign up to run these models on a hosted Qualcomm® device.
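Once you have signed up, you can verify access from Python with the `qai-hub` client. The sketch below is illustrative only and assumes the client is installed and an API token has already been configured as described in the AI Hub documentation.

```python
# Minimal sketch: confirm Qualcomm AI Hub access by listing hosted devices.
# Assumes the qai-hub client is installed (pip install qai-hub) and an API
# token has already been configured for this machine.
import qai_hub as hub

# Query the devices available for compile, profile, and inference jobs.
for device in hub.get_devices():
    print(device.name)
```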
Getting Started
There are two ways to deploy this model on your device:
Option 1: Download Pre-Exported Models
Below are pre-exported model assets ready for deployment.
| Runtime | Precision | Chipset | SDK Versions | Download |
|---|---|---|---|---|
| ONNX | float | Universal | QAIRT 2.37, ONNX Runtime 1.23.0 | Download |
| ONNX | w8a8 | Universal | QAIRT 2.37, ONNX Runtime 1.23.0 | Download |
| QNN_DLC | float | Universal | QAIRT 2.42 | Download |
| QNN_DLC | w8a8 | Universal | QAIRT 2.42 | Download |
| TFLITE | float | Universal | QAIRT 2.42, TFLite 2.17.0 | Download |
| TFLITE | w8a8 | Universal | QAIRT 2.42, TFLite 2.17.0 | Download |
For more device-specific assets and performance metrics, visit DDRNet23-Slim on Qualcomm® AI Hub.
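As an illustration of how a downloaded asset can be used, the sketch below loads a float ONNX file with ONNX Runtime and runs a single inference. The file name `ddrnet23_slim.onnx` and the NCHW input layout are assumptions; inspect the actual asset and its input metadata after download.

```python
# Hypothetical sketch: run a downloaded float ONNX asset with ONNX Runtime.
# The file name and input layout (1, 3, 1024, 2048) are assumptions; check
# the session inputs of your downloaded asset to confirm.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("ddrnet23_slim.onnx")
input_name = session.get_inputs()[0].name
print("Expected input shape:", session.get_inputs()[0].shape)

# Dummy image tensor at the documented 2048x1024 resolution (NCHW assumed).
dummy = np.random.rand(1, 3, 1024, 2048).astype(np.float32)
outputs = session.run(None, {input_name: dummy})
print("Output shape:", outputs[0].shape)
```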
Option 2: Export with Custom Configurations
Use the Qualcomm® AI Hub Models Python library to compile and export the model with your own:
- Custom weights (e.g., fine-tuned checkpoints)
- Custom input shapes
- Target device and runtime configurations
This option is ideal if you need to customize the model beyond the default configuration provided here.
See our repository for DDRNet23-Slim on GitHub for usage instructions.
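As a rough sketch of that workflow, the snippet below loads the model through the Qualcomm AI Hub Models library. The module path `qai_hub_models.models.ddrnet23_slim` and the export entry point follow the library's usual conventions but may differ between versions, so treat the GitHub repository as authoritative.

```python
# Sketch only: load DDRNet23-Slim through Qualcomm AI Hub Models.
# The module path follows the library's naming convention (ddrnet23_slim)
# and may change between releases; install with `pip install qai-hub-models`.
from qai_hub_models.models.ddrnet23_slim import Model

# Default pre-trained checkpoint; a fine-tuned checkpoint can usually be
# supplied instead when customizing weights (see the repository for details).
model = Model.from_pretrained()

# Exporting (compile and profile on a hosted device) is typically driven by
# the model's export script, for example:
#   python -m qai_hub_models.models.ddrnet23_slim.export --device "Samsung Galaxy S24"
# Consult the repository for the exact flags supported by your version.
```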
Model Details
Model Type: Semantic segmentation
Model Stats:
- Model checkpoint: DDRNet23s_imagenet.pth
- Inference latency: Real time
- Input resolution: 2048x1024
- Number of output classes: 19
- Number of parameters: 6.13M
- Model size (float): 21.7 MB
- Model size (w8a8): 6.11 MB
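To make the output concrete: the network produces per-pixel scores over the 19 classes, and taking an argmax over the class dimension yields the segmentation map. The sketch below assumes an NCHW output of shape (1, 19, H, W); the actual layout and spatial size depend on the exported asset.

```python
# Illustrative post-processing, assuming an output tensor of shape
# (1, 19, H, W) with one score per class per pixel (layout is an assumption).
import numpy as np

def to_label_map(logits: np.ndarray) -> np.ndarray:
    """Collapse class scores to a per-pixel class-index map."""
    # argmax over the class axis -> (1, H, W) of integer class ids in [0, 19).
    return np.argmax(logits, axis=1)

# Example with dummy scores at a reduced spatial size.
dummy_logits = np.random.rand(1, 19, 128, 256).astype(np.float32)
labels = to_label_map(dummy_logits)
print(labels.shape, labels.min(), labels.max())
```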
Performance Summary
| Model | Runtime | Precision | Chipset | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit |
|---|---|---|---|---|---|---|
| DDRNet23-Slim | ONNX | float | Snapdragon® X Elite | 24.369 ms | 24 - 24 MB | NPU |
| DDRNet23-Slim | ONNX | float | Snapdragon® 8 Gen 3 Mobile | 16.642 ms | 32 - 253 MB | NPU |
| DDRNet23-Slim | ONNX | float | Qualcomm® QCS8550 (Proxy) | 25.11 ms | 5 - 241 MB | NPU |
| DDRNet23-Slim | ONNX | float | Qualcomm® QCS9075 | 39.885 ms | 24 - 50 MB | NPU |
| DDRNet23-Slim | ONNX | float | Snapdragon® 8 Elite For Galaxy Mobile | 12.623 ms | 7 - 151 MB | NPU |
| DDRNet23-Slim | ONNX | float | Snapdragon® 8 Elite Gen 5 Mobile | 8.933 ms | 12 - 202 MB | NPU |
| DDRNet23-Slim | ONNX | w8a8 | Snapdragon® X Elite | 77.961 ms | 131 - 131 MB | NPU |
| DDRNet23-Slim | ONNX | w8a8 | Snapdragon® 8 Gen 3 Mobile | 57.668 ms | 92 - 286 MB | NPU |
| DDRNet23-Slim | ONNX | w8a8 | Qualcomm® QCS6490 | 300.097 ms | 198 - 216 MB | CPU |
| DDRNet23-Slim | ONNX | w8a8 | Qualcomm® QCS8550 (Proxy) | 75.484 ms | 80 - 343 MB | NPU |
| DDRNet23-Slim | ONNX | w8a8 | Qualcomm® QCS9075 | 63.411 ms | 87 - 90 MB | NPU |
| DDRNet23-Slim | ONNX | w8a8 | Qualcomm® QCM6690 | 269.607 ms | 159 - 168 MB | CPU |
| DDRNet23-Slim | ONNX | w8a8 | Snapdragon® 8 Elite For Galaxy Mobile | 40.839 ms | 85 - 229 MB | NPU |
| DDRNet23-Slim | ONNX | w8a8 | Snapdragon® 7 Gen 4 Mobile | 249.681 ms | 138 - 147 MB | CPU |
| DDRNet23-Slim | ONNX | w8a8 | Snapdragon® 8 Elite Gen 5 Mobile | 40.573 ms | 55 - 198 MB | NPU |
| DDRNet23-Slim | QNN_DLC | float | Snapdragon® X Elite | 33.867 ms | 24 - 24 MB | NPU |
| DDRNet23-Slim | QNN_DLC | float | Snapdragon® 8 Gen 3 Mobile | 22.522 ms | 24 - 307 MB | NPU |
| DDRNet23-Slim | QNN_DLC | float | Qualcomm® QCS8275 (Proxy) | 98.015 ms | 24 - 222 MB | NPU |
| DDRNet23-Slim | QNN_DLC | float | Qualcomm® QCS8550 (Proxy) | 33.275 ms | 24 - 26 MB | NPU |
| DDRNet23-Slim | QNN_DLC | float | Qualcomm® SA8775P | 40.398 ms | 24 - 223 MB | NPU |
| DDRNet23-Slim | QNN_DLC | float | Qualcomm® QCS9075 | 53.448 ms | 24 - 52 MB | NPU |
| DDRNet23-Slim | QNN_DLC | float | Qualcomm® QCS8450 (Proxy) | 66.976 ms | 23 - 307 MB | NPU |
| DDRNet23-Slim | QNN_DLC | float | Qualcomm® SA7255P | 98.015 ms | 24 - 222 MB | NPU |
| DDRNet23-Slim | QNN_DLC | float | Qualcomm® SA8295P | 43.476 ms | 24 - 230 MB | NPU |
| DDRNet23-Slim | QNN_DLC | float | Snapdragon® 8 Elite For Galaxy Mobile | 15.491 ms | 19 - 243 MB | NPU |
| DDRNet23-Slim | QNN_DLC | float | Snapdragon® 8 Elite Gen 5 Mobile | 10.457 ms | 24 - 260 MB | NPU |
| DDRNet23-Slim | QNN_DLC | w8a8 | Snapdragon® X Elite | 58.885 ms | 6 - 6 MB | NPU |
| DDRNet23-Slim | QNN_DLC | w8a8 | Snapdragon® 8 Gen 3 Mobile | 41.962 ms | 6 - 254 MB | NPU |
| DDRNet23-Slim | QNN_DLC | w8a8 | Qualcomm® QCS8275 (Proxy) | 108.13 ms | 6 - 205 MB | NPU |
| DDRNet23-Slim | QNN_DLC | w8a8 | Qualcomm® QCS8550 (Proxy) | 56.332 ms | 6 - 8 MB | NPU |
| DDRNet23-Slim | QNN_DLC | w8a8 | Qualcomm® SA8775P | 57.084 ms | 6 - 206 MB | NPU |
| DDRNet23-Slim | QNN_DLC | w8a8 | Qualcomm® QCS9075 | 59.885 ms | 6 - 14 MB | NPU |
| DDRNet23-Slim | QNN_DLC | w8a8 | Qualcomm® QCS8450 (Proxy) | 61.244 ms | 6 - 254 MB | NPU |
| DDRNet23-Slim | QNN_DLC | w8a8 | Qualcomm® SA7255P | 108.13 ms | 6 - 205 MB | NPU |
| DDRNet23-Slim | QNN_DLC | w8a8 | Qualcomm® SA8295P | 64.395 ms | 6 - 208 MB | NPU |
| DDRNet23-Slim | QNN_DLC | w8a8 | Snapdragon® 8 Elite For Galaxy Mobile | 40.58 ms | 6 - 221 MB | NPU |
| DDRNet23-Slim | QNN_DLC | w8a8 | Snapdragon® 8 Elite Gen 5 Mobile | 46.904 ms | 6 - 239 MB | NPU |
| DDRNet23-Slim | TFLITE | float | Snapdragon® 8 Gen 3 Mobile | 22.461 ms | 1 - 294 MB | NPU |
| DDRNet23-Slim | TFLITE | float | Qualcomm® QCS8275 (Proxy) | 97.957 ms | 3 - 207 MB | NPU |
| DDRNet23-Slim | TFLITE | float | Qualcomm® QCS8550 (Proxy) | 33.017 ms | 2 - 6 MB | NPU |
| DDRNet23-Slim | TFLITE | float | Qualcomm® SA8775P | 40.491 ms | 3 - 206 MB | NPU |
| DDRNet23-Slim | TFLITE | float | Qualcomm® QCS9075 | 53.746 ms | 0 - 40 MB | NPU |
| DDRNet23-Slim | TFLITE | float | Qualcomm® QCS8450 (Proxy) | 66.844 ms | 3 - 298 MB | NPU |
| DDRNet23-Slim | TFLITE | float | Qualcomm® SA7255P | 97.957 ms | 3 - 207 MB | NPU |
| DDRNet23-Slim | TFLITE | float | Qualcomm® SA8295P | 43.415 ms | 2 - 216 MB | NPU |
| DDRNet23-Slim | TFLITE | float | Snapdragon® 8 Elite For Galaxy Mobile | 15.648 ms | 2 - 231 MB | NPU |
| DDRNet23-Slim | TFLITE | float | Snapdragon® 8 Elite Gen 5 Mobile | 10.411 ms | 2 - 245 MB | NPU |
| DDRNet23-Slim | TFLITE | w8a8 | Snapdragon® 8 Gen 3 Mobile | 36.995 ms | 1 - 252 MB | NPU |
| DDRNet23-Slim | TFLITE | w8a8 | Qualcomm® QCS6490 | 172.506 ms | 10 - 78 MB | NPU |
| DDRNet23-Slim | TFLITE | w8a8 | Qualcomm® QCS8275 (Proxy) | 95.269 ms | 1 - 198 MB | NPU |
| DDRNet23-Slim | TFLITE | w8a8 | Qualcomm® QCS8550 (Proxy) | 48.779 ms | 1 - 3 MB | NPU |
| DDRNet23-Slim | TFLITE | w8a8 | Qualcomm® SA8775P | 49.617 ms | 1 - 200 MB | NPU |
| DDRNet23-Slim | TFLITE | w8a8 | Qualcomm® QCS9075 | 51.434 ms | 0 - 15 MB | NPU |
| DDRNet23-Slim | TFLITE | w8a8 | Qualcomm® QCM6690 | 218.866 ms | 9 - 233 MB | NPU |
| DDRNet23-Slim | TFLITE | w8a8 | Qualcomm® QCS8450 (Proxy) | 56.596 ms | 0 - 250 MB | NPU |
| DDRNet23-Slim | TFLITE | w8a8 | Qualcomm® SA7255P | 95.269 ms | 1 - 198 MB | NPU |
| DDRNet23-Slim | TFLITE | w8a8 | Qualcomm® SA8295P | 56.032 ms | 1 - 202 MB | NPU |
| DDRNet23-Slim | TFLITE | w8a8 | Snapdragon® 8 Elite For Galaxy Mobile | 64.905 ms | 1 - 214 MB | NPU |
| DDRNet23-Slim | TFLITE | w8a8 | Snapdragon® 7 Gen 4 Mobile | 65.053 ms | 9 - 210 MB | NPU |
| DDRNet23-Slim | TFLITE | w8a8 | Snapdragon® 8 Elite Gen 5 Mobile | 44.851 ms | 1 - 231 MB | NPU |
License
- The license for the original implementation of DDRNet23-Slim can be found here.
References
- Deep Dual-resolution Networks for Real-time and Accurate Semantic Segmentation of Road Scenes
- Source Model Implementation
Community
- Join our AI Hub Slack community to collaborate, post questions, and learn more about on-device AI.
- For questions or feedback, please reach out to us.
