Deep Learning Model Package Management Tool
This section introduces how to use the deep learning model package management tool.
Introduction
The deep learning model package management tool manages all deep learning model packages in Mech-MSR. You can use it to optimize model packages exported by Mech-DLK 2.6.1 or above, and to manage and monitor the operation mode, hardware type, model efficiency, and status of each model package. In addition, the tool can monitor the GPU usage of the IPC.
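If you want to cross-check the GPU usage reported by the tool, you can query it directly on the IPC. The sketch below is a minimal illustration only, assuming the IPC has an NVIDIA GPU with `nvidia-smi` available on the PATH; it is not part of the tool itself.

```python
import subprocess
import time

# Print the current GPU utilization once per second, similar to the
# GPU usage figure that the management tool displays for the IPC.
while True:
    usage = subprocess.run(
        ["nvidia-smi", "--query-gpu=utilization.gpu", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print("GPU utilization:", usage)
    time.sleep(1)
```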
If a “Deep Learning Model Package Inference” Step is used in the project, you can import the model package into the deep learning model package management tool first and then use the models in the Step. Importing the model package into the tool beforehand allows the model package to be optimized in advance.
Get Started
You can open the tool in the following ways:
- After creating or opening a project, select the deep learning model package management tool in the menu bar.
- In the graphical programming workspace, click the Config wizard button on the “Deep Learning Model Package Inference” Step.
- In the graphical programming workspace, select the “Deep Learning Model Package Inference” Step, and then click the Open the editor button in Model Manager Tool under the Parameters section.
Interface Description
The options in this interface are described as follows:
| Field | Description |
| --- | --- |
| Available model package | The names of the imported model packages. |
| Project name | The Mech-MSR projects that use the model package. |
| Model package type | The type of the model package, such as Text Detection and Text Recognition. |
| Operation mode | The operation mode of the model package during inference: Sharing mode or Performance mode. |
| Hardware type | The hardware type used for model package inference: GPU (default), GPU (optimization), or CPU. |
| Model efficiency | You can configure the inference efficiency of the model package here. |
| Model package status | The status of the model package, such as “Loading and optimizing”, “Loading completed”, and “Optimization failed”. |
Common Operations
Follow the steps below to learn about common procedures for using the deep learning model package management tool.
Import the Deep Learning Model Package
1. Open the deep learning model package management tool, and click the Import button in the upper-left corner.
2. In the pop-up window, select the model package you want to import, and click the Open button. The model package will then appear in the tool’s list.
To import a model package successfully, the graphics driver must be version 472.50 or above, and the CPU must be a 6th-generation Intel Core processor or later. A graphics driver above version 500 is not recommended, as it may cause fluctuations in the execution time of deep learning Steps. If the hardware does not meet these requirements, the deep learning model package cannot be imported successfully.
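If you are unsure whether the IPC meets the driver requirement, you can query it before importing. The following is a minimal sketch, assuming an NVIDIA GPU with `nvidia-smi` on the PATH; the thresholds are the ones stated above, and the CPU model string still needs to be read manually for its Core generation.

```python
import platform
import subprocess

# Read the installed NVIDIA driver version (first GPU), e.g. "472.50".
out = subprocess.run(
    ["nvidia-smi", "--query-gpu=driver_version", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
).stdout
driver = out.splitlines()[0].strip()

if float(driver) < 472.50:
    print(f"Driver {driver} is below the required minimum 472.50.")
elif float(driver) > 500:
    print(f"Driver {driver} is above 500; deep learning Step execution time may fluctuate.")
else:
    print(f"Driver {driver} meets the requirement.")

# Check the CPU model string manually for a 6th-generation or later Intel Core.
print("CPU:", platform.processor())
```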
Remove the Imported Deep Learning Model Package
To remove an imported deep learning model package, select the model package, and then click the Remove button in the upper-right corner.

A model package cannot be removed while it is Loading and optimizing or while the project using it is running.
Switch the Operation Mode
To switch the Operation mode for deep learning model package inference, click the button in the Operation mode column of the deep learning model package management tool, and select Sharing mode or Performance mode.

|
Switch the Hardware Type
You can change the hardware type for deep learning model package inference to GPU (default), GPU (optimization), or CPU.
Click the button in the Hardware type column in the deep learning model package management tool, and select GPU (default), GPU (optimization), or CPU.

The Hardware type cannot be changed while the deep learning model package is Loading and optimizing or In use (i.e., while the project using the model package is running).
Configure the Model Efficiency
The process of configuring model efficiency is as follows:
1. Determine the deep learning model package to be configured.
2. Click the corresponding Configure button in the Model efficiency column, and set the Batch size and Precision in the pop-up window. The model execution efficiency is affected by the batch size and precision parameters.
   - Batch size: the number of images passed through the neural network at once during inference. It defaults to 1 and cannot be changed.
   - Precision (only available when Hardware type is set to “GPU (optimization)”):
     - FP32: higher model accuracy, lower inference speed.
     - FP16: lower model accuracy, higher inference speed. (See the sketch after this list for an illustration of the trade-off.)
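The following minimal sketch (an illustration only, assuming NumPy is installed; it is not part of the tool) shows why FP16 trades accuracy for speed: the same value keeps fewer significant digits and occupies half the memory.

```python
import numpy as np

# The same weight value stored at the two precisions the tool offers.
w = 0.1234567
w32 = np.float32(w)
w16 = np.float16(w)

print(f"FP32: {w32:.7f}  (rounding error {abs(w - float(w32)):.1e})")
print(f"FP16: {w16:.7f}  (rounding error {abs(w - float(w16)):.1e})")

# FP16 halves the memory per value, which enables faster inference,
# but keeps roughly 3 significant decimal digits versus about 7 for FP32.
print("Bytes per value:", w32.nbytes, "vs", w16.nbytes)
```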
Troubleshooting
Fail to Import a Deep Learning Model Package
Symptom
After selecting a deep learning model package to import, a message saying “Deep learning model import failed” appears.
Possible causes
- A model package with the same name has already been imported.
- A model package with the same content has already been imported.
- The hardware and software do not meet the requirements.
Solutions
- Modify the model package name, or remove the previously imported model package.
- Check the contents of the model package. If they are the same as those of an already imported model package, you do not need to import it again.
- Ensure that the graphics driver is version 472.50 or above and that the CPU is a 6th-generation Intel Core processor or later (see Import the Deep Learning Model Package above).
Fail to Optimize a Deep Learning Model Package
Symptom
When optimizing a deep learning model package, an error message saying “Model package optimization failed” appears.
Possible cause
Insufficient GPU memory.
Solutions
- Remove unused model packages from the tool, and then re-import the model package for optimization.
- Switch the Operation mode of the other model packages to Sharing mode, and then import the model package for optimization again. (The sketch below shows how to check the current GPU memory usage on the IPC.)
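To confirm that GPU memory is indeed the bottleneck, you can inspect the current usage before re-importing. This is a minimal sketch, assuming an NVIDIA GPU with `nvidia-smi` available on the IPC; it is not a feature of the tool itself.

```python
import subprocess

# Report used and total GPU memory (MiB) for every GPU on the IPC.
out = subprocess.run(
    ["nvidia-smi",
     "--query-gpu=index,memory.used,memory.total",
     "--format=csv,noheader,nounits"],
    capture_output=True, text=True, check=True,
).stdout

for line in out.strip().splitlines():
    index, used, total = (field.strip() for field in line.split(","))
    print(f"GPU {index}: {used}/{total} MiB used, {int(total) - int(used)} MiB free")
```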