Deep Learning Model Package Management Tool

This section introduces how to use the deep learning model package management tool.

Introduction

The deep learning model package management tool manages all deep learning model packages in Mech-MSR. You can use it to optimize model packages exported by Mech-DLK 2.6.1 or above and to manage and monitor their operation mode, hardware type, model efficiency, and status. In addition, the tool can monitor the GPU usage of the IPC.

If a “Deep Learning Model Package Inference” Step is used in the project, you can import the model package into the deep learning model package management tool first and then use the models in the Step. Importing the model package into the tool ahead of time allows it to be optimized in advance.

Get Started

You can open the tool in the following ways:

  • After creating or opening a project, select Deep Learning › Deep Learning Model Package Management Tool in the menu bar.

  • In the graphical programming workspace, click the Config wizard button on the “Deep Learning Model Package Inference” Step.

  • In the graphical programming workspace, select the “Deep Learning Model Package Inference” Step, and then click the Open the editor button of Model Manager Tool in the Parameters section.

Interface Description

The fields in this interface are described as follows:

Available model package

The names of imported model packages.

Project name

The Mech-MSR projects that use the model package.

Model package type

The type of the model package, such as Text Detection and Text Recognition.

Currently, the Fast Positioning model package is not supported.

Operation mode

The operation mode of the model package during inference, including Sharing mode and Performance mode.

  • Sharing mode: When multiple Steps use the same model package, the inferences are performed one by one in sequence, and less memory is used.

  • Performance mode: When multiple Steps use the same model package, the inferences are performed simultaneously, which makes inference relatively fast but uses more memory. (A conceptual sketch of the two modes follows this list.)
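
The difference between the two modes can be pictured as one shared model instance guarded by a lock versus one model instance per Step. The sketch below is a conceptual illustration only; the Model class and its infer() method are hypothetical placeholders, not Mech-MSR code.

    import threading
    import time

    class Model:
        """Hypothetical stand-in for a loaded deep learning model package."""
        def infer(self, image):
            time.sleep(0.1)  # simulate inference latency
            return f"result for {image}"

    # Sharing mode: all Steps share one instance; a lock serializes the
    # calls, so inferences run one by one but only one copy uses memory.
    shared_model = Model()
    shared_lock = threading.Lock()

    def sharing_mode_step(image):
        with shared_lock:
            return shared_model.infer(image)

    # Performance mode: each Step holds its own instance, so inferences
    # can run in parallel at the cost of one model copy per Step.
    def performance_mode_step(image):
        own_model = Model()  # in practice, loaded once per Step
        return own_model.infer(image)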

Hardware type

The hardware type used for model package inference, including GPU (default), GPU (optimization), and CPU.

  • CPU: Use the CPU for model package inference. Compared with GPU inference, this increases inference time and may reduce recognition accuracy.

  • GPU (default): Run model package inference without optimizing it for the hardware; inference is not accelerated.

  • GPU (optimization): Run model package inference after optimizing it for the hardware. The optimization only needs to be done once and typically takes 5 to 15 minutes; inference time is reduced afterward.

The deep learning model package management tool determines the available Hardware type options by detecting the computer's hardware. The display rules for each option are as follows.

  • CPU: This option is shown when a computer with an Intel CPU is detected.

  • GPU (default), GPU (optimization): These options are displayed when a computer with an NVIDIA discrete graphics card is detected and the graphics card driver version is 472.50 or later.

Model efficiency

You can configure the inference efficiency of the model package.

Model package status

The status of the model package, such as “Loading and optimizing”, “Loading completed”, and “Optimization failed”.

Common Operations

Follow the steps below to learn about common procedures for using the deep learning model package management tool.

Import the Deep Learning Model Package

  1. Open the deep learning model package management tool, and click the Import button in the upper left corner.

  2. In the pop-up window, select the model package you want to import, and click the Open button. The model package will appear in the tool list.

To import a model package successfully, the graphics driver version must be 472.50 or later, and the CPU must be at least a 6th-generation Intel Core processor. Graphics driver versions above 500 are not recommended, as they may cause fluctuations in the execution time of deep learning Steps. If the hardware does not meet these requirements, the deep learning model package cannot be imported.
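
If you are unsure of the installed driver version, one way to check it is to query nvidia-smi, for example from a short Python script. This sketch uses only standard nvidia-smi query flags; the thresholds come from the note above.

    import subprocess

    # Query the installed NVIDIA driver version.
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=driver_version", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    # Versions look like "472.50" (sometimes with a third component);
    # keep the first two components for a numeric comparison.
    version = float(".".join(out.stdout.strip().splitlines()[0].split(".")[:2]))

    if version < 472.50:
        print(f"Driver {version} is below the 472.50 minimum; update it.")
    elif version >= 500:
        print(f"Driver {version} is 500 or above; Step execution time may fluctuate.")
    else:
        print(f"Driver {version} meets the requirements.")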

Remove the Imported Deep Learning Model Package

If you want to remove an imported deep learning model package, select the model package first, and click the Remove button in the upper right corner.


When the deep learning model package is in the “Loading and optimizing” state or the project using it is running, the model package cannot be removed.

Switch the Operation Mode

If you want to switch the Operation mode for deep learning model package inference, click the icon in the Operation mode column of the deep learning model package management tool, and select Sharing mode or Performance mode.

  • When the deep learning model package is Loading and optimizing or In use (i.e., the project using the model package is running), the Operation mode cannot be changed.

  • When the operation mode of the model package is Sharing mode, the GPU ID in the Parameters section of the “Deep Learning Model Package Inference” Step cannot be changed.

Switch Hardware Type

You can change the hardware type for deep learning model package inference to GPU (default), GPU (optimization), or CPU.

Click the icon in the Hardware type column of the deep learning model package management tool, and select GPU (default), GPU (optimization), or CPU.


When the deep learning model package is Loading and optimizing or In use (i.e., the project using the model package is running), the Hardware type cannot be changed.

Configure the Model Efficiency

The process of configuring model efficiency is as follows:

  1. Determine the deep learning model package to be configured.

  2. Click the corresponding Configure button in the Model efficiency column, and set the Batch size and Precision in the pop-up window. Model execution efficiency is affected by these two parameters.

    • Batch size: The number of images that will be passed through the neural network at once during inference. It defaults to 1 and cannot be changed.

    • Precision (only available when Hardware type is set to “GPU (optimization)”):

      • FP32: high model accuracy, low inference speed.

      • FP16: low model accuracy, high inference speed. (A short illustration of the trade-off follows this list.)
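
As a rough illustration of the FP32/FP16 trade-off (generic NumPy code, not Mech-MSR functionality): half-precision values occupy half the memory and round more coarsely, which is why FP16 inference is faster but slightly less accurate.

    import numpy as np

    x32 = np.random.rand(1000, 1000).astype(np.float32)  # FP32 tensor
    x16 = x32.astype(np.float16)                         # same values in FP16

    print(x32.nbytes)  # 4000000 bytes
    print(x16.nbytes)  # 2000000 bytes -- half the memory
    # Rounding error introduced by the FP16 representation:
    print(np.abs(x32 - x16.astype(np.float32)).max())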

Troubleshooting

Fail to Import a Deep Learning Model Package

Symptom

After a deep learning model package is selected for import, a message saying “Deep learning model import failed” appears.

Possible causes

  1. A model package with the same name has been imported.

  2. Model packages with the same content have been imported.

  3. Hardware and software do not meet the requirements.

Solutions

  1. Modify the model package name or remove the imported model package.

  2. Check the contents of the model package. If they are the same as those of an already imported model package, you do not need to import it again.

  3. Ensure that the graphics driver version is 472.50 or later and that the CPU is at least a 6th-generation Intel Core processor.

Fail to Optimize a Deep Learning Model Package

Symptom

When a deep learning model package is being optimized, an error message saying “Model package optimization failed” appears.

Possible cause

Insufficient GPU memory.

Solutions

  • Remove unused model packages in the tool, and then re-import the model package for optimization.

  • Switch the Operation mode of other model packages to Sharing mode, and then import the model package for optimization again. (See the memory check sketch below.)
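
To confirm that GPU memory is in fact the bottleneck before retrying, you can check current usage with nvidia-smi. The sketch below uses only standard nvidia-smi query flags:

    import subprocess

    # Query used and total GPU memory in MiB.
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=memory.used,memory.total",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    )
    used, total = (int(v) for v in out.stdout.strip().splitlines()[0].split(","))
    print(f"GPU memory: {used} MiB used / {total} MiB total "
          f"({total - used} MiB free)")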
