Comparison of Use Scenarios Between Deep Learning and 3D Matching Algorithms


The 3D matching algorithm matches a point cloud model against the scene point cloud to detect objects in the scene, and then outputs the object poses. In some vision recognition processes, traditional matching and clustering methods may not achieve satisfactory results; in such cases, deep learning algorithms can offer superior recognition capabilities. Deep learning falls within the field of artificial intelligence and relies on neural network models. Given a large amount of input data, deep learning techniques can simulate the human learning process: they extract features from extensive datasets, predict or identify patterns, and then perform the relevant recognition tasks.

This topic explains how to select the appropriate algorithm for different picking scenarios, with the aim of improving the efficiency and accuracy of recognition and picking in practical applications.
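As a rough illustration of what matching a point cloud model to a scene involves, the sketch below uses the open-source Open3D library to perform coarse feature-based matching followed by ICP refinement. This is not Mech-Vision's own implementation; the file names, voxel size, and thresholds are placeholder assumptions.

```python
# Illustrative sketch only: coarse-to-fine point cloud matching with the
# open-source Open3D library. File names, voxel size, and thresholds are
# placeholder assumptions, not values used by Mech-Vision.
import open3d as o3d

def match_model_to_scene(model_path="model.ply", scene_path="scene.ply", voxel=0.005):
    model = o3d.io.read_point_cloud(model_path)   # point cloud model of the object
    scene = o3d.io.read_point_cloud(scene_path)   # captured scene point cloud

    def preprocess(pcd):
        down = pcd.voxel_down_sample(voxel)
        down.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))
        fpfh = o3d.pipelines.registration.compute_fpfh_feature(
            down, o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 5, max_nn=100))
        return down, fpfh

    model_down, model_fpfh = preprocess(model)
    scene_down, scene_fpfh = preprocess(scene)

    # Coarse (global) matching: RANSAC over FPFH feature correspondences.
    coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        model_down, scene_down, model_fpfh, scene_fpfh,
        mutual_filter=True,
        max_correspondence_distance=voxel * 1.5,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
        ransac_n=3,
        checkers=[],
        criteria=o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

    # Fine matching: ICP refines the coarse pose estimate.
    fine = o3d.pipelines.registration.registration_icp(
        model_down, scene_down, voxel * 0.4, coarse.transformation,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())

    return fine.transformation  # 4x4 pose of the object (model frame -> scene frame)
```

For simplicity the sketch returns a single pose; in practice each detected object instance in the scene would be matched and given its own pose.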

Target Objects

| Scenario | | Deep learning algorithm | 3D matching algorithm |
| --- | --- | --- | --- |
| Feature | Position | Objects are closely placed or overlapping, difficult to distinguish individual objects. | Objects are neatly placed and easy to distinguish. |
| | Contour | Varied, difficult to generate a global model. | Fixed, easy to generate a model. |
| | Quantity | Massive, making clustering difficult and global matching slow. | Few, making global matching fast and accurate. |
| Object type | Label | To differentiate between the orientations of the objects. | - |
| | Type | Mixed incoming materials. | Single incoming materials. |
| Stacking method | Neatly layered | - | Able to cluster similar objects. |
| | Randomly stacked | Unable to cluster similar objects. | - |
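The "able/unable to cluster similar objects" entries above refer to separating the scene point cloud into one cluster per object. Below is a minimal sketch of such clustering, using Open3D's DBSCAN implementation as a stand-in; the eps and min_points values are assumptions that would need per-scene tuning.

```python
# Illustrative only: split a scene point cloud into candidate object clusters.
# The eps and min_points values are placeholders and would need per-scene tuning.
import numpy as np
import open3d as o3d

def cluster_objects(scene_path="scene.ply", eps=0.01, min_points=50):
    scene = o3d.io.read_point_cloud(scene_path)
    labels = np.asarray(scene.cluster_dbscan(eps=eps, min_points=min_points))

    clusters = []
    for label in range(labels.max() + 1):   # label -1 marks noise points
        idx = np.where(labels == label)[0].tolist()
        clusters.append(scene.select_by_index(idx))
    return clusters
```

Neatly placed or layered parts tend to yield one clean cluster per object; with heavy overlap or random stacking, clusters merge and per-object matching degrades, which is where deep learning instance segmentation becomes the better choice.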

Imaging Performance

| Imaging performance | | Deep learning algorithm | 3D matching algorithm |
| --- | --- | --- | --- |
| Point cloud quality | Point cloud loss | The matching results are poor, but the 2D features are conspicuous. | - |
| | Point cloud complete | - | Matching with the model or edge model yields better results. |
| Image quality | Clear | The 2D features in the RGB image are conspicuous. | - |
| | Blur | The RGB features are inconspicuous, but the depth map features are conspicuous. | The RGB features are inconspicuous, and the depth map features are inconspicuous. |
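The point cloud and image quality conditions above can be checked with simple heuristics before an algorithm is chosen. The sketch below is only an illustration: the valid-depth-ratio and Laplacian-variance thresholds are arbitrary assumptions, not values from this documentation.

```python
# Illustrative heuristics only; the thresholds are assumptions, not product values.
import cv2
import numpy as np

def depth_completeness(depth_map: np.ndarray) -> float:
    """Fraction of pixels that carry a valid (finite, non-zero) depth value."""
    valid = np.isfinite(depth_map) & (depth_map > 0)
    return float(valid.mean())

def image_sharpness(gray: np.ndarray) -> float:
    """Variance of the Laplacian; lower values indicate a blurrier image."""
    return float(cv2.Laplacian(gray, cv2.CV_64F).var())

def suggest_algorithm(depth_map, gray, min_completeness=0.85, min_sharpness=100.0):
    if depth_completeness(depth_map) >= min_completeness:
        return "3D matching (point cloud is largely complete)"
    if image_sharpness(gray) >= min_sharpness:
        return "deep learning (point cloud loss, but 2D features are clear)"
    return "re-tune imaging first (both point cloud and image quality are poor)"
```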

Project Requirements

| Requirement | | Deep learning algorithm | 3D matching algorithm |
| --- | --- | --- | --- |
| Scenario | Preparation stage | The objects are of the same type, and image data is already available or can be acquired. | - |
| | Tuning stage | - | Unable to acquire and label images or train models due to time constraints. |
| | Project stage | The global matching process takes too much time and cannot be sped up. | - |
| Accuracy | High accuracy | High accuracy requirements and strict picking requirements. | - |
| | Moderate accuracy | Common scenario with general accuracy requirements. | - |

Other Scenarios for Using the Deep Learning Algorithm

  • When a 2D camera is used, no depth maps or point clouds are available and only RGB images can be acquired.

  • The features to be detected are present only in RGB images.

  • Only the presence of objects needs to be detected.

  • Objects need to be counted, which cannot be achieved by using point clouds.

  • Text needs to be recognized.
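The project-requirement criteria and the special cases in this list can be folded into a rough rule-of-thumb helper, sketched below. The flag names are invented for illustration and do not correspond to any Mech-Vision parameter.

```python
# Rule-of-thumb selection helper; the flag names are invented for illustration
# and do not correspond to any Mech-Vision parameter.
def choose_algorithm(has_depth_data: bool,
                     features_only_in_rgb: bool,
                     needs_presence_check_counting_or_ocr: bool,
                     can_collect_and_label_training_images: bool,
                     global_matching_too_slow: bool) -> str:
    # A 2D camera or RGB-only features rules out point cloud matching.
    if not has_depth_data or features_only_in_rgb:
        return "deep learning"
    # Presence checks, counting, and text recognition also call for deep learning.
    if needs_presence_check_counting_or_ocr:
        return "deep learning"
    # Without time to acquire and label images or train a model, use 3D matching.
    if not can_collect_and_label_training_images:
        return "3D matching"
    # If global matching cannot be made fast enough, deep learning can help.
    if global_matching_too_slow:
        return "deep learning"
    return "3D matching"
```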
