Add Pick Point by Teaching


A pick point is a position on the object where the robot can grasp it. The pick point is defined in the object reference frame, so its position and orientation are relative to the object. To actually perform picking, the robot needs a picking pose (the TCP of the robot when picking) provided by Mech-Vision. The picking pose is transformed from the pick point on the target object. Therefore, pick points need to be added to point cloud models so that Mech-Vision can generate pick points for target objects through matching, and then output picking poses for the robot.
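The transformation described above can be sketched as a composition of homogeneous transforms. This is a minimal illustration, not Mech-Vision's actual API; the frame names and numeric values are assumptions:

```python
import numpy as np

def pose_to_matrix(translation, rotation):
    """Build a 4x4 homogeneous transform from a translation and a 3x3 rotation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# Assumed example: object pose in the robot base frame (from matching)
T_base_obj = pose_to_matrix([0.5, 0.1, 0.2], np.eye(3))
# Pick point taught in the object reference frame (5 cm along the object's Z axis)
T_obj_pick = pose_to_matrix([0.0, 0.0, 0.05], np.eye(3))

# Picking pose (TCP target) = object pose composed with the pick point
T_base_pick = T_base_obj @ T_obj_pick   # translation becomes [0.5, 0.1, 0.25]
```

Because the pick point is stored relative to the object, the same taught point yields a correct picking pose wherever the object lies in the scene.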

The first pick point you add will be automatically set as the geometric center.

  1. The geometric center in Mech-Vision’s algorithm is introduced to recognize nearly (but not strictly) symmetrical objects; it does not mean the “point at the center of a symmetrical object” in the general sense.

  2. Every point cloud model should have one and only one geometric center, but can have multiple pick points.

  3. Please refer to Symmetry Settings in 3D Fine Matching for application instructions on the geometric center.

You’ll need to input the TCP manually when adding a pick point this way. Therefore, please have the TCP data ready for use beforehand.

If you are using Mech-Viz, you can check the TCP by going to Mech-Viz › Resources › Tools and double-clicking the corresponding tool model.

The procedure for adding a pick point by teaching differs depending on how your camera is mounted. Please read on for detailed instructions for the ETH (eye to hand) and EIH (eye in hand) setups.

  1. Mech-Vision automatically determines how the camera is installed based on the extrinsic parameters in the project, and shows the corresponding Add Pick Point by Teaching window.

  2. If the robot is connected through Communication Component, the picking pose and the image-capturing pose can be obtained automatically. Otherwise, these poses must be input manually.

Add Pick Point by Teaching under ETH

  1. Click the Add Pick Point by Teaching icon in the toolbar to open the Add Pick Point by Teaching window.

  2. Input the TCP obtained beforehand into the TCP section.

  3. Move the robot to the picking pose using the teach pendant. Operate the tool to perform picking to make sure the picking pose is accurate.

    If you are using a gripper, you can grasp and drop the target object several times to make sure that the object can be firmly grasped in this picking pose.

  4. In the Picking pose section, click Fetch current pose, or input the pose displayed on the teach pendant manually. Click Confirm to generate a pick point. The newly generated pick point will show up in the Model files list.

  5. Move the robot outside the camera’s field of view. Be careful not to touch the target object in this process to avoid altering its pose.

  6. Generate the point cloud model of the target object using the connected camera. Please follow the instructions on using a real camera in Generate Point Cloud Model.

  7. In the Model files list, select the pick point generated in step 4 and drag it onto the point cloud model to associate the pick point with the model. A successfully associated pick point will be nested below the point cloud model.


    Click the icon on the right of Model files to toggle it and hide all point cloud models.

Add Pick Point by Teaching under EIH

Under EIH, you also need to obtain the image-capturing pose in addition to the picking pose.
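The reason the image-capturing pose matters under EIH can be sketched with transform composition: since the camera rides on the robot, the object pose measured in the camera frame must be mapped through the flange pose at the moment the image was captured. A minimal sketch with hypothetical values (the frame names, extrinsics, and numbers are all assumptions):

```python
import numpy as np

def translate(t):
    """Homogeneous transform with identity rotation (enough for this sketch)."""
    T = np.eye(4)
    T[:3, 3] = t
    return T

T_base_flange = translate([0.3, 0.0, 0.6])  # flange pose at image-capturing time
T_flange_cam = translate([0.0, 0.05, 0.1])  # hand-eye (EIH) extrinsic parameters
T_cam_obj = translate([0.0, 0.0, 0.4])      # object pose from matching, camera frame

# Chain the transforms to place the object in the robot base frame
T_base_obj = T_base_flange @ T_flange_cam @ T_cam_obj
```

Under ETH the camera is fixed relative to the base, so the extrinsics alone suffice and no capture-time flange pose is needed, which is why that setup omits this step.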

  1. Click the Add Pick Point by Teaching icon in the toolbar to open the Add Pick Point by Teaching window.

  2. Input the TCP obtained beforehand into the TCP section.

  3. Move the robot to the picking pose using the teach pendant. Operate the tool to perform picking to make sure the picking pose is accurate.

    If you are using a gripper, you can grasp and drop the target object several times to make sure that the object can be firmly grasped in this picking pose.

  4. In the Picking pose section, click Fetch current pose, or input the pose displayed on the teach pendant manually.

  5. Move the robot outside the camera’s field of view. Be careful not to touch the target object in this process to avoid altering its pose.

  6. Move the robot to the image-capturing pose using the teach pendant. Capture an image to check if the pose is accurate.

  7. In the Image capturing pose section, click Fetch current pose, or input the pose displayed on the teach pendant manually. Click Confirm to generate a pick point. The newly generated pick point will show up in the Model files list.

  8. Generate the point cloud model of the target object using the connected camera. Please follow the instructions on using a real camera in Generate Point Cloud Model.

  9. In the Model files list, select the pick point generated in step 7 and drag it onto the point cloud model to associate the pick point with the model. A successfully associated pick point will be nested below the point cloud model.


After you finish the configuration, click File › Save (shortcut: Ctrl + S). The point cloud model and pick point files will be saved to Project Folder/resource/3d_matching by default.

  • xxx.ply is the point cloud model file.

  • geo_center.json is the geometric center file of the point cloud model.

  • pick_points.json is the pick point file.

  • pick_points_labels.json is the label file of the pick point.
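As an illustration, the saved JSON files can be inspected from a script. This is a sketch only: the schema of these files is not documented here, so the helper simply loads whatever JSON each file contains; the function name and example path are assumptions.

```python
import json
from pathlib import Path

def load_model_files(model_dir):
    """Load the pick-point-related JSON files saved by Mech-Vision (sketch only)."""
    names = ("geo_center.json", "pick_points.json", "pick_points_labels.json")
    loaded = {}
    for name in names:
        path = Path(model_dir) / name
        if path.exists():
            loaded[name] = json.loads(path.read_text(encoding="utf-8"))
    return loaded

# Usage (replace with your actual project folder):
# files = load_model_files("MyProject/resource/3d_matching")
```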

When you close the tool window, a message reminding you to save the files will pop up, regardless of whether the files have already been saved.
