More Camera application Code examples


JUNCHAO

Recommended Posts

We are currently trying to use the camera to complete some basic tasks, but I can only find one camera example in the tutorials, the "camera image display" example. If we want to do something more complicated, like camera-based inspection, or using the camera to recognize targets and send a signal to control the gripper, is there a good example anyone can share? Much appreciated!

I know RViz is pretty good software with a lot of functions, but we are research-focused and hope to work with the underlying code. We are currently working on Linear Temporal Logic; if you have any experience applying it to the Sawyer robot, or are doing so now, please share your advice! Thanks a lot!

Junchao Li


Hi Junchao,

I'll take your request into consideration as we build future SDK examples. That said, the camera image display example already lays much of the groundwork you need for both of your more complicated tasks. In particular:

Quote

using camera to recognize other targets and send signal to control the gripper

Object recognition is fairly complex and an area of active research in the community. The camera image display example does lay the groundwork for this task: you can use the "-c" flag to run Canny edge detection, functionality implemented with OpenCV. OpenCV can be used for all sorts of image processing, object detection, and some machine learning.

Once you've detected the object in the image frame, you'll want to transform the object coordinates from the camera frame to the robot's base frame using the transform library, tf2. You can calculate the joint angles required for Sawyer to approach the object using the ik_service found in this example. There are several examples for actuating the gripper.
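To make the camera-to-base transformation concrete, here is a minimal numpy sketch of the rigid-body math that tf2 performs for you. The rotation and translation values here are made up for illustration; in a real system you would look up the live transform with a tf2 TransformListener rather than hard-coding it:

```python
import numpy as np

# Hypothetical camera->base transform: the camera looks straight down,
# so camera z maps to base -z; translation is the camera origin in the
# base frame (metres). Real values come from tf2 at runtime.
R = np.array([[1.0,  0.0,  0.0],
              [0.0, -1.0,  0.0],
              [0.0,  0.0, -1.0]])
t = np.array([0.5, 0.0, 1.0])

# Assemble a 4x4 homogeneous transform.
T = np.eye(4)
T[:3, :3] = R
T[:3, 3] = t

# An object at (0.1, -0.05, 0.8) in the camera frame...
p_camera = np.array([0.1, -0.05, 0.8, 1.0])

# ...lands here in the robot's base frame.
p_base = T @ p_camera
print(p_base[:3])  # [0.6, 0.05, 0.2]
```

The base-frame point is what you would then feed to the ik_service to compute joint angles.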

Hope this helps!

~ Ian


  • 4 months later...

Hello Ian,

I am currently studying the Python code of two examples, "go_to_cartesian" and "camera_image_display", and I am trying to combine them into a new script that makes the Sawyer robot automatically detect an object, generate the object's coordinates, and move its arm to that point. Both examples are useful, but neither contains the piece I need: a function that generates the object's coordinates. In other words, following your last comment, the camera detects the object and publishes a ROS message, which is converted and processed through OpenCV using "cv_bridge". That information should then be used to generate the object's coordinates in the camera frame, so they can afterwards be transformed into the robot's base frame. My problem is how to generate those coordinates in the camera frame. Do you have any suggestions, such as where I can find good example code, or which ROS or Python libraries I should focus on? I looked at the OpenCV tutorial, but I couldn't find its code documentation.
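One common way to get camera-frame coordinates from a detected pixel is the pinhole camera model: given the camera intrinsics (in ROS these are typically published as a sensor_msgs/CameraInfo message) and a known or assumed depth, a pixel back-projects to a 3-D point. A minimal sketch, with made-up intrinsic values standing in for the real calibration:

```python
import numpy as np

# Hypothetical intrinsics; in practice, read these from the camera's
# ROS CameraInfo topic (the K matrix) rather than hard-coding them.
fx, fy = 600.0, 600.0   # focal lengths, in pixels
cx, cy = 320.0, 240.0   # principal point, in pixels

def pixel_to_camera_frame(u, v, depth):
    """Back-project pixel (u, v) at a known depth (metres) into the
    camera frame using the pinhole model."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# A pixel 60 columns right of the principal point, 0.6 m from the camera:
p = pixel_to_camera_frame(380.0, 240.0, 0.6)
print(p)  # [0.06, 0.0, 0.6]
```

The depth has to come from somewhere else (a known table height, a depth sensor, or an object of known size); a single camera alone does not provide it.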

What if I plan to use a barcode or QR code instead? Is there a function well suited to that? Please advise! Much appreciated!

Junchao Li 

