Wagdi

SDK: usage questions


Hello guys,

I've been playing a lot with the Sawyer RSDK and have started implementing multiple behaviors. Some things are really easy to do since it is quite similar to Baxter, but some are not...
So I have a lot of questions here :(
 
But first, do you have any idea when complete documentation for the SDK will appear?
 
For example, clear documentation of the available camera signals and their ranges of values would be really helpful. I know we can find some of that information on the /io/internal_camera/right_hand_camera/config topic, but it is still not clear for all of them!
 
Also, is there any reason why the FPS of both the head camera and the Cognex camera is so low? I don't know the value for the head camera, but for the Cognex one of the available topics says it's only 10...
 
Do you have any news on PLC usage with the SDK? This feature is really important, and I'm really curious to know why it is not available, as it should only require a Modbus server, shouldn't it?
 
After some investigation of the SDK, and more precisely the Limb API, I managed to play with the joint trajectory action server.
 
Unfortunately I can't seem to get it working in position_w_id mode, even with the provided example... So I switched to simple position mode, and that one seems fine.
 
So I pushed my investigation further and started playing with the IK, building complete trajectories by computing times for positions (using a maximum angular speed between two joint positions) and then velocities. Sending that trajectory works fine in position mode, but never in position_w_id, which is supposed to be the best mode and is also the default one.
 
Finally, I played around with the set_joint_position_speed method from the Limb API, and to be honest I'm wondering how it works. Sending a complete trajectory to Sawyer should make it ignore this method call; that's what I think, at least, but I could be wrong!
 
Unfortunately it seems like set_joint_position_speed has an impact on my trajectory... Is that normal?
 
Lots of questions here! I hope that asking all of them in the same post is not too much trouble!
 
 


Hi Wagdi,

Responses follow inline:

  • But first, do you have any idea when complete documentation for the SDK will appear?
I am not sure what you mean by this. If there is a portion of the SDK interface missing from the wiki documentation, please let us know.
  • For example, clear documentation of the available camera signals and their ranges of values would be really helpful. I know we can find some of that information on the /io/internal_camera/right_hand_camera/config topic, but it is still not clear for all of them!

Camera configuration through the SDK interface is not yet possible. Currently, streaming image data is the only supported action.

  • Also, is there any reason why the FPS of both the head camera and the Cognex camera is so low? I don't know the value for the head camera, but for the Cognex one of the available topics says it's only 10...
That is the currently supported framerate for the Cognex data stream over the JCB network at the preconfigured resolution.
  • Do you have any news on PLC usage with the SDK? This feature is really important, and I'm really curious to know why it is not available, as it should only require a Modbus server, shouldn't it?
Your feature request is noted, but we do not have a timeline for any SDK implementation for PLCs.
  • After some investigation of the SDK, and more precisely the Limb API, I managed to play with the joint trajectory action server. Unfortunately I can't seem to get it working in position_w_id mode, even with the provided example... So I switched to simple position mode, and that one seems fine. So I pushed my investigation further and started playing with the IK, building complete trajectories by computing times for positions (using a maximum angular speed between two joint positions) and then velocities. Sending that trajectory works fine in position mode, but never in position_w_id, which is supposed to be the best mode and is also the default one.
In the Position With Inverse Dynamics mode of the Joint Trajectory Action Server, the user needs to supply feed-forward velocities and accelerations in order to smoothly predict the next step in the trajectory for each joint. Without these values, the JTAS works in Position mode. Position With Inverse Dynamics is the default mode, as it provides the smoothest trajectory execution with the MoveIt motion planning framework.
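To make the feed-forward requirement concrete, here is a minimal sketch of how those velocities and accelerations could be filled in for a sampled joint trajectory using central finite differences. The helper name and the plain-list waypoint representation are my own assumptions; a real client would populate trajectory_msgs/JointTrajectoryPoint messages before sending the goal to the JTAS.

```python
# Sketch: computing feed-forward velocities and accelerations for a joint
# trajectory via finite differences, so the JTAS can run in position_w_id.
# Waypoints are plain lists of joint positions sampled every dt seconds;
# the function name and uniform timestep are illustrative assumptions.

def add_feedforward(positions, dt):
    """positions: list of per-waypoint joint-position lists, sampled every
    dt seconds. Returns (velocities, accelerations), one list per waypoint."""
    n = len(positions)
    num_joints = len(positions[0])

    def differentiate(samples):
        # Central differences in the interior, one-sided at the endpoints.
        result = []
        for i in range(n):
            lo, hi = max(i - 1, 0), min(i + 1, n - 1)
            span = (hi - lo) * dt
            if span == 0:  # single-waypoint trajectory
                result.append([0.0] * num_joints)
            else:
                result.append([(b - a) / span
                               for a, b in zip(samples[lo], samples[hi])])
        return result

    velocities = differentiate(positions)
    accelerations = differentiate(velocities)
    return velocities, accelerations
```

With these arrays copied into each trajectory point's velocities/accelerations fields, the goal should satisfy what position_w_id expects; without them, as noted above, the server falls back to plain Position mode.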
 
  • Finally, I played around with the set_joint_position_speed method from the Limb API, and to be honest I'm wondering how it works. Sending a complete trajectory to Sawyer should make it ignore this method call; that's what I think, at least, but I could be wrong! Unfortunately it seems like set_joint_position_speed has an impact on my trajectory... Is that normal?
Yes, setting the joint position speed will affect the Position Mode speed by setting the arm's global SpeedRatio. The individual revolute joints in Sawyer's URDF each have a limit tag, which contains a velocity limit in radians per second. This can be viewed in the URDF for each joint.
In Position Mode, the desired speed of any individual joint (i) can be represented as 
joint_velocity = Direction * SpeedRatio * joint_velocity_limit
where SpeedRatio is a value in [0.0, 1.0], and Direction is either +1 or -1, depending on the direction of your Position Command compared to your current position.
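As a concrete illustration of the formula above (the function name and the 1.5 rad/s limit are made-up illustrative values, not Sawyer's actual URDF numbers):

```python
# Position Mode speed of a single joint, per the formula above:
#   joint_velocity = Direction * SpeedRatio * joint_velocity_limit
# The velocity limit used below is illustrative, not Sawyer's real value.

def position_mode_velocity(direction, speed_ratio, joint_velocity_limit):
    assert direction in (+1, -1)
    assert 0.0 <= speed_ratio <= 1.0
    return direction * speed_ratio * joint_velocity_limit

# e.g. moving toward a target at 30% of a hypothetical 1.5 rad/s limit:
v = position_mode_velocity(+1, 0.3, 1.5)  # 0.45 rad/s
```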
 
Hope this helps!
~ Ian
 


Hi Ian, 

First, let me thank you for all those answers. They help a lot!

It's a shame about the PLC, as we will probably need to add an external one if we want to interface Sawyer with external devices.

Regarding the camera configuration, I am sorry to insist on this, but I managed to configure some of its options directly from the SDK using the "set_signal_value" method.

Here is a basic example :

import rospy
import intera_interface

rospy.init_node("cognex_flash")

cameras = intera_interface.Cameras()
io = cameras.cameras_io['right_hand_camera']['interface']
io.set_signal_value('set_strobe', True)   # turn the camera strobe on
io.set_signal_value('set_strobe', False)  # turn the camera strobe off

Actually, this small piece of code was provided to me by one of Rethink's software engineers, and I was also told that this part will be documented for the SDK's general availability.

Anyway, I am digging into what is and isn't possible to do with Sawyer using all its devices, and for now, using the camera and the PLC are my two main issues :(

I'll keep digging.

Again, thank you for those answers Ian.

Wagdi.


Hello everybody,

I was reading your conversation:

On 4/19/2017 at 00:23, Ian McMahon said:
  • Finally, I played around with the set_joint_position_speed method from the Limb API, and to be honest I'm wondering how it works. Sending a complete trajectory to Sawyer should make it ignore this method call; that's what I think, at least, but I could be wrong! Unfortunately it seems like set_joint_position_speed has an impact on my trajectory... Is that normal?
Yes, setting the joint position speed will affect the Position Mode speed by setting the arm's global SpeedRatio. The individual revolute joints in Sawyer's URDF each have a limit tag, which contains a velocity limit in radians per second. This can be viewed in the URDF for each joint.

Indeed, the set_joint_position_speed() method modifies the Position Mode speed.

Is it possible to do the same in the Inverse Dynamics Feed Forward Position Mode (the default mode)?

To be more precise, I am using MoveIt! to plan and execute trajectories. MoveIt! computes positions, velocities, and accelerations and sends them to the JTAS, which is running in the default mode.

Now I would like to reduce or increase the global speed of the robot while the trajectory is being executed. I tried to use the set_joint_position_speed() method without success. Am I missing something, or is it just not possible?


This is NOT possible.

The timing information for the whole trajectory is already in the action server.

I think the only way to do this would be to write something like a new Joint Trajectory Action Server that uses a ROS parameter (global_speed_ratio) to scale each point of the trajectory (velocity, acceleration, time) just before executing it.
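The per-point scaling such a modified server might perform could be sketched like this. The dict layout, function name, and the global_speed_ratio parameter are all hypothetical; a real server would apply the same arithmetic to trajectory_msgs/JointTrajectoryPoint messages read from the incoming goal.

```python
# Sketch: scaling one trajectory point by a global speed ratio just before
# execution. Slowing down (ratio < 1.0) stretches time_from_start, scales
# velocities by the ratio, and accelerations by the ratio squared, so the
# rescaled point stays kinematically consistent. The dict layout stands in
# for trajectory_msgs/JointTrajectoryPoint and is illustrative only.

def scale_point(point, speed_ratio):
    assert 0.0 < speed_ratio <= 1.0
    return {
        'positions': list(point['positions']),  # targets are unchanged
        'velocities': [v * speed_ratio for v in point['velocities']],
        'accelerations': [a * speed_ratio ** 2 for a in point['accelerations']],
        'time_from_start': point['time_from_start'] / speed_ratio,
    }
```

A server built this way could re-read global_speed_ratio from the parameter server on every control cycle, which is what would let an operator slow the arm down mid-trajectory.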

For a collaborative robot, this would be a great feature. We would be able to easily slow down the robot during its task if a human operator gets close to it.

