josh_jhu last won the day on October 26 2017


  1. josh_jhu

    Commands Not Recognized

    Has it always been like this?
  2. josh_jhu

    Sawyer Jacobian

    Hello, is it possible to access Sawyer's Jacobian matrix? I noticed these resources available for obtaining Baxter's Jacobian. Is it also possible to use KDL with Sawyer? -Josh
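    KDL does provide a joint-to-Jacobian solver (ChainJntToJacSolver) that works from a kinematic chain. As a self-contained illustration of what a Jacobian solver produces, here is a finite-difference sketch for a hypothetical 2-link planar arm; the link lengths and joint values are made up, and this is not Sawyer's kinematics:

```python
import numpy as np

# Hypothetical 2-link planar arm (made-up link lengths; NOT Sawyer).
L1, L2 = 0.4, 0.3

def fk(q):
    """End-effector (x, y) position for joint angles q = [q1, q2]."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def numeric_jacobian(q, eps=1e-6):
    """Central-difference Jacobian of fk at q (2x2 for this arm)."""
    J = np.zeros((2, len(q)))
    for i in range(len(q)):
        dq = np.zeros(len(q))
        dq[i] = eps
        J[:, i] = (fk(q + dq) - fk(q - dq)) / (2 * eps)
    return J

q = np.array([0.3, 0.5])
J = numeric_jacobian(q)
```

    A solver like KDL's computes the same object analytically from the robot's URDF chain instead of by differencing.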
  3. josh_jhu


    I know you can get camera images through the SDK. Try looking through the example scripts until you find the camera example. Once you find it, you can set a callback function to be called each time the camera gets new data. From there you can tell it to save the image to a global variable (or an object attribute, if you include the camera script as part of a class), which will allow you to access it outside of the callback function. -Josh
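    The callback-and-shared-attribute pattern described above can be sketched without any camera hardware. The class and method names here are hypothetical stand-ins, not the actual Intera SDK API; in a real script you would register the method with the camera interface instead of calling it by hand:

```python
class CameraListener:
    """Stores the most recent frame delivered by a camera callback."""
    def __init__(self):
        self.latest_frame = None

    def on_image(self, frame):
        # The driver calls this each time new image data arrives.
        self.latest_frame = frame

listener = CameraListener()
# Simulate the camera driver delivering two frames.
listener.on_image("frame-0")
listener.on_image("frame-1")
# Outside the callback, the latest image is available as an attribute.
print(listener.latest_frame)
```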
  4. josh_jhu

    Hello all, I recently completed the process of getting a Kinect v2 working with Sawyer, and I'd like to record the steps I took for anyone who might also want to do this. As a quick primer: the Kinect v1 is better on edges, but the Kinect v2 is much better outdoors and has higher depth resolution.

    To start, I installed libfreenect2 on my workstation. I had some trouble when I was using ROS Indigo and Ubuntu 14.04 (as recommended in the Rethink workstation setup). However, I updated my machine to ROS Kinetic and Ubuntu 16.04, and as far as I am aware Sawyer works fine. Having done that, I followed the libfreenect2 instructions for Ubuntu 16.04 and verified that my install was working by running Protonect from the build folder. After that I installed the Python bindings for libfreenect2, pylibfreenect2. (NOTE: Do not bother with pyfreenect2; it is not maintained and I don't think it ever actually worked.) If you want skeleton tracking you should look into OpenNI, though I don't have any experience with it. I am working on a Kinect v2 + Sawyer robot API, but it is still a work in progress.

    Anyways, after the above steps all of the software is taken care of. Then comes the issue of actually mounting the Kinect v2 to the robot. The Kinect v2 naturally angles upward by about 15 degrees, and I wanted a cheap and easy way to affix it to Sawyer, so I 3D printed the mounting block below. It has a 6mm hole in it for screwing the Kinect v2 to it, and the bottom is a circular recess that measures ~82.3mm so that it fits very snugly onto Sawyer's head (with a bit of filing down). The head will pan to the side long before the mounting block slips. It takes a great deal of force to remove it, which is good, as there isn't anything to lock on to without an elaborate setup or drilling a hole into Sawyer. If anyone wants the .stl of the block, it is here.

    I can try to answer questions about getting the Kinect v2 set up. I likely experienced every possible error message. -Josh
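    Once pylibfreenect2 is delivering depth frames, a common next step is back-projecting them into a point cloud in the camera frame. A minimal sketch, assuming nominal (uncalibrated, approximate) Kinect v2 depth intrinsics and a synthetic frame in place of real sensor data; calibrate your own sensor before trusting the numbers:

```python
import numpy as np

# Nominal Kinect v2 depth intrinsics (approximate assumed values).
# The depth image is 512 x 424 pixels.
FX, FY, CX, CY = 365.0, 365.0, 256.0, 212.0

def depth_to_points(depth_m):
    """Back-project a (424, 512) depth image in meters to an Nx3 cloud."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Synthetic flat wall 1.5 m away, just to exercise the math.
depth = np.full((424, 512), 1.5)
cloud = depth_to_points(depth)
```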
  5. josh_jhu

    Possible to change joint stiffness?

    Thanks for the response, Ian. I was using position control. I'll give torque control a try. -Josh
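    For context, "stiffness" under torque control usually means commanding a joint-space virtual spring-damper, tau = Kp*(q_des - q) - Kd*qdot, so the Kp gain sets the apparent stiffness. A toy single-joint simulation of that idea; the gains, inertia, and time step are illustrative, not Sawyer values:

```python
# Single-joint toy simulation of stiffness via torque control: the
# commanded torque acts as a virtual spring-damper about q_des.
KP, KD = 40.0, 6.0     # stiffness and damping gains (illustrative)
INERTIA = 0.5          # joint inertia, kg*m^2 (illustrative)
DT = 0.001             # integration step, s

def stiffness_torque(q, qdot, q_des):
    """Virtual spring-damper torque pulling the joint toward q_des."""
    return KP * (q_des - q) - KD * qdot

q, qdot, q_des = 0.0, 0.0, 0.8
for _ in range(5000):  # simulate 5 seconds
    tau = stiffness_torque(q, qdot, q_des)
    qdot += (tau / INERTIA) * DT
    q += qdot * DT
```

    Raising KP makes the joint resist displacement more strongly; KD keeps the response from oscillating.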
  6. josh_jhu

    Kinect sensor installation in Sawyer robot

    Hi uche, do you have the original Kinect or the Kinect v2? -Josh
  7. josh_jhu

    Possible to change joint stiffness?

    Hi, is it possible to increase the stiffness of Sawyer's joints through the SDK? Thanks, -Josh
  8. josh_jhu

    Disable Field Service Menu

    Hello, is there a way to disable the Field Service Menu from showing up at startup? In the FSM I have selected to start up in the SDK on next boot, but sometimes I still get the FSM on the next boot. I am not entirely sure what triggers a normal SDK boot compared to an FSM boot. -Josh
  9. josh_jhu

    What happens on Start-Up?

    Hi Ian, I managed to resolve my issue. I believe it was a lack of data points. I was using the reference frames and data from my images in a big least-squares problem, but I had insufficient data points, leaving the problem under-constrained, so that very slight variations in the reported /base -> /right_hand transformation caused huge swings in my script's output. So I do not think it is any problem with the robot! But your answer did help me eliminate one more unknown, allowing me to solve my problem. And I should have clarified: I am using an imaging device attached to Sawyer's hand, not the built-in camera. Also, you mentioned joint frames are fixed at manufacturing time. So when I get the /base -> /right_hand transformation, what fixed location on the robot is it referencing these frames from? For example, if I wanted to know the translation from the /right_hand frame to the tip of a custom end effector, where on the hand should I measure from? Thanks, -Josh
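    The under-constrained failure mode described above is easy to reproduce: when the constraint rows are nearly dependent, a tiny perturbation of the measurements swings the least-squares solution by orders of magnitude. A minimal sketch with made-up numbers:

```python
import numpy as np

# Two nearly parallel constraint rows: the system is effectively
# under-constrained (illustrative numbers only).
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
b = np.array([2.0, 2.0001])

x1, *_ = np.linalg.lstsq(A, b, rcond=None)
# Perturb the "measurements" by roughly 0.01% ...
x2, *_ = np.linalg.lstsq(A, b + np.array([0.0, 0.0002]), rcond=None)
# ... and the solution moves by orders of magnitude more.
swing = np.linalg.norm(x2 - x1)
```

    Adding independent measurements (more poses, with varied orientations) restores the conditioning and makes the solution stable.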
  10. josh_jhu

    What happens on Start-Up?

    I think this is a fitting first question for the forum. I have been having issues where I run a script one day and it works fine; then I come in the next day (after having shut off Sawyer the previous night), start Sawyer back up, rerun the same script, and get a significantly different output. My script only uses two types of data: images, and the robot (base -> hand) transformation each image was acquired with. Essentially, I have Sawyer run through a fixed set of poses (defined as a list of joint angles), and at each pose I capture an image and record the base -> hand transformation for that pose. I had the robot run through the same set of poses both days and manually verified that the captured images were the same as the previous day's. So before I start accusing people of tampering with my code overnight, I wonder: does Sawyer do any redefining of reference frames on start-up? Perhaps based on the orientation the joints have when the system is powered on? Any help is appreciated, -Josh
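    The capture procedure described above boils down to a simple loop. The three helper functions here are hypothetical stand-ins (stubbed out) for whatever SDK calls the real script uses; the point is keeping each image paired with the transform it was acquired under:

```python
# Hypothetical stubs standing in for real SDK calls.
def move_to_joint_angles(q):
    pass  # command the arm to q and wait for it to settle

def capture_image(i):
    return f"image-{i}"  # stand-in for a real camera grab

def get_base_to_hand_tf(i):
    return f"tf-{i}"     # stand-in for a real transform lookup

POSES = [[0.0] * 7, [0.1] * 7, [0.2] * 7]  # fixed list of joint angles

samples = []
for i, q in enumerate(POSES):
    move_to_joint_angles(q)
    # Record the image together with the transform it was acquired
    # under, so each (image, base->hand) pair stays consistent.
    samples.append((capture_image(i), get_base_to_hand_tf(i)))
```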