Controlling LBR IIWA incorporating Kinect #154
I think TCP/IP will work.
Thanks for your kind response... But I am talking about C++ code.
Well, I want to do my task without ROS. I am familiar with V-REP. Which two libraries? As I guess, one is OpenCV?
Sure, GRL is designed to be usable without ROS and that is easy to set up, but it will take a lot more work to integrate Kinect data accurately without ROS. The difficulty won't be making the KUKA move but getting good data from the Kinect that tells the KUKA the right place to go. To command robots accurately in absolute position/orientation you need to calibrate the depth camera, and hand-eye calibration of the robot cannot be done until the Kinect camera is calibrated first.

What kind of vision are you doing? Are you developing an application or doing research? How accurate does your motion need to be, and how will you detect the objects? Note that the Kinect is fairly low resolution, and Kinect v2 point clouds can be inaccurate and fail when there are reflections.

I need more details to give you a sound recommendation, but if you just need to grab Kinect frames in C++ you can use https://github.com/stevenlovegrove/Pangolin, and if you just need to command the robot to go somewhere in joint space you can use the KukaDriver class. If you need to command the robot in Cartesian space, the best way to do that will again depend on your problem.
First of all, I need to thank you for your kind response.
So you have a KUKA and Kinect object detection code? What language is the detection code in? What kind of material, and do you have a gripper?
I have object detection code in C++, but I don't have code for the robot.
Well, the easiest way to get started is to build and install GRL and try running the V-REP simulation. That will let you drive the robot to a series of locations. Once you get that working and you are able to drive the robot to a series of Cartesian positions, you can go to the next step. This can be done easily with V-REP's integrated Lua API (for a first test). After that, you can decide if you want to write a C++ V-REP plugin for the vision sensor or write something custom. Based on what you describe, I think a simple plugin would be the way to go for getting things working quickly. http://www.coppeliarobotics.com/helpFiles/index.html

Are you able to install GRL?
Thanks again for your kind response...
I need to do a small task with the LBR IIWA KUKA using a Kinect 3D camera. I need to command the manipulator via the Kinect. I know it involves hand-eye calibration, but how can I make the interface between the Kinect and the manipulator using C++ code?