- Eye-to-hand (camera outside the hand): calibrates the pose of the camera frame relative to the robot base frame
- Eye-in-hand (camera on the hand): calibrates the pose of the camera frame relative to the robot tool frame
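The two cases can be told apart by which transform stays constant across robot poses. In the eye-to-hand case below, the marker rides on the gripper and the camera is fixed, so chaining base→gripper→marker with the inverse of the camera's observation must reproduce the same base→camera pose at every sample. A minimal numpy sketch with entirely synthetic poses:

```python
import numpy as np

def make_T(rotvec, t):
    """Homogeneous transform from a Rodrigues rotation vector and a translation."""
    theta = np.linalg.norm(rotvec)
    if theta < 1e-12:
        R = np.eye(3)
    else:
        k = rotvec / theta
        K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
        R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Unknown solved for in eye-to-hand: camera pose in the robot base frame.
T_base_cam = make_T(np.array([0.1, -0.2, 0.3]), np.array([1.0, 0.2, 0.5]))
# Fixed marker pose on the tool (the marker is attached to the gripper).
T_gripper_target = make_T(np.array([0.0, 0.3, 0.0]), np.array([0.0, 0.0, 0.1]))

rng = np.random.default_rng(0)
recovered = []
for _ in range(5):
    # Robot pose read from the driver (tool frame expressed in the base frame).
    T_base_gripper = make_T(rng.normal(size=3), rng.normal(size=3) * 0.3)
    # What the camera would observe: marker pose in the camera frame.
    T_cam_target = np.linalg.inv(T_base_cam) @ T_base_gripper @ T_gripper_target
    # Chain robot pose and observation back together: the result is constant.
    recovered.append(T_base_gripper @ T_gripper_target @ np.linalg.inv(T_cam_target))

assert all(np.allclose(T, T_base_cam) for T in recovered)
```

The calibration routine exploits exactly this invariance: every robot pose paired with a marker observation yields one equation constraining the same fixed transform.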
1. Register the server's public key:
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-key F6E65AC044F831AC80A06380C8B3A55A6F3EFCDE || sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-key F6E65AC044F831AC80A06380C8B3A55A6F3EFCDE
If the key still cannot be retrieved, check and set your proxy settings: export http_proxy="http://
2. Add the server to your repository list:
sudo add-apt-repository "deb https://librealsense.intel.com/Debian/apt-repo $(lsb_release -cs) main" -u
3. Install the libraries:
sudo apt-get install librealsense2-dkms
sudo apt-get install librealsense2-utils
4. Optionally, install the developer and debug packages:
sudo apt-get install librealsense2-dev
sudo apt-get install librealsense2-dbg
5. Reconnect the Intel RealSense depth camera and run realsense-viewer to verify the installation.
6. Verify that the kernel has been updated: run the following command; its output should include the string realsense
modinfo uvcvideo | grep "version:"
1. Create a workspace
mkdir -p ~/catkin_ws/src
cd ~/catkin_ws/src/
2. Clone the sources into the src folder and build
git clone https://github.com/IntelRealSense/realsense-ros.git
cd ..
catkin_make
3. Set the environment variable
echo "source ~/catkin_ws/devel/setup.bash" >> ~/.bashrc
source ~/.bashrc
4. Connect the camera and test
roslaunch realsense2_camera rs_camera.launch
Then open a new terminal, start rviz, add an Image display, and set the topic to the depth stream; the image appears!
(3) Install aruco_ros
cd ~/catkin_ws/src
git clone -b melodic-devel https://github.com/pal-robotics/aruco_ros.git
cd ..
catkin_make
(4) Install visp, building only the visp_hand2eye_calibration package; the other packages do not all need to be built for now
cd ~/catkin_ws/src
git clone -b melodic-devel https://github.com/lagadic/vision_visp.git
cd ..
catkin_make --pkg visp_hand2eye_calibration
(5) Install easy_handeye
cd ~/catkin_ws/src
git clone https://github.com/IFL-CAMP/easy_handeye
cd ..
catkin_make
(6) Build the aubo package: copy aubo_robot from the previous section's aubo_ws/src into catkin_ws/src and rebuild
(7) Download an ArUco marker from https://chev.me/arucogen/
Dictionary: Original ArUco
Marker ID: your choice
Marker size, mm: 100
(8) Copy easy_handeye/docs/example_launch/ur5e_realsense_calibration.launch into easy_handeye/easy_handeye/launch/ and rename it eye_to_hand_calibration.launch
Original launch file contents:
Modified launch file:
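The original post showed the file contents as screenshots. As a rough, hedged sketch only (topic names, frame names, and the marker ID are assumptions that must match your own camera, robot, and printed marker), an eye-to-hand easy_handeye launch looks something like:

```xml
<launch>
    <!-- marker side length in metres and its ID, matching the printed marker -->
    <arg name="marker_size" default="0.1"/>
    <arg name="marker_id" default="42"/>

    <!-- aruco_ros tracker fed by the RealSense colour stream (topic names assumed) -->
    <node name="aruco_tracker" pkg="aruco_ros" type="single">
        <remap from="/camera_info" to="/camera/color/camera_info"/>
        <remap from="/image" to="/camera/color/image_raw"/>
        <param name="image_is_rectified" value="true"/>
        <param name="marker_size" value="$(arg marker_size)"/>
        <param name="marker_id" value="$(arg marker_id)"/>
        <param name="reference_frame" value="camera_link"/>
        <param name="camera_frame" value="camera_color_optical_frame"/>
        <param name="marker_frame" value="camera_marker"/>
    </node>

    <!-- easy_handeye: eye_on_hand=false selects the eye-to-hand case -->
    <include file="$(find easy_handeye)/launch/calibrate.launch">
        <arg name="eye_on_hand" value="false"/>
        <arg name="tracking_base_frame" value="camera_link"/>
        <arg name="tracking_marker_frame" value="camera_marker"/>
        <arg name="robot_base_frame" value="base_link"/>
        <arg name="robot_effector_frame" value="wrist3_Link"/>
        <arg name="freehand_robot_movement" value="false"/>
    </include>
</launch>
```

The effector frame name in particular depends on the robot description (wrist3_Link is a guess for the AUBO URDF); check your tf tree before launching.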
(9) Start the calibration
roslaunch easy_handeye eye_to_hand_calibration.launch
Running this command opens three windows:
Window 1:
Window 2:
Window 3:
1. Window 1: add an Image display and subscribe to aruco_tracker/result to see the camera image in rviz.
2. Window 2: click "check starting pose"; on success it shows "Ready to start: click to next pose". Then click Next Pose -> Plan -> Execute. If it reports "cannot calibrate in current position", the arm is too far from the marker; jog the arm manually and try again.
3. After the arm reaches the new position, if the marker is still in the camera's field of view, click "Take Sample" in window 3; if the marker was lost, discard this point and go back to step 2.
4. Repeat steps 2 and 3 until all 17 points have been visited, then click Compute; the calibrated result appears in the Result dialog.
Measure the computed translation with a ruler as a sanity check; the deviation was small, so the calibration succeeded!!
1. To be updated!! Publishing the tf only requires running a publish.launch file; I'll update this post with the details over the weekend. Get the environment set up first!
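easy_handeye stores the result (a translation plus an x, y, z, w quaternion) in a YAML file under ~/.ros/easy_handeye, which publish.launch reads back. While waiting for the write-up, the numbers can be sanity-checked by hand by turning them into a homogeneous matrix; the values below are placeholders, not a real calibration:

```python
import numpy as np

def quat_to_matrix(qx, qy, qz, qw):
    """Rotation matrix from a unit quaternion (x, y, z, w ordering, as in ROS)."""
    n = np.sqrt(qx*qx + qy*qy + qz*qz + qw*qw)
    qx, qy, qz, qw = qx/n, qy/n, qz/n, qw/n
    return np.array([
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qz*qw),     2*(qx*qz + qy*qw)],
        [2*(qx*qy + qz*qw),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qx*qw)],
        [2*(qx*qz - qy*qw),     2*(qy*qz + qx*qw),     1 - 2*(qx*qx + qy*qy)],
    ])

# Placeholder numbers standing in for the YAML's translation/rotation fields.
translation = [0.5, -0.1, 0.8]
quaternion = [0.0, 0.0, 0.7071068, 0.7071068]   # 90 degrees about z

T_base_cam = np.eye(4)
T_base_cam[:3, :3] = quat_to_matrix(*quaternion)
T_base_cam[:3, 3] = translation
print(T_base_cam.round(3))
```

The translation column of this matrix is exactly the quantity checked with the ruler above.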
Errors encountered and solutions:
1. cv2.CALIB_HAND_EYE_TSAI (attribute missing from the installed OpenCV); install a compatible version:
pip install opencv-python==4.2.0.32



