<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>机器人 | 走走走走走你</title><link>https://pxy.netlify.app/category/%E6%9C%BA%E5%99%A8%E4%BA%BA/</link><atom:link href="https://pxy.netlify.app/category/%E6%9C%BA%E5%99%A8%E4%BA%BA/index.xml" rel="self" type="application/rss+xml"/><description>机器人</description><generator>Wowchemy (https://wowchemy.com)</generator><language>en-us</language><copyright>怕什么真理无穷，进一寸有一寸的欢喜！</copyright><lastBuildDate>Tue, 01 Mar 2022 00:00:00 +0000</lastBuildDate><image><url>https://pxy.netlify.app/images/icon_hu071133a86d0f79aa79370b4e70dba59c_37608_512x512_fill_lanczos_center_2.png</url><title>机器人</title><link>https://pxy.netlify.app/category/%E6%9C%BA%E5%99%A8%E4%BA%BA/</link></image><item><title>A lightweight object-level data association and change detection method for robot map</title><link>https://pxy.netlify.app/post/getting-started/</link><pubDate>Tue, 01 Mar 2022 00:00:00 +0000</pubDate><guid>https://pxy.netlify.app/post/getting-started/</guid><description>&lt;h2 id="overview">Overview&lt;/h2>
&lt;p>Autonomous mobile robots usually need an object-level map for better reasoning and decision-making. However, changes to scene objects make maps hard to reuse, and lightweight system solutions are lacking. In this paper, an object-level data association method is proposed to build object-level maps for lightweight, low-cost application scenarios. Specifically, we maintain a sparse point-cloud map using only a monocular camera. Building on the feature-tracking information of ORB-SLAM2, the method introduces semantic information to associate data in parallel, keeping the extra computational burden small.
Then, we propose a change detection method for autonomous updating of the robot map. We pioneer object-level environment change detection and map updating, unifying object-level mapping and updating and thereby ensuring the consistency and integrity of the updated parts. The proposed method has been extensively tested on multiple public datasets and a real robot. The results show that our data association matches the state of the art and outperforms similar methods in time and space complexity, and the object change detection rate reaches 83.75%. In addition, we have implemented a lightweight robot system that runs at 20 FPS while using only half of the system resources.&lt;/p>
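&lt;p>The change detection idea above can be sketched in a few lines. This is a minimal illustration with assumed object representations (a semantic label plus a 3D centroid) and an assumed distance threshold, not the system's actual code:&lt;/p>

```python
# Minimal sketch of object-level change detection (illustrative only;
# the dict representation and the 0.5 m threshold are assumptions,
# not the paper's implementation).

def detect_changes(map_objects, observed_objects, dist_thresh=0.5):
    """Classify map objects as 'kept', 'moved', or 'removed', and
    observations with no map counterpart as 'added'."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    changes = {"kept": [], "moved": [], "removed": [], "added": []}
    unmatched = list(observed_objects)
    for m in map_objects:
        # Only observations with the same semantic label are candidates.
        same = [o for o in unmatched if o["label"] == m["label"]]
        if not same:
            changes["removed"].append(m)
            continue
        nearest = min(same, key=lambda o: dist(o["centroid"], m["centroid"]))
        unmatched.remove(nearest)
        if dist(nearest["centroid"], m["centroid"]) > dist_thresh:
            changes["moved"].append(m)
        else:
            changes["kept"].append(m)
    # Observations never claimed by a map object are new objects.
    changes["added"] = unmatched
    return changes
```

&lt;p>Classifying whole objects (rather than individual points) is what keeps updates consistent: either all of an object's map points are replaced, or none are.&lt;/p>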
&lt;p>The contributions of this paper are as follows:&lt;/p>
&lt;ol>
&lt;li>A lightweight data association method that exploits the spatial relations of map points, combined with the mutex table we propose.&lt;/li>
&lt;li>A lightweight change detection and map update method. Object-level updates ensure the consistency and integrity of the map.&lt;/li>
&lt;li>A real-time system. For object-level semantic awareness tasks on low-cost hardware platforms, it unifies map building, change detection, and map updating.&lt;/li>
&lt;li>Our method was deployed and extensively tested on a robot with a low-power embedded platform, demonstrating its effectiveness and lightweight design.&lt;/li>
&lt;/ol>
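&lt;p>Contribution 1 can be sketched as follows. This is a minimal illustration: the mutex table here is a hypothetical set of semantic label pairs that must never be merged into one object, and the association radius is our own choice, not the paper's.&lt;/p>

```python
# Sketch of lightweight object-level data association: a new detection is
# attached to the nearest compatible map object; compatibility is looked up
# in a mutex table of label pairs that must never be merged.
# (Illustrative only; the actual mutex table and thresholds differ.)

MUTEX = {("chair", "table"), ("monitor", "keyboard")}  # assumed example pairs

def compatible(label_a, label_b):
    """Two labels are compatible unless the mutex table forbids merging them."""
    if label_a == label_b:
        return True
    return (label_a, label_b) not in MUTEX and (label_b, label_a) not in MUTEX

def associate(detection, map_objects, radius=0.6):
    """Return the index of the map object this detection belongs to,
    or None if a new map object should be instantiated."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    best, best_d = None, radius
    for i, obj in enumerate(map_objects):
        if not compatible(detection["label"], obj["label"]):
            continue
        d = dist(detection["centroid"], obj["centroid"])
        if d > best_d:
            continue
        best, best_d = i, d
    return best
```

&lt;p>Because the lookup uses only centroids and labels already stored in the sparse map, it adds little work on top of ORB-SLAM2's own tracking.&lt;/p>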
&lt;h2 id="展示视频">展示视频&lt;/h2>
&lt;h3 id="bilibili简版3分钟推荐httpswwwbilibilicomvideobv1ml4y1j7dvspm_id_from33399900">&lt;a href="https://www.bilibili.com/video/BV1mL4y1j7dV?spm_id_from=333.999.0.0" target="_blank" rel="noopener">Bilibili，简版3分钟（推荐）&lt;/a>&lt;/h3>
&lt;h3 id="bilibili详细版12分钟httpswwwbilibilicomvideobv1ol4y1j75rspm_id_from33399900">&lt;a href="https://www.bilibili.com/video/BV1oL4y1j75R?spm_id_from=333.999.0.0" target="_blank" rel="noopener">Bilibili，详细版12分钟!&lt;/a>&lt;/h3>
&lt;h2 id="展示海报">展示海报&lt;/h2>
&lt;p>&lt;img src="https://raw.githubusercontent.com/pengxinyi-up/academic-page/master/content/post/getting-started/robot.png" alt="机器人大赛" title="机器人大赛">
&lt;img src="https://raw.githubusercontent.com/pengxinyi-up/academic-page/master/content/post/getting-started/EDC.png" alt="研电赛" title="研电赛">&lt;/p></description></item><item><title>基于RGB-D的移动抓取服务机器人的设计与实现</title><link>https://pxy.netlify.app/project/example/</link><pubDate>Mon, 27 Apr 2020 00:00:00 +0000</pubDate><guid>https://pxy.netlify.app/project/example/</guid><description>&lt;h2 id="摘要">摘要&lt;/h2>
&lt;p>&amp;amp;nbsp &amp;amp;nbsp &amp;amp;nbsp &amp;amp;nbsp 近年来，人工智能技术的发展推动了机器人的智能化进程，清洁、配送、陪护等服务机器人给人们生活带来的便利正在普及。人类的眼睛和手臂配合可以完成复杂的抓取行为。同理，视觉传感器能帮助机器人捕获丰富的环境信息，机械臂可以完成类人的抓取任务。因此本课题基于RGB-D深度视觉和开源的ROS系统，开展移动抓取机器人的仿真环境和软硬件的设计与实现，主要分为建图导航和机械臂的识别抓取两部分。&lt;/p>
&lt;p>&amp;amp;nbsp &amp;amp;nbsp &amp;amp;nbsp &amp;amp;nbsp 首先，基于D-H参数建立了机器人的模型并在RVIZ下可视化，基于Ros_control配置了底盘的差速控制器和机械臂的关节位置控制器，在Gazebo仿真平台建立了机器人的物理仿真模型和演示场景。&lt;/p>
&lt;p>&amp;amp;nbsp &amp;amp;nbsp &amp;amp;nbsp &amp;amp;nbsp 在建图导航部分，基于Kobuki底盘和Kinect v1深度相机搭建了移动导航平台，借助ROS Navigation导航框架，利用Gmapping功能包和Amcl功能包完成SLAM任务，利用Move_base功能包基于Dijkstra算法和DWA算法完成导航任务。&lt;/p>
&lt;p>&amp;amp;nbsp &amp;amp;nbsp &amp;amp;nbsp &amp;amp;nbsp 在识别抓取部分，在底盘上搭载了一台六自由度机械臂，Arduino UNO微处理器和PCA9685驱动作为机械臂的控制硬件，利用Find_object_3d功能包基于oFast和rBRIEF算法完成目标的识别定位，使用MoveIt配置助手通过Trac_ik插件基于牛顿收敛法和SQP方法完成了机械臂的逆运动学规划。&lt;/p>
&lt;p>&amp;amp;nbsp &amp;amp;nbsp &amp;amp;nbsp &amp;amp;nbsp 最后，在仿真环境下进行了综合演示实验，结合人脸身份认证和语音指令导航，机器人能够完成移动抓取任务。并且针对建图定位偏移、抓取规划不稳定等问题，提出了硬件结构和规划方法的改进方向，通过抓取改进实验减少了机械臂与场景的碰撞。&lt;/p>
&lt;h2 id="ros机器人仿真实物">ROS机器人（仿真+实物）&lt;/h2>
&lt;p>&lt;img src="https://raw.githubusercontent.com/pengxinyi-up/mobile-grab-Robot/master/photos/system_structure.png" alt="system_structure" title="系统结构">&lt;/p>
&lt;h2 id="模型建立及仿真场景">模型建立及仿真场景&lt;/h2>
&lt;p>&lt;img src="https://raw.githubusercontent.com/pengxinyi-up/mobile-grab-Robot/master/photos/simulation_model.png" alt="simulation_model" title="仿真模型">&lt;/p>
&lt;h2 id="硬件连接及实物图">硬件连接及实物图&lt;/h2>
&lt;p>&lt;img src="https://raw.githubusercontent.com/pengxinyi-up/mobile-grab-Robot/master/photos/hardware_system.png" alt="hardware_system" title="硬件系统">&lt;/p>
&lt;h2 id="系统结构">系统结构&lt;/h2>
&lt;p>&lt;img src="https://img-blog.csdnimg.cn/20200622104500776.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3FxXzM3MzcyMTU1,size_16,color_FFFFFF,t_70" alt="system_structure" title="系统结构">&lt;/p>
&lt;ul>
&lt;li>Simulation design and realization of a ROS-based robot with face recognition, voice control, autonomous navigation, and object recognition and grasping&lt;/li>
&lt;li>Thanks for the great work: &lt;a href="https://github.com/introlab/find-object" target="_blank" rel="noopener">find-object&lt;/a>, &lt;a href="https://github.com/procrob/face_recognition" target="_blank" rel="noopener">face_recognition&lt;/a>, and &lt;a href="https://www.guyuehome.com/" target="_blank" rel="noopener">古月居&lt;/a>&lt;/li>
&lt;li>Video: &lt;a href="https://www.bilibili.com/video/BV1WK4y147Rw?spm_id_from=333.999.0.0" target="_blank" rel="noopener">&lt;code>Bilibili&lt;/code>&lt;/a>&lt;/li>
&lt;/ul>
&lt;h3 id="详细内容点此链接去我的csdn博客httpsblogcsdnnetqq_37372155category_9650566html">&lt;a href="https://blog.csdn.net/qq_37372155/category_9650566.html" target="_blank" rel="noopener">详细内容，点此链接去我的&lt;code>CSDN博客&lt;/code>!&lt;/a>&lt;/h3></description></item></channel></rss>