Abstract
Grasping and manipulating objects is the primary way a robot interacts with its environment. For a robot to operate in a dynamic environment, however, it also needs a system that determines grasp poses for the objects in the scene. Neural networks that segment the point cloud are usually applied for this purpose, but training such networks is complex and their results are often unsatisfactory. We therefore propose an innovative, end-to-end approach to grasp pose generation that replaces 3D point cloud segmentation with 2D image segmentation. To this end, we create an OrthoView module that acts as an adapter between 3D space (the point cloud) and 2D space (the image). The 2D object mask produced by this module is the key information used to select the final grasp from among the grasps generated for the entire scene by Contact GraspNet. A clear conceptual advantage of our solution (OrthoView) is that only one resulting 2D image is created from the point cloud, which itself can be the result of merging (integrating) 3D data from many cameras. The approach therefore allows information from any number of cameras to be fused without the need to implement solutions for identifying the same objects seen from different perspectives (cameras). To test our pipeline, we created six scenes of varying complexity and use them to demonstrate the effectiveness of our solution. In summary, eliminating the need for 3D point cloud segmentation and reducing the segmentation problem to inference on a single 2D image can significantly improve the performance of a grasp pose estimation system.
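The abstract does not give implementation details for OrthoView, so the following is only a minimal sketch of the idea it describes, assuming a top-down orthographic projection along the z axis, a fixed pixel resolution, and candidate grasps represented as dictionaries with a 3D contact point and a quality score. All names here (e.g. orthographic_top_view, grasps_on_object) are illustrative and not taken from the paper; the 2D mask would come from an image segmentation network applied to the projected image, and the candidate grasps from Contact GraspNet run on the full (possibly multi-camera, fused) point cloud.

```python
# Hedged sketch (assumed, not the authors' implementation): project a fused point
# cloud orthographically onto one top-down 2D image, then use a 2D object mask on
# that image to pick the grasp whose contact point lies on the target object.
import numpy as np

def orthographic_top_view(points, colors, resolution=0.005, image_size=512):
    """Project an (N, 3) point cloud with (N, 3) uint8 colors onto a top-down image."""
    origin = points[:, :2].min(axis=0)                        # scene corner in the XY plane
    px = ((points[:, :2] - origin) / resolution).astype(int)  # metric XY -> pixel coordinates
    valid = (px >= 0).all(axis=1) & (px < image_size).all(axis=1)
    px, z, rgb = px[valid], points[valid, 2], colors[valid]

    image = np.zeros((image_size, image_size, 3), dtype=np.uint8)
    height = np.full((image_size, image_size), -np.inf)
    # Keep the highest point per pixel, so the view is the same regardless of
    # how many cameras contributed to the merged point cloud.
    for (u, v), h, c in zip(px, z, rgb):
        if h > height[v, u]:
            height[v, u] = h
            image[v, u] = c
    return image, origin

def grasps_on_object(grasps, mask, origin, resolution=0.005):
    """Return the best-scoring grasp whose contact point falls inside the 2D mask."""
    best = None
    for g in grasps:  # assumed format: {"contact": (x, y, z), "score": float, ...}
        u, v = ((np.asarray(g["contact"])[:2] - origin) / resolution).astype(int)
        if 0 <= v < mask.shape[0] and 0 <= u < mask.shape[1] and mask[v, u]:
            if best is None or g["score"] > best["score"]:
                best = g
    return best
```

Because the projection produces exactly one image for the whole scene, the 2D segmentation step never has to reconcile the same object seen from several cameras, which is the conceptual advantage the abstract attributes to OrthoView.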
Full text
The full text of the publication is not available in the portal.
Details
- Category:
- Monographic publication
- Type:
- chapter / article in a book (collective work or textbook) in a language of international circulation
- Title of issue:
- Intelligent and Safe Computer Systems in Control and Diagnostics, pages 153 - 166
- Language:
- Polish
- Year of publication:
- 2022
- Bibliographic description:
- Kowalczuk Z., Glinko J.: Autonomous Perception and Grasp Generation Based on Multiple 3D Sensors and Deep Learning // Intelligent and Safe Computer Systems in Control and Diagnostics, 2023, pp. 153-166
- DOI:
- 10.1007/978-3-031-16159-9_13
- Verification:
- Politechnika Gdańska