The initializer is very slow and does not work very reliably. Meanwhile, deep learning has caused quite a stir in the area of 3D reconstruction: after training, a neural network can realize 3D object reconstruction from a single image [8], [9], a stereo pair [10], [11], or a collection of images [12], [13].

RELATED WORK

TE-ORB_SLAM2 is a work that investigates two different methods to improve the tracking of ORB-SLAM2. The RGB-D case shows the keyframe poses estimated in sequence fr1/room from the TUM RGB-D Dataset [3]; the stereo case shows the final trajectory and sparse reconstruction of sequence 00 from the KITTI dataset [2]. The system supports RGB-D sensors and pure localization on a previously stored map, two features required for a significant proportion of service-robot applications. The proposed DT-SLAM approach is validated on the TUM RGB-D and EuRoC benchmark datasets for location-tracking performance. However, loop closure based on 3D points is more simplistic than methods based on point features. Our approach was evaluated by examining the performance of the integrated SLAM system. A related earlier dataset is the New College Vision and Laser Data Set (year: 2009; available sensors: GPS, odometry, stereo cameras, omnidirectional camera, lidar; ground truth: no).

The TUM RGB-D dataset [14] is widely used for evaluating SLAM systems. Published by the TUM Computer Vision Group, it was collected with a Kinect v1 camera at the Technical University of Munich in 2012 and has become the most widely used RGB-D dataset: it offers color images and depth maps recorded with the Kinect, together with ground-truth data (see the official website for the exact file formats), and is well suited for indoor environments. We provide this large dataset containing RGB-D data and ground-truth data with the goal of establishing a novel benchmark for the evaluation of visual odometry and visual SLAM systems. The TUM RGB-D dataset [39] contains sequences of indoor videos under different environment conditions, e.g., illuminance and varied scene settings, which include both static and moving objects. Ground-truth trajectories obtained from a high-accuracy motion-capture system are provided. Once this works, you might want to try the 'desk' sequence, which covers four tables and contains several loop closures.
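Concretely, each sequence ships plain-text index files rgb.txt and depth.txt, in which every non-comment line holds a timestamp and an image path. A minimal sketch of reading such an index (the sequence directory name is only an example):

```python
def read_file_list(path):
    """Read a TUM RGB-D index file (rgb.txt or depth.txt).

    Every non-comment line has the form "timestamp filename", e.g.
    "1305031102.175304 rgb/1305031102.175304.png".
    """
    entries = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            timestamp, filename = line.split()
            entries[float(timestamp)] = filename
    return entries

rgb_index = read_file_list("rgbd_dataset_freiburg1_desk/rgb.txt")
```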
In the following section of this paper, we provide the framework of the proposed method OC-SLAM, with its modules in the semantic object detection thread and the dense mapping thread.

In this paper, we present the TUM RGB-D benchmark for visual odometry and SLAM evaluation and report on the first use cases and users of it outside our own group. The TUM RGB-D benchmark [5] consists of 39 sequences that we recorded in two different indoor environments; the sequences contain both the color and depth images at full sensor resolution (640 × 480), and the sensor is a handheld Kinect RGB-D camera. The TUM dataset is a well-known dataset for evaluating SLAM systems in indoor environments: the TUM RGB-D Benchmark Dataset [11] is a large dataset containing RGB-D data and ground-truth camera poses, and it contains the color and depth images of a Microsoft Kinect sensor along the ground-truth trajectory of the sensor. The TUM RGB-D dataset, which includes 39 sequences of offices, was selected as the indoor dataset to test the SVG-Loop algorithm; only the RGB images of the sequences were used to verify the different methods. The standard training and test sets contain 795 and 654 images, respectively.

In contrast to previous robust approaches to egomotion estimation in dynamic environments, we propose a novel robust VO. However, how outliers in real data are handled directly affects the accuracy of the result. In [19], the authors tested and analyzed the performance of selected visual odometry algorithms designed for RGB-D sensors on the TUM dataset with respect to accuracy, time, and memory consumption. We also show that dynamic 3D reconstruction can benefit from the camera poses estimated by our RGB-D SLAM approach. The test dataset we used is the TUM RGB-D dataset [48,49], which is widely used for dynamic SLAM testing. Section 3 then includes an experimental comparison with the original ORB-SLAM2 algorithm on the TUM RGB-D dataset (Sturm et al., 2012). Similar behaviour is observed in other vSLAM [23] and VO [12] systems as well. We also provide a ROS node to process live monocular, stereo, or RGB-D streams.

Other datasets and sensors exist as well: the ICL-NUIM dataset aims at benchmarking RGB-D, visual odometry, and SLAM algorithms; SUNCG is a large-scale dataset of synthetic 3D scenes with dense volumetric annotations; and beyond RGB-D there are stereo, event-based, and omnidirectional cameras.

The benchmark's generatePointCloud.py helper converts a color/depth pair into a point cloud; its help output is:

```
positional arguments:
  rgb_file    input color image (format: png)
  depth_file  input depth image (format: png)
  ply_file    output PLY file (format: ply)
```

Open3D has a data structure for such image pairs: an Open3D RGBDImage is composed of two images, RGBDImage.color and RGBDImage.depth.
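A minimal sketch of loading one color/depth pair with Open3D (the file names are placeholders; create_from_tum_format applies the benchmark's 16-bit depth encoding with scale factor 5000):

```python
import open3d as o3d

color = o3d.io.read_image("rgb/1305031102.175304.png")
depth = o3d.io.read_image("depth/1305031102.160407.png")

# Combine into an RGBDImage using the TUM depth convention.
rgbd = o3d.geometry.RGBDImage.create_from_tum_format(color, depth)

# Back-project to a point cloud with the PrimeSense/Kinect default intrinsics.
intrinsic = o3d.camera.PinholeCameraIntrinsic(
    o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault)
pcd = o3d.geometry.PointCloud.create_from_rgbd_image(rgbd, intrinsic)
```

Note that create_from_tum_format converts the color channel to intensity by default; pass convert_rgb_to_intensity=False to keep the RGB values.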
The system is also integrated with the Robot Operating System (ROS) [10], and its performance is verified by testing DS-SLAM on a robot in a real environment. The results indicate that DS-SLAM outperforms ORB-SLAM2 significantly regarding accuracy and robustness in dynamic environments. Compared with ORB-SLAM2, the proposed SOF-SLAM achieves on average a 96.73% improvement in high-dynamic scenarios. Moreover, our approach shows a 40.…% improvement. A novel semantic SLAM framework that detects potentially moving elements with Mask R-CNN, to achieve robustness in dynamic scenes with an RGB-D camera, is proposed in this study. The system employs RGB-D sensor outputs and performs 3D camera pose estimation and tracking to build a pose graph; meanwhile, a dense semantic octo-tree map is produced, which could be employed for high-level tasks. The system is evaluated on the TUM RGB-D dataset [9]. These sequences are separated into two categories: low-dynamic scenarios and high-dynamic scenarios.

Simultaneous localization and mapping (SLAM) is one of the fundamental capabilities for intelligent mobile robots to perform state estimation in unknown environments; the process of using vision sensors to perform SLAM is called visual SLAM, in which we track the pose of the sensor while creating a map of the environment.

Visual odometry and SLAM datasets: the TUM RGB-D dataset [14] is focused on the evaluation of RGB-D odometry and SLAM algorithms and has been extensively used by the research community. Every image has a resolution of 640 × 480 pixels; recording was done at full frame rate (30 Hz) and full sensor resolution. See the settings file provided for the TUM RGB-D cameras. We are happy to share our data with other researchers. (Figure: thumbnails from the Complex Urban, NCLT, Oxford RobotCar, KITTI, and Cityscapes datasets. Table 1: comparison of experimental results on the TUM dataset.)

The depth here refers to distance. Map Initialization: the initial 3-D world points can be constructed by extracting ORB feature points from the color image and then computing their 3-D world locations from the depth image. The estimated trajectory is written to a .txt file at the end of a sequence, using the TUM RGB-D / TUM monoVO format ([timestamp x y z qx qy qz qw] of the camera-to-world transformation); a modified version of the benchmark's evaluation tool automatically computes the optimal scale factor that aligns trajectory and ground truth.
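A small helper for reading such a trajectory file; this mirrors the documented format, the quaternion-to-rotation conversion is the standard formula, and the file path is illustrative:

```python
import numpy as np

def load_tum_trajectory(path):
    """Parse a TUM-format trajectory: "timestamp tx ty tz qx qy qz qw" per line."""
    poses = []
    for line in open(path):
        if line.startswith("#") or not line.strip():
            continue
        t, tx, ty, tz, qx, qy, qz, qw = map(float, line.split())
        # Rotation matrix of the (unit) quaternion, camera-to-world.
        R = np.array([
            [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qz*qw),     2*(qx*qz + qy*qw)],
            [2*(qx*qy + qz*qw),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qx*qw)],
            [2*(qx*qz - qy*qw),     2*(qy*qz + qx*qw),     1 - 2*(qx*qx + qy*qy)],
        ])
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = [tx, ty, tz]
        poses.append((t, T))
    return poses

groundtruth = load_tum_trajectory("rgbd_dataset_freiburg1_desk/groundtruth.txt")
```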
The TUM RGB-D dataset, published by the TUM Computer Vision Group in 2012, consists of 39 sequences recorded at 30 frames per second using a Microsoft Kinect sensor in different indoor scenes. It is a standard RGB-D dataset provided by the Computer Vision Group of the Technical University of Munich, Germany, and it has been used by many scholars in SLAM research. The TUM RGB-D dataset's indoor instances were used to test their methodology, and they were able to provide results on par with those of well-known vSLAM methods. The TUM RGB-D dataset [10] is a large set of sequences containing both RGB-D data and ground-truth pose estimates from a motion-capture system; it is a challenging dataset due to the presence of dynamic objects. This is in contrast to public SLAM benchmarks like, e.g., the KITTI dataset or the TUM RGB-D dataset, where highly precise ground-truth states (GPS, …) are available.

1 Performance evaluation on the TUM RGB-D dataset

The TUM RGB-D dataset was proposed by the TUM Computer Vision Group in 2012 and is frequently used in the SLAM domain [6]. The computer running the experiments features Ubuntu 14.04 on a machine with an i7-9700K CPU, 16 GB RAM, and an Nvidia GeForce RTX 2060 GPU. The system is able to detect loops and relocalize the camera in real time. DVO uses both RGB images and depth maps, while ICP and our algorithm use only depth information. We integrate our motion-removal approach with ORB-SLAM2. The ground-truth trajectory was obtained from a high-accuracy motion-capture system. The results demonstrate that the absolute trajectory accuracy of DS-SLAM can be improved by one order of magnitude compared with ORB-SLAM2. We increased the localization accuracy and mapping effects compared with two state-of-the-art object SLAM algorithms. Experimental results on the TUM RGB-D dataset and our own sequences demonstrate that our approach can improve the performance of a state-of-the-art SLAM system in various challenging scenarios.

Usage: to run the SLAM system on a TUM RGB-D sequence, the bundled runner accepts the following options:

```
./build/run_tum_rgbd_slam
Allowed options:
  -h, --help             produce help message
  -v, --vocab arg        vocabulary file path
  -d, --data-dir arg     directory path which contains dataset
  -c, --config arg       config file path
  --frame-skip arg (=1)  interval of frame skip
  --no-sleep             not wait for next frame in real time
  --auto-term            automatically terminate the viewer
  --debug                debug mode
```

Each sequence also provides the files rgb.txt and depth.txt, which index the color and depth images of the recording.
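A condensed sketch of what a helper like generatePointCloud.py does: back-project every valid depth pixel with the pinhole model and write a colored PLY. The intrinsics below are the default Kinect parameters published for the benchmark; substitute the per-sequence calibration where available.

```python
import numpy as np
from PIL import Image

FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5  # default Kinect intrinsics
DEPTH_SCALE = 5000.0  # a 16-bit depth value of 5000 corresponds to 1 m

def generate_point_cloud(rgb_file, depth_file, ply_file):
    rgb = np.asarray(Image.open(rgb_file))
    depth = np.asarray(Image.open(depth_file), dtype=np.uint16)
    points = []
    for v in range(depth.shape[0]):
        for u in range(depth.shape[1]):
            z = depth[v, u] / DEPTH_SCALE
            if z == 0:  # zero encodes missing depth
                continue
            x = (u - CX) * z / FX
            y = (v - CY) * z / FY
            r, g, b = rgb[v, u][:3]
            points.append(f"{x} {y} {z} {r} {g} {b}")
    with open(ply_file, "w") as f:
        f.write("ply\nformat ascii 1.0\n"
                f"element vertex {len(points)}\n"
                "property float x\nproperty float y\nproperty float z\n"
                "property uchar red\nproperty uchar green\nproperty uchar blue\n"
                "end_header\n")
        f.write("\n".join(points) + "\n")
```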
Results on TUM RGB-D Sequences

The results show that the proposed method increases accuracy substantially and achieves large-scale mapping with acceptable overhead. The video shows an evaluation of PL-SLAM and the new initialization strategy on a TUM RGB-D benchmark sequence. (Figure: two example RGB frames from a dynamic scene and the resulting model built by our approach, from the publication "DDL-SLAM: A Robust RGB-D SLAM in Dynamic Environments Combined with Deep Learning".) Freiburg3 consists of a high-dynamic scene sequence marked 'walking', in which two people walk around a table, and a low-dynamic scene sequence marked 'sitting', in which two people sit in chairs with slight head and limb movements. Our extensive experiments on three standard datasets, Replica, ScanNet, and TUM RGB-D, show that ESLAM improves the accuracy of 3D reconstruction and camera localization of state-of-the-art dense visual SLAM methods by more than 50%, while running up to 10 times faster and requiring no pre-training. Zhang et al. [3] provided code and executables to evaluate global registration algorithms for 3D scene reconstruction systems. Practical surveys cover the common benchmarks (KITTI, EuRoC, TUM RGB-D, MIT Stata Center on a PR2 robot), outlining the strengths and limitations of visual and lidar SLAM configurations from a practical standpoint. ORB-SLAM3-RGBL also comes with evaluation tools. We provide examples to run the SLAM system on the KITTI dataset as stereo or monocular, on the TUM dataset as RGB-D or monocular, and on the EuRoC dataset as stereo or monocular. Note that the monovslam object runs on multiple threads internally, which can delay the processing of an image frame added by using the addFrame function. Most SLAM systems assume that their working environments are static.

The calibration of the RGB camera is the following: fx = 542.822841, fy = 542.…, cx = ….593520, cy = 237.756098. The depth images are measured in millimeters. RGB-Fusion reconstructed the scene on the fr3/long_office_household sequence of the TUM RGB-D dataset. The multivariable optimization process in SLAM is mainly carried out through bundle adjustment (BA). Map Points: a list of 3-D points that represent the map of the environment reconstructed from the key frames.
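The quantity minimized in BA is the reprojection error between observed keypoints and projected map points. A schematic residual using a pinhole camera with intrinsics as discussed above; all symbols are generic and not tied to a specific system:

```python
import numpy as np

def reprojection_residual(T_cw, point_w, uv, fx, fy, cx, cy):
    """Project a world point into the camera and compare with the measurement.

    T_cw: 4x4 world-to-camera transform; point_w: 3-vector in the world frame;
    uv: observed pixel (2-vector). Bundle adjustment minimizes the sum of
    squared residuals jointly over all camera poses and map points.
    """
    p_c = T_cw[:3, :3] @ point_w + T_cw[:3, 3]
    u = fx * p_c[0] / p_c[2] + cx
    v = fy * p_c[1] / p_c[2] + cy
    return np.array([u, v]) - np.asarray(uv)
```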
The living room scene has 3D surface ground truth together with the depth maps as well as camera poses, and as a result it perfectly suits not just benchmarking of camera trajectories but also reconstruction. We recommend that you use the 'xyz' series for your first experiments. The RGB and depth images were recorded at a frame rate of 30 Hz and a 640 × 480 resolution.

In the past years, novel camera systems like the Microsoft Kinect or the Asus Xtion sensor, which provide both color and dense depth images, became readily available. This file contains information about publicly available datasets suited for monocular, stereo, RGB-D, and lidar SLAM; a related repository provides a curated list of datasets for visual place recognition (VPR), which is also called loop-closure detection (LCD). The KITTI odometry dataset is a benchmarking dataset for monocular and stereo visual odometry and lidar odometry captured from car-mounted devices. The Technical University of Munich (TUM), founded in 1868, is located in Munich and is the only technical university in Bavaria and one of the largest universities in the country.

Performance of the pose-refinement step on the two TUM RGB-D sequences is shown in Table 6. In order to obtain the missing depth information of the pixels in the current frame, a frame-constrained depth-fusion approach has been developed using the past frames in a local window. The accuracy of the depth camera decreases as the distance between the object and the camera increases. To observe the influence of depth-unstable regions on the point cloud, we use a set of RGB and depth images selected from the TUM dataset to obtain a local point cloud, as shown in the figure. As an accurate pose-tracking technique for dynamic environments, our efficient approach utilizing CRF-based long-term consistency can estimate a camera trajectory (red) close to the ground truth (green). ORB-SLAM2 (by Raul Mur-Artal and Juan D. Tardós) serves as the base system in several of these works; ManhattanSLAM is evaluated on the same benchmark, and our method named DP-SLAM is implemented on the public TUM RGB-D dataset. This study uses the Freiburg3 series from the TUM RGB-D dataset. In order to introduce Mask R-CNN into the SLAM framework, on the one hand it needs to provide semantic information for the SLAM algorithm, and on the other hand it provides the SLAM algorithm with a-priori information about regions that have a high probability of being dynamic targets in the scene.
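A minimal sketch of how such a-priori masks can be used: keypoints falling inside regions flagged as potentially dynamic (e.g., person detections from Mask R-CNN) are discarded before tracking. The function name and array layout are assumptions for illustration:

```python
import numpy as np

def filter_dynamic_keypoints(keypoints, dynamic_mask):
    """Keep only keypoints that do not lie on pixels flagged as dynamic.

    keypoints: (N, 2) array of (u, v) pixel coordinates inside the image.
    dynamic_mask: boolean (H, W) array, True where a potentially moving
    object was segmented.
    """
    u = keypoints[:, 0].astype(int)
    v = keypoints[:, 1].astype(int)
    keep = ~dynamic_mask[v, u]
    return keypoints[keep]
```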
{"payload":{"allShortcutsEnabled":false,"fileTree":{"Examples/RGB-D":{"items":[{"name":"associations","path":"Examples/RGB-D/associations","contentType":"directory. de registered under . The experiments on the TUM RGB-D dataset [22] show that this method achieves perfect results. de email address to enroll. [3] provided code and executables to evaluate global registration algorithms for 3D scene reconstruction system, and proposed the. Two different scenes (the living room and the office room scene) are provided with ground truth. in. 4-linux -. The following seven sequences used in this analysis depict different situations and intended to test robustness of algorithms in these conditions. de from your own Computer via Secure Shell. Two different scenes (the living room and the office room scene) are provided with ground truth. These sequences are separated into two categories: low-dynamic scenarios and high-dynamic scenarios. de and the Knowledge Database kb. net registered under . Compared with art-of-the-state methods, experiments on the TUM RBG-D dataset, KITTI odometry dataset, and practical environment show that SVG-Loop has advantages in complex environments with varying light, changeable weather, and dynamic interference. t. position and posture reference information corresponding to. Deep Model-Based 6D Pose Refinement in RGB Fabian Manhardt1∗, Wadim Kehl2∗, Nassir Navab1, and Federico Tombari1 1 Technical University of Munich, Garching b. You can change between the SLAM and Localization mode using the GUI of the map. Usage. Note: All students get 50 pages every semester for free. Maybe replace by your own way to get an initialization. RGB-live. txt; DETR Architecture . The results indicate that the proposed DT-SLAM (mean RMSE = 0:0807. 5. 3. de / rbg@ma. In order to ensure the accuracy and reliability of the experiment, we used two different segmentation methods. Bauer Hörsaal (5602. [3] provided code and executables to evaluate global registration algorithms for 3D scene reconstruction system, and proposed the. Sie finden zudem eine Zusammenfassung der wichtigsten Informationen für neue Benutzer auch in unserem. Open3D has a data structure for images. 18. tum. kb. The benchmark contains a large. We conduct experiments both on TUM RGB-D dataset and in real-world environment. in. TUM RGB-D. in. The TUM RGB-D dataset provides many sequences in dynamic indoor scenes with accurate ground-truth data. 55%. Usage. 2. $ . de TUM-RBG, DE. The color image is stored as the first key frame. We may remake the data to conform to the style of the TUM dataset later. md","path":"README. The fr1 and fr2 sequences of the dataset are employed in the experiments, which contain scenes of a middle-sized office and an industrial hall environment respectively. Check other websites in . github","path":". Therefore, a SLAM system can work normally under the static-environment assumption. The presented framework is composed of two CNNs (depth CNN and pose CNN) which are trained concurrently and tested. Export as Portable Document Format (PDF) using the Web BrowserExport as PDF, XML, TEX or BIB. This paper presents a novel SLAM system which leverages feature-wise. The benchmark website contains the dataset, evaluation tools and additional information. 31,Jin-rong Street, CN: 2: 4837: 23776029: 0. tum. We provide examples to run the SLAM system in the KITTI dataset as stereo or monocular, in the TUM dataset as RGB-D or monocular, and in the EuRoC dataset as stereo or monocular. 
Our dataset contains the color and depth images of a Microsoft Kinect sensor along the ground-truth trajectory. The TUM RGB-D dataset consists of colour and depth images (640 × 480) acquired by a Microsoft Kinect sensor at full frame rate (30 Hz). The dataset comes from the TUM Department of Informatics: each sequence of the TUM RGB-D benchmark contains RGB and depth images recorded with a Microsoft Kinect camera in a variety of scenes, together with the accurate motion trajectory of the camera obtained from the motion-capture system. TUM RGB-D [47] is a dataset of images containing colour and depth information collected by a Microsoft Kinect sensor along its ground-truth trajectory. It contains indoor sequences from RGB-D sensors grouped in several categories by different texture, illumination, and structure conditions. Unfortunately, TUM Mono-VO images are provided only in the original, distorted form, and the images contain a slight jitter. The TUM RGB-D scribble-based segmentation benchmark comprises 154 RGB-D images, each with a corresponding scribble and a ground-truth image.

Laser scanners and lidar specifically generate 2D or 3D point clouds, and classic SLAM approaches typically use laser range finders. In all of our experiments, 3D models are fused using surfels as implemented in ElasticFusion [15]. However, this method takes a long time to compute, and its real-time performance is difficult to bring up to practical needs. The energy-efficient DS-SLAM system implemented on a heterogeneous computing platform is evaluated on the TUM RGB-D dataset: compared with an Intel i7 CPU on the TUM dataset, our accelerator achieves up to 13× frame-rate improvement and up to 18× energy-efficiency improvement, without significant loss in accuracy.

Getting Started

Example result (left: without dynamic-object detection or masks; right: with YOLOv3 and masks), run on rgbd_dataset_freiburg3_walking_xyz.

Evaluation on the TUM RGB-D dataset

The experiments are performed on the popular TUM RGB-D dataset. In addition, results on the real-world TUM RGB-D dataset also agree with previous work (Klose, Heise, and Knoll 2013), in which IC can slightly increase the convergence radius and improve the precision in some sequences.
The dataset contains the real motion trajectories provided by the motion-capture equipment, so estimated trajectories can be compared directly against the ground truth.
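To quantify this comparison, the benchmark's evaluation measures the absolute trajectory error (ATE) after aligning the estimate to the ground truth; for monocular runs the alignment also includes the optimal scale factor mentioned earlier. A sketch using the closed-form Horn/Umeyama solution, assuming the positions have already been associated by timestamp:

```python
import numpy as np

def align_umeyama(est, gt):
    """Closed-form similarity alignment (rotation, translation, scale)
    of estimated positions est (N, 3) to ground-truth positions gt (N, 3)."""
    mu_e, mu_g = est.mean(0), gt.mean(0)
    E, G = est - mu_e, gt - mu_g
    U, d, Vt = np.linalg.svd(G.T @ E)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1  # enforce a proper rotation (det = +1)
    R = U @ S @ Vt
    scale = (d * S.diagonal()).sum() / (E ** 2).sum()
    t = mu_g - scale * R @ mu_e
    return scale, R, t

def ate_rmse(est, gt):
    """Root-mean-square absolute trajectory error after alignment."""
    s, R, t = align_umeyama(est, gt)
    aligned = (s * (R @ est.T)).T + t
    err = np.linalg.norm(aligned - gt, axis=1)
    return float(np.sqrt((err ** 2).mean()))
```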