
[ICRA2022 & IROS2023] General Place Recognition: City-scale UGV Localization

Localization, SLAM, Place Recognition, Visual Navigation, Loop Closure Detection


[IROS2023] General Place Recognition for Autonomous Map Assembling

🌐 Website: https://metaslam.github.io/competitions/iros2023/

Note: Timely updates and details will appear on our website and GitHub.


Overview

For decades, place recognition has been applied to a variety of localization and navigation tasks. However, only a few methods have been proposed for large-scale map assembling. With the advancement of autonomous driving, last-mile delivery, and multi-agent cooperation, there is significant demand for efficient and accurate large-scale, crowd-sourced map updating. In this competition, General Place Recognition (GPR) for Autonomous Map Assembling, we provide a comprehensive evaluation platform built on large-scale LiDAR/IMU datasets. These datasets were collected repeatedly, at different times, in a variety of environments (city, park), with varying overlaps. The goal is to evaluate the data association ability between trajectories that exhibit overlapping regions, without any GPS assistance.

Dataset

In this dataset, we include:

  • 10 trajectories comprising 80 real-world sequences, collected on the Carnegie Mellon University campus
  • Each trajectory was traversed 8 times: 2 forward and 2 backward sequences in daylight, and 2 forward and 2 backward sequences at night.

Submission and Evaluation

The goal of the competition is to evaluate the performance of place recognition methods in the overlapping areas of trajectories. Top-1 and Top-5 recall will be calculated, and the final rankings are based on Top-1 recall.
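As a local sanity check before submitting, the sketch below shows one way Top-k recall could be computed from global descriptors. The gt_positives structure (the set of database indices counted as true matches for each query, e.g. derived from ground-truth poses) and the Euclidean distance metric are assumptions for illustration; the official server-side evaluation may differ.

    import numpy as np

    def topk_recall(query_feats, db_feats, gt_positives, k=1):
        """Fraction of queries whose k nearest database descriptors
        contain at least one ground-truth positive match."""
        # Pairwise Euclidean distances between query and database descriptors.
        dists = np.linalg.norm(query_feats[:, None, :] - db_feats[None, :, :], axis=-1)
        topk = np.argsort(dists, axis=1)[:, :k]  # indices of the k closest submaps
        hits = sum(1 for q, retrieved in enumerate(topk)
                   if gt_positives[q] & set(retrieved.tolist()))
        return hits / len(query_feats)

    # Example on random placeholder descriptors.
    rng = np.random.default_rng(0)
    q, db = rng.normal(size=(10, 256)), rng.normal(size=(100, 256))
    positives = [set(rng.integers(0, 100, size=3).tolist()) for _ in range(10)]
    print(topk_recall(q, db, positives, k=1), topk_recall(q, db, positives, k=5))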

Participants should unzip all the files and convert all the point clouds into global descriptors in the same order. The submission must be in the standard NumPy binary format (.npy).
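To illustrate the expected format, here is a minimal sketch that builds the submission array. The load_points and describe functions are hypothetical placeholders for your own point-cloud loader and descriptor model, and the *.pcd glob pattern is an assumption about how the unzipped files are laid out.

    import glob
    import numpy as np

    def load_points(path):   # hypothetical: read one point cloud as an (N, 3) array
        raise NotImplementedError

    def describe(points):    # hypothetical: map a point cloud to a 1-D descriptor
        raise NotImplementedError

    # Sort the files so every run produces descriptors in the same order.
    files = sorted(glob.glob("unzipped/*.pcd"))
    features = np.stack([describe(load_points(f)) for f in files]).astype(np.float32)
    np.save("submission.npy", features)  # standard NumPy binary format (.npy)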


[ICRA2022] General Place Recognition for Large-scale SLAM

Challenge 1: Large-scale 3D Localization (3D-3D Localization).

🌐 Website: http://gprcompetition.com

🏆 SDK in Github: https://github.com/MetaSLAM/GPR_Competition

Note: Timely updates will appear on our website and GitHub.


Overview

The ability of mobile robots to recognize previously visited or mapped areas is essential to reliable autonomy. Place recognition for localization and SLAM is a promising route to this ability. However, current approaches are degraded by differences in viewpoint and by environmental conditions that affect visual appearance (e.g., illumination, season, time of day), so they struggle to provide continuous localization in environments that change over time. These issues are compounded when localizing in large-scale (several-kilometer) maps, where repeated terrain and geometry cause greater localization uncertainty. This competition aims to push the state of the art in visual and LiDAR techniques for localization in large-scale environments, with an emphasis on environments with changing conditions.

To evaluate localization performance on long-term, large-scale tasks, we provide benchmark datasets for investigating robust re-localization under changing viewpoints (orientation and translation) and environmental conditions (illumination and time of day). This competition includes two challenges. The winner of each challenge will receive 3000 USD, while the runner-up will receive 2000 USD.

This is the first challenge: Large-scale 3D Localization (3D-3D Localization). If you are interested in our second challenge, Visual 2D-2D Localization, please visit [here].

Software Development Kit (SDK)

To accelerate development, we provide a complete SDK for loading datasets and evaluating results. Python APIs are provided so that participants can conveniently integrate the interfaces into their own code. The SDK includes APIs for:

  • easy access to our datasets;
  • metric evaluation for visual/LiDAR place recognition;
  • result submission for online evaluation at crowdAI.

With this SDK, participants can focus on their algorithms and on improving place recognition accuracy. The SDK is hosted in the GitHub repo: https://github.com/MetaSLAM/GPR_Competition

Dataset

This Pittsburgh City-scale Dataset concentrates on LiDAR place recognition over a large-scale urban area. We collected 55 vehicle trajectories covering part of Pittsburgh and thus spanning diverse environments. Each trajectory overlaps with the others at one junction at least, and some trajectories share multiple junctions. This property makes the dataset suitable for tasks such as LiDAR place recognition and multi-map fusion.

The original dataset contains point clouds and GPS data. We generate ground-truth poses by SLAM, fused with the GPS data and later optimized with Interactive SLAM. This process also yields the map of each trajectory. For convenience, we slice the map along the trajectory into submaps; each submap is 50 m × 50 m, with 2 m between consecutive submaps. The global 6-DoF ground-truth pose of each submap is also given, so you can easily determine the distance relations between submaps.
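Because the 6-DoF poses are provided, the distance relations between submaps can be recovered directly. Below is a minimal sketch using a k-d tree over submap centres; the stacked poses.npy file (one [x, y, z, roll, pitch, yaw] row per submap) and the 10 m radius are assumptions for illustration.

    import numpy as np
    from scipy.spatial import cKDTree

    # Hypothetical stacked pose file: one [x, y, z, roll, pitch, yaw] row per submap.
    poses = np.load("poses.npy")
    tree = cKDTree(poses[:, :2])  # index submap centres by their x, y position

    # Consecutive submaps are 2 m apart, so a 10 m radius collects
    # pairs of submaps that are likely to overlap.
    overlapping_pairs = tree.query_pairs(r=10.0)
    print(f"{len(overlapping_pairs)} overlapping submap pairs")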

Pittsburgh City-scale Dataset

In this dataset, we include:

  • Point cloud submaps (40 m × 40 m, ground removed, downsampled to 4096 points, sampled every 2 m along the trajectory)
  • Ground-truth 6-DoF poses of the submaps

Both the training and testing data can be downloaded [here]. We also provide sample data [here] for a quick look. Our SDK can help you manage the data more easily.

File structure:

    GPR
    ├── TEST --------------------------> evaluation set for submission
    │   ├── 000000.pcd ----------------> test submap
    │   ├── 000001.pcd
    │   .
    │   .
    │   └── 005622.pcd
    ├── TRAIN -------------------------> training set
    │   ├── train_1
    │   │   ├── 000001.pcd -------------> training submap
    │   │   ├── 000001_pose6d.npy ----> corresponding groundtruth
    │   │   .
    │   │   .
    │   │   ├── 001093.pcd
    │   │   └── 001093_pose6d.npy
    │   ├── train_2
    │   ├── .
    │   ├── .
    │   └── train_15
    └── VAL ----------------------------> sample tracks for self evaluation
        ├── val_1
        │   ├── DATABASE
        │   │   ├── 000001.pcd
        │   │   ├── 000001_pose6d.npy
        │   │   ├── .
        │   │   ├── .
        │   │   ├── 000164.pcd
        │   │   └── 000164_pose6d.npy
        │   └── QUERY
        │           ├── forward
        │           │     ├──000001.pcd
        │           │     ├──000001_pose6d.npy
        │           │     ├──...
        │           └── backward
        ├── val_2
        ├── val_3
        │   ├── DATABASE
        │   │   ├── 000001.pcd
        │   │   ├── 000001_pose6d.npy
        │   │   ├── .
        │   │   ├── .
        │   │   ├── 000164.pcd
        │   │   └── 000164_pose6d.npy
        │   └── QUERY
        │           ├── rot_15
        │           │     ├──000001.pcd
        │           │     ├──000001_pose6d.npy
        │           │     ├──...
        │           ├── rot_30
        │           └── rot_180
        ├── val_4
        ├── val_5
        └── val_6
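Independent of the SDK, individual files can be inspected with standard tools. The sketch below assumes Open3D for reading .pcd files and follows the paths in the tree above.

    import numpy as np
    import open3d as o3d

    # Read one training submap and its 6-DoF ground-truth pose.
    pcd = o3d.io.read_point_cloud("GPR/TRAIN/train_1/000001.pcd")
    points = np.asarray(pcd.points)  # (4096, 3) xyz coordinates, ground removed
    pose = np.load("GPR/TRAIN/train_1/000001_pose6d.npy")
    print(points.shape, pose)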

After you download the testing data, you will see the file TEST_ROUND_2.tar.gz. It contains both the reference and query submaps, but in a shuffled order, so competitors cannot know their true relation. Compute the feature of each submap with your method and arrange the features in a (submap_num * feature_dim) numpy.ndarray (the order of the features must exactly match the order of the submaps). Save it as a *.npy file and upload it, as in the sketch below.
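To make the expected array concrete, here is a deliberately weak baseline that maps each submap to a 64-bin height histogram. The extracted directory name and the descriptor itself are assumptions, intended only to show the (submap_num * feature_dim) layout, not to score well.

    import glob
    import numpy as np
    import open3d as o3d

    def height_histogram(points, bins=64, z_range=(-5.0, 25.0)):
        """Toy global descriptor: an L2-normalised histogram of point heights."""
        hist, _ = np.histogram(points[:, 2], bins=bins, range=z_range)
        hist = hist.astype(np.float32)
        return hist / (np.linalg.norm(hist) + 1e-8)

    # Keep the feature order identical to the submap file order.
    files = sorted(glob.glob("TEST_ROUND_2/*.pcd"))
    features = np.stack([height_histogram(np.asarray(o3d.io.read_point_cloud(f).points))
                         for f in files])  # shape: (submap_num, 64)
    np.save("features.npy", features)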
