Submit Your Method

Submission form coming soon.

To evaluate your method, we require pixel-wise anomaly / obstacle scores, where higher values indicate a stronger anomaly / obstacle prediction. Our benchmark code provides an inference pipeline for this.

Example inference script for a dummy method:

import numpy as np
from tqdm import tqdm
import cv2 as cv
from road_anomaly_benchmark.evaluation import Evaluation

def my_dummy_method(image):
    """ Very naive method: return color saturation """
    image_hsv = cv.cvtColor(image, cv.COLOR_RGB2HSV_FULL)
    # Saturation channel, rescaled to [0, 1].
    scores = image_hsv[:, :, 1].astype(np.float32) * (1./255.)
    return scores

def main():
    ev = Evaluation(method_name = 'Dummy', dataset_name = 'AnomalyTrack-test')
    for frame in tqdm(ev.get_dataset()):
        # Per-pixel scores: higher value means more likely anomaly / obstacle.
        anomaly_p = my_dummy_method(frame.image)
        ev.save_output(frame, anomaly_p)
    # Saving happens asynchronously; block until all outputs are written.
    ev.wait_to_finish_saving()

if __name__ == '__main__':
    main()
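To plug in a real method, the only contract to satisfy is the one the dummy method follows: take the frame's HxWx3 RGB image and return an HxW float map where higher values mean more anomalous. Below is a minimal, hypothetical sketch of such an adapter for a PyTorch model; my_model_method, the normalization, and the assumed output shape are illustrative and not part of the benchmark API:

import numpy as np
import torch

def my_model_method(image, model, device='cuda'):
    """Hypothetical adapter: run a trained network, return HxW anomaly scores."""
    # `image` is an HxWx3 uint8 RGB array, as provided by frame.image.
    x = torch.from_numpy(image).float().permute(2, 0, 1)[None] / 255.
    with torch.no_grad():
        out = model(x.to(device))          # assumed output shape: 1xHxW or 1x1xHxW
    scores = out.squeeze().cpu().numpy()   # higher = more anomalous
    return scores.astype(np.float32)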

The output scores are stored by default as HDF5 files in the directory ./outputs/anomaly_p/ (sub-directory Dummy/AnomalyTrack-test for the example script above). Replace the function my_dummy_method with your own method, for instance along the lines of the adapter sketched above, and change the method_name accordingly. For the obstacle track, change dataset_name to ObstacleTrack-all.
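To sanity-check a saved file, you can open it with h5py. This is a minimal sketch under the assumption that each file holds a single dataset of per-pixel scores; the file name below is hypothetical, so point it at any file the pipeline actually wrote:

import h5py

# Hypothetical file name; use any .hdf5 file found under ./outputs/anomaly_p/.
path = './outputs/anomaly_p/Dummy/AnomalyTrack-test/example_frame.hdf5'

with h5py.File(path, 'r') as f:
    key = list(f.keys())[0]        # assumption: one dataset per file
    scores = f[key][()]

print(key, scores.shape, scores.dtype, scores.min(), scores.max())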