Submit Your Method

Evaluate on Codabench

If you have problems with submitting your method, you can get in touch with us.
Why Evaluate on Codabench?

The progress in out-of-distribution segmentation over the past few years has been nothing short of exceptional. Today's state-of-the-art methods have achieved performance levels we were not sure would be possible when we first launched the Segment Me If You Can benchmark back in 2021. We would like to thank every participant who submitted their methods and helped push the field forward.

After several years of manually evaluating submissions, we have come to realize that we unfortunately no longer have the capacity to maintain the benchmark in this form. However, rather than shutting it down, we wanted to ensure it remains a valuable resource for future research. For this reason, we have migrated to an automated evaluation platform on Codabench: Go to Competition Page

The ground truth labels remain private to ensure the integrity of the benchmark, but researchers can now evaluate their methods more efficiently and continue contributing to this evolving area.


Prepare Predictions

To evaluate your method, we require pixel-wise anomaly / obstacle scores, where higher values correspond to a stronger anomaly / obstacle prediction. Our benchmark code provides an inference wrapper for this:


from tqdm import tqdm
import cv2 as cv
import numpy as np
from road_anomaly_benchmark.evaluation import Evaluation

def my_dummy_method(image):
    """ Very naive method: return color saturation as the anomaly score """
    image_hsv = cv.cvtColor(image, cv.COLOR_RGB2HSV_FULL)
    # Saturation channel, rescaled to [0, 1]
    scores = image_hsv[:, :, 1].astype(np.float32) * (1. / 255.)
    return scores

def main():
    ev = Evaluation(method_name = 'Dummy', dataset_name = 'AnomalyTrack-all')
    for frame in tqdm(ev.get_dataset()):
        # Per-pixel anomaly scores for this frame, higher = more anomalous
        anomaly_p = my_dummy_method(frame.image)
        ev.save_output(frame, anomaly_p)
    # Saving is asynchronous; block until all outputs have been written
    ev.wait_to_finish_saving()

if __name__ == '__main__':
    main()

The output scores are stored as .hdf5 files, by default in the directory ./outputs/anomaly_p/ (sub-directory Dummy/AnomalyTrack-all for the example script above). Replace the function my_dummy_method with your method and change method_name accordingly. For the obstacle track, change dataset_name to ObstacleTrack-all.
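Before uploading to Codabench, it can help to confirm that the score files were actually written. The following is a minimal sketch, assuming the default output directory produced by the example script above; the internal layout of the .hdf5 files is not specified here, so the script only lists the stored keys rather than assuming a particular dataset name.

import h5py
from pathlib import Path

def check_outputs():
    # Default output location used by the example script above
    out_dir = Path('./outputs/anomaly_p/Dummy/AnomalyTrack-all')
    files = sorted(out_dir.glob('*.hdf5'))
    print(f'{len(files)} score files found in {out_dir}')

    if files:
        # Inspect one file; the key names depend on the benchmark code version,
        # so we only print them here instead of assuming a specific layout.
        with h5py.File(files[0], 'r') as f:
            print('Keys in', files[0].name, ':', list(f.keys()))

if __name__ == '__main__':
    check_outputs()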