Submit Your Method
If you would like to submit a method, please get in touch with us via
In order to evaluate your method, we require pixel-wise anomaly / obstacle scores, where higher values correspond to a stronger anomaly / obstacle prediction. Our benchmark code provides an inference wrapper for this purpose. Once the scores are computed, please send us a download link (e.g. via Google Drive, Dropbox, ...) to the score files. To facilitate the evaluation process, please make sure that the score files can be downloaded from the command line, e.g. using wget, curl, or gdown.
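Before sending the link, it can be worth checking that each score map has the expected layout. The helper below is a minimal sketch (check_scores is our own name, not part of the benchmark API); it assumes a score map is a 2D floating-point array at the input image's resolution, with higher values marking anomalies / obstacles:

import numpy as np

def check_scores(image, scores):
    # Illustrative sanity check only, not part of the benchmark API.
    assert scores.ndim == 2, "score map must be 2D (H, W)"
    assert scores.shape == image.shape[:2], "score map must match the image resolution"
    assert np.issubdtype(scores.dtype, np.floating), "scores should be floating point"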
Example inference script for a dummy method:
from tqdm import tqdm
import cv2 as cv
import numpy as np

from road_anomaly_benchmark.evaluation import Evaluation


def my_dummy_method(image):
    """ Very naive method: return color saturation """
    image_hsv = cv.cvtColor(image, cv.COLOR_RGB2HSV_FULL)
    # Saturation channel rescaled to [0, 1]; higher saturation -> higher score.
    scores = image_hsv[:, :, 1].astype(np.float32) * (1. / 255.)
    return scores


def main():
    ev = Evaluation(method_name='Dummy', dataset_name='AnomalyTrack-all')

    for frame in tqdm(ev.get_dataset()):
        # frame.image is the input image; the result is a per-pixel score map.
        anomaly_p = my_dummy_method(frame.image)
        ev.save_output(frame, anomaly_p)

    # Saving runs in the background; block until all files are written.
    ev.wait_to_finish_saving()


if __name__ == '__main__':
    main()
The output scores will be stored as .hdf5 files, by default in the directory ./outputs/anomaly_p/ (sub-directory Dummy/AnomalyTrack-all for the example script). Replace the function my_dummy_method with your own method and change the method_name accordingly. For the obstacle track, change dataset_name to ObstacleTrack-all.
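If you want to inspect the saved outputs before sending the download link, the files can be opened with h5py. The snippet below is a sketch that assumes the output directory layout described above; since the dataset keys inside each file are not specified here, it simply lists whatever is stored:

from pathlib import Path
import h5py

# Output directory produced by the example script above (assumed layout).
out_dir = Path('./outputs/anomaly_p/Dummy/AnomalyTrack-all')

for path in sorted(out_dir.rglob('*.hdf5')):
    with h5py.File(path, 'r') as f:
        # Print every dataset stored in the file with its shape and dtype.
        for key in f.keys():
            print(path.name, key, f[key].shape, f[key].dtype)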