Valid submissions for both the point cloud and the 3D mesh are listed in the respective tables below. Results are ranked by the per-class F1 scores, the mean F1 (mF1) score, and the Overall Accuracy (OA), each rounded to two decimals. For the mesh, the evaluation metrics are weighted by the surface area of correctly / incorrectly classified faces. Click a column header to sort the results by that column. To see further details on a specific contribution, click the participant's ID; this shows the normalized confusion matrix and two exemplary renderings of the point cloud / mesh. Additional information provided directly by the participants is accessible via the ➥ icon.
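The reported metrics can be derived from a class confusion matrix. The following sketch (not the benchmark's official evaluation script; function and variable names are illustrative) shows one way to compute per-class F1, mF1, and OA. For the mesh track, the same formulas apply if the matrix entries accumulate face areas instead of point counts, which yields the area-weighted variants described above.

```python
import numpy as np

def scores_from_confusion(cm):
    """Per-class F1, mean F1 (mF1), and Overall Accuracy (OA) from a
    confusion matrix cm (rows = reference class, columns = prediction).
    Entries may be point counts or, for the mesh track, summed face
    areas, in which case the metrics become area-weighted."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)                                   # correct per class
    precision = tp / np.maximum(cm.sum(axis=0), 1e-12)  # column sums
    recall = tp / np.maximum(cm.sum(axis=1), 1e-12)     # row sums
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    oa = tp.sum() / cm.sum()                           # overall accuracy
    return f1, f1.mean(), oa

# Toy 2-class example (counts for a point cloud, or areas for a mesh)
cm = [[8, 2],
      [1, 9]]
f1, mf1, oa = scores_from_confusion(cm)
print(round(oa, 2))  # OA = (8 + 9) / 20 = 0.85
```

Rounding to two decimals, as in the tables, is then a simple `round(value, 2)` on each score.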