News

  • 2017-03-31 The new version of the evaluation code and validation results is released.
  • 2017-03-31 Text-format ground truth is added and the rounding problem of bounding box annotations is fixed.
  • 2016-08-19 Two new algorithms are added to the leaderboard.
  • 2016-04-17 The face attribute labels, i.e., pose and occlusion, are available.
  • 2015-11-19 Results of four baseline methods: ACF, Faceness, Multiscale Cascade CNN, and Two-stage CNN are released.
  • 2015-11-19 WIDER FACE v1.0 is released with images, face bounding box annotations, and event category annotations.

Description

The WIDER FACE dataset is a face detection benchmark dataset whose images are selected from the publicly available WIDER dataset. We choose 32,203 images and label 393,703 faces with a high degree of variability in scale, pose and occlusion, as depicted in the sample images. The WIDER FACE dataset is organized based on 61 event classes. For each event class, we randomly select 40%/10%/50% of the data as training, validation and testing sets. We adopt the same evaluation metric employed in the PASCAL VOC dataset. As with the MALF and Caltech datasets, we do not release bounding box ground truth for the test images. Users are required to submit final prediction files, which we will then evaluate.
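For concreteness, the sketch below illustrates the IoU-based matching criterion underlying the PASCAL VOC-style metric, assuming the standard 0.5 overlap threshold. The function names and the greedy matching loop are illustrative only and are not the official evaluation code.

    # Minimal sketch of PASCAL VOC-style matching of detections to ground truth.
    # Boxes are (left, top, width, height); detections additionally carry a score.

    def iou(box_a, box_b):
        """Intersection-over-union of two (left, top, width, height) boxes."""
        ax1, ay1, aw, ah = box_a
        bx1, by1, bw, bh = box_b
        ax2, ay2 = ax1 + aw, ay1 + ah
        bx2, by2 = bx1 + bw, by1 + bh
        inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
        inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
        inter = inter_w * inter_h
        union = aw * ah + bw * bh - inter
        return inter / union if union > 0 else 0.0

    def match_detections(detections, ground_truth, iou_thresh=0.5):
        """Greedily match score-sorted detections to unmatched ground-truth boxes.

        Returns a list of booleans (True = true positive) in score order,
        which feeds a standard precision-recall / AP computation.
        """
        detections = sorted(detections, key=lambda d: d[-1], reverse=True)
        matched = [False] * len(ground_truth)
        results = []
        for *box, _score in detections:
            best_iou, best_idx = 0.0, -1
            for i, gt in enumerate(ground_truth):
                if matched[i]:
                    continue
                overlap = iou(box, gt)
                if overlap > best_iou:
                    best_iou, best_idx = overlap, i
            if best_iou >= iou_thresh:
                matched[best_idx] = True
                results.append(True)
            else:
                results.append(False)
        return results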

Benchmark

For details on the evaluation scheme, please refer to the technical report.
For detection results, please refer to the result page.

  • Scenario-Ext: A face detector is trained using any external data, and tested on the WIDER FACE test partition.
  • Scenario-Int: A face detector is trained using WIDER FACE training/validation partitions, and tested on WIDER FACE test partition.

Submission

Please contact us to evaluate your detection results. An evaluation server will be available soon.
The detection result for each image should be a text file with the same name as the image. The detection results are organized by the event categories. For example, if the directory of a testing image is "./0--Parade/0_Parade_marchingband_1_5.jpg", the detection result should be written in the text file "./0--Parade/0_Parade_marchingband_1_5.txt". The detection output is expected in the following format:
...
< image name i >
< number of faces in this image = im >
< face i1 >
< face i2 >
...
< face im >
...
Each text file should contain 1 row per detected bounding box, in the format "[left, top, width, height, score]". Please see the output example files and the README if the above descriptions are unclear.
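As a reference, the snippet below is a minimal sketch of writing one result file in this format. The helper name, directory layout, and number formatting are assumptions for illustration and are not part of the official toolkit; consult the output example files and the README for the authoritative format.

    # Hypothetical helper that writes one detection file in the submission format.
    import os

    def write_detections(output_root, event_dir, image_name, detections):
        """Write detections for one image.

        detections: list of (left, top, width, height, score) tuples.
        Produces e.g. ./0--Parade/0_Parade_marchingband_1_5.txt
        """
        out_dir = os.path.join(output_root, event_dir)
        os.makedirs(out_dir, exist_ok=True)
        out_path = os.path.join(out_dir, image_name + ".txt")
        with open(out_path, "w") as f:
            f.write(image_name + "\n")            # < image name i >
            f.write(str(len(detections)) + "\n")  # < number of faces in this image = im >
            for left, top, width, height, score in detections:
                f.write("%.1f %.1f %.1f %.1f %.3f\n" % (left, top, width, height, score))

    # Example usage (coordinates and score are made up):
    # write_detections("./", "0--Parade", "0_Parade_marchingband_1_5",
    #                  [(78.0, 221.0, 45.0, 57.0, 0.99)])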

Citation

	@inproceedings{yang2016wider,
	Author = {Yang, Shuo and Luo, Ping and Loy, Chen Change and Tang, Xiaoou},
	Booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
	Title = {WIDER FACE: A Face Detection Benchmark},
	Year = {2016}}
		

Contact

For questions and result submission, please contact Shuo Yang at shuoyang.1213@gmail.com

License

Creative Commons License (CC BY-NC-ND)