The HaDa (Hands Dataset) contains depth and color images of shopper-shelf interactions captured from a top-view configuration. The purpose of this dataset is to classify the images into the following categories:
- Positive (image contains a hand holding something)
- Negative (image contains an empty hand)
- Neutral (none of the others)
- Refill (shelf refilling operation)
An interaction is described by a sequence of two images.
The dataset contains a total of 13854 images (depth + color) with a dimension of 70 x 70 pixels. The ground truth was manually labelled by human annotators. The following figure shows an example of dataset instances covering the four categories described above.
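For readers who want to experiment with the data once they receive it, the sketch below shows one way the color/depth image pairs could be loaded in Python. The directory layout, the file naming (`*_color.png`, `*_depth.png`), and the `load_hada_samples` helper are assumptions made for illustration; the structure of the released archive may differ.

```python
# A minimal loading sketch, assuming a layout of the form
# <root>/<category>/<sample_id>_color.png and <root>/<category>/<sample_id>_depth.png.
from pathlib import Path
from typing import List, Tuple

import numpy as np
from PIL import Image

# The four labels described above.
CATEGORIES = ["positive", "negative", "neutral", "refill"]


def load_hada_samples(root: str) -> List[Tuple[np.ndarray, np.ndarray, int]]:
    """Return a list of (color, depth, label) triples, one per 70 x 70 image pair."""
    samples = []
    root_path = Path(root)
    for label, category in enumerate(CATEGORIES):
        for color_file in sorted((root_path / category).glob("*_color.png")):
            depth_file = color_file.with_name(color_file.name.replace("_color", "_depth"))
            color = np.asarray(Image.open(color_file))  # color image, 70 x 70
            depth = np.asarray(Image.open(depth_file))  # depth map, 70 x 70
            samples.append((color, depth, label))
    return samples


if __name__ == "__main__":
    data = load_hada_samples("HaDa")
    print(f"Loaded {len(data)} image pairs")
```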
To obtain this dataset, we ask you to complete, sign and return the form below. After that, we will send you the credentials to download it. Note that the dataset is available only for research purposes.
- Fill out this form: request form
- Send it to: vrai@dii.univpm.it (Note: you should send the email from an email address that is linked to your research institution/university)
- Wait for the credentials
- You will be sent a link to download the dataset.
Please cite our work using the following BibTeX entry:
@Article{Paolanti2020,
  author     = {Paolanti, M. and Pietrini, R. and Mancini, A. and Frontoni, E. and Zingaretti, P.},
  title      = {Deep understanding of shopper behaviours and interactions using RGB-D vision},
  journal    = {Machine Vision and Applications},
  year       = {2020},
  volume     = {31},
  number     = {7-8},
  doi        = {10.1007/s00138-020-01118-w},
  art_number = {66},
}