HaDa Dataset

The HaDa (Hands Dataset) contains depth and color images of shopper-shelf interactions captured from a top-view configuration. The purpose of this dataset is to classify the images into the following categories:

  • Positive: the image contains a hand holding something (example: 20180309181430932_ID60_T0_OUT)

  • Negative: the image contains an empty hand (example: 20180310184438583_ID404_T1_IN)

  • Neutral: the image belongs to none of the other categories (example: 20180308112954924_ID164_T1_OUT)

  • Refill: the image shows a shelf refilling operation (example: 20180122100142436_ID72_T0_OUT)

An interaction is described by a sequence of two images.
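
The example identifiers above share a common naming pattern. As a minimal, non-authoritative sketch, the snippet below splits such an identifier into its apparent fields; the interpretation of each field (acquisition timestamp, interaction ID, position within the two-image sequence, and an IN/OUT direction) is an assumption inferred from the names, not something documented by the dataset itself.

import re
from dataclasses import dataclass

# Hypothetical parser for identifiers such as "20180309181430932_ID60_T0_OUT".
# Field meanings are assumptions inferred from the naming pattern.
_PATTERN = re.compile(r"^(\d+)_ID(\d+)_T(\d+)_(IN|OUT)$")

@dataclass
class InteractionImage:
    timestamp: str        # assumed acquisition timestamp, e.g. "20180309181430932"
    interaction_id: int   # assumed numeric interaction ID (the digits after "ID")
    frame_index: int      # assumed position in the two-image sequence (0 or 1)
    direction: str        # assumed movement direction, "IN" or "OUT"

def parse_identifier(name: str) -> InteractionImage:
    """Split an identifier into its apparent fields; raise on unexpected names."""
    match = _PATTERN.match(name)
    if match is None:
        raise ValueError(f"Unrecognized identifier: {name}")
    timestamp, interaction_id, frame_index, direction = match.groups()
    return InteractionImage(timestamp, int(interaction_id), int(frame_index), direction)

# Example:
# parse_identifier("20180309181430932_ID60_T0_OUT")
# -> InteractionImage(timestamp='20180309181430932', interaction_id=60, frame_index=0, direction='OUT')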

The dataset contains a total of 13,854 images (depth + color) with dimensions of 70 × 70 pixels. The ground truth was manually labelled by human annotators. One example instance from each of the four categories is listed above.
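
The archive layout is not described here, so the following is only a minimal loading sketch under assumed conventions: it presumes a hypothetical folder per class (positive, negative, neutral, refill) containing paired <name>_color.png and <name>_depth.png files, and stacks each pair into a 4-channel 70 × 70 array. Folder names, file suffixes, and the depth scaling are all assumptions; adapt them to the actual download.

import os
from typing import List, Tuple

import numpy as np
from PIL import Image
from torch.utils.data import Dataset

# Assumed class folders; the real archive layout may differ.
CLASSES = ["positive", "negative", "neutral", "refill"]

class HaDaDataset(Dataset):
    """Minimal sketch: pairs color and depth images and returns a 4 x 70 x 70 array."""

    def __init__(self, root: str):
        self.samples: List[Tuple[str, str, int]] = []
        for label, cls in enumerate(CLASSES):
            cls_dir = os.path.join(root, cls)
            for fname in sorted(os.listdir(cls_dir)):
                if fname.endswith("_color.png"):  # assumed suffix convention
                    base = fname[: -len("_color.png")]
                    self.samples.append((
                        os.path.join(cls_dir, base + "_color.png"),
                        os.path.join(cls_dir, base + "_depth.png"),
                        label,
                    ))

    def __len__(self) -> int:
        return len(self.samples)

    def __getitem__(self, idx: int):
        color_path, depth_path, label = self.samples[idx]
        color = np.asarray(Image.open(color_path).convert("RGB"), dtype=np.float32) / 255.0
        depth = np.asarray(Image.open(depth_path), dtype=np.float32)
        # Stack RGB (3 channels) and depth (1 channel) into a single 4 x 70 x 70 input.
        image = np.concatenate([color.transpose(2, 0, 1), depth[None, ...]], axis=0)
        return image, label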

To obtain this dataset, we ask you to complete, sign and return the form below. After that, we will send you the credentials to download it. Note that the dataset is available for research purposes only.

  • Fill out the request form
  • Send it to: vrai@dii.univpm.it (note: send the email from an address linked to your research institution/university)
  • Wait for the credentials
  • You will receive a download link.

 

Please cite our work using the following BibTeX entry:


@Article{Paolanti2020,
  author     = {Paolanti, M. and Pietrini, R. and Mancini, A. and Frontoni, E. and Zingaretti, P.},
  title      = {Deep understanding of shopper behaviours and interactions using RGB-D vision},
  journal    = {Machine Vision and Applications},
  year       = {2020},
  volume     = {31},
  number     = {7-8},
  doi        = {10.1007/s00138-020-01118-w},
  art_number = {66},
}