In the framework of the ChIA Project, we have been generating datasets that are useful for carrying out our research. Please visit the links below to download the datasets in their current state. We are also finalising the publication of the datasets using the DCAT vocabulary.

Annotation Dataset

This dataset contains lists of images semantically annotated with the Appealing/Non-appealing abstract concept category. For each image it records the manual annotations collected from five experts, the percentage of Appealing and Non-appealing ratings, and the final category assigned to the image. It serves as the ground truth for training the computer vision models.

Click here to download the dataset.
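To illustrate how the per-image fields described above relate to each other, here is a minimal sketch of deriving the Appealing percentage and a final category from five expert labels. The column names and the tie-breaking rule (≥ 50% counts as Appealing) are assumptions for illustration, not the dataset's actual schema.

```python
import csv
from io import StringIO

# Hypothetical rows mimicking the annotation dataset: five expert labels
# per image. Column names are illustrative only.
SAMPLE = """image,expert1,expert2,expert3,expert4,expert5
food_001.jpg,Appealing,Appealing,Non-appealing,Appealing,Appealing
food_002.jpg,Non-appealing,Non-appealing,Appealing,Non-appealing,Non-appealing
"""

def final_category(labels):
    """Majority vote over expert labels, returning (category, Appealing %)."""
    appealing = sum(1 for label in labels if label == "Appealing")
    pct = 100.0 * appealing / len(labels)
    # Assumed tie-breaking rule; the dataset may resolve ties differently.
    return ("Appealing" if pct >= 50 else "Non-appealing"), pct

for row in csv.DictReader(StringIO(SAMPLE)):
    labels = [row[f"expert{i}"] for i in range(1, 6)]
    category, pct = final_category(labels)
    print(row["image"], category, f"{pct:.0f}%")
```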

Training Data

This dataset contains all the images used in the ChIA experiments, covering the Round-1 to Round-4 experiments.

Due to its large size, the dataset is hosted on an external Google Drive here. Please request access, and you will be granted temporary access to download the data.

Associated metadata in RDF format for all the images is available here.

Computer Vision Models

The details of the computer vision models (Round-3 and Round-4) are available here. Developers who want to replicate or reuse the models can obtain further information by contacting the project members.

CV Generated Annotations

This dataset contains all the annotations that were automatically generated by three of our computer vision models.

Click here to download the CSV file of the predictions.
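Once downloaded, the predictions CSV can be inspected with standard tooling. The sketch below tallies predicted classes per model; the column names (`image`, `model`, `prediction`) are assumptions for illustration and may differ from the actual file.

```python
import csv
from collections import Counter
from io import StringIO

# Illustrative rows standing in for the downloaded predictions CSV;
# the real file's columns may be named differently.
SAMPLE = """image,model,prediction
food_001.jpg,model_a,Appealing
food_001.jpg,model_b,Non-appealing
food_002.jpg,model_a,Appealing
"""

def class_counts(csv_text):
    """Count how often each (model, predicted class) pair occurs."""
    counts = Counter()
    for row in csv.DictReader(StringIO(csv_text)):
        counts[(row["model"], row["prediction"])] += 1
    return counts

for (model, prediction), n in sorted(class_counts(SAMPLE).items()):
    print(model, prediction, n)
```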