I studied Electronics and Telecommunications at high school (ITIS in Portogruaro), and from there moved toward web-oriented computer science. I received my Bachelor's degree in Multimedia and Web Technologies from the University of Udine, where I also completed my Master's degree in Multimedia Communication and Information Technologies with a final mark of 110/110 cum laude. In my Master's thesis I carried out an experimental evaluation of state-of-the-art crowdsourcing techniques for mobile devices (available here, in Italian).
Since January 2014, I have been a PhD student in Computer Science at the University of Udine under the supervision of Prof. Stefano Mizzaro.
From July to December 2014, I spent six months as a visiting student at the Royal Melbourne Institute of Technology (RMIT), Australia. During that time I also cooperated with SEEK Ltd, which runs Australia's number one website for job seekers. From April to July 2016, I was hosted as a visiting student at the Information School of the University of Sheffield (UK).
One important issue in the information retrieval field is how to obtain good estimates of relevance for a collection of documents with respect to a few specific queries. These collections are important for testing, performance measurement, and comparison of information retrieval systems. Unlike traditional relevance measurement, which adopts binary or nominal scales, in our work we propose to use magnitude estimation, a technique standardly applied in psychophysics to measure judgments of sensory stimuli. There, stimulus intensities (in our case, the relevance of documents with respect to some topics) are expressed as strictly positive real numbers, so the adopted scale is unbounded. The benefits of this technique are multiple: relevance judgments can be gathered on a continuous and unbounded scale; there is always a smaller or larger value than the previous one that can be assigned to judge a document; and the granularity of judgments is finer. Traditionally, relevance judgments are obtained from human assessors, but this is not scalable, is time-consuming, and the influence of the chosen assessor is not negligible. Our approach is to use crowdsourcing to collect multiple assessments in a short time, asking workers to complete specific tasks that consist in assessing the relevance of some documents for some topics. The data collected from the crowdsourcing tasks, after an appropriate normalization, have been empirically shown to be reliable by comparing them to data collected from expert assessors.
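Because each worker is free to pick their own numeric range on an unbounded scale, raw magnitude estimates from different workers are not directly comparable. As a minimal sketch of the kind of normalization involved (assuming a simple geometric-mean rescaling, one common choice in the magnitude estimation literature; the actual normalization used in our work may differ):

```python
import math

def normalize_scores(scores_by_worker):
    """Rescale each worker's magnitude estimates by the geometric mean
    of that worker's own scores, so judgments from workers who use very
    different numeric ranges become comparable.

    scores_by_worker: dict mapping worker id -> list of strictly
    positive scores. Returns a dict with the same shape, rescaled.
    """
    normalized = {}
    for worker, scores in scores_by_worker.items():
        # Geometric mean, computed in log space for numerical stability.
        gmean = math.exp(sum(math.log(s) for s in scores) / len(scores))
        normalized[worker] = [s / gmean for s in scores]
    return normalized

# Hypothetical example: two workers judging the same three documents.
raw = {
    "w1": [10.0, 100.0, 1000.0],  # uses a broad numeric range
    "w2": [1.0, 2.0, 4.0],        # uses a narrow numeric range
}
norm = normalize_scores(raw)
# Both workers now agree on the relative ordering around a common unit:
# norm["w1"] == [0.1, 1.0, 10.0], norm["w2"] == [0.5, 1.0, 2.0]
```

After rescaling, each worker's "typical" judgment maps to 1.0, so scores can be averaged across workers without one worker's large numbers dominating.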
One of the main activities performed in diagnostic pathology is cell recognition in biological images obtained from scans of human tissue. This is done by human experts in a non-scalable and time-consuming way. Some software tools perform automatic recognition, but their results are still of low quality when compared to those of human experts. Our idea is to use crowdsourcing to obtain good-quality detections from human workers, quickly and cheaply. Our aim is to understand whether crowdsourcing workers with no previous experience in biology can carry out detection better than automatic systems by performing simple, ad hoc tasks. We have run experiments in which we collected recognitions from many crowdsourcing workers on images of breast cancer tissue, then aggregated the results and compared them with those provided by an expert. Early results seem encouraging, and we are currently improving the algorithms and the aggregation methods for detections, in order to reach high-quality results comparable to those obtained from human experts.
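To give an idea of what aggregating detections can look like, here is a minimal sketch (not our actual pipeline) assuming each worker marks cells by clicking on the image: clicks from different workers that fall close together are grouped, and a group counts as a detected cell only if enough distinct workers support it. The `radius` and `min_votes` parameters are hypothetical:

```python
def aggregate_clicks(clicks, radius=15.0, min_votes=3):
    """Greedy aggregation of point annotations from several workers.

    clicks: list of (worker_id, x, y) tuples, where (x, y) is a click
    position in pixels. Clicks within `radius` pixels of a cluster's
    centroid join that cluster; clusters supported by at least
    `min_votes` distinct workers are kept as detections.
    Returns a list of (x, y) detection centroids.
    """
    clusters = []  # each cluster is a list of (worker_id, x, y)
    for worker, x, y in clicks:
        for cluster in clusters:
            cx = sum(p[1] for p in cluster) / len(cluster)
            cy = sum(p[2] for p in cluster) / len(cluster)
            if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
                cluster.append((worker, x, y))
                break
        else:
            clusters.append([(worker, x, y)])
    detections = []
    for cluster in clusters:
        workers = {p[0] for p in cluster}  # count distinct workers only
        if len(workers) >= min_votes:
            detections.append((
                sum(p[1] for p in cluster) / len(cluster),
                sum(p[2] for p in cluster) / len(cluster),
            ))
    return detections

# Hypothetical example: three workers agree on one cell; one stray click.
clicks = [("a", 100, 100), ("b", 102, 99), ("c", 98, 101), ("a", 300, 300)]
cells = aggregate_clicks(clicks)  # the stray click is filtered out
```

The vote threshold is what filters out individual workers' mistakes: a spurious click by one worker never reaches `min_votes` distinct supporters, while a real cell attracts clicks from most workers.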
Information Retrieval (IR) is probably the most evaluation-oriented field in Computer Science. One crucial aspect of evaluation is the choice of evaluation metrics: more than 100 information retrieval metrics have been developed since the 1960s. The Axiometrics project aims at understanding the relations among them, in terms of axiomatic properties and statistical relations, addressing both metric science (understanding metrics) and metric engineering (developing them). The axiomatic approach to IR effectiveness metrics defines a framework based on the notions of measure, measurement, and similarity; it provides a general definition of IR effectiveness metric; and it proposes a set of axioms that every effectiveness metric should satisfy. Ideally, the similarity between metrics is high when they satisfy the same axioms and the theorems derived from them. By observing all these similarities, it becomes possible to create new classifications of metrics that improve on the current ones, which are strongly based on heuristic observations.
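One simple statistical notion of similarity between two metrics is how closely they agree when ranking a set of systems. As an illustrative sketch (the metrics, systems, and agreement measure here are hypothetical examples, not the project's actual methodology), we can score a few systems with two classic metrics and measure rank agreement with a basic Kendall's tau:

```python
def precision_at_k(ranking, relevant, k):
    """Fraction of the top-k retrieved documents that are relevant."""
    return sum(1 for d in ranking[:k] if d in relevant) / k

def reciprocal_rank(ranking, relevant):
    """1 / rank of the first relevant document (0.0 if none found)."""
    for i, d in enumerate(ranking, start=1):
        if d in relevant:
            return 1.0 / i
    return 0.0

def kendall_tau(a, b):
    """Basic (tau-a) rank correlation between two score lists:
    near +1 when they order the systems the same way, near -1 when
    the orderings are reversed. Tied pairs count as neither."""
    n = len(a)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (a[i] - a[j]) * (b[i] - b[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Three hypothetical systems' result lists for the same query.
relevant = {"d1", "d3"}
systems = [
    ["d1", "d2", "d3"],  # relevant document retrieved first
    ["d2", "d1", "d3"],  # relevant document at rank 2
    ["d2", "d4", "d5"],  # no relevant document retrieved
]
p_scores = [precision_at_k(r, relevant, 3) for r in systems]
rr_scores = [reciprocal_rank(r, relevant) for r in systems]
tau = kendall_tau(p_scores, rr_scores)
```

A high tau over many queries and systems would suggest the two metrics capture similar notions of effectiveness; the axiomatic view complements this by explaining *why*, via the axioms both metrics satisfy.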
This work has been partially supported by a Google Research Award.
I am currently the lecturer of the "Laboratory of Programming" course, where I teach Java programming to first-year students of "Web and Multimedia Technologies". (The timetable is available here.)