Text by Dr Hatice Gunes, Department of Computer Science & Technology, University of Cambridge, UK.

Creating artificially intelligent (AI) systems relies heavily on human data. It is therefore crucial that we, the computer scientists who are the main architects of these systems, think carefully about ethics and about how to responsibly design and implement AI systems that will be used in the public sector. These issues are reported and discussed in many places, but I would like to direct interested readers to the guide [1] produced by the Alan Turing Institute, the United Kingdom's national institute for data science and artificial intelligence, of which I am a Faculty Fellow. What I provide below is, by comparison, a rather simplified description of the ethical processes we need to consider in order to gather facial affect data as part of the WorkingAge project's in-lab studies.


Figure from https://www.forbes.com/sites/jessicabaron/2018/12/27/tech-ethics-issues-we-should-all-be-thinking-about-in-2019/#4e24548f4b21

As the old saying goes, the face is a window to the soul. Our face is a very rich source of information about our identity, our gender, our age, and our emotions and intentions. Facial affect analysis is therefore a key module of the WA Tool. We, the University of Cambridge (UCAM) partner of the project, are responsible for the automatic facial affect analysis of the users. This entails the analysis of their facial gestures/facial action units (e.g., smile, frown) and affect (e.g., arousal and valence). In this context, ethical data gathering and processing refers to two aspects. The first aspect relates to the fact that the predictive models we develop for facial affect analysis make use of third-party research datasets. These datasets (e.g., the BP4D dataset [2] and the BAUM-1 dataset [3]) consist of visual, or at times multimodal, recordings (e.g., facial expressions, audio, etc.). Some of these datasets are provided by institutions based in the USA, which may imply importing personal data from non-EU countries into the EU. However, the datasets are anonymised, and prior to granting access, the dataset manager requires a License Agreement to be signed by the Principal Investigator representing the research team requesting the dataset. Such datasets have their own institutional ethics approvals in place and are made available only for research purposes, on a case-by-case basis.
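
To make the prediction targets concrete, here is a minimal sketch of what a valence/arousal regressor can look like in PyTorch. The architecture, input size and names below are illustrative assumptions on my part, not the actual WorkingAge model, which is trained on the third-party datasets mentioned above.

```python
# Hypothetical sketch (not the WorkingAge model): map a cropped face image
# to continuous valence/arousal scores in [-1, 1].
import torch
import torch.nn as nn

class AffectRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        # Small convolutional backbone over 64x64 greyscale face crops.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Two outputs (valence, arousal), squashed to [-1, 1] with tanh.
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(64, 2), nn.Tanh())

    def forward(self, x):
        return self.head(self.features(x))

model = AffectRegressor()
face_batch = torch.randn(4, 1, 64, 64)   # placeholder for preprocessed face crops
valence_arousal = model(face_batch)      # shape (4, 2): [valence, arousal] per face
```

Facial action unit detection is typically posed analogously, as multi-label classification over action-unit activations rather than two continuous outputs.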

The second aspect relates to the fact that we need to evaluate, test and fine-tune the developed models in the context of the WorkingAge project, first by designing in-lab studies that replicate the WorkingAge environment and its users as closely as possible, and then by taking our models out into the real-world working contexts that the project focuses on (e.g., factory and office environments). To ensure best practice, we need to apply for ethical approval for all the studies involving human participants that we undertake as part of the WorkingAge project, both in laboratory and in work settings.
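
The project text does not name a specific evaluation metric; purely for illustration, the sketch below computes the concordance correlation coefficient (CCC), a measure commonly used when comparing dimensional (valence/arousal) affect predictions against human annotations.

```python
# Illustrative only: concordance correlation coefficient (CCC), one common way
# to evaluate continuous valence/arousal predictions. The WorkingAge text above
# does not specify this metric; it is shown here as an assumption.
import numpy as np

def concordance_correlation_coefficient(predictions, targets):
    predictions, targets = np.asarray(predictions), np.asarray(targets)
    pred_mean, target_mean = predictions.mean(), targets.mean()
    pred_var, target_var = predictions.var(), targets.var()
    covariance = np.mean((predictions - pred_mean) * (targets - target_mean))
    return 2 * covariance / (pred_var + target_var + (pred_mean - target_mean) ** 2)

# Perfect agreement yields 1.0; uncorrelated or biased predictions score lower.
print(concordance_correlation_coefficient([0.1, -0.4, 0.7], [0.1, -0.4, 0.7]))  # 1.0
```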

In general, computer science researchers dislike ethics applications. I, however, personally enjoy the process of preparing and submitting an application and receiving feedback and questions from our Departmental Ethics Committee. The process is important for several reasons. Firstly, it enables us to secure the informed consent of participants taking part in our experiments. More importantly, the scrutiny of our Departmental Ethics Committee ensures appropriate consideration of both the ethical and the health and safety implications of the studies we are planning to undertake.

Indeed, UCAM is the first WorkingAge partner to have obtained ethical approval to undertake the envisaged in-lab studies. Our application was submitted to the Research Ethics Committee of the Department of Computer Science and Technology at the University of Cambridge. In the application, we described the overall WorkingAge project and its goals, as well as the in-lab studies we aim to conduct here in Cambridge. We also described how we ensure that all participants are volunteers who are carefully informed about each study (and each task). Prior to participating, each participant is provided with an Information Sheet detailing the study aims and a Consent Form that gives them the option to withdraw from the study at any point in time. The Consent Form also asks them to choose whether they would like their data to be used in processed form only, or whether they give permission for their (facially blurred) recordings and images to be used, for example, in research papers and/or presentations. Copies of both the Information Sheet detailing the study aims and the Consent Form detailing what the participants consent to were submitted as additional documents. The data will be anonymised, i.e., no individual face will be tied to personal information (name, affiliation, etc.) or to a psychological profile. We also specified the details of the sensors and the questionnaires that will be used, and described the technical and organisational measures we will implement for the security of the data, to avoid unauthorised processing.
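
As an illustration of the kind of facial blurring mentioned above, here is a minimal sketch using OpenCV's bundled Haar-cascade face detector. This is an assumption on my part, not the project's actual anonymisation pipeline, and the file names are hypothetical.

```python
# Illustrative sketch: detect faces in a frame and replace each face region
# with a heavily blurred version before the image is stored or shared.
import cv2

def blur_faces(image_path, output_path):
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    # Detect face bounding boxes and blur each one in place.
    for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        face = image[y:y + h, x:x + w]
        image[y:y + h, x:x + w] = cv2.GaussianBlur(face, (51, 51), 0)
    cv2.imwrite(output_path, image)

blur_faces("participant_frame.jpg", "participant_frame_blurred.jpg")  # hypothetical files
```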

At submission time, we were confident that we had considered all possible ethical aspects. The Ethics Committee considered our application and, despite our diligence, found some aspects that needed further consideration. The Committee was able to offer pragmatic guidance on likely risks and possible constraints. We addressed the issues it raised one by one, and the Ethics Committee approved our application. We have already shared our experience and the issues raised by the Committee with the other WorkingAge partners.

We now look forward to carrying out the in-lab tests, learning further from our participants, identifying what works and what does not, and feeding these findings into the WorkingAge Facial Affect Analysis module.

References

[1] The Alan Turing Institute, Understanding Artificial Intelligence Ethics and Safety. https://www.turing.ac.uk/sites/default/files/2019-06/understanding_artificial_intelligence_ethics_and_safety.pdf

[2] X. Zhang et al., "BP4D-Spontaneous: A high-resolution spontaneous 3D dynamic facial expression database," Image and Vision Computing, vol. 32, no. 10, pp. 692–706, 2014.

[3] S. Zhalehpour et al., "BAUM-1: A spontaneous audio-visual face database of affective and mental states," IEEE Transactions on Affective Computing, vol. 8, no. 3, pp. 300–313, 2017.
