Next generation AI and Human Behaviour: promoting an ethical approach

HORIZON-WIDERA-2024-ERA-01-12


Expected Outcome

In order to promote a responsible, trustworthy, and human-centric design and development of the next generation of artificial intelligence (AI), proposed actions are expected to contribute to the following outcomes:

  • The operationalisation of available general guidance on the ethics of AI into practical, specific guidelines. These guidelines will focus on the impact of AI on human cognition and behaviour, and must also address the ethics dimension of studying human behaviour and cognition to develop and improve AI systems. The guidelines should incorporate ethics into the relevant research and development processes and take into account strategies for ensuring adequate participation of all those affected by the development, deployment and use of the relevant applications;
  • The development and validation of education and training material reflecting the produced guidelines, based on participatory processes involving all relevant stakeholders, including citizen groups and industry.

Scope

While the European Union has been a front-runner in the global effort to formulate guidelines and regulate AI, the road ahead will certainly bring challenges. One key challenge is that the meaning and appropriate interpretation of the key concepts and principles in AI guidelines are often highly context-specific. For example, the ethical risks of developing or using AI-based emotion-recognition techniques may be very different when they are applied for recruitment and selection than when they are used to detect levels of distress in emergency calls or to identify situations of abuse. Similarly, AI-enabled differentiation based on physical characteristics may be unethical in the context of law enforcement but useful in a medical context.

At the same time, research and development of AI-based applications is surging across all domains, from education and the measurement of consumer behaviour to support for important decisions, such as safeguarding people's mental health, promoting driver safety, and filtering job candidates.

Current developments and learning in the field of AI arise from strong collaboration among multidisciplinary teams seeking further knowledge of human cognition and behaviour in order to understand, predict and influence human behaviour (e.g., to improve health or sustainability). To develop affective AI systems, researchers aim to better understand how the human brain learns and transfers knowledge. Potentially, this understanding will help to build explainable, trustworthy, and human-centric AI systems and processes. However, while systems for automatic emotion recognition and sentiment analysis can enable enormous progress (e.g., in improving public health and commerce), they can also enable considerable harm (e.g., acting against dissidents or manipulating voters).

In addition to ensuring the protection of research participants, research ethics review plays a pivotal role in integrating ethical concerns into research projects and protocols from the conception phase. While developing ethical AI will require technical solutions (for example, to improve transparency and explainability), non-technical guidance for operationalising AI ethical principles also needs to be developed and continuously evaluated in light of new developments in the field, in particular the increasingly in-depth study of human behaviour in AI research and development.

Therefore, as policy-makers and AI actors around the world move from principles to implementation, the action should:

  • Review the current landscape and select three exemplary areas of research (such as emotion recognition applications, deep learning and general intelligence) in order to better understand the ethical challenges (1) associated with the study of human behaviour and cognition to support the development or improvement of AI systems and (2) related to the impact of AI on human cognition and behaviour;
  • Establish specialised, inclusive networks of expertise comprising multidisciplinary teams (including, amongst others, engineers, data scientists, legal experts, ethicists, cognitive researchers, researchers with expertise in other relevant areas, research administrators and policy experts);
  • In collaboration with the networks of experts, and based on findings and case studies, develop operational guidelines for AI systems that build on the study of human cognition and behaviour. These guidelines should incorporate ethics into the relevant research and design processes and facilitate the ethical assessment and auditing of research projects and outcomes (including toolboxes for algorithmic impact assessment). The guidelines should target the research community, with an emphasis on early-career researchers, as well as ethics experts (e.g., members of ethics review committees) and project managers. The developed guidelines must adapt the ethics-by-design approaches (as included in the guidelines Ethics by Design and Ethics of Use Approaches for Artificial Intelligence[1]) to the relevant areas of study and development. They should include mechanisms to assess ‘ethics readiness levels’ for the relevant ‘technology readiness levels’ and encompass mechanisms to incorporate ethics-enhancing methods directly into the design of research protocols and prototypes (e.g., privacy-enhancing technologies, explainability, a human-centred design approach);
  • Develop a toolbox for international cooperation in AI research and development in the relevant areas, taking into account the regulatory and ethical landscape in key strategic partners (for example China, the Republic of Korea, Japan, Canada and the US). Incorporate principles of benefit sharing in AI research and development in accordance with the Global Code of Conduct for Research in Resource-Poor Settings (TRUST Code)[2];
  • Building on the above activities, the action should also produce innovative training material (reflecting the guidelines) for students and for early-career and experienced researchers. In addition, 250-300 Framework Programme ethics appraisal scheme experts should be trained. Close attention should be paid to gender balance, as well as to gender equality- and diversity-related ethical aspects. Feedback from trainees should be used to improve the training.

All activities proposed must be based on multidisciplinary, inclusive networks of expertise, including, amongst others, engineers, data scientists, AI legal experts, ethicists, cognitive researchers, linguists and educators, as well as private sector representatives. Every effort should be made to achieve female participation of 45% or higher, especially among students, researchers, and experts. This should also involve relevant ethics and integrity networks, such as ENERI (European Network of Research Ethics Committees and Research Integrity Offices)[3] and ENRIO[4], or (associations of) European networks of (early-)career researchers and/or educators in the field of research ethics and integrity. In addition, in order to improve the impact of the expected outputs (such as the effectiveness of training courses, guidelines, toolboxes, etc.), cooperation with research management offices and ethics officers in Research Performing Organisations is highly recommended.

In order to achieve the expected outcomes, cooperation with at least two actors from Japan, China, the Republic of Korea and/or African countries not associated to Horizon Europe is required.

It is important to ensure that the publicly available results of relevant EU-funded research projects (e.g., SHERPA, SIENNA, TechEthos)[5] and the TRUSTWORTHY AI project[6] are taken into account, and that cooperation is envisaged with the beneficiaries of the call HORIZON-WIDERA-2023-ERA-01-12 – “The future role and format of research ethics review in the changing research environments”.

Consortia including EU or Associated Country partners that have not previously collaborated are encouraged to participate.

Budgeted cooperation (including the necessary technical aspects) with the Embassy of Good Science must be included in the proposal, and the output material of the action must be made available on this e-platform.

Finally, the action should aim at valorising and widely disseminating the material produced beyond the community of ethics and integrity experts, in particular by promoting its use among the students and young researchers who will constitute the next generation of ethics experts and reviewers. The priorities of the EU Digital Education Action Plan (2021-2027)[7] should be taken into account. In this perspective, cooperation should be sought with large university/research networks in order to enrich the relevant ethics-related curricula with the material produced by the action. In addition, National Contact Points should be provided with all the materials relevant to support their advisory activities.

For all deliverables and academic publications produced in the context of the activities, an authorship contribution statement should be added, in accordance with a recognised standardised taxonomy developed for this purpose (e.g., CRediT).

[1] https://ec.europa.eu/info/funding-tenders/opportunities/docs/2021-2027/horizon/guidance/ethics-by-design-and-ethics-of-use-approaches-for-artificial-intelligence_he_en.pdf

[2] https://ec.europa.eu/research/participants/data/ref/h2020/other/hi/coc_research-resource-poor-settings_en.pdf

[3] http://eneri.eu/

[4] http://www.enrio.eu/

[5] Detailed information on the mentioned EU-funded projects can be found on the CORDIS website: https://cordis.europa.eu/

[6] https://www.trustworthyaiproject.eu/framework-for-trustworthy-ai-education/

[7] https://ec.europa.eu/education/education-in-the-eu/digital-education-action-plan_en

 
