The Trustworthy and Collaborative Artificial Intelligence workshop aims to explore the dynamic interplay between humans and AI systems, emphasizing the principles and practices that foster trustworthy and effective human-AI collaboration. As AI systems increasingly permeate various aspects of our lives, their design and deployment must align with human values to ensure that they are ethical, trustworthy, and effective.
We seek contributions that bridge the gap between machine intelligence and human understanding, e.g., through explainable AI techniques, and that show how machine learning paradigms such as selective prediction, active learning, and learning to defer can optimize shared decision-making. We also welcome solutions integrating human-AI monitoring protocols and interactive machine learning. Finally, we encourage insights from user studies and the design of collaborative frameworks that enhance trustworthiness and robustness in human-AI interaction. In brief, our goal is to promote the discussion and development of hybrid systems that adapt to evolving contexts while maintaining transparency and trust, augmenting human capabilities, and respecting human agency.
We welcome three kinds of submissions: full papers, short papers, and published papers (non-archival). Full and short papers should follow the CEUR submission guidelines, using a single-column format.
Both full and short papers will be included in the workshop proceedings; authors can opt out by flagging their papers as non-archival upon submission. We also welcome papers already accepted at peer-reviewed conferences or journals. These papers will not undergo the review process but will be evaluated by the Workshop Chairs for topical fit and suitability for the workshop, and they will not be included in the workshop proceedings. All accepted papers will be publicly available on the workshop website. Oral presentations will be selected based on the PC's recommendations and each paper's fit with the workshop topics.
The deadlines are as follows:
All deadlines are Anywhere on Earth (AoE) unless otherwise specified.
Roberto Pellungrini is a fixed-term researcher at Scuola Normale Superiore, Pisa. He earned his PhD in Computer Science in 2020 with a thesis on data privacy at the Department of Computer Science of the University of Pisa. His main research interests are Explainable AI, Learning to Defer, Trustworthy AI, and Hybrid Decision Making Systems. He serves as a reviewer for top ML and AI conferences, such as KDD, SIGSPATIAL, and IJCAI, and for various ML-focused journals, such as EPJ Data Science and ACM Transactions on Machine Learning. He co-organized the DataMod 2021 conference and the First and Second Workshop on "Hybrid Human-Machine Learning and Decision Making" at ECML PKDD 2023 & 2024 and Discovery Science 2024.
Andrea Pugnana is a Postdoctoral Researcher at the Department of Computer Science of the University of Pisa. His research interests include abstaining machine learning models, learning to defer, and causal inference. He serves as a reviewer for top ML and AI conferences, such as NeurIPS, AISTATS, ICML, and ICLR, and ML journals, such as ACM Computing Surveys, the Machine Learning Journal, and the Journal of Data-centric Machine Learning Research (DMLR).
Burcu Sayin is a Postdoctoral Researcher at the Department of Information Engineering and Computer Science (DISI) of the University of Trento. Her research interests include cooperative human-machine intelligence, trustworthy AI, cost-sensitive machine learning, active learning, and natural language processing. She serves as a reviewer for top ML and AI conferences such as ICML, AAAI, The WebConf, ACL, and CHI. She co-organized the First and Second Workshop on "Hybrid Human-Machine Learning and Decision Making" at ECML PKDD 2023 & 2024. She was Website Co-Chair of the 11th AAAI Conference on Human Computation and Crowdsourcing (HCOMP 2023) and the ACM Collective Intelligence Conference (CI 2023).
Apoorva Singh is a Postdoctoral Researcher in the Mobile and Social Computing Lab (MobS) of the Fondazione Bruno Kessler. Her research interests include Explainable AI, Hybrid Decision Making, Natural Language Processing, and Federated Learning. She actively contributes to the academic community as a reviewer for leading AI and ML conferences, including AAAI, ACL, EMNLP, and KDD, as well as journals such as IEEE TCSS, PLOS One, Scientific Reports, and Pattern Recognition.
Bruno Lepri directs the Mobile and Social Computing Lab (MobS) at Fondazione Bruno Kessler (Trento, Italy). He is co-director of the Center for Computational Social Science and Human Dynamics and of the ELLIS Unit Trento, two joint initiatives between Fondazione Bruno Kessler and the University of Trento. Since July 2022, he has been the Chief Scientific Officer of Ipazia, a company active in generative AI agents and solutions. Bruno is also a senior research affiliate at Data-Pop Alliance, the first think tank on big data and development, co-created by the Harvard Humanitarian Initiative and the MIT Media Lab, and a fellow of the ELLIS Human-centric Machine Learning program. His research interests include computational social science, human-centric machine learning, network science, graph neural networks, and multimodal analysis of human behaviors.
SoBigData.it receives funding from the European Union NextGenerationEU National Recovery and Resilience Plan (Piano Nazionale di Ripresa e Resilienza, PNRR), Project "SoBigData.it: Strengthening the Italian RI for Social Mining and Big Data Analytics", Prot. IR0000013, Avviso n. 3264 del 28/12/2021.
This workshop was also funded by the European Union under Grant Agreement no. 101120763 - TANGO. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Health and Digital Executive Agency (HaDEA). Neither the European Union nor the granting authority can be held responsible for them.