Siren

Società Italiana Reti Neuroniche

Call for papers on

Human-Centric Trust in Generative AI: Balancing Human Values and AI Innovation

Special Session of the 32nd Italian Workshop on Neural Networks (WIRN 2024)
Session Overview
As generative neural networks (GNNs) become increasingly pervasive across diverse fields, trustworthiness and human-centred design have emerged as critical considerations. This special session seeks to spotlight pioneering research, thought leadership, and practical applications that address these themes. Our aim is to catalyze a multidisciplinary dialogue that bridges theoretical underpinnings with real-world implications, ensuring that generative AI technologies evolve in a manner that is ethical, transparent, and aligned with human values and needs.
Topics

We welcome submissions across a broad spectrum of topics related to trustworthiness and human-centred approaches in the application of generative neural networks, including but not limited to:

  • Healthcare and Well-being: Leveraging GNNs for ethical and secure biomedical research, patient data privacy, and healthcare innovations, including data augmentation and rare condition generation;
  • Cybersecure AI: Developing trustworthy AI systems for enhancing cybersecurity measures in generative models (e.g., defence strategies against prompt injection, poisoning, model inversion, etc.);
  • Cyber-Physical Systems: Ensuring the reliability and security of generative AI in systems that integrate computational algorithms and physical components;
  • Environmental Impact: Investigating the ecological footprint of generative AI technologies, including studies on their energy consumption, carbon footprint, and strategies for creating more sustainable and efficient generative models;
  • Privacy and Fairness: Strategies for embedding privacy-preserving mechanisms and fairness in generative AI algorithms, with a focus on authentication methods and bias mitigation;
  • Synthetic Dataset Augmentation: Ethical considerations and trustworthiness in augmenting datasets using generative AI to enhance machine learning models;
  • Generated Data Detection: Techniques and ethical considerations in distinguishing between real and AI-generated data to prevent misinformation and ensure data integrity;
  • Machine Unlearning: Exploring methodologies that enable generative models to forget or discard sensitive information post-training, ensuring data privacy and protection;
  • Law and Compliance: Examining the implications of norms, standards, and legal frameworks for the development and application of generative AI, with a focus on ethical compliance and governance;
  • Media, Culture and Education: Investigating the use of generative AI within media and cultural productions through the lens of human-centred design, emphasizing ethical considerations, cultural sensitivity, and societal impact;
  • Generative AI Technologies for Social Inclusion: Exploring how generative AI technologies can be designed and customized to promote universal accessibility and empower marginalized communities, including those with physical, sensory, or cognitive disabilities (e.g., through tools such as chatbots sensitive to specific needs and language-translation applications).
Submission Types

We invite a wide range of contributions from experts, researchers, PhD students, and practitioners, including:

  • Research Papers: Original research that advances the field of trustworthy and human-centred generative neural networks;
  • Position Papers: Opinions, perspectives, or theoretical discussions on current trends, challenges, and future directions in the domain;
  • Review Papers: Comprehensive reviews that synthesize recent developments, methodologies, and applications in the field, providing insights into future research avenues.

 

All submitted papers will be reviewed by the program committee and may be accepted for oral or poster presentation. As this is a special session, accepted contributions will be published as part of the main workshop proceedings. Each paper must comply with the Springer conference paper format (http://www.springer.com/series/8767) and must not exceed 8 pages. Papers are submitted through EasyChair (https://easychair.org/my/conference?conf=wirn2024). Please refer to the main conference website for more information.

 

Authors of selected papers will be invited to submit extended versions to journal special issues. Don’t hesitate to get in touch with the special session organizers for further information.

Important Dates

The special session follows the main conference dates:

  • Paper submission deadline: April 30, 2024 (extended abstract only)
  • Notification of acceptance: May 20, 2024
  • Camera-ready copy: June 5, 2024
  • Conference Dates: June 5-7, 2024

Please refer to the main conference website for information concerning the registration fees and deadlines (https://www.siren-neural-net.it/wirn-2024/registration/).

Special Session Chairs

Stefano Marrone is a Senior Research Fellow and Tenured Assistant Professor at the University of Naples Federico II, where he has worked on AI since 2012. He holds a Ph.D. in Information Technology and Electrical Engineering with a focus on Trustworthy AI. His expertise spans Pattern Recognition, Computer Vision, and Natural Language Processing, with applications in biomedical imaging, remote sensing, and forensics. Recently, he has concentrated on the ethical aspects of AI, particularly fairness and privacy. Marrone has authored numerous papers and led international projects in these areas. He has held positions as a visiting researcher and collaborated with various European institutions, enhancing his teaching and research impact. He is actively involved in several educational programs at different levels and plays a pivotal role in the Human-Centred Artificial Intelligence Master’s (HCAIM) program funded by the European Commission. He has also received multiple awards for his research, including notable achievements in international competitions and significant grants for AI development projects.

 

Lidia Marassi is a Ph.D. candidate at the University of Naples Federico II, focusing on the Ethics of AI, particularly in generative models such as transformer architectures and large language models. Her research spans artificial intelligence, digital ethics, and environmental protection, with a special interest in trustworthy AI and the ethical implications of new technologies. Marassi’s work includes exploring moral issues in human-computer interactions and the potential for social justice in AI, drawing on philosophical theories such as the Capability Approach. She also examines legal aspects of AI, emphasizing the need for a comprehensive legal and ethical framework, and investigates the environmental impact of AI technologies on sectors such as agriculture and transportation. Additionally, Lidia Marassi is one of the CINI AI-IS national experts for UNINFO, working on activities focused on the standardization of the ethical and social implications of AI within CEN/CENELEC.