Call for Papers

The Safe Generative AI Workshop invites submissions addressing the critical challenges and opportunities in developing safe and responsible generative AI systems. As generative models continue to revolutionize both academia and industry, it is crucial to address the potential risks and ethical implications of these powerful technologies.

Topics of Interest

We welcome submissions on a wide range of topics related to safe generative AI, including but not limited to:

  1. Generation of harmful or biased content.
  2. Vulnerability to adversarial attacks.
  3. Privacy and security risks.
  4. Bias and fairness issues in generated content.
  5. Ethical implications of deploying generative AI.
  6. Limited robustness in out-of-distribution contexts.
  7. Overconfidence in the reliability of generated content.
  8. Robustness and reliability of generative models.
  9. Safe exploration in generative AI (e.g., for scientific discovery).
  10. Evaluation of safe generative AI.

Submission Guidelines

  • Paper length: 4-8 pages (excluding references and appendices)
  • Format: Use the NeurIPS or ICLR LaTeX template
  • Anonymization: Submissions should be anonymized for double-blind review
  • Dual-submission policy (non-archival): The workshop is non-archival; accepted papers will not appear in proceedings, so we welcome ongoing and unpublished work, and accepted papers may subsequently be submitted to other venues. We also welcome submissions currently under review at other venues, including ICLR 2025. However, we cannot consider work that has already been published or accepted for publication at any venue. This dual-submission policy remains in effect throughout the entire reviewing process.

Important Dates (Tentative)

All deadlines are 23:59 AoE (Anywhere on Earth).

  • Submission deadline: October 2, 2024
  • Author notification: October 9, 2024
  • Camera-ready deadline: TBD
  • Workshop date: December 14 or 15, 2024

Submission Process

All submissions should be made through the workshop’s OpenReview portal.

Presentation Format

Accepted papers will be presented as posters during the workshop. Selected papers may also be invited for spotlight talks.

As part of our commitment to recognizing outstanding research, we will present one to three best paper awards.

Questions?

For any inquiries, please contact: safe-generative-ai-workshop@googlegroups.com.