October 3, 2024

CEPIC contributes to EU consultation on Code of Practice for AI models

CEPIC has submitted an extensive response to the European AI Office's consultation ahead of its first Code of Practice for General-Purpose AI Models. We are grateful to our members and supporters who dedicated their time to preparing a detailed and technical response to an in-depth survey.

This was a crucial opportunity for CEPIC to represent the interests of our members and industry. We outlined key requirements for generative AI models, including transparency on input data, copyright protection, the ability to opt out of AI training, data licensing, and comprehensive risk assessments. On the strength of this response, CEPIC was invited to a plenary session to discuss the survey results. With over 1,000 written submissions received, we are pleased to have secured a seat at the table to ensure our industry's voice is heard.

On 30 September, CEPIC Executive Director Sylvie Fodor attended the plenary session held by the EU as part of its consultation on the Code of Practice for AI models. The consultation, which ran from 30 July to 18 September, gathered input from stakeholders worldwide. Its goal was to shape the new rules governing artificial intelligence (AI) and to ensure that the technology is developed and deployed in a responsible and trustworthy manner.

With 427 respondents from industry, rightsholders, civil society, and academia, the consultation aimed to establish a well-rounded foundation for drafting the first version of the AI Code. Below, we summarise the initial findings from the closed questions, which offer valuable insights into transparency, copyright, risk assessment, and internal governance.

Respondent Profile and Distribution

  • 32% of respondents were from industry, 25% were rightsholders, 16% were from civil society, and 13% were from academia.
  • Around 250 organisations based in the EU participated, with at least 50% of respondents representing startups and organisations with fewer than 49 employees.

Key Measures: Transparency and Copyright

Several measures received broad support from respondents, reflecting a shared understanding of transparency and copyright in the AI sector:

  • Top three measures on transparency (around 50% support across all respondents):
    1. Clearly defining the tasks the AI model is intended to perform and the nature of AI systems it can integrate into.
    2. Clear licensing of AI models.
    3. Specification of modality, such as the type and format of inputs and outputs (e.g., text or image).
  • Top three measures on copyright and training data (70%–80% support, very large agreement across stakeholders):
    1. Use of publicly available content scraped from the internet.
    2. Use of copyright-protected content licensed by rightsholders.
    3. Data from public or open repositories.

Technical Risk Mitigation

There was broad agreement across stakeholder groups regarding technical risk mitigation strategies. The top priorities include:

  • Data governance and the need for proper management (supported by 90% of respondents).
  • Model design to ensure trustworthiness in areas like reliability, fairness, and security (also supported by 90%).
  • Fine-tuning AI models to ensure alignment with trustworthiness goals.
  • Techniques for "unlearning" problematic data.
  • Deployment guardrails to prevent misuse.
  • Regular updates and performance assessments for AI models.
  • Measures to identify and mitigate potential misuse of AI systems.

Internal Governance for AI Providers

Internal governance measures were another area of broad agreement across all groups, underscoring the need for:

  • Traceability in relation to datasets, processes, and decisions made during model development.
  • A comprehensive risk management framework throughout the model lifecycle.
  • Transparency reports, such as model or system cards, to provide clarity on AI functionalities.
  • Mechanisms for human oversight of AI decision-making processes.
  • Internal resources allocated to risk assessment and mitigation to handle systemic risks.

Next Steps

The consultation submissions will form the basis for the first draft of the AI Code, which will include a template for AI training content summaries. A comprehensive summary of the results, based on aggregated data, will be published in the autumn.

We will keep you updated on progress around the Code of Practice and the key issues that emerge from this consultation process.

