Proposal for Major Change to the DPG Standard under Indicator 8 #197
Comments
I have become rather despondent about such approaches to technology governance. Let me detail why. There is a spectrum to consider between honest and dishonest actors. Let us discuss two classes:

Honest actors: I think Risk Assessment guidelines could be good for guiding honest developers, helping bring to mind the issues they need to consider. An honest developer would naturally seek to do their best to reduce the risk of deploying systems, and would welcome this kind of guidance. However, they are subject to all kinds of pressures, not to speak of constraints on the time and effort they are able to put into it. There is therefore an opportunity cost to prioritizing a good Risk Assessment, so careful consideration should be put into whether this requirement would displace something more important. I believe that governance practices and standards are much more important: ensuring that multiple parties with differing interests are involved, that there are automated tests, that compliance with technical standards can be verified, and, overall, that ecosystems are healthy. Such internal governance practices do much more to ensure that risks can be understood and mitigated than reports do.

Dishonest actors: The Alliance must consider the possibility that dishonest actors will try to get commons-washed systems approved as a DPG. So, what else would be used to write an AI Risk Assessment than generative AI? A GenAI system would be trained on other "successful" Risk Assessments to ensure that it ticks the right boxes. It could be trained on texts written by the auditors, so that it appeals to them. Reports written to clear a bar are, in my rather despondent world, likely to be written not as honest documentation that brings things to light, but as marketing material. I'm equally critical of similar requirements in the EU AI Act, the EU DSA, the Biden administration's efforts (which will never see the light of day), and so on.

There is a very real risk that this results in an arms-race-like malevolent economic cycle: in an arms race, if your measures are more expensive than your opponent's counter-measures, you lose. The best way to avoid an arms race is to not get into the conflict in the first place. Consider the case where a dishonest submitter uses 2 minutes to generate a Risk Assessment report, while the Alliance's evaluators require 10 hours to read, understand and examine the claims of the report, and even that may not be enough to determine that the report is bullshit, in the sense of Harry G. Frankfurt.

Again, I do not think such reports are the way to do it. For closed systems they are perhaps the only thing you can have, and post-disaster, enforcers may use them to impose fines, but that will be meager comfort for those affected. Closed systems are basically ungovernable. Digital Public Goods are governable, but the Open Source community hasn't managed to go from meritocracies to practices that can be anchored in democratic processes. That, together with the increased popularity and role of Open Source, creates increased tension. The focus should rather be on fixing this, not on increasing the red tape that dishonest actors will easily work around.
Dear @kjetilk,

Thank you for your thoughtful input. With the Standard, we are actively working to ensure that the application process remains light for applicants while maintaining meaningful safeguards. We will be providing checklists to guide submissions and reduce unnecessary complexity while ensuring that the process remains robust enough to prevent loopholes that could be exploited by dishonest actors. That said, the DPG application process also inherently relies on a trust-based system with elements of self-declaration, meaning compliance depends on a balance of verification and good-faith participation.

Additionally, I wanted to share that we have introduced another significant change to the Standard by explicitly including "misleading content" under Indicator 9B. This update strengthens responsible content management policies and aligns with global concerns around misinformation, particularly in AI-generated contexts.

It would be great to get your insights on possible solutions, especially regarding the assessment of AI risks for AI DPG accreditation. Looking forward to your thoughts.

Best,
Mandatory AI Risk Assessment Submission for AI Digital Public Goods
Overview
This proposal introduces a significant change to the Digital Public Goods Standard under Indicator 8 by mandating the submission of an AI Risk Assessment for all applicants applying for DPG Status for their AI systems. Applicants will be required to complete the AI Risk Assessment prepared by the DPGA Secretariat (mentioned below) or submit an equivalent industry-recognized template, such as, but not limited to:
Previously, the DPG Standard encouraged adherence to standards and best practices. With this proposal, compliance with the AI Risk Assessment requirement becomes mandatory for AI systems seeking DPG recognition. This is the first time the Standard has included a criterion unique to a single DPG type.
Proposed Change
1. New Requirement:
All applicants applying for DPG status for their AI systems must submit an AI Risk Assessment.
The applicant has the option to:
- Complete the AI Risk Assessment template provided by the DPGA Secretariat.
- Submit an equivalent AI Risk Assessment template recognized by industry standards.
2. Scope:
3. Rationale:
4. Enforcement:
Governance Process
1. Community Engagement:
2. Review and Iteration:
3. Implementation Timeline:
Call to Action
We invite the DPGA community and stakeholders to review this proposal and share their feedback through comments on this GitHub issue. Your input is critical to ensure the effective and equitable implementation of this important change. Please also review and provide your input on the AI Risk Assessment template below, which DPG applicants can complete as an alternative to industry-recognized templates.
AI Risk Assessment Template
(Proposed to be mandatory under Indicator 8 of the DPG Standard, requiring a major change proposal)
Please make a copy of this template and answer the following questions to the best of your ability, or share your risk assessment using another template.
1. Proportionality
Proportionality risks refer to the potential for AI systems to exhibit disproportionate or excessive responses, actions, or impacts relative to their intended purpose. For instance, a system may perform disproportionately poorly for a particular language, geography, population, or other characteristic, so that it effectively serves or represents only certain groups of people.
(Consider that biases could occur in the data used to train and test the AI system, in the AI model, or the functioning and outcomes of the AI system.)
(Document the results of relevant fairness assessments and further measures to monitor and reduce remaining bias.)
Examples include but are not limited to:
3. Mitigations
Examples of risk mitigation practices include but are not limited to:
4. Risks and harms / Use cases
5. Transparency
Examples of transparency measures include but are not limited to:
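Purely as an illustrative aside, and not part of the proposal: the template's sections lend themselves to a machine-readable form that evaluators could check automatically for completeness, which also connects to the point raised above about verifiable compliance. The sketch below is a hypothetical Python representation; the field names are assumptions chosen to mirror the section headings (an explicit bias-and-fairness field is inferred from the parenthetical notes under Proportionality), and nothing here is an agreed format.

```python
# Hypothetical sketch only: a machine-readable shape for the template above.
# Field names mirror the section headings; they are not an official schema.
from dataclasses import dataclass, fields


@dataclass
class AIRiskAssessment:
    proportionality: str     # disproportionate impacts across languages, geographies, populations
    bias_and_fairness: str   # results of fairness assessments and remaining-bias monitoring (assumed field)
    mitigations: str         # risk mitigation practices in place or planned
    risks_and_harms: str     # known risks, harms, and affected use cases
    transparency: str        # transparency measures (e.g. model and data documentation)


def missing_sections(assessment: AIRiskAssessment) -> list[str]:
    """Return the names of any template sections left empty."""
    return [f.name for f in fields(assessment) if not getattr(assessment, f.name).strip()]


if __name__ == "__main__":
    draft = AIRiskAssessment(
        proportionality="Evaluated coverage across the languages in the training corpus...",
        bias_and_fairness="",
        mitigations="Human-in-the-loop review for high-stakes outputs...",
        risks_and_harms="Potential misuse for generating misleading content...",
        transparency="Model and dataset documentation published with each release...",
    )
    print(missing_sections(draft))  # -> ['bias_and_fairness']
```

A check like this could flag incomplete submissions before human review, but it says nothing about the quality or honesty of the answers, which is precisely the concern raised in the comments above.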