AI has become a priority theme due to its impact on the present and future of the internet, as systems and machines learn through algorithms and big data to perceive, associate, predict and plan. Its multiple uses are both amazing and promising. At the Association for Progressive Communications, however, we believe it is imperative to also recognise that AI and emerging technologies are being designed by biased people using biased datasets that fail to represent the diversity of contexts and people, and can therefore lead to discrimination and marginalisation based on racial, ethnic, religious and gender identity, among other factors. We do not deny the potential of AI for collective benefit, but we must acknowledge that it currently replicates the inequalities and oppressions of our world.
It is paramount to address the implications of AI systems for human rights, social justice and sustainable development, particularly their impacts on privacy, security, freedom of expression and association, access to information and access to work, as well as their harmful implications for judicial systems, education, policing, social benefits and public health. It is imperative to place human rights, social justice and sustainable development at the centre of all stages of AI systems, including their creation, development, implementation and governance, and potential risks should be continually assessed and managed. AI systems should also include appropriate safeguards to ensure that the processes through which they are developed and applied respect principles that promote a fair and just society.
There should be transparency and responsible disclosure around the information and operations of AI systems, to ensure that people understand AI-based outcomes and can challenge them. Moreover, a presumption of algorithmic bias should always apply, to better balance the burden of proof and to encourage the adoption of individual and collective bias mitigation tools, redress mechanisms and controls. AI and machine learning should be trained on thick data rather than big data, and used for diagnostics and analysis rather than for prediction models that draw deterministic correlations. When AI systems pose an unacceptable risk to human rights that cannot be mitigated, regulation should prohibit them. Equality-by-design principles, a human rights approach and an intersectional and gender perspective should be incorporated into the design, development, application and review of AI systems and any algorithmic decision-making systems or digital technologies.
Accountability and transparency must be demanded from the technology corporations that build and sell AI and automated technologies for state and non-state deployment, to ensure that the development and deployment of these technologies are rooted in existing international human rights frameworks and do not erode democracy, rights and labour standards or further entrench discrimination.
We want to see a world in which AI complies with human rights protections. We want the potential impacts of AI on a wide range of human rights to be recognised and addressed before new technologies, infrastructure and products are developed and deployed. If we continue to allow profit motivations alone to shape these technologies, they will continue to contribute to injustice and perpetuate the crises we currently face, including the environmental crisis.
We want to allow ourselves to imagine how to place AI systems in the commons, with shared governance for the well-being of people and the planet, and with the shared goals of a feminist internet, where AI projects and tools can be assessed against values such as agency, accountability, autonomy, social justice, non-binary identities, cooperation, decentralisation, consent, diversity, decoloniality, empathy and security, among others.
Civil society voices are vital in the Global Digital Compact (GDC) process and discussions. We urge the Co-Facilitators of the GDC to ensure that those voices are heard in the upcoming Deep Dives, and to take into consideration the letter that civil society organisations have shared in this regard.