Managing the risks of inevitably biased visual artificial intelligence systems

Scientists have long been developing machines that attempt to mimic the human brain. Just as humans are exposed to systemic injustices, machines learn human stereotypes and cultural norms from sociocultural data, acquiring biases and associations in the process. Our research shows that such bias is reflected not only in language models but also in the image datasets used to train computer vision models. As a result, widely used computer vision models such as iGPT and DALL-E 2 generate new explicit and implicit characterizations and stereotypes that perpetuate existing biases about social groups, which in turn further shape human cognition.

These computer vision models are used in downstream applications for security, surveillance, candidate evaluation, border control and information retrieval. Implicit biases also manifest in the decision-making processes of machines, creating lasting impacts on people’s dignity and opportunities. Additionally, nefarious actors can use readily available pre-trained models to impersonate public figures, blackmail, deceive, plagiarize, cause cognitive distortion, and sway public opinion. Such machine-generated data poses a significant threat to the integrity of information in the public sphere. Even though machines have advanced rapidly and may offer some opportunities for use in the public interest, their application in societal contexts without proper regulation, scientific understanding and public awareness of their safety and societal implications raises serious ethical concerns.

Biased gender associations

Gender-biased associations offer a good example for exploring such biases. To understand how gender associations manifest in downstream tasks, we prompted iGPT to complete an image based on a woman’s face. iGPT is a self-supervised model trained on a large number of images to predict the next pixel value, enabling image generation. In 52% of cases, the auto-completed images featured bikinis or low-cut tops. By comparison, men’s faces were auto-completed with suits or other career-related attire 42% of the time; only 7% of auto-completed male images featured revealing clothing. To provide a comprehensive analysis of biases in self-supervised computer vision models, we also developed the Image Embedding Association Test to quantify implicit model associations that could lead to biased results. Our results reveal that the model contains innocuous associations, such as flowers and musical instruments being more pleasant than insects and weapons. However, the model also incorporates biased and potentially harmful social group associations related to age, gender, body weight, and race or ethnicity. Biases at the intersection of race and gender align with theories of intersectionality, reflecting emergent biases not explained by the sum of biases toward race or gender identity alone.
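
To make the measurement concrete, the Image Embedding Association Test adapts the logic of the Word Embedding Association Test to images: it compares how close two sets of target image embeddings lie to two sets of attribute image embeddings and summarizes the difference as an effect size. The sketch below illustrates that computation in Python under stated assumptions; the embeddings are random placeholders standing in for features extracted from a vision model such as iGPT, and the variable names are illustrative rather than taken from the published code.

    # Illustrative sketch of an embedding association test (WEAT/iEAT-style).
    # Assumes image embeddings were already extracted from a self-supervised
    # vision model; here random vectors stand in for those features.
    import numpy as np

    def cosine(u, v):
        """Cosine similarity between two embedding vectors."""
        return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

    def association(w, A, B):
        """Differential association of embedding w with attribute sets A and B."""
        return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

    def effect_size(X, Y, A, B):
        """Effect size of how much target set X, relative to Y, is closer to
        attribute set A than to B; positive values associate X with A."""
        x_assoc = [association(x, A, B) for x in X]
        y_assoc = [association(y, A, B) for y in Y]
        pooled_std = np.std(x_assoc + y_assoc, ddof=1)
        return (np.mean(x_assoc) - np.mean(y_assoc)) / pooled_std

    # Hypothetical usage with placeholder 512-dimensional embeddings:
    # flower vs. insect images as targets, pleasant vs. unpleasant as attributes.
    rng = np.random.default_rng(0)
    flowers, insects = rng.normal(size=(8, 512)), rng.normal(size=(8, 512))
    pleasant, unpleasant = rng.normal(size=(8, 512)), rng.normal(size=(8, 512))
    print(effect_size(flowers, insects, pleasant, unpleasant))

With real embeddings, a large positive effect size in the flower/insect and pleasant/unpleasant example would correspond to the innocuous association described above, while the same statistic applied to images of social groups surfaces the harmful associations.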

The perpetuation by these models of biases that have been maintained through structural and historical inequalities has important societal implications. For example, biased candidate assessment tools perpetuate discrimination against members of historically disadvantaged groups and predetermine candidates’ economic opportunities. When the administration of justice and the police rely on models that associate certain skin tones, races, or ethnicities with negative valence, people of color wrongly suffer life-altering consequences. When computer vision applications directly or indirectly process information related to protected attributes, they contribute to these biases, exacerbating the problem and creating a vicious circle that will continue unless technical and social bias mitigation strategies and policies are implemented.

State-of-the-art pretrained computer vision models like iGPT are integrated into consequential decision-making in complex artificial intelligence (AI) systems. Recent advances in multimodal AI effectively combine language and vision models. The integration of various modalities into an AI system further complicates the security implications of advanced technologies. Although pretrained AI models are very expensive to build and operate, publicly available models are freely deployed in business and mission-critical decision-making contexts, facilitating decisions in well-regulated areas such as the administration of justice, education, labor and health care. However, due to the proprietary nature of commercial AI systems and the lack of regulatory oversight of AI and data, there is no standardized transparency mechanism that formally documents when, where and how AI is deployed. Therefore, the unintended harmful side effects of AI persist long after the models that produced them have been updated or removed.

Establishing unacceptable uses of AI, requiring additional controls and safety measures for high-risk products (such as those in the European Union’s proposed Artificial Intelligence Act), and standardizing the model improvement process for each modality and multimodal combination so that safety updates and recalls can be issued are all promising approaches to addressing some of the challenges that could lead to irreparable harm. Standards can also help guide developers. For example, in 2022 the National Institute of Standards and Technology (NIST) published the special publication “Towards a Standard for Identifying and Managing Bias in Artificial Intelligence” and a draft AI risk management framework, which summarize many of these risks and suggest standards of reliability, fairness, accountability and transparency.

Third-party audits and impact assessments could also play a major role in holding deployers accountable; for example, the Algorithmic Accountability Act of 2022, a House bill in subcommittee, would require impact assessments of automated decision systems. Yet third-party audits with real accountability expectations remain rare. Ultimately, AI ethics researchers have called for public audits, incident reporting systems, stakeholder participation in system development, and notification of individuals when they are subject to automated decision-making.

Regulating prejudice and discrimination in the United States has been an ongoing effort for decades. Policy-level bias mitigation strategies have been effective, if slow, at reducing bias in the system and, in turn, in the minds of humans. Humans and vision systems alike inevitably learn biases from the large-scale sociocultural data to which they are exposed. Future efforts to improve fairness and redress historical injustices will therefore also depend on increasingly influential AI systems. Developing bias measurement and analysis methods for AI systems trained on sociocultural data would shed light on the biases of both social and automated processes. Actionable strategies can then be developed from a better understanding of how these biases evolve and what characterizes them. While some vision applications can be put to good use (for example, assistive and accessibility technologies designed to help people with disabilities), we should remain cautious about the known and foreseeable risks of AI.

As scientists and researchers continue to develop appropriate methods and metrics to analyze the risks and benefits of AI, collaborations with policymakers and federal agencies are informing evidence-based AI policymaking. Introducing the standards required for trustworthy AI would affect how the industry implements and deploys AI systems. Meanwhile, communicating the properties and impact of AI to direct and indirect stakeholders will raise awareness of how AI affects all aspects of our lives, society, the world and the law. Preventing a techno-dystopian reality requires managing the risks of this socio-technical problem through ethical, scientific, humanistic and regulatory approaches.
