Ensuring Ethical AI: Integrating Moral Standards, DEIA Principles, and Accountability in Technology Development

Introduction

Artificial intelligence (AI) increasingly affects many aspects of life, including healthcare, business, entertainment, and education. As AI becomes more prevalent in our daily lives, the ethical implications of its design, deployment, and governance have received a great deal of attention. Developing equitable, accountable, and ethical AI systems is not only a technological challenge but also a normative one, as their fairness, accountability, and transparency have practical consequences that affect millions of lives. To address these challenges properly, ethical considerations, moral standards, and DEIA (Diversity, Equity, Inclusion, and Accessibility) strategies must be integrated into every phase of AI development and deployment. Drawing on Plato's philosophical views on state governance and stability, together with Hagendorff's (2020) insights on AI ethics, this article advocates for a robust ethical framework that ensures AI accountability and promotes responsible technology use.

The Philosophical Foundation: Plato and AI Ethics

Good governance in AI is a central theme in today’s technological and policy discussions, but its historical underpinnings are often overlooked. The philosophical foundations of the U.S. Constitution, rooted in the idea of law as the manifestation of reason, trace a lineage back to Plato. Plato’s philosophy of governance stressed that the stability and justice of a state depend on laws grounded in ethical and moral reasoning that serve the common good rather than the powerful. His ideal of governance holds that the guiding principle of any system, whether political or technological, must be the well-being of all citizens.

In parallel, AI systems today significantly influence decision-makers, shaping choices with far-reaching consequences. For instance, in healthcare, when AI provides a medical diagnosis, doctors often incorporate that input into their decision-making process when treating patients. In his article, “The impact of artificial intelligence in medicine on the future role of the physician,” Ahuja (2019) claims that AI tools have demonstrated accuracy comparable to human doctors in diagnosing diseases, leading physicians to rely increasingly on AI for support in clinical decision-making. Thus, the need for an ethical governance framework in AI mirrors Plato's vision of a just state. Such a framework should prioritize fairness, transparency, and accountability, ensuring that AI is a force for equity, not exploitation. To be sustainable and just, AI governance must reflect ethical principles that protect the vulnerable and preserve the system's integrity while promoting societal harmony, similar to the Platonic ideal of governance by rules based on reason and morality. This implies that AI systems should be subject to strict control and governance, reflecting moral imperatives consistent with the greater good. It also entails establishing accountability not only on the part of developers and corporations, but also toward the general public and, in particular, individuals who may be negatively impacted by AI decisions.

AI Accountability: To Whom and How?

AI accountability extends far beyond the technical design of algorithms and into the domain of societal responsibility. Accountability in AI must encompass not only the internal workings of AI systems but also the external stakeholders who create, deploy, and regulate these technologies. Hagendorff (2020) critiques the current AI ethics landscape, arguing that while there has been a proliferation of ethical guidelines, many lack the practical enforceability needed to address real-world AI challenges. These guidelines often overlap in their emphasis on fundamental principles, including transparency, fairness, and accountability, yet remain vague and fail to provide actionable pathways for implementation. This critique underscores the need for a more robust, enforceable, and action-oriented approach to AI ethics.

For AI to be truly accountable, ethical AI development should be anchored not only in moral philosophy but also in rigorous regulatory and governance structures. AI systems, particularly those used in high-stakes fields such as healthcare, law enforcement, and finance, must be held to standards that go beyond mere technical performance. They must be answerable to societal norms and legal frameworks that prioritize fairness, equity, and justice. In this vein, AI accountability must address several stakeholders:

  • The Public and Users: AI systems must be accountable to the public and the users they serve. This involves ensuring transparency in how AI decisions are made and providing avenues for recourse in cases of harm or bias. For instance, transparency reports (e.g., transparency best practices from the World Economic Forum), model interpretability (e.g., Microsoft), and explainable AI (e.g., IBM) approaches are critical in ensuring that AI decisions are not perceived as "black-box" operations; a brief illustrative sketch of one such interpretability technique follows this list. A robust feedback loop between AI developers and users can also enable continuous improvement and foster trust in AI applications.

  • Regulatory Bodies: Governments and regulatory bodies must hold AI systems accountable to legal and ethical standards that protect the public interest. Hagendorff (2020) emphasizes that the current lack of enforceability in AI guidelines hinders their effectiveness. To address this, regulatory oversight must include strict compliance mechanisms for data protection, diversity, equity, inclusion (DEI), and accessibility requirements. Existing AI-specific laws, such as the European Union's AI Act or the proposed U.S. Algorithmic Accountability Act, reflect a foundational recognition of the complexities of algorithmic decision-making; however, further AI-specific legislation is needed to address the unique challenges these systems pose. Mökander et al. (2022) compare the two Acts and propose that policymakers should look beyond basic algorithmic accountability and instead build governance structures that allow organizations to balance legal and commercial interests. They advocate for frameworks that not only ensure compliance but also guide the ethical and societal impact of AI, enabling justifiable decision-making in the design and deployment of algorithmic systems.

  • Affected Communities: AI systems can disproportionately impact vulnerable or marginalized communities, often exacerbating existing inequalities. For example, Atari et al. (2023) highlight the limitations of large language models (LLMs) in generalizing across diverse human populations. Most AI models reflect the psychological traits of individuals from WEIRD (Western, Educated, Industrialized, Rich, and Democratic) societies, neglecting the cultural diversity inherent in human populations, and performing poorly on cross-cultural data. This bias, the authors argue, leads to significant scientific and ethical concerns, as neglecting cultural diversity undermines the fairness and inclusivity of AI technologies. For AI systems to truly serve humanity at large, they need to be developed using diverse datasets that reflect the complexities of global populations. This requires engaging multidisciplinary teams that bring a wide array of perspectives to AI model creation, ensuring that cultural variations are adequately recognized. Failure to incorporate such an inclusive and diverse data approach risks reinforcing existing inequalities and limiting the ethical and scientific impact of AI technologies.
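To make the interpretability and explainable AI approaches mentioned in the first bullet above concrete, the sketch below uses scikit-learn's permutation importance to report which inputs most drive a model's decisions. It is a minimal, hedged illustration only: the loan-style feature names, the synthetic data, and the random-forest model are assumptions for demonstration, not a production transparency pipeline.

```python
# Minimal, illustrative sketch of model interpretability for an AI decision
# system. The feature names and synthetic data are hypothetical; real
# transparency reporting would use the deployed model and production data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["income", "credit_history_len", "debt_ratio", "age"]

# Synthetic applicants: approval depends mostly on income and debt ratio.
X = rng.normal(size=(1000, len(feature_names)))
y = (X[:, 0] - X[:, 2] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance estimates how much each input drives the model's
# decisions, one ingredient of an explainability or transparency report.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```

In practice, such importance scores would feed into transparency reports or user-facing explanations rather than a console printout, and would be complemented by avenues for recourse when a decision is contested.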

Accountability for these stakeholders is "necessary to build a tangible [framework] between abstract values and technical implementations" (Hagendorff, 2020, p. 10). This framework should hold AI system deployers responsible for identifying potential biases before and after launch, and for ensuring that AI development incorporates diverse perspectives and benefits all parts of society. This is important because "AI must be able to make decisions that consider human values and can be justified by human morals" (Woodgate & Ajmeri, 2024, p. 1). The framework is organized into two essential phases: AI model creation and corporate governance.

Phase 1: Ethical AI Model Creation: Mitigating Bias and Ensuring Fairness

The first phase of building equitable AI systems begins with the ethical creation of AI models. This phase involves ensuring that the algorithms and data used to train AI systems are as free from bias as possible (entirely bias-free data is rare) and are representative of diverse populations.

1. Diverse Data Sets

The foundation of an ethical AI model is rooted in the quality and diversity of the data sets used for training. The data used to train AI models significantly shapes their behavior and decision-making. For example, GPT-4, a large language model (LLM) developed by OpenAI, was trained in two key phases. Initially, the model was exposed to vast datasets from the internet and trained to predict text based on these inputs. This foundational stage was followed by reinforcement learning from human feedback (RLHF), in which human reviewers fine-tuned the system. Through RLHF, the model was taught to reject prompts that violate ethical standards, such as those involving illegal activities or harmful content. This approach helps align the model with OpenAI’s guidelines for ethical AI usage, minimizing risks related to harmful behaviors or inappropriate content generation (OpenAI, 2023).
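To illustrate the RLHF idea described above, the sketch below shows the pairwise preference objective typically used to train a reward model: responses that human reviewers preferred are pushed to score higher than rejected ones. This is a toy, hedged illustration written in PyTorch; the tiny model, random embeddings, and single training step are assumptions for exposition and do not reflect OpenAI's actual implementation.

```python
# Highly simplified sketch of the reward-model step in RLHF: a reward model
# is trained so that human-preferred responses score higher than rejected
# ones (a Bradley-Terry style pairwise loss). Illustrative only; not
# OpenAI's actual implementation.
import torch
import torch.nn as nn

class TinyRewardModel(nn.Module):
    """Maps a (toy) fixed-size response embedding to a scalar reward."""
    def __init__(self, embed_dim: int = 16):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(embed_dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score(x).squeeze(-1)

reward_model = TinyRewardModel()
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Toy batch: embeddings of responses that human reviewers preferred vs. rejected.
chosen = torch.randn(8, 16)
rejected = torch.randn(8, 16)

# Pairwise loss: push the reward of the preferred response above the rejected one.
loss = -torch.nn.functional.logsigmoid(reward_model(chosen) - reward_model(rejected)).mean()
loss.backward()
optimizer.step()
print(f"pairwise preference loss: {loss.item():.4f}")
```

The learned reward model then guides a subsequent reinforcement learning step that fine-tunes the language model itself toward responses reviewers judge acceptable.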

Despite these meticulous measures to adhere to ethical principles, GPT-4, as stated by OpenAI, is still susceptible to biases and stereotypes. This is not surprising, because the internet, the source of GPT-4's training data, reflects the biases present in our society. Much of the data, especially text and content used to train AI models, comes from articles, social media, blogs, and other sources that carry the implicit biases of their authors. These biases are shaped by historical, cultural, gender, racial, and socio-economic factors. Moreover, the data available on the internet does not represent all populations equally. Certain groups, often marginalized ones, are underrepresented, leading to a skewed perspective in the dataset. For instance, GPT-4 is skewed towards Western views and performs best in English. Zack et al. (2024) also examined the influence of racial and gender biases on the use of GPT-4 in medical education, diagnosis, and care planning. They found that GPT-4 can amplify societal biases by exaggerating demographic differences in disease prevalence. For example, when prompted to describe a patient with sarcoidosis, GPT-4 generated responses that associated the disease with Black patients 97% of the time, with female patients 84% of the time, and with Black women 81% of the time, significantly more often than actual demographic data would indicate.

Often, biases in AI arise due to unrepresentative data sets that reflect historical inequities or omit key demographic groups. To create equitable AI systems, it is essential to use diverse data sets that are reflective of the broader population. This means ensuring that data sets are inclusive of various genders, ethnicities, ages, socioeconomic backgrounds, and abilities.

  • Ensuring Diversity and Inclusivity: Developing protocols for data collection that actively seek to include underrepresented and marginalized groups. This practice not only ensures fairness but also enhances the robustness and accuracy of AI models by accounting for a broader range of scenarios and user needs (Buolamwini & Gebru, 2018).

  • Bias Detection and Correction: Employing bias detection tools and methods during the data preprocessing phase can help identify and correct skewed representations that may lead to unfair outcomes. For example, the AI Fairness 360 toolkit developed by IBM includes methods for de-biasing datasets and models; a brief sketch of this workflow follows this list. It emphasizes the need for a more equitable approach to AI development by integrating fairness into the data preprocessing, in-processing, and post-processing stages (Bellamy et al., 2019).
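The sketch below illustrates the kind of preprocessing workflow the AI Fairness 360 (aif360) toolkit supports: measure disparate impact on a labeled dataset, then apply reweighing to reduce it. The toy hiring dataframe, column names, and privileged/unprivileged group definitions are assumptions for illustration; a real project would use its own data and select metrics and mitigation algorithms suited to its context.

```python
# Minimal sketch of dataset bias detection and correction with IBM's
# AI Fairness 360 toolkit (aif360). The toy dataframe, column names, and
# group definitions are illustrative assumptions, not a real dataset.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy hiring data: 'sex' is the protected attribute (1 = privileged group),
# 'hired' is the favorable outcome.
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "score": [0.9, 0.8, 0.4, 0.7, 0.9, 0.3, 0.6, 0.5],
    "hired": [1, 1, 0, 1, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(df=df, label_names=["hired"],
                             protected_attribute_names=["sex"],
                             favorable_label=1, unfavorable_label=0)

privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# Detect bias: disparate impact < 1.0 means the unprivileged group receives
# the favorable outcome less often than the privileged group.
metric = BinaryLabelDatasetMetric(dataset, unprivileged_groups=unprivileged,
                                  privileged_groups=privileged)
print("Disparate impact before:", metric.disparate_impact())

# Correct bias at the preprocessing stage by reweighing training examples.
reweighed = Reweighing(unprivileged_groups=unprivileged,
                       privileged_groups=privileged).fit_transform(dataset)
metric_after = BinaryLabelDatasetMetric(reweighed, unprivileged_groups=unprivileged,
                                        privileged_groups=privileged)
print("Disparate impact after reweighing:", metric_after.disparate_impact())
```

A disparate impact ratio near 1.0 indicates that favorable outcomes are distributed similarly across groups; after reweighing, downstream models would be trained with the adjusted instance weights.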

2. Multidisciplinary Teams

AI development should not be the sole domain of data scientists and engineers. AI development is not just a technical endeavor; it requires insights from a diverse range of disciplines to fully understand the ethical and social implications of AI systems. Effective AI development and deployment require cognitive diversity. Cognitive diversity, while often associated with demographic characteristics such as gender, ethnicity, or age, is not determined by these factors alone. Instead, it encompasses the variation in perspectives, knowledge bases, and cognitive approaches that individuals bring to problem-solving and decision-making processes (Reynolds & Lewis, 2017). This diversity of thought is critical in addressing the multifaceted challenges of AI development, where complex ethical, technical, and social issues require innovative solutions. 

Multidisciplinary teams harness cognitive diversity by bringing together experts from computer science, ethics, policy, data science, psychology, and other relevant fields to provide a holistic perspective on AI development and deployment. These diverse perspectives help mitigate the risk of embedding harmful biases into AI algorithms, as people from different backgrounds and experiences are more likely to challenge assumptions, detect blind spots, and advocate for fairness in data collection and model training. Furthermore, a diverse team fosters innovative thinking, leading to AI systems that are more robust, ethical, and responsive to the needs of diverse user groups. By including individuals with varied cultural, disciplinary, and experiential insights, AI can be built in a way that is more representative of and equitable for the broader society.

  • Fostering Interdisciplinary Collaboration: Integrating cross-functional teams ensures a holistic approach to AI development and deployment, balancing technical innovation with ethical considerations. This collaboration fosters a deeper understanding of the societal impact of AI and encourages the development of systems that align with broader human values. For example, Google’s AI Principles explicitly commit to assembling diverse, interdisciplinary teams to guide the responsible development of AI technologies. These principles, informed by feedback from internal experts across Google as well as external specialists, are designed to align AI development with broader societal values and ensure that ethical considerations are integrated throughout the process (Google, 2024). It is one thing to develop and issue principles and another to put them into action; such principles must therefore be backed by concrete implementation.

  • Training and Awareness: Team members must be educated on ethical considerations and the importance of DEI principles in AI. Regular training sessions and workshops on topics such as implicit bias, fairness in machine learning, and responsible AI practices can help build a culture of ethical awareness within AI teams. 

Phase 2: Corporate Governance and Leadership: Promoting Ethical AI Practices

While the technical aspects of AI model creation are critical, ethical AI development also requires strong corporate governance and leadership. This phase focuses on how organizations can institutionalize ethical practices through internal policies, corporate social responsibility (CSR), and the promotion of an ethical culture.

1. Internal Policies to Mitigate Bias

Organizations must develop internal policies that specifically target the mitigation of bias in AI systems. These policies should encompass every stage of the AI lifecycle, from the initial data collection and algorithm development to its deployment and continuous monitoring (Deloitte, 2023).

  • Regular Audits and Assessments: Implementing mandatory bias audits for all AI systems prior to their deployment is crucial. Ideally, these audits should be conducted by independent external auditors to ensure impartiality and comprehensiveness. Furthermore, it is imperative to perform regular audits of AI systems post-deployment to identify and mitigate any biases that may develop over time (a simple monitoring sketch follows this list). These audits should be part of a broader risk management strategy that includes evaluating the ethical implications of AI deployments and their impact on various stakeholders.

  • Transparency and Accountability: Organizations should adopt transparency measures that allow stakeholders to understand how AI systems make decisions. This may involve providing explanations for algorithmic outcomes and offering mechanisms for redress if decisions are perceived as unfair or biased. For example, an organization might create an ethics committee or a responsible AI function, similar to Microsoft’s AI and Ethics in Engineering and Research (AETHER) Committee or Office of Responsible AI, to review all AI projects and ensure they adhere to ethical standards, particularly concerning bias and fairness.
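As a concrete, deliberately simplified illustration of the post-deployment audits described in the first bullet above, the sketch below compares favorable-outcome rates across groups in a logged set of AI decisions and flags the system for human review when the ratio falls below the common four-fifths (80%) rule of thumb. The column names, toy decision log, and threshold are assumptions; real audits would use the organization's own logs, metrics, and escalation procedures.

```python
# Illustrative sketch of a recurring post-deployment bias audit: compare
# favorable-outcome rates across groups in logged AI decisions and flag the
# system for review when the ratio falls below the common "four-fifths"
# (80%) rule of thumb. Thresholds, column names, and data are assumptions.
import pandas as pd

AUDIT_THRESHOLD = 0.8  # four-fifths rule of thumb, not a legal standard

def audit_outcomes(log: pd.DataFrame, group_col: str, outcome_col: str) -> dict:
    """Return the favorable-outcome rate per group and a pass/fail flag."""
    rates = log.groupby(group_col)[outcome_col].mean()
    ratio = rates.min() / rates.max()
    return {"rates": rates.to_dict(), "ratio": ratio, "flagged": ratio < AUDIT_THRESHOLD}

# Toy decision log from a deployed system (hypothetical).
decision_log = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [1, 1, 0, 1, 0, 0, 0, 1],
})

report = audit_outcomes(decision_log, group_col="group", outcome_col="approved")
print(report)
if report["flagged"]:
    print("Disparity exceeds audit threshold: escalate for human review.")
```

Such a check is only one component of an audit; a full review would also examine data drift, error rates by group, and the downstream consequences of flagged decisions.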

2. Corporate Social Responsibility Policies for Responsible and Ethical AI

Many organizations have long integrated corporate social responsibility (CSR) into their strategic frameworks to align business practices with societal and environmental expectations. For instance, Apple's CSR initiatives include enhancing labor conditions across its global supply chain and mitigating environmental impacts through product recycling programs and energy efficiency measures (Dudovskiy, 2023). These activities contribute to building a strong brand reputation, fostering consumer loyalty, and attracting socially conscious investors. However, CSR is not merely about reputation management; it fundamentally involves committing to ethical practices that generate real social benefits. In the context of AI, this means going beyond superficial gestures to ensure that AI systems are developed and deployed in ways that genuinely promote social good, mitigate potential risks, and uphold ethical standards. Organizations that incorporate these principles into their CSR efforts can contribute to positive societal outcomes and demonstrate a true commitment to ethical practices rather than simply enhancing their image.

  • Creating Policy and Developing a Code of Conduct for AI Ethics: Developing CSR policies that explicitly address the ethical use of AI. These policies should outline commitments to avoiding harm, enhancing social good, and being transparent about AI use and its potential impacts. Furthermore, creating a code of conduct specific to AI ethics can guide employees in navigating ethical dilemmas and ensure fairness in AI development and deployment. This code should be grounded in philosophical principles of justice and fairness, which provide a deeper understanding of AI's ethical challenges. Binns (2018), for example, contends that fairness in AI requires more than statistical parity and includes considerations of individual rights, group equity, and distributive justice. He underlines that machine learning fairness must consider the broader social and political context, ensuring that AI systems are not just technically fair but also adhere to moral and democratic values. The code should therefore reaffirm the significance of justice, accountability, and transparency, while also providing practical guidance for adhering to these standards in daily work.

  • Engaging with External Stakeholders: Companies should engage with external stakeholders, including civil society organizations, academic institutions, and regulatory bodies, to align their AI practices with societal expectations and norms. This collaborative approach can help identify emerging ethical challenges, develop industry-wide standards for responsible AI, and lead to the creation of AI ethical advisory councils. For example, Salesforce’s Ethical Use Advisory Council guides the company’s AI practices with a focus on transparency, accountability, and fairness. The council advises on CSR initiatives that leverage AI for social good, such as developing tools that promote accessibility and inclusion (Salesforce, 2024).

3. Promoting a Culture of Ethics Related to AI

Building an ethical AI culture requires more than just policies; it involves cultivating an organizational mindset that values ethics in all AI-related activities. This includes ongoing education, open dialogue about ethical dilemmas, and empowering employees to voice concerns.

  • Encouragement of Open Dialogue: Organizations should encourage open dialogue about ethical challenges related to AI. This can be facilitated through regular ethics committees, forums, and town hall meetings where employees can voice concerns and propose solutions for ethical AI development.

  • Ethics Training Program: Establishing ethics training programs that are mandatory for all employees involved in AI development and deployment. These programs should cover topics such as bias in AI, privacy concerns, and the societal implications of AI technologies. For example, Accenture’s AI Ethics Board not only sets internal AI policies but also promotes a culture of ethical AI by requiring mandatory ethics and compliance training for the company's 30,000 employees who work directly with AI (Accenture, 2024).

4. Hiring and Training a Workforce for AI-related Ethics and Changes

The ethical development of AI also requires a workforce that is well-versed in both the technical and ethical aspects of AI. Hiring practices should emphasize not only technical expertise but also a commitment to ethical principles.

  • Diverse Hiring Practices: Including ethical considerations as a core competency in AI job descriptions and evaluation criteria is essential. Organizations should prioritize recruiting a diverse workforce with a range of skills and perspectives that can help prevent biases in AI development and ensure that ethical considerations are integrated into all stages of the AI lifecycle. For example, IBM’s AI ethics program includes hiring guidelines that prioritize candidates with a demonstrated commitment to ethical AI development. IBM also offers training modules on AI ethics as part of its ongoing employee development program (IBM, 2021).

  • Continuous Education and Development: Providing ongoing education and professional development opportunities related to AI ethics and DEI can help employees stay informed about the latest developments and best practices in ethical AI. This includes training on topics such as data privacy, algorithmic fairness, and inclusive design.

5. Advocacy for Industry Change and Policies for Ethical and Responsible AI

To drive industry-wide change towards ethical and responsible AI, organizations must advocate for and participate in the development of policies and standards that promote fairness, accountability, and transparency in AI.

  • Advocating for Policy Reform: Organizations should actively participate in policy discussions and advocate for reforms that promote ethical AI practices. This may involve engaging and collaborating with industry peers, regulatory bodies, and civil society organizations to develop and promote standards for responsible AI. Such collaboration or engagement in public discourse about the ethical implications of AI can contribute to public education efforts on AI ethics and help shape societal expectations and norms around AI for social good (Floridi et al., 2018).

  • Leading by Example in Ethical AI Practices: Organizations should lead by example, demonstrating the benefits of ethical and responsible leadership in AI practices and encouraging other industry players to adopt similar standards. For example, IBM has committed to collaborating with partners and universities worldwide to help reduce the global AI skills gap by training two million learners in AI by the end of 2026, with an emphasis on underrepresented populations (IBM, 2023). IBM’s initiative is critical because it directly contributes to societal and economic advancement in the digital era. Closing the AI skills gap is essential for ensuring that technological progress does not exacerbate existing inequities and that people from all backgrounds and societies have access to high-paying, future-oriented professions in AI. By focusing on underrepresented communities, IBM not only fosters diversity, equity, and inclusion in technology but also guarantees that AI advancements reflect broader societal needs and viewpoints, preventing the perpetuation of biases in AI systems.

Conclusion

Developing equitable, accountable, and moral AI systems is a complex challenge that requires a comprehensive approach integrating ethical principles into both technical and organizational dimensions. By adopting a two-phase approach focused on AI model creation and corporate governance, organizations can develop AI systems that align with societal values and contribute to a more equitable and inclusive world. As AI evolves, it is critical that these principles guide its research and deployment, ensuring that the benefits of AI are shared equitably across society.