Ethical Considerations and Responsible Governance of Generative AI: A Systematic Review

Mohammed Karimkhan Pathan and Aman Shah
DeepNeuralAI, Pune, India
Correspondence to: Mohammed Karimkhan Pathan, karimkhan.it@gmail.com

Premier Journal of Artificial Intelligence

Additional information

  • Ethical approval: N/a
  • Consent: N/a
  • Funding: No industry funding
  • Conflicts of interest: N/a
  • Author contribution: Mohammed Karimkhan Pathan and Aman Shah – Conceptualization, Writing – original draft, review and editing.
  • Guarantor: Mohammed Karimkhan Pathan
  • Provenance and peer-review:
    Commissioned and externally peer-reviewed
  • Data availability statement: N/a

Keywords: Generative AI, ethical governance, intellectual property rights, bias mitigation, environmental impact.

Peer Review
Received: 17 February 2025
Revised: 7 April 2025
Accepted: 8 April 2025
Published: 16 April 2025

Abstract

Generative artificial intelligence (AI), a transformative technology capable of generating text, images, and other content, has revolutionized industries while raising critical ethical and governance challenges. This review systematically examines key ethical considerations, such as intellectual property rights, bias, fairness, misinformation, data privacy, environmental impact, and the need for human oversight. These challenges highlight complexities in governing generative AI, requiring robust international guidelines and best practices. By analyzing existing frameworks and case studies, our review identifies significant gaps in current research and policy. Key findings emphasize the importance of multi-stakeholder collaboration among policymakers, industry leaders, and researchers in developing an adaptive governance framework that prioritizes transparency, accountability, and inclusivity to mitigate risks and promote responsible AI. The review highlights the importance of sustainable AI in addressing environmental concerns and advocates for policies that ensure equitable access while addressing societal impacts such as the spread of misinformation and the potential for exacerbating existing inequalities. By synthesizing insights from diverse sources, this study provides actionable recommendations to guide the ethical and responsible governance of generative AI technologies in ways that align with evolving technological advancements and societal needs.

Introduction

Generative artificial intelligence (AI) enables machines to generate new content, including text, images, audio, and video, by learning from existing data. This technology is rapidly gaining traction, fueling numerous innovative applications across various sectors. Generative AI represents the pinnacle of cutting-edge research and technical advancement, with the potential to significantly alter how humans interact with computers and express their creativity.1 This form of AI focuses on creativity, imagination, and the generation of new content, whereas conventional AI systems primarily concentrate on completing specific tasks or making predictions from existing data. GenAI tools like ChatGPT, Gemini, PaLM, AlphaCode, and DALL-E have taken the world by storm with their ability to generate entirely new and original content, much like human creative output. Generative AI builds on machine learning and deep learning architectures: models are trained on large datasets to learn the patterns and structure of the data and then use that knowledge to generate new outputs. A model that has learned the patterns of its data well can produce coherent, high-quality content. There are different types of generative AI models, each with its own approach to generating content. Some of the most prominent include Variational Autoencoders, Generative Adversarial Networks, autoregressive models, Recurrent Neural Networks, Transformer models, and Reinforcement Learning models. Large Language Models are a specific class of generative AI models that focus on generating text.2
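
To make this pattern-learning workflow concrete, the short Python sketch below samples text from a small, publicly available pretrained language model. The Hugging Face transformers library and the gpt2 checkpoint are illustrative choices for this example, not tools endorsed by this review.

```python
# A minimal illustration of generative text AI: a pretrained language
# model has learned statistical patterns from a large text corpus and
# uses them to produce new content from a prompt.
# Assumes: pip install transformers torch  (gpt2 is a small public model)
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

result = generator(
    "Generative AI raises ethical questions because",
    max_new_tokens=40,   # length of the newly generated continuation
    do_sample=True,      # sample from the learned distribution
    temperature=0.8,     # higher = more varied, less predictable output
)
print(result[0]["generated_text"])
```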

Why Are Ethical Considerations and the Responsible Governance of GenAI Important?

Generative AI technologies deliver outstanding levels of automation and creativity. However, these advances also raise significant ethical concerns, such as bias, misinformation, privacy, and environmental impact. New models are continuously released across professional and other fields, and as ethical concerns grow, so does the need to establish strong governance frameworks. Understanding these problems helps ensure that the development and deployment of generative AI align with human values.3,4

Objectives of the Review

This review systematically explores the ethical implications of generative AI, including bias, privacy concerns, and misinformation, and analyzes existing governance strategies for responsible AI development and use.5

Ethical Implications

Figure 1 displays various ethical considerations, key challenges, and mitigation strategies related to AI, organized in a hierarchical structure with color-coded sections.

Figure 1: AI ethical implications, challenges, and strategies.

Intellectual Property Rights

AI-generated content raises significant challenges for ownership. A central question is whether content generated by generative AI can hold copyright in its own right. AI models are trained on large datasets produced by broad communities; a model learns the patterns in that data and uses them to make predictions. The U.S. Copyright Office has studied the copyright issues raised by AI, including the implications of training generative models on community data or copyrighted works and the allocation of any potential liability. To address these challenges, AI-created content needs source attribution and clearer definitions of authorship.6–8

Bias and Fairness

Generative AI models are trained on large datasets, which are often assumed to be unbiased. In reality, datasets may encode existing societal biases, leading to the perpetuation and amplification of those biases in model outputs. For instance, Noble highlights that search engines reinforce systemic biases, particularly against marginalized communities. Similarly, Gebru et al. emphasize the importance of documenting datasets to identify and address biases. Training on unrepresentative or irrelevant data can likewise produce flawed outputs.9,10
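
To illustrate the kind of dataset documentation Gebru et al. advocate, the minimal Python sketch below tabulates group representation and label skew before training; the column names and toy data are hypothetical.

```python
# A minimal dataset-bias audit in the spirit of "Datasheets for Datasets":
# before training, document how demographic groups are represented and how
# the label distribution differs across them. Columns are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "group": ["A", "A", "A", "A", "B", "B"],  # demographic attribute
    "label": [1,   1,   0,   1,   0,   0],    # training target
})

# Representation: is any group badly under-represented?
print(df["group"].value_counts(normalize=True))

# Label skew: does the positive rate differ sharply across groups?
print(df.groupby("group")["label"].mean())
# Large gaps in either statistic are a red flag worth recording in the
# datasheet before the model is trained.
```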

Consequences of Biased AI Systems

When biased models are deployed, particularly in high-stakes contexts, they can lead to disproportionately harmful outcomes for marginalized communities. O’Neil describes this phenomenon as “weapons of math destruction,” where biased algorithms exacerbate inequality and undermine democratic values. Benjamin further explores the intersection of race and technology, discussing how these systems can perpetuate systemic inequalities, often described as the “new Jim Code.” Such biases in generative AI models can lead to significant ethical challenges, as shown in Figure 2, particularly when the models are deployed in high-stakes applications like healthcare, hiring, or law enforcement.11,12

Figure 2: Ethical and legal challenges of AI.

Case Studies

Real-world incidents make the dangers of biased generative AI more tangible. For example, Crawford and Paglen13 analyze biases in image datasets, showing how poorly curated datasets can lead to disproportionate errors in image recognition systems. These dangers are evident in cases such as biased currency-detection models that fail to recognize currencies associated with specific regions, or vehicle number-plate recognition systems that disproportionately misidentify vehicles belonging to minority groups. Such cases underscore the ethical imperative to design AI systems that are fair and inclusive. Addressing bias requires a multifaceted approach, including better dataset documentation, critical evaluation of AI claims, and prioritizing explainability in AI systems to ensure transparency and accountability. Without these measures, generative AI risks perpetuating harm under the guise of innovation.14

Misinformation and Disinformation

Role of Generative AI in Misinformation

Nowadays, AI is used to create realistic content such as news articles, digital media, and e-books. However, the use of generative AI to create false content has eroded trust in digital media. Generative AI can produce content that blurs the line between fiction and reality, enabling deepfake videos and images and deceptive social media posts and accelerating the spread of misinformation. This can harm individuals and organizations by distorting public perception and fueling propaganda.15

Ethical Responsibilities and Mitigation

The developers and companies that build AI systems are responsible for minimizing their misuse. Ethical AI development requires regularly monitoring and updating generative models so that they reflect accurate information, promoting transparency in how content is generated, and ensuring responsible deployment of generative AI technologies.16

Data Privacy and Security

The data used to train generative AI models often includes personal data collected without explicit consent. The General Data Protection Regulation (GDPR) is a European Union law that regulates how companies handle and use personal data. Approved in April 2016 and in effect since May 25, 2018, the GDPR gives users control over their data.17

Environmental Impact

Training AI models consumes large-scale computational resources, and such training has a substantial environmental impact. Training GPT-3, for example, consumed an estimated 1,287 MWh of electricity and produced 502–552 tons of CO2, equivalent to the emissions of 110 to 123 gasoline-powered cars driven for a year. Mitigations include encouraging the use of carbon-free energy sources, optimizing model architectures, and prioritizing the reduction of environmental impact during AI model training.18,19
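
As a rough check on the car equivalence, assume approximately 4.5 metric tons of CO2 per gasoline-powered car per year (close to published estimates for an average passenger vehicle); the reported range then follows by simple division:

$$
\frac{502\ \text{t CO}_2}{4.5\ \text{t CO}_2/\text{car-year}} \approx 112\ \text{cars},
\qquad
\frac{552\ \text{t CO}_2}{4.5\ \text{t CO}_2/\text{car-year}} \approx 123\ \text{cars}
$$

This is consistent with the reported 110 to 123 cars, allowing for rounding of the per-car emissions figure.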

Human Oversight and Accountability

Need for Oversight: Generative AI raises questions about transparency, privacy, unintended plagiarism, social considerations, bias and fairness, accountability, and more. Identifying these ethical concerns is important to ensuring that generative AI technologies are developed and used responsibly, without causing harm at large. Organizations should establish a clear accountability structure that defines the roles of users, organizations, and developers in the event of negative outcomes, and provide guidelines and enforcement structures.20,21 Figure 2 is structured as a mind map that outlines various ethical and legal considerations related to AI, with two main categories (Ethical and Legal) branching out from the central concept.

Governance Frameworks

A governance framework is a set of rules, regulations, and guidelines that directs the development and use of AI technology, ensuring that the technology is used responsibly and ethically. A framework for the responsible use of generative AI builds trust and ensures that these technologies are used in ways that are beneficial and ethical. Explainability is the cornerstone of such a framework: AI decision-making should be understandable to users, who must be able to grasp how the AI makes its predictions. Developing explainable AI models and user-friendly interfaces is essential for building trust and facilitating informed consent. When AI generates content, it should be clear to the user how the content was created and what inputs informed the prediction. Figure 3 shows a comprehensive framework depicting the relationship between key stakeholders, core ethical principles, and required actions in AI governance.

Figure 3: AI Ethics & Governance Ecosystem.

Existing International Guidelines

UNESCO’s Recommendation on the Ethics of AI

Adopted in 2021, UNESCO’s Recommendation provides a comprehensive framework for generative AI ethics, human rights, and sustainability. It highlights key principles such as human-centricity (AI should serve humanity and improve human well-being), data protection and privacy (data used to train AI models and data processed by AI systems should be handled responsibly, maintaining user privacy), understandable systems (AI systems should be transparent and understandable to the extent possible), and accountability (those who develop or deploy an AI system are responsible for its ethical and societal impacts).11,22

IEEE’s Ethically Aligned Design

This framework was developed by the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems and provides a set of principles for ethically aligned design:

  • Human-centered values: AI systems should be designed and operated to benefit humanity.
  • Safety and security: AI actors should avoid unintended harm and address vulnerabilities to attack.
  • Accountability: There should be a clear line of accountability for the development and deployment of AI systems.
  • Transparency and explainability: Systems should be understandable to the extent possible.
  • Privacy and data protection: Privacy should be protected and promoted throughout the AI system lifecycle, and AI systems should be safe and reliable.
  • Fairness and non-discrimination: AI systems should not discriminate against individuals or groups.23

OECD’s AI Principles and European Union’s AI Act

The OECD AI Principles were developed by the Organisation for Economic Co-operation and Development, while the European Union’s AI Act is legislation that regulates AI systems based on their level of risk. Together they articulate principles for responsible AI development and deployment, including:

  • Fairness and privacy,
  • Transparency and explainability,
  • Security and safety,
  • Inclusive growth and well-being,
  • Human rights and democratic values,
  • Accountability.24

Best Practices for Organizations Deploying Generative AI

Companies should prioritize ethical considerations when deploying generative AI by mitigating bias in training data and outputs, ensuring transparency and explainability in their systems, establishing clear data governance and privacy safeguards, and maintaining human oversight of AI-driven decisions.

Stakeholder Engagement

Decision-making should involve employees, customers, and a diverse group of stakeholders to ensure that a wide range of needs and views are considered. Maintain open lines of communication with all stakeholders about the goals and potential risks of deploying the AI system; this builds trust. Implement robust feedback mechanisms to gather input from stakeholders, users, and developers, which helps identify and address issues promptly.

Ethical Training

Implement and develop training programs that cover the ethical implications of generative AI so that employees understand the ethical considerations and their responsibilities. Train employees to handle ethical dilemmas related to generative AI using real-world scenarios; this practical approach helps them internalize ethical principles. Keep the training programs updated with the latest developments in AI ethics to ensure the organization remains compliant.

Monitoring

To monitor the effectiveness and impact of generative AI, establish clear performance metrics and review them regularly to maintain ethical standards. Implement audit trails to track the decision-making processes of generative AI and identify any biases or unethical behavior. Engage third-party auditors to conduct regular assessments of the AI systems; independent audits provide a neutral view of system performance.25
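
As one concrete realization of such an audit trail, the Python sketch below hash-chains each logged model decision so that later tampering is detectable; the record fields and function names are illustrative assumptions, not a prescribed schema.

```python
# A minimal audit-trail sketch for generative AI monitoring: every model
# decision is appended to a log in which each record carries a hash
# chained to the previous record, so third-party auditors can later
# verify that no entries were altered or removed.
import hashlib, json, time

audit_log = []

def log_decision(model_id, prompt, output):
    record = {
        "timestamp": time.time(),
        "model_id": model_id,
        "prompt": prompt,
        "output": output,
        # Genesis entries chain to a fixed all-zero hash.
        "prev_hash": audit_log[-1]["hash"] if audit_log else "0" * 64,
    }
    # Hashing the record (including prev_hash) chains the log entries.
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    audit_log.append(record)

log_decision("gen-ai-v1", "Summarize this contract", "The contract states ...")
print(json.dumps(audit_log[-1], indent=2))
```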

Policy Recommendations for Regulating Generative AI

Establish Clear Ethical Guidelines

Governments should develop ethical guidelines for the development and deployment of generative AI, covering issues such as transparency, privacy, and bias. For example, the European Commission’s AI ethics guidelines provide a robust framework that other governments have also adopted and implemented. UNESCO’s Recommendation on AI Ethics emphasizes inclusivity, fairness, and the prevention of harm, offering a global perspective on the ethical governance of AI.22,24

Implement Robust Data Protection Laws

Governments should create and enforce robust laws governing the personal data used by generative AI, with rules similar to the GDPR in the European Union. Such laws mandate responsible data collection and give individuals the right to access and delete their data. Policies inspired by the Blueprint for an AI Bill of Rights from the White House stress the need to safeguard privacy and uphold human dignity in AI-driven systems.26

Promote Explainability

Governments should create rules requiring that the decision-making processes of generative AI be understandable, especially where decisions have significant societal impact. This includes disclosing AI algorithms and the data used to train them, ensuring that AI decisions can be audited; transparency about training datasets enables audits and fosters trust. The principles outlined by Floridi (2019) advocate for explainability as a cornerstone of building trustworthy AI systems, emphasizing the need for user-centric transparency.27

Encourage Ethical AI Research and Development

Governments should fund and support research into ethical AI practices, including technologies that can detect biases in AI systems and tools that enhance the transparency and accountability of AI. Frameworks such as the OECD Principles on AI promote investment in ethical AI practices to ensure safe and beneficial development. Furthermore, public-private partnerships can accelerate the development of tools and frameworks that align with these principles.24

Role of Multi-Stakeholder Approaches in Governance

Inclusive Policy Development

Multi-stakeholder approaches require the participation of groups such as government agencies, private companies, academia, civil society organizations, and the public. This ensures that diverse perspectives are considered in the development of AI policies. UNESCO’s Recommendation on AI Ethics underscores the importance of engaging all stakeholders to create inclusive and globally relevant AI frameworks.22

Collaborative Standard Setting

Multi-stakeholder approaches bring together governments, industry, and other groups to develop standards and best practices that are widely accepted and adopted. Policies created with the involvement of both stakeholders and governments are more likely to be adopted and respected. For instance, the OECD Principles on AI advocate collaborative standard setting as a means to align AI development with global ethical standards.24

Improvement and Adaptation

AI development continuously faces new challenges and opportunities. Multi-stakeholder approaches enable continuous feedback, allowing policies to be regularly reviewed and updated in response to technological advancements and public needs. By implementing these recommendations and multi-stakeholder governance, governments can ensure that generative AI is developed and used responsibly, ethically, and for the benefit of society.25

Comparison of AI Governance Frameworks

Table 1: Comparison of AI governance frameworks

| Framework | Strengths | Weaknesses | Implementation Status |
| --- | --- | --- | --- |
| UNESCO Recommendation on the Ethics of AI | Comprehensive ethical guidelines emphasizing human rights and environmental sustainability; advocates international cooperation and inclusivity. | Non-binding; relies on voluntary adoption by member states; lacks specific mechanisms for enforcement and monitoring. | Adopted in November 2021. Implementation varies among member states: some have integrated the principles into national policies, while others are in preliminary stages. |
| IEEE Ethically Aligned Design | Focuses on practical standards for ethical AI system design; engages a broad range of stakeholders, including technologists and ethicists. | Primarily offers guidelines without regulatory authority; adoption depends on voluntary compliance by organizations and professionals. | Used as a reference in various industry sectors; integration into formal regulatory frameworks is limited. |
| OECD AI Principles | Provides a balanced approach emphasizing innovation and trustworthiness; recognized by multiple countries, facilitating international alignment. | Broad principles that may require further specification for practical application; implementation is subject to national interpretation, leading to variability. | Adopted in May 2019 and updated in 2024. As of May 2023, over 1,000 policy initiatives across more than 70 jurisdictions reported alignment with these principles. |

Table 1 highlights the distinct emphases and challenges associated with these frameworks, providing insight into their global implementation and effectiveness. In today’s evolving AI landscape, leading companies such as OpenAI, Google, and Meta have implemented various measures to ensure the ethical development and deployment of AI systems, including AI transparency reports, red-teaming evaluations, and bias audits.

OpenAI has established a Red Teaming Network to identify vulnerabilities and rigorously evaluate its AI models, connecting external experts from diverse fields to proactively assess potential risks and enhance the safety of AI systems. OpenAI is also advancing automated red-teaming methods: by leveraging AI to simulate a wide array of attack scenarios, it can identify and mitigate potential threats more efficiently. Google publishes AI transparency reports detailing its AI systems’ functionalities, limitations, and ethical considerations, aiming to provide clear insight into how its AI technologies operate and the measures in place to ensure responsible use. Google also conducts internal bias audits to assess and mitigate potential biases in its AI models; by systematically evaluating training data and model outputs, it strives to promote fairness and prevent discriminatory outcomes in AI applications. Like Google and OpenAI, Meta has committed to transparency by releasing information about its AI systems and content moderation practices, how its AI algorithms curate content, and the steps taken to address misinformation and harmful content. These companies perform regular assessments to identify biases in their AI systems, which helps enhance equity and inclusivity across their platforms.

Case Studies

Generative AI is used to reduce workload in various fields, including education, research, problem-solving, coding, social media, and much more. Many types of generative AI models are now available to solve different kinds of problems, and the community introduces new models for particular objectives every day. AI is invaluable; however, models sometimes make mistakes, and people sometimes use AI unethically, which creates problems. Several such cases are introduced here.

McDonald’s Ends AI Drive-Through Tests Amid Errors: McDonald’s had partnered with IBM to develop a drive-through order taker using generative AI, but ended the collaboration in 2024 after a wave of social media videos showed customers confused and frustrated while trying to get the AI to understand their orders. In one video, two people try to stop the AI as it keeps adding more Chicken McNuggets to their order, eventually reaching 260.28 In a June 13, 2024 internal memo obtained by the trade publication Restaurant Business, McDonald’s announced it would end the partnership with IBM.

ChatGPT Hallucinates Court Cases (2023): In New York City, a lawyer used ChatGPT to prepare a legal brief. The AI generated fabricated court cases to support the arguments. Upon submission, the opposing counsel identified the fictitious cases, and the judge reprimanded the lawyer for failing to verify the information. This incident drew widespread criticism and raised concerns about the reliability of generative AI tools in critical fields.29

Deepfake Scandal in Political Campaigns During State Elections in India (2023): Deepfake technology was misused to create videos of an opposition leader making inflammatory statements. These videos, generated by advanced generative AI models, misled millions of voters and damaged the leader’s reputation.30,31

Future Directions

Lessons Learned from GenAI Misuse

  • Regulation and governance: Clear policies must govern GenAI applications to prevent misuse and ensure ethical deployment. Regulatory frameworks like the “Blueprint for an AI Bill of Rights” highlight user data protection and control over GenAI misuse.
  • Transparency and accountability: Developers should integrate safeguards, such as watermarks and disclaimers, into AI outputs; these help identify AI-generated content and mitigate issues like misinformation and unauthorized content creation.
  • Human oversight: AI should complement, not replace, human judgment in critical areas like healthcare, law, and governance.
  • Public education: Raising awareness of GenAI’s capabilities helps the public understand both the power and the limits of AI.

As generative AI technologies evolve, ethical consideration is essential to ensure AI systems are developed and deployed responsibly. Addressing the challenges of misuse and societal impact requires interdisciplinary collaboration, regulatory frameworks, and technical innovation.23,26,29

Key Guidelines for Promoting Ethical GenAI Practices

Regulatory Frameworks and Governance

Developing comprehensive regulatory frameworks is essential for managing the ethical use of GenAI. Such frameworks should define clear standards to ensure AI systems are transparent, accountable, and fair.

  • Defining Standards: Standards must outline specific criteria for AI development and application, addressing issues such as data privacy, security, and ethical decision-making. For example, the Blueprint for an AI Bill of Rights introduced by the White House emphasizes protecting user data and controlling the misuse of AI tools.23
  • Policy Implementation: Regulatory bodies must work with AI developers, researchers, and industry leaders to implement policies that balance innovation with ethical concerns. This collaborative approach helps mitigate risks such as algorithmic discrimination and AI-driven misinformation.25
  • Global Reach: Governance structures should adapt to the global nature of GenAI technologies, promoting international collaboration to address cross-border challenges.27

Transparency and Explainability

Integrating mechanisms for transparency and explainability into AI systems helps build trust. Techniques like model interpretability and documentation (e.g., model cards) ensure that stakeholders, including developers, users, and regulators, understand how AI models make decisions; a minimal model-card sketch follows the list below.

  • Mechanisms for Transparency: Interpretability techniques allow users to comprehend an AI system’s logic. For example, model cards provide detailed documentation about an AI model’s intended use, limitations, and performance metrics.21
  • Explainability in Critical Fields: In areas such as healthcare and law, explainability ensures that AI recommendations can be evaluated by human experts, promoting accountability and reliability in high-stakes scenarios.20
  • Building Trust: Transparent practices reassure users that AI systems operate ethically and responsibly.
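
To make the model-card idea concrete, the Python sketch below assembles a card as structured data; the schema and every value shown are illustrative rather than a prescribed format.

```python
# A minimal "model card" sketch: structured documentation, published
# alongside a model, recording its intended use, limitations, and
# evaluation results. All fields and numbers below are illustrative.
import json

model_card = {
    "model_name": "example-text-generator",   # hypothetical model
    "version": "1.0",
    "intended_use": "Drafting marketing copy with human review",
    "out_of_scope_uses": ["medical advice", "legal advice"],
    "training_data": "Public web text snapshot (see datasheet)",
    "evaluation": {
        "perplexity": 18.2,                  # illustrative number
        "toxicity_rate": "1.3% of samples",  # illustrative number
    },
    "known_limitations": [
        "May reproduce societal biases present in web text",
        "Can state false information fluently (hallucination)",
    ],
    "contact": "ai-ethics@example.org",      # hypothetical contact
}

print(json.dumps(model_card, indent=2))
```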

AI Watermarking and Content Authentication

Embedding watermarks or other traceable identifiers in AI-generated content helps distinguish artificial media from authentic media, mitigating issues like misinformation and deepfakes; a minimal watermarking sketch follows the list below.

  • Watermarking: Embedding visible or invisible watermarks in AI-generated content ensures traceability and authenticity.22 This technology can be applied to images, videos, and text outputs to identify their source.
  • Content Authentication: Using traceable identifiers improves the integrity of digital content, mitigating the risks of deepfakes and fabricated information. Tools like blockchain technology can be integrated for secure content tracking and verification.30
  • Policy Integration: Governments and organizations should mandate watermarking for AI-generated media to ensure ethical usage in advertising, journalism, and social media platforms.
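
The Python sketch below illustrates the embed-and-recover idea behind invisible watermarking using a simple least-significant-bit scheme on a toy image; production systems use far more robust techniques, and the identifier here is purely illustrative.

```python
# A minimal invisible-watermark sketch: hide an identifier in the least
# significant bit (LSB) of image pixels. Real deployments use far more
# robust schemes (e.g., spread-spectrum or learned watermarks); this
# only demonstrates the embed/extract idea.
import numpy as np

def embed(image: np.ndarray, bits: str) -> np.ndarray:
    flat = image.flatten().copy()
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | int(b)   # overwrite the lowest bit
    return flat.reshape(image.shape)

def extract(image: np.ndarray, n_bits: int) -> str:
    return "".join(str(p & 1) for p in image.flatten()[:n_bits])

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)  # toy "image"
mark = "1010110011110001"                                # source identifier
assert extract(embed(img, mark), len(mark)) == mark
print("watermark embedded and recovered")
```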

Bias Mitigation and Fairness

Ensuring fairness requires addressing biases in training datasets and algorithms; a minimal fairness-audit sketch follows the list below.

  • Mitigation Strategies: Techniques such as data augmentation, re-sampling, and adversarial debiasing reduce biases in training datasets. Regular audits and fairness metrics can evaluate a model’s impartiality during deployment.13
  • Research Insights: Researchers like Bender et al.32 have highlighted the risks of biased language models and suggested approaches to mitigate these risks.
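
As a concrete illustration of a fairness audit, the Python sketch below computes a demographic parity gap and simple reweighing-style sample weights; the toy data and weighting scheme are illustrative assumptions, not a complete mitigation pipeline.

```python
# A minimal fairness-audit sketch: compute the demographic parity
# difference (gap in positive-prediction rates between groups) and
# derive re-sampling weights that balance group representation.
import numpy as np

groups = np.array(["A", "A", "A", "B", "B", "B", "B", "B"])
preds  = np.array([1,   1,   0,   1,   0,   0,   0,   0])  # model outputs

rate_a = preds[groups == "A"].mean()
rate_b = preds[groups == "B"].mean()
print(f"demographic parity difference: {abs(rate_a - rate_b):.2f}")
# A value near 0 means both groups receive positive predictions at
# similar rates; large gaps warrant investigation.

# Reweighing-style mitigation: weight each example inversely to its
# group frequency so a re-sampled training set balances the groups.
weights = np.where(groups == "A",
                   len(groups) / (2 * (groups == "A").sum()),
                   len(groups) / (2 * (groups == "B").sum()))
print("per-example weights:", np.round(weights, 2))
```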

Ethics by Design

Incorporating ethical principles into AI system design from the outset is a proactive approach.17 This process includes embedding ethical considerations into datasets, algorithms, and interfaces, and ensuring compliance with standards like the GDPR’s Data Protection Impact Assessment.

  • Ethical Frameworks: Developers should integrate ethical guidelines into every stage of AI development, from data collection to algorithm design and user interface creation.21
  • Regulatory Compliance: Ensuring compliance with standards such as GDPR’s Data Protection Impact Assessment safeguards user privacy and security.
  • Long-term Considerations: Ethics by design anticipates the societal impacts of AI and prioritizes solutions that align with public interest and values.25

Educational and Public Awareness Initiatives

Promoting AI literacy among the general public is important for mitigating the risks associated with misinformation and unethical use of AI.

  • Public Empowerment: Educational drives can teach individuals to critically evaluate AI-generated content, enabling them to identify deepfakes and other manipulated media.
  • AI Literacy Tools: Interactive tools and workshops built around AI concepts can empower communities to understand and use AI responsibly.
  • Institutional Support: Schools and universities should include AI ethics and literacy in their curricula to prepare future generations to navigate an AI-driven world.

International Collaboration

Global collaboration is essential for addressing the cross-border challenges associated with generative AI. Organizations like the OECD have established AI principles to promote shared ethical standards and international teamwork. Collaborative initiatives facilitate the sharing of expertise, resources, and best practices among nations and industries. Tackling issues like AI misuse in cyber warfare, cross-border misinformation, and international data privacy requires coordinated efforts and robust partnerships.28,30

Continuous Monitoring

Ethical standards for generative AI must adapt to technological advancements. Ongoing monitoring, iterative policy updates, and impact assessments help address emerging challenges effectively. Developers, regulators, and users should collaborate to refine AI systems and address real-world impacts proactively.16,29

Conclusion

Ethical use of generative AI is pivotal to unlocking its transformative potential while minimizing risks to society. In generative AI development and deployment, ethical considerations are critical to ensuring that the resulting products benefit all of society while minimizing harm from bias and unfair practices. By upholding ethical considerations such as fairness, transparency, and privacy, engineers can build AI systems that are trustworthy, user-friendly, and responsible. The future of ethical GenAI depends on the collective effort of governments, academia, industry, and society. By fostering transparency, fairness, and accountability, we can harness the potential of generative AI while mitigating risks and promoting trust.

References

1          Bostrom N. Superintelligence: Paths, Dangers, Strategies. Oxford University Press. 2014

2          Russell SJ, Norvig P. Artificial Intelligence: A Modern Approach. 4th ed. Pearson. 2020.

3          Floridi L. The Ethics of Information. Oxford University Press. 2013.

4          Goodfellow I, Bengio Y, Courville A. Deep Learning. MIT Press. 2016.

5          Vinuesa R, Azizpour H, Leite I, Balaam M, Faiña A, Dickson K, et al. The role of artificial intelligence in achieving the Sustainable Development Goals. Nat Commun. 2020;11(1):233. https://doi.org/10.1038/s41467-019-14108-y

6          U.S. Copyright Office. Copyright and the digital millennium: The implications of AI-generated content. 2021. https://www.copyright.gov

7          Barocas S, Selbst AD. Big data’s disparate impact. Cal L Rev. 2016;104(3):671–732. https://doi.org/10.2139/ssrn.2477899

8          Noble SU. Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press. 2018.

9          Makhortykh M, Urman A, Ulloa R. Detecting race and gender bias in visual representation of AI on web search engines. In Boratto L, et al., editors. Advances in Bias and Fairness in Information Retrieval 2021;(pp. 36–50). Springer. https://doi.org/10.1007/978-3-030-97736-5_4

10       Gebru T, Morgenstern J, Vecchione B, Vaughan JW, Wallach H, Daumé III H, et al. Datasheets for datasets. In CHI 2018: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems 2018;(pp. 1–14). https://doi.org/10.1145/3173574.3173871

11       O’Neil C. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown. 2016.

12       Benjamin R. Race After Technology: Abolitionist Tools for the New Jim Code. Polity Press. 2019.

13       Crawford K, Paglen T. Excavating AI: The politics of images in AI training sets. Excavating AI. 2019. https://excavating.ai

14       Narayanan A, Haeberlen A. How to recognize AI snake oil. Commun ACM. 2021;64(11):26–9. https://doi.org/10.1145/3488560

15       Zuboff S. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs. 2019.

16       Weidinger L, Mellor J, Rauh M, Griffin C, Uesato J, Huang PS, et al. Ethical and social risks of large language models. arXiv preprint. 2021. https://arxiv.org/abs/2112.04359

17       European Parliament. General Data Protection Regulation (GDPR) 2016/679. Official Journal of the European Union. 2016.

18       Strubell E, Ganesh A, McCallum A. Energy and policy considerations for deep learning in NLP. In ACL 2019: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics 2019;(pp. 3645–50). https://doi.org/10.18653/v1/P19-1355

19       Amodei D, Olah C, Steinhardt J, Christiano P, Schulman J, Mané D. Concrete problems in AI safety. arXiv preprint. 2016. https://arxiv.org/abs/1606.06565

20       Miller T. Explanation in artificial intelligence: Insights from the social sciences. Artif Intell. 2019;267:1–38. https://doi.org/10.1016/j.artint.2018.07.007

21       Crawford K. Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press. 2021.

22       UNESCO. Recommendation on the ethics of artificial intelligence. 2023. https://unesdoc.unesco.org/ark:/48223/pf0000379920

23       The White House Office of Science and Technology Policy. Blueprint for an AI Bill of Rights. n.d. https://bidenwhitehouse.archives.gov/ostp/ai-bill-of-rights/

24       OECD. OECD principles on artificial intelligence. n.d. https://www.oecd.org/going-digital/ai/principles/

25       Smith J, Lee T. Emerging trends in AI regulation. J Technol Policy. 2024;15(3):45–60.

26       European Union. General Data Protection Regulation (GDPR). Regulation (EU) 2016/679. 2016. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32016R0679

27       Floridi L. Establishing the rules for building trustworthy AI. Nat Mach Intell. 2019;1(6):261–2. https://doi.org/10.1038/s42256-019-0055-y

28       Restaurant Business Online. McDonald’s halts AI-driven ordering amid customer frustration. 2024. https://www.restaurantbusinessonline.com/technology/mcdonalds-ending-its-drive-thru-ai-test

29       Heath A. Lawyer faces court after ChatGPT hallucinated cases. The Verge. 2023. https://www.theverge.com/policy/677373/lawyers-chatgpt-hallucinations-ai

30       Chesney R, Citron DK. Deepfakes and the new disinformation war: The coming age of post-truth geopolitics. Foreign Aff. 2019;98(1):147–55.

31       Vaccari C, Chadwick A. Deepfakes and disinformation: Societal implications of synthetic media in elections. J Public Policy. 2020;40(4):510–29. https://doi.org/10.1017/S0143814X20000200

32       Bender EM, Gebru T, McMillan-Major A, Shmitchell S. On the dangers of stochastic parrots: Can language models be too big? In: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency; 2021. p. 610–623.

