Syed Sibghatullah Shah
Quaid-i-Azam University, Islamabad, Pakistan
Correspondence to: s.sibghats@eco.qau.edu.pk

Additional information
- Ethical approval: N/a
- Consent: N/a
- Funding: No industry funding
- Conflicts of interest: N/a
- Author contribution: Syed Sibghatullah Shah – Conceptualization, Writing – original draft, review and editing
- Guarantor: Syed Sibghatullah Shah
- Provenance and peer-review: Commissioned and externally peer-reviewed
- Data availability statement: N/a
Keywords: Gender bias in AI, Digital literacy, Women empowerment in technology, Inclusive AI design, AI workforce diversity.
Peer Review
Received: 10 November 2024
Revised: 25 December 2024
Accepted: 27 December 2024
Published: 8 January 2025
Abstract
Purpose: This narrative review investigates the interplay between gender bias in artificial intelligence (AI) systems and the potential of digital literacy to empower women in technology. By synthesising research from 2010 to 2024, the study examines how gender bias manifests in AI, its impact on women’s participation in technology, and the effectiveness of digital literacy initiatives in addressing these disparities.
Methods: A systematic literature search was conducted across major academic databases, including Web of Science, Scopus, IEEE Xplore, and Google Scholar. The review focused on peer-reviewed articles, reports, and case studies published between 2010 and 2024 that addressed gender bias in AI, women’s participation in technology, and digital literacy initiatives. A thematic analysis framework was employed to identify and synthesise recurring themes and patterns.
Results: The findings reveal systemic gender biases embedded in AI applications across diverse domains, such as recruitment, healthcare, and financial services. These biases stem from factors including the under-representation of women in AI development teams, biased training datasets, and algorithmic design choices. Digital literacy programs emerge as a promising intervention, fostering a critical awareness of AI bias, encouraging women to pursue AI careers, and catalysing growth in women-led AI projects.
Conclusions: Although gender bias in AI poses significant challenges, this review highlights digital literacy as a transformative tool for achieving gender equity in AI development and application. The study underscores the importance of inclusive AI design, gender-responsive education policies, and sustained research efforts to mitigate bias and promote equity.
Introduction
Artificial intelligence (AI) has emerged as one of the most transformative forces of the twenty-first century, reshaping industries, economies, and societies at an unprecedented pace.1 However, alongside the remarkable advancements in AI technologies, there are increasing concerns regarding the pervasive gender biases embedded in these systems and the broader implications for women’s participation and representation in the digital realm. This narrative review explores the complex interplay between gender bias in AI and the potential of digital literacy to empower women in this evolving technological landscape. Gender bias in AI is well-documented, manifesting in various ways, including biased algorithms, the under-representation of women in AI development teams, and the reinforcement of gender stereotypes within AI applications.2,3 These biases not only reflect existing societal inequalities but also risk amplifying them, embedding such disparities more deeply within our increasingly AI-driven world.
The under-representation of women in AI stems from several factors, including long-standing societal biases. Only 22% of AI professionals worldwide are women.4 This figure is alarming given how profoundly AI will shape future technology use. Without diverse development teams, AI systems can unintentionally reinforce gender stereotypes and biases, affecting a wide range of AI-driven applications and decision-making processes. Furthermore, gender bias in AI extends beyond workforce composition to the data and algorithms underpinning these technologies.5 AI systems trained on historically biased data can preserve and even worsen gender disparities. For instance, AI-based hiring tools have been shown to disadvantage women, and voice recognition systems often perform poorly on female voices.6
Digital literacy, which equips women with the skills and knowledge to engage critically with AI technologies, appears to be a promising way to reduce gender bias in AI. Digital literacy encompasses not only the ability to use technology but also the capacity to evaluate, critique, and engage with digital technologies such as AI systems.7,8 The purpose of this research is to identify the causes and manifestations of gender bias in AI systems, how this bias hinders women’s advancement in the tech industry, how digital literacy initiatives can help close this gap, and the most effective strategies for promoting gender parity in the development and application of AI. Preventing gender bias in AI systems is essential for an equitable society, given the growing significance of AI in many aspects of our lives, including work, education, healthcare, and interpersonal relationships.9–11 Moreover, as the global economy becomes increasingly digitalised, the ability to adapt to and work with AI technologies will become ever more important.12,13 This analysis focuses on how digitally literate women can advance in the field of AI, as part of a broader effort to close the gender gap in technology and ensure that everyone can benefit from the digital revolution. This review aims to address the following research questions:
- How do gender biases manifest within AI systems, and what are their primary sources?
- To what extent can digital literacy initiatives empower women to engage critically with AI technologies and mitigate gender disparities in the field?
This study aims to examine the origins and consequences of gender bias in AI systems. It also explores potential strategies for leveraging digital literacy to mitigate gender disparities and foster gender equity within the technology sector.
Methodology
This narrative review gathered literature on gender bias in AI and on digital literacy as a tool for women’s advancement. By combining evidence from different fields, the study examines the complex link between gender bias in AI and the potential of digital literacy to address it. A comprehensive understanding of gender bias in AI requires integrating findings from many domains and methodological approaches, which is why a narrative review was chosen. Compared with systematic reviews, narrative reviews are better suited to examining the intersection of society and technology because they allow for critical reflection and thematic synthesis. Previous authors support this approach, showing how narrative reviews can bring together important evidence from different fields.14–16 This method serves the study goals because it allows us to investigate the origins of gender bias in AI and to explore solutions involving digital literacy.
Search Strategy
A systematic and iterative search strategy was developed to identify and retrieve relevant literature on gender bias in AI and the role of digital literacy in addressing this bias. The search strategy was carefully designed and tested to ensure it captured a comprehensive and representative sample of the current literature base. Table 1 lists the databases and search terms.
Table 1: Search strategy.

| Databases | Search Terms | Grey Literature Sources |
| --- | --- | --- |
| Web of Science | “Artificial intelligence” AND (“gender bias” OR “women”) | UNESCO reports |
| Scopus | “Digital literacy” AND (“women” OR “empowerment”) | World Economic Forum white papers |
| IEEE Xplore | “AI ethics” AND “gender” | AI Now Institute publications |
| Google Scholar | “Women in tech” AND “AI” | OECD policy papers |
| ACM Digital Library | “Gender equality” AND “artificial intelligence” | Tech company diversity reports |
The search terms were derived from the study questions, key concepts, and an initial review of foundational articles in the field. A pilot search was conducted in selected databases (Web of Science, Scopus, and IEEE Xplore) to assess the relevance and comprehensiveness of the search terms, and the results guided subsequent refinements. Because the topic spans technology, education, and gender studies, the search terms were chosen to capture a wide range of perspectives. The following changes were made based on the pilot tests: broader terms such as “gender” were paired with narrower terms such as “bias” or “equality” to improve precision, and Boolean operators (AND, OR) and wildcards (*) were added to account for different phrasings and to broaden retrieval. Examples include: “artificial intelligence” AND (“gender bias” OR “women”); “digital literacy” AND (“empowerment” OR “inclusion”).
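The pilot-tested query strings above can also be assembled programmatically. The sketch below is a minimal illustration of that idea under stated assumptions: the `build_query` helper is hypothetical, and the strings shown are not the exact syntax submitted to each database interface.

```python
# Minimal sketch: assembling the Boolean query strings listed in Table 1.
# The build_query helper and the query layout are illustrative assumptions,
# not the exact strings used in each database's search interface.

def build_query(primary_term: str, secondary_terms: list[str]) -> str:
    """Combine one primary term with alternatives using AND/OR operators."""
    alternatives = " OR ".join(f'"{t}"' for t in secondary_terms)
    return f'"{primary_term}" AND ({alternatives})'

# Queries corresponding to the pilot-tested examples in the text.
queries = {
    "Web of Science": build_query("artificial intelligence", ["gender bias", "women"]),
    "Scopus": build_query("digital literacy", ["empowerment", "inclusion"]),
    "IEEE Xplore": '"AI ethics" AND "gender"',
    "Google Scholar": '"women in tech" AND "AI"',
    "ACM Digital Library": '"gender equality" AND "artificial intelligence"',
}

for database, query in queries.items():
    print(f"{database}: {query}")
```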
The cited materials were selected for their relevance to AI, gender studies, and digital literacy. The initial search terms were tested in each database to confirm that they returned appropriate results. Web of Science indexes large bodies of scholarly work and multidisciplinary studies; the search “artificial intelligence” AND (“gender bias” OR “women”) retrieved research largely concerned with the societal impacts of AI. Scopus covers literature on technology and education; the search “digital literacy” AND (“women” OR “empowerment”) was refined to exclude general digital-education topics unrelated to gender. IEEE Xplore was chosen for its focus on technical and engineering publications; “AI ethics” AND “gender” was refined over several iterations to eliminate irrelevant technical discussions. Google Scholar was used to locate grey literature and other material; “women in tech” AND “AI” was piloted, with results mostly concerning employment trends and gender differences. The ACM Digital Library is a specialised resource for computer science and human–computer interaction research; “gender equality” AND “artificial intelligence” was adjusted to capture studies on diversity in AI.
Reflective Search Terms
Several steps were taken to ensure that the search terms reflected the current body of literature. The initial pilot search results were reviewed for relevance and completeness, and the terms were refined accordingly. The final set of search terms was checked against both well-known seminal works and more recent studies in the field, ensuring that the terms captured key texts without omitting important areas. Terms were chosen to cover alternative phrasings (e.g., “AI ethics” vs. “ethics in AI”) and overlapping concepts (e.g., “gender” with “bias,” “equality,” and “inclusion”). Because each database has its own indexing rules and subject focus, the search strings varied by database while the core concepts remained constant. For example, IEEE Xplore was searched with “AI ethics” AND “gender” to retrieve technical papers on algorithmic bias, whereas Scopus was searched with terms such as “empowerment” and “digital literacy” to retrieve social science articles. These database-specific strings ensured that the approach played to each database’s strengths while keeping the underlying concepts consistent. A full record of the search process was kept, including the reasoning behind term selection and the results of the pilot tests.
The initial search yielded 300 sources, of which 107 were retained after removing duplicates and applying the inclusion and exclusion criteria. This structured approach ensures that the selected literature accurately reflects the current state of research on gender bias in AI and digital literacy. Boolean operators and wildcards were used to capture variant terminology, and a “snowballing” method, in which the reference lists of key papers were screened, was used to identify additional relevant sources.
Inclusion and Exclusion Criteria
The criteria for extracting literature on AI and gender are outlined in Table 2.
Table 2: Criteria for extracting literature on AI and gender.

| Criteria | Details |
| --- | --- |
| Inclusion Criteria | Peer-reviewed articles published between 2010 and 2024 |
| | Reports and white papers from recognised institutions |
| | Case studies of digital literacy initiatives |
| | Theoretical and empirical studies on gender bias in AI |
| | Publications in English or with available English translations |
| Exclusion Criteria | Publications before 2010, unless seminal works |
| | Non-English-language publications without available translations |
| | Opinion pieces or editorials without substantial evidence or analysis |
| Search Results | The initial search yielded 300 potential sources. |
| Final Selection | After applying inclusion/exclusion criteria and removing duplicates, 107 sources were selected for review. |
Analysis Approach
We employed thematic analysis because it allows us to identify commonalities and recurring patterns across diverse types of literature.
Thematic Analysis
Thematic analysis was the primary method for identifying, organising, and interpreting patterns and themes in the selected works. Following Braun and Clarke’s17 six-step approach, this process allowed a structured and thorough investigation of gender bias in AI and of how digital literacy can help reduce it. The method was well suited to synthesising results from different fields because it offers flexibility while remaining methodologically rigorous.
Steps of Thematic Analysis
Familiarisation and Initial Coding: All 107 selected works were read in full to build an understanding of the topics covered and to identify recurring themes. Articles were coded individually to create an initial coding scheme, accounting for overlap with grey literature. Codes were generated inductively, attending to both manifest and latent content. For example, a manifest code was “under-representation of women on AI teams,” while a latent code was “structural barriers to diversity in technology fields.”
Codebook Development: Initial codes were compared for similarities and differences and, after deliberation, consolidated into a single codebook. This ensured that all sources were coded consistently.
Comprehensive Coding: All selected sources (n = 107) were coded against the codebook in NVivo qualitative data analysis software. NVivo facilitated the organisation and retrieval of coded data, making the analysis efficient.
Theme Review and Refinement: Codes were grouped into broader themes reflecting trends in the data. For example, one theme concerned the systemic causes of gender bias in AI, such as skewed training datasets and male-dominated development teams; another concerned how digital literacy empowers women in AI, for example by improving job readiness and encouraging critical thinking.
Theme Naming and Definition: Themes were reviewed to ensure both internal homogeneity (coherence within a theme) and external heterogeneity (clear distinction between themes). Each theme was given a clear definition, and quotations were selected to illustrate key points, for example: “AI hiring algorithms tend to repeat historical biases, making it harder for women to get hired” (Figure 1).

The thematic analysis revealed several main themes that are important for understanding gender bias in AI and how digital literacy can help address it. One key theme was the structural roots of gender bias in AI: women are under-represented in AI development teams, and biased training datasets reinforce harmful stereotypes. These structural problems embed bias in AI systems, reflecting and amplifying societal inequalities. Another important theme was the ways bias manifests in AI systems, including gender-biased outcomes in areas such as hiring, healthcare, and financial algorithms, where AI applications often widen existing disparities. For example, hiring algorithms tend to favour men for technical roles, and healthcare algorithms may underweight women’s health indicators, compounding inequity.
A third theme concerned digital literacy as a source of empowerment. Digital literacy can be transformative, helping women identify and challenge flaws in AI systems. Digital literacy programs not only foster critical thinking but also make it easier for women to enter and succeed in AI-related fields, and their successes show they can support women’s careers in AI and make the technology sector fairer and more inclusive. Together, these themes allowed the findings to be synthesised into actionable strategies for supporting gender equality in AI. The study underscores the importance of coordinated efforts to make AI development and use more gender-equal by addressing structural inequality, reducing systemic bias, and empowering women through targeted interventions.
Quality Control and Reflexivity
Several quality control measures were used to ensure the accuracy of the analysis. All 107 articles were coded independently, and the review’s search method, analysis, and inclusion and exclusion decisions were carefully documented. Awareness of our positionality and ongoing reflexive practice were intended to improve the reliability of the results. All articles were included in the inter-rater reliability assessment, ensuring the coding system was applied to the full dataset and strengthening the rigour of the thematic analysis. Cohen’s kappa coefficient, a measure of inter-rater agreement, was used to assess the consistency of coding; the resulting coefficient of 0.82 indicates a high level of agreement and suggests the coding process was consistent and trustworthy. Disagreements identified during independent coding were resolved through discussion, leading to iterative refinements of the codebook that made it clearer and more applicable across a wide range of studies.
Applying the inter-rater reliability assessment to all 107 articles, rather than a subset, removed the possibility of bias or under-representation and guaranteed that all the examined literature was incorporated into the thematic analysis. As part of the analysis, the researchers reflected on their own perspectives and potential biases throughout the study. In addition, preliminary themes were presented to an external panel of experts in AI ethics, gender studies, and digital literacy; their feedback highlighted details that might otherwise have been missed and ensured the themes were meaningful in both academic and real-world settings. Together, team discussions, external validation, positionality statements, coding of all articles, and a Cohen’s kappa coefficient of 0.82 ensured rigour and transparency.
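To illustrate the agreement statistic reported above, the sketch below computes Cohen’s kappa for two coders’ labels. The label vectors and the use of scikit-learn are illustrative assumptions, not the study’s actual coding data.

```python
# Minimal sketch: computing Cohen's kappa for two coders, assuming each
# article-level code assignment is reduced to a categorical label.
# The example labels below are invented for illustration only.
from sklearn.metrics import cohen_kappa_score

coder_a = ["structural_bias", "digital_literacy", "structural_bias",
           "manifestation", "digital_literacy", "manifestation"]
coder_b = ["structural_bias", "digital_literacy", "manifestation",
           "manifestation", "digital_literacy", "manifestation"]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa: {kappa:.2f}")  # values above 0.80 indicate strong agreement
```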
Results
Origins of Gender Bias in AI
Gender bias in AI is linked to several distinct factors, each of which compounds the problem in its own way:
Under-representation of Women in AI Development Teams
A principal reason AI systems disadvantage women is the under-representation of women in the teams that build them. The World Economic Forum18 reports that only 22% of AI professionals are women and that women hold only 14% of senior AI positions. As a result, women’s needs, experiences, and perspectives are not always considered when AI is developed. Teams with members from a range of backgrounds are better placed to anticipate how AI systems might affect different groups of people.
Bias in AI Training Data
AI systems typically learn from very large historical and contemporary datasets, and this data often carries societal biases. Seventy-eight per cent of the AI training datasets examined showed large gender imbalances, with roughly three times as many data points relating to men as to women.19,20 Such imbalances can make AI systems less accurate, or biased, when handling situations or inputs involving women. For instance, language models trained on biased data may associate men with leadership and well-paid occupations while associating women with supporting or domestic tasks.21,22
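A simple way to surface the kind of imbalance described above is to count gendered records in a training set before any model is trained. The sketch below is a minimal, assumed workflow with invented column names and toy data; it is not an audit performed in this review.

```python
# Minimal sketch: checking the gender balance of a training dataset
# before model training. Column names and data are illustrative only.
import pandas as pd

train_df = pd.DataFrame({
    "gender": ["male"] * 6 + ["female"] * 2,   # toy 3:1 imbalance
    "label":  [1, 0, 1, 1, 0, 1, 0, 1],
})

counts = train_df["gender"].value_counts()
ratio = counts.get("male", 0) / max(counts.get("female", 0), 1)
print(counts.to_dict())                       # {'male': 6, 'female': 2}
print(f"male-to-female ratio: {ratio:.1f}")   # flags the 3:1 skew noted in the text
```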
Algorithmic Design Choices
Even when systems appear neutral, gender bias can be unintentionally introduced or exacerbated by algorithmic design choices.23 For example, the way optimisation criteria are designed and selected in recommendation algorithms can produce skewed results. According to studies by Trauth24 and Chen et al.,25 criteria intended to maximise engagement in recommendation systems often produced content suggestions based on gender stereotypes, reinforcing traditional gender roles. Specifically, these systems tended to suggest technical and leadership-related content to male users while suggesting household and caregiving content to female users. Such biases usually arise from implicit assumptions or oversights during the design process, underscoring the importance of carefully reviewing and testing algorithms for potential biases.
According to Pal et al.18 and the World Economic Forum,26 men consistently outnumber women in AI roles. The gender gap is narrowest in the nonprofit sector, where women make up 37% of the AI workforce and men 63%. The gap is also comparatively moderate in fields such as healthcare (26% women vs. 74% men) and education (25% women vs. 75% men), but it is wide in sectors such as manufacturing and energy and mining, where women hold only 15% and 18% of AI jobs, respectively.18 This imbalance underscores the importance of efforts to close the gender gap in AI, especially in technical fields that have traditionally been dominated by men.
Manifestations of Gender Bias in AI Applications
There are numerous instances of gender bias in AI, and each one has its unique impact on individuals. Some examples that are widely recognised are presented in Table 3.
Table 3: Manifestation of gender bias.

| AI Application | Manifestation of Gender Bias |
| --- | --- |
| Voice Assistants | Defaulting to female voices for subservient roles |
| Image Recognition | Associating women with domestic activities |
| Resume Screening | Favouring male candidates for technical or leadership positions |
| Language Models | Associating high-paying professions with male pronouns |

Source: Own elaboration
Voice Assistants
Voice assistants such as Siri, Google Assistant, and Alexa often ship with female voices by default. By assigning female voices to an implicitly “subservient” role, this choice reinforces traditional gender roles.27,28 Studies have shown that users prefer female voices in these assistant roles,29 which may reflect society’s tendency to associate women with caregiving and support. This design choice reinforces the idea that women are suited to helping roles rather than leadership roles.30 Most systems allow the assistant’s voice to be changed, but female remains the default and most common setting, perpetuating these stereotypes.
Image Recognition
Image recognition algorithms that are biased against women can produce troubling misclassifications and assumptions.31 Some image recognition systems have been shown to associate images of women more frequently with housework or caregiving, while images of men are more often associated with professional or outdoor activities. This bias likely stems from training datasets containing more images of women in domestic settings and men in workplaces, reinforcing traditional gender roles.32 As a result, these biases can affect how images are labelled or categorised, altering the outputs of downstream systems that rely on image recognition, such as content moderation tools or advertising algorithms.
Resume Screening
AI-based resume-screening tools have been shown to favour men for technical and leadership roles, even when qualified women apply.33 This bias often arises because the training data reflects historical gender gaps in particular skills or occupations. A resume-screening tool trained on resumes from past candidates who were mostly men may learn to favour keywords or patterns associated with male applicants,34 making it harder for women to be shortlisted. This illustrates the danger of training AI systems on data that encodes unfair and biased behaviour from the past: such systems can entrench disparities rather than reduce them.
Language Models
When used for tasks such as customer support or text generation, language models such as GPT-3 and BERT frequently produce outputs that favour one gender over the other.35,36 For example, when asked to complete sentences about well-paid professions, they are more likely to pair male pronouns with titles such as “doctor” or “CEO” and female pronouns with roles such as “nurse” or “teacher.”37 The data on which these models are trained frequently reflects societal biases and norms, and by reproducing these stereotypes the models further distort gender roles.
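As an illustration of how such associations can be probed, the sketch below queries a masked language model for pronoun completions. The specific prompts, the model choice (bert-base-uncased), and the Hugging Face transformers pipeline are illustrative assumptions rather than the methods used in the cited studies.

```python
# Minimal sketch: probing a masked language model for gendered pronoun
# completions. Prompts and model choice are illustrative assumptions only.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

prompts = [
    "The doctor said that [MASK] would review the results.",
    "The nurse said that [MASK] would review the results.",
]

for prompt in prompts:
    predictions = unmasker(prompt, top_k=5)
    # Keep only pronoun completions to compare their relative scores.
    pronoun_scores = {p["token_str"]: round(p["score"], 3)
                      for p in predictions if p["token_str"] in {"he", "she"}}
    print(prompt, "->", pronoun_scores)
```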
Prevalence of Gender Bias in Different AI Applications
Thompson et al.38 examined the prevalence of gender bias across different kinds of AI applications. Table 4 lists the percentage of systems exhibiting gender bias in each domain:
Table 4: AI application systems exhibiting gender bias.

| AI Application | Percentage of Systems Exhibiting Gender Bias |
| --- | --- |
| Recruitment | 68% |
| Healthcare | 57% |
| Financial Services | 52% |
| Voice Assistants | 73% |
| Image Recognition | 61% |

Source: Own elaboration
These statistics show how widespread gender bias in AI is: substantial bias is found in voice assistants, recruitment tools, and image recognition systems. According to the World Economic Forum,26 men account for 66% of professionals with skills such as deep learning and artificial neural networks. Skills in computer vision (67%), neural networks (70%), and Apache Spark (74%) are also held mostly by men, and the gap is even wider in machine learning and pattern recognition, where men make up 85% and 98% of experts, respectively. Because advanced AI skills are so unevenly distributed, programs that teach women specialised skills in areas such as machine learning and neural networks could be critical to closing the AI gender gap. Eliminating gender bias in AI requires more diverse AI teams, datasets that represent all genders, and thorough testing of algorithms for bias.
Synthesis and Critical Analysis
Gender bias in AI is a systemic issue arising from social structures, historical biases, and technical decisions made during development. Bias appears at many points, from the composition of development teams to the data used for training and the design choices made along the way.39 When women are under-represented on AI teams, there are fewer perspectives available to detect and correct gender biases.40,41 Similarly, historical data reflecting gender inequalities trains AI systems to perpetuate those biases,5 sustaining a loop that does not merely carry past injustices forward but amplifies them. Together, these factors produce unfair outcomes across many contexts, showing that fundamental changes are needed in how AI systems are created, trained, and tested if they are to produce fair results. In real-world use, AI not only mirrors but can also exacerbate existing social problems such as gender bias.42 Ironically, AI’s grounding in data can sustain harmful stereotypes while appearing neutral.43 Recommendation systems optimised for “engagement” may surface more gender-stereotypical material and steer users into particular roles, and defaulting voice assistants to female voices reinforces the old notion that women belong in supporting roles. These examples expose the limits of purely data-centric approaches and underline the need for systemic changes, such as ethical and diverse oversight, balanced datasets, and rigorous algorithm testing, so that AI systems are not only technically strong but also socially responsible and inclusive.
Impact of Gender Bias in AI on Women’s Participation in Technology
Gender bias in AI sets off a difficult cycle that makes it harder for women to enter and advance in technology and shapes how women perceive the field. The cycle reinforces the perception that AI is predominantly a male domain, making women less likely to start or progress in technology-related careers. According to Thormundsson,44 the relative AI skill penetration rate by gender across various countries highlights disparities between male and female representation in AI skills. India shows the largest gender gap, with the highest male skill penetration rate (2.78) compared with females (1.65). The United States follows, with male penetration at 2.21 and female at 1.23. Other countries, such as Germany, Israel, and Canada, show similar trends, with men significantly outnumbering women in AI skill penetration. In every country shown, males have a higher rate of AI skill penetration than females, indicating a global gender gap in AI expertise. The smallest gaps are observed in countries such as Australia and the United Arab Emirates, but even there males hold a higher skill rate. This highlights the need for targeted initiatives to encourage and support women in developing AI skills globally and so reduce this disparity.
Lack of Role Models and Stereotype Reinforcement
Another major effect of gender bias in AI is that it reinforces the perception that technology and AI are male-dominated fields. With few visible female role models in AI, young women may be discouraged from entering what appears to be an unwelcoming environment. Research by Priyadarshini and Priyadarshini45 found that the use of gender-biased AI in schools was linked to a drop in the number of women aspiring to STEM degrees. When AI programs reproduce gender biases in educational settings, they can dampen women’s ambitions and reduce the number of talented women entering the field.46,47
Stereotype Threat and Performance Impact
Stereotype threat, the fear of confirming negative stereotypes about one’s social group,48 can substantially affect women’s performance in AI and technology and the careers they choose. According to research by Hussien et al.,49 AI-assisted performance review systems were 23% less likely to recommend women for senior positions than reviews conducted by humans alone. This gap shows how AI can reinforce stereotypes and obstruct women’s path to leadership. When biased algorithms are used to evaluate work, they can reduce women’s chances of promotion and feed the perception that men are better suited to leadership in technology.
Workplace Culture and Attrition
The tech industry has long struggled to create environments where everyone feels welcome. Women working in unwelcoming environments tend to leave more often,50,51 especially as they advance in their careers. Although women have long worked in tech roles at many levels, attrition has been greatest in senior technology positions.52 Bias and cultural problems accumulate over time and have a significant impact when large numbers of senior women leave their jobs.53 These issues can worsen when AI systems are biased, making workplaces less welcoming, widening the gender gap in tech, and sustaining the high rate at which women leave. Because women cannot advance as quickly as men, they may also have less access to the networks, tools, and support needed to start their own ventures, and their inclination to found tech businesses has declined.54
Synthesis and Critical Analysis
Women face bias at every stage of their careers in tech, from education and entry-level work to leadership and entrepreneurship.55 By reinforcing stereotypes and fostering unwelcoming environments, AI systems contribute to a cycle of exclusion that makes the field less diverse. The shortage of women in tech in turn strengthens biases in AI and slows the progress and innovation that a more diverse workforce could bring; these biases accumulate across each stage of attrition. Improving algorithms and diversifying training data alone is not enough; there must also be shifts in education, company culture, and career advancement opportunities. Companies need to fix the flaws built into AI tools that affect employment, such as performance review and recommendation systems.56 Ending the cycle of under-representation also requires coordinated efforts to cultivate role models, mentorship programs, and workplaces where everyone feels welcome. Without such steps, AI biases could continue to widen gender inequality in the tech industry, slowing growth and making the field less open to everyone.
Digital Literacy as an Empowerment Tool
By teaching women to think critically about technology, digital literacy programs can help close the gender gap in AI.57 Through these programs, women gain skills that build self-confidence, sharpen critical thinking, and open up job opportunities. Programs that teach women how to use technology and AI can significantly increase their confidence to work in those fields.40 Such programs empower women by giving them a stronger voice in discussions and decisions about AI development. The confidence boost can also spread: women who can question and critique AI systems may inspire others to think more critically about technology, making the field of AI more open and diverse for everyone.
Digital literacy programs are important because they teach women to think critically, enabling them to identify flaws in AI systems and push for fixes.58 By teaching women how AI technologies are built and how they work, these programs create more informed users who can advocate for fairer AI practices. When women can scrutinise AI systems closely, they can spot troubling patterns, such as gender stereotypes embedded in AI outputs, and press for improvements.59,60 Equipped with these skills, women can hold AI developers accountable and push for fairer, more equal AI systems. Digital literacy programs can also help women secure their first technology jobs: women who complete programs such as AI4ALL are more likely to move into AI-related roles afterwards.61 Figure 2 shows how gender bias can manifest across AI and digital skills. Societal gender inequality is the starting point that shapes bias in AI systems.62 Because too few women work in AI, systems lack diverse perspectives, which amplifies these biases. Addressing this problem requires both digital literacy programs and inclusive AI design practices.

Strategies for Promoting Gender Equality in AI
Several approaches have shown promise in making AI fairer across genders. These approaches address both the technical and the social sides of AI development to make AI systems and environments more inclusive. More diverse development teams using inclusive AI design methods have been shown to reduce gender bias.63–65 Prioritising gender-sensitive STEM education in policy has a substantial impact on the number of women working in AI.66 Mentorship programs that offer women in AI and tech guidance and encouragement are another effective support. Role models, such as successful women in AI, can help dispel preconceptions and encourage more women to pursue careers in the field, and mentoring programs give women in male-dominated areas the support they need to overcome barriers and build successful careers in AI.
Example of AI4ALL
The nonprofit organisation AI4ALL illustrates how targeted digital learning programs can make AI more diverse and inclusive.67 Through mentorship, summer programs at leading universities, and continued support, AI4ALL gives high school students, especially those from under-represented groups, hands-on experience with AI. Its outcomes are strong: 78% of participants plan to continue their education in STEM fields, and 91% report increased interest in AI careers.68 The program also builds a support network for young women in AI, further empowering them. Programs that teach women specific digital skills can attract more women to AI and equip them with the skills they will need for future jobs in the field.
Figure 3 presents a word cloud synthesising the main points discussed regarding gender bias in AI and the role of digital literacy in empowering women. The emphasis is on tackling gender inequities in technology and on the potential of digital literacy to foster diversity. Key terms such as “gender,” “AI,” “digital,” and “empowerment” are prominent. Terms such as “inclusivity,” “representation,” “bias,” and “diversity” underline the need for an AI environment that is fair across genders and perspectives. The terms “skill,” “impact,” “technology,” and “role” point to the importance of programs designed specifically for women, helping them build the skills needed to contribute to AI systems. Terms such as “transparency,” “ethical,” and “policy” suggest a broader approach to creating a fair AI environment in which ethical and policy concerns play a central role.

Critical Analysis
This study’s findings highlight the multifaceted nature and multiple root causes of gender bias in AI. These include the fact that women are under-represented in AI development teams, biased training datasets, and algorithmic design choices that reinforce negative stereotypes. These fundamental issues not only make it harder for women to work in tech, but they also build unfairness into AI systems that affect important areas like hiring, healthcare, and finances. For instance, biased hiring tools continue to hurt women by repeating patterns of exclusion from the past.33 Similarly, healthcare algorithms often do not take gender-specific health factors into account, which makes differences in medical care worse.69 Previous research has also shown that gender needs to be taken into account when designing and using AI systems.70,71
Global Applicability Versus Regional Specificity
Approaches to fixing gender bias in AI work differently in different parts of the world. Cultural, economic, and physical conditions differ for men and women in ways that affect how they access and engage with technology.72 Designing gender equality strategies that work across a range of settings requires awareness of these differences. In high-income countries, gender bias in AI is often driven by unbalanced workforces and biased algorithms.73,74 These countries usually have better technological infrastructure, more educational opportunities for women, particularly in STEM fields, and widespread AI adoption. Systemic problems remain, however, such as the scarcity of women in leadership positions and on AI development teams. Growing numbers of companies and governments in high-income countries use tools such as IBM’s AI Fairness 360 to detect and mitigate bias in algorithms, and AI systems must remain transparent and accountable to humans.75 To make workplaces more welcoming, tech companies have introduced hiring quotas, diversity targets, and employee support groups.76 Although these measures have helped, achieving full equality remains difficult in high-income countries: gender stereotypes persist in tech cultures, women remain under-represented in senior roles, and biases embedded in historical training data continue to produce unfair outcomes, requiring ongoing auditing and updating.
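To illustrate the kind of bias audit such toolkits support, the sketch below uses IBM’s open-source AI Fairness 360 library to compute two group fairness metrics on a toy hiring dataset. The dataset, column names, and group encodings are invented for illustration and do not come from this review.

```python
# Minimal sketch: auditing a toy hiring dataset for gender disparity with
# IBM's AI Fairness 360. Data and encodings are illustrative assumptions
# (gender: 1 = male, 0 = female; hired: 1 = favourable outcome).
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

df = pd.DataFrame({
    "gender": [1, 1, 1, 1, 0, 0, 0, 0],
    "years_experience": [5, 3, 7, 2, 6, 4, 8, 3],
    "hired": [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["gender"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"gender": 1}],
    unprivileged_groups=[{"gender": 0}],
)

# A disparate impact below 0.8 is a common warning threshold for adverse impact.
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())
```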
Low- and Middle-Income Countries (LMICs)
Women in LMICs, on the other hand, often face deeper problems, such as limited access to schooling, technology, and finance tools.77 Adding to these problems are deeply ingrained cultural norms that make women less likely to work in STEM areas.
Barriers in LMICs
In LMICs, especially in rural areas, many women do not have easy access to good schools. According to UNESCO (2023), girls in sub-Saharan Africa and South Asia are much less likely than boys to finish secondary school and even less likely to study STEM subjects.78,79 In LMICs, where the digital divide is wider, women are less likely than men to own devices, use digital tools, or connect to the internet.80 According to the GSMA Mobile Gender Gap Report, women in LMICs are 16% less likely than men to use mobile internet.81 Cultural norms often assign women responsibility for the home, limiting their time and opportunities to engage with technology or work in AI.62 These norms also discourage women from entering male-dominated fields, widening the gender gap in technology sectors.82
Addressing these problems requires digital literacy programs designed around the many challenges women in LMICs face. Important approaches include community-based training programs such as Intel’s She Will Connect, which have helped close the digital gap by giving women access to affordable devices, hands-on training, and ongoing support; these programs often include childcare so that mothers can take part.83 LMIC governments can adopt gender-sensitive education policies that encourage girls to pursue STEM subjects, especially in under-served areas, and, as Salmi and D’Addio84 note, subsidies and grants for underprivileged groups can remove financial barriers. Partnering with local NGOs and grassroots groups helps ensure that interventions are culturally sensitive and meet the specific needs of women in each community; for instance, Women in Tech Africa combines digital training with advocacy to challenge social norms and create safe spaces for women in technology.85 Although these programs show promise, scaling them up remains difficult: limited funding, weak infrastructure, and cultural resistance hinder widespread adoption. LMICs also need to address larger structural problems, such as economic inequality and political instability, to make it easier for women to work in AI.
Global Lessons and the Need for Regional Customisation
The differing experiences of high-income countries and LMICs show that there is no one-size-fits-all answer to gender bias in AI. Each region’s cultural, economic, and physical conditions must be taken into account when planning interventions. In high-income countries, for example, efforts can focus on refining advanced bias-mitigation techniques and fostering diverse leadership.86 In LMICs, foundational investments in education, infrastructure, and community support are needed before women can participate fully in AI. Cross-regional collaboration can facilitate knowledge-sharing and resource allocation, with high-income countries supporting LMICs through funding, technical assistance, and access to global networks.87 If the global AI community collaborates and shares innovations, women can be empowered across all areas of life, ensuring that technology is used to include rather than exclude.
The Importance of Intersectionality in Addressing Gender Bias
Intersectionality is essential for understanding and addressing the layered injustices faced by women from disadvantaged groups. An intersectional framework reveals how overlapping identities, such as race, disability, ethnicity, and location, combine with gender to create distinct problems.88 Such frameworks are important for designing successful interventions because they treat inequality as more complex than gender alone. In India and sub-Saharan Africa, for example, rural women face very different problems from women in urban areas of the same countries. According to Ghouse et al.,89 rural women often struggle to access education and technology and to pursue professional goals, because social norms expect them to place household duties above career ambitions. In these settings, digital literacy programs must do more than teach technical skills; they also need to address practical barriers such as reliable internet access, device costs, and the cultural stigma against women working in STEM fields.
Inclusive Design Practices as a Solution
Inclusive design practices are equally important for addressing gender bias in AI. They ensure that AI systems are built with diverse perspectives in mind, making it less likely that biases will be embedded or amplified. Inclusive design requires collaboration, accountability, and transparency at every stage of AI development. Bias arises when AI systems are trained on datasets that do not adequately represent the range of people those systems are meant to serve; building fair AI therefore requires gathering and curating diverse data about many different people, places, and contexts. Ethical oversight mechanisms are essential for detecting and correcting biases in the creation and deployment of AI systems. Companies can ensure ethical issues are considered during AI design by establishing ethics panels or advisory boards with broad membership, including women from under-represented groups.90,91 IBM’s AI Ethics Board, for example, provides guidance for creating fair and inclusive AI solutions, with a focus on avoiding outcomes that disadvantage particular groups.92 Transparency about how AI is built and how decisions are made is another key element of inclusive design. According to Nazer et al.,93 inclusive AI tools help developers identify and remove biases in datasets and algorithms, keeping AI systems aligned with societal morals and values.
Challenges and the Need for Global Collaboration
Although intersectional methods and inclusive design practices hold considerable promise, most of the work on them still takes place in high-income regions, leaving LMICs behind. LMICs struggle to adopt these practices because they lack the resources, infrastructure, and institutional support available in high-income countries. Companies such as Google and IBM are at the forefront of promoting inclusive design standards, but their work mostly benefits developed markets. LMICs often lack the funding or technology to replicate these programs, so AI systems do not always meet the needs of disadvantaged groups.77 Bridging this divide requires global collaboration to bring inclusive design practices to places that do not yet have them. By providing information, funding, and technical expertise, high-income nations and multinational corporations can help LMICs identify effective solutions to their problems.
Future Directions
Addressing intersectional problems and supporting inclusive design requires collaboration among many stakeholders, including policymakers, educators, AI developers, and community groups. Governments should strengthen ethical oversight and require the use of diverse data so that AI is built ethically. Systemic change becomes possible when regulations push companies to hire diverse teams and give people from under-represented groups a voice in decision-making. LMICs need to invest in infrastructure, training programs, and community-based projects to adopt inclusive design.94 International and local organisations can work together to ensure these efforts are timely and sustainable. Digital literacy programs for women should adopt intersectional methods so that different groups of women receive support for the particular issues they face; interventions work best when they combine expert training, advocacy, and support services.
Intersectionality and inclusive design are needed both to remedy the unfair treatment of already marginalised women and to build AI systems that are fair, equitable, and inclusive. By prioritising diversity and inclusion, stakeholders can ensure that AI systems are tools that help people rather than mechanisms that perpetuate inequality. The findings of this study suggest that gender bias in AI must be tackled at a deeper level, through digital literacy programs as well as structural changes in technology, education, and policy. A multipronged strategy is needed that is both universally applicable and locally specific, so that it can be applied across different social and economic contexts.
Further exploration is also needed into how digital literacy programs can be adapted to support women facing different challenges in different places. In high-income countries with better technological infrastructure, the priority may be building AI teams that include women, given the need for greater algorithmic accountability.95 Conversely, in LMICs, digital literacy programs must address foundational barriers, such as limited access to technology, entrenched cultural norms, and economic constraints, which significantly affect women’s involvement in STEM fields.96 Women from disadvantaged groups often face multiple, overlapping forms of disadvantage. At the local level, groups such as Women in Tech Africa teach women digital skills while also advocating for and supporting their communities, empowering women from under-represented groups.97
Longitudinal studies are needed to determine how digital literacy programs affect women’s long-term interest and success in AI work. One type of study could track how participation in skills-training or mentorship programs changes job prospects over five to ten years; graduates of targeted programs such as Girls Who Code, for example, are much more likely to move into technology jobs. The efficacy of legislation aimed at eliminating gender bias should also be thoroughly investigated. Ensuring that algorithms are free of bias, recruiting people of colour into AI fields, and revising curricula to take gender into account are all critical steps. Quotas for women in tech companies in Nordic countries have not only increased the number of women employed but have also been shown to improve performance and spark innovation.98 It is also worth exploring how AI itself can support gender equality: AI-powered tools can analyse recruitment practices for bias, provide unbiased feedback to job applicants, or detect gender disparities in workplace data.49
Research consistently shows that teams with diverse perspectives produce fairer and more creative results.99 Companies need to actively recruit women and people from under-represented groups into AI roles and build workplace cultures that help them stay and advance. Regular auditing of algorithms and data is essential for finding and reducing bias. Dataset reweighting, adversarial debiasing, and fairness-aware machine learning models are increasingly popular proactive techniques for building inclusive AI systems.100 For instance, researchers have built models that adjust the weights of training samples to remove biased associations in language tasks.101 All stakeholders can monitor AI development to ensure that it benefits everyone. Addressing gender bias in AI promptly is more necessary than ever, both to advance fairness and equality and to make AI systems more accurate, useful, and ethical. By continuing to share ideas and experiences, AI practitioners can work towards a future in which technology serves everyone, regardless of gender.
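As an illustration of the dataset-reweighting idea mentioned above, the sketch below computes group-outcome sample weights in the spirit of Kamiran and Calders’ reweighing scheme. The toy data, column names, and use of pandas are illustrative assumptions rather than a method described in this review.

```python
# Minimal sketch: reweighting training samples so that gender and a
# favourable outcome become statistically independent. Toy data only.
import pandas as pd

df = pd.DataFrame({
    "gender": ["M", "M", "M", "M", "F", "F", "F", "F"],
    "hired":  [1,   1,   1,   0,   1,   0,   0,   0],
})

n = len(df)
# Weight = expected proportion under independence / observed proportion
# of each (gender, outcome) combination.
p_gender = df["gender"].value_counts(normalize=True)
p_hired = df["hired"].value_counts(normalize=True)
p_joint = df.groupby(["gender", "hired"]).size() / n

df["sample_weight"] = df.apply(
    lambda row: (p_gender[row["gender"]] * p_hired[row["hired"]])
    / p_joint[(row["gender"], row["hired"])],
    axis=1,
)

print(df)  # under-represented combinations (e.g. hired women) get weight > 1
```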
Critical Evaluation of Limitations
First, the data sources used in the thematic analysis came mostly from English-language publications, meaning critical perspectives from regions where English is not the main language may have been missed. This linguistic bias may limit the applicability of the results to other parts of the world, especially where research and everyday life are conducted in local languages. The study also contends that, although digital literacy is a necessary step, such programs alone cannot fix the systemic unfairness that exists in AI. Digital literacy initiatives need to be part of a broader strategy encompassing fair hiring, supportive workplaces, and policies that counter gender stereotypes in and beyond education. Studies in Western contexts have shown that mentorship programs can increase women’s interest in AI; however, to overcome systemic barriers, similar programs in LMICs often need additional components, such as financial support and community outreach.
Conclusion
This review has examined gender bias in AI and how women can advance in this rapidly changing field, showing how digital literacy could help address these problems and open up more opportunities for women in AI. First, we explored how gender bias in AI arises from biased training data, the under-representation of women on development teams, and algorithmic design choices that can exclude women. Such biases make it harder for women to work in technology, and we showed how they feed into a cycle that makes women less likely to enter AI or to advance in their current roles. We then examined digital literacy programs aimed at closing the gender gap in AI. Finally, we identified promising strategies for encouraging women’s equal participation in shaping AI, including inclusive design, gender-responsive education policies, and digital skills programs that help women enter the field.
As AI continues to reshape our world, it must support gender equality. By teaching women digital skills and correcting the biases built into the AI development process, we can work towards a fairer, more inclusive AI future that benefits everyone. Although this review points to promising paths, more research is needed to understand the long-term effects of these interventions. Longitudinal studies could show how digital literacy programs perform over time, and economic analyses could clarify their broader costs and benefits. Future research should investigate the effects of digital literacy programs on women’s engagement and achievement in AI, as well as intersectional approaches that examine how gender bias in AI intersects with other forms of prejudice. Another promising direction is exploring how AI itself could be used to support gender equality rather than reinforce societal biases. In conclusion, tackling gender bias in AI through digital literacy and empowerment programs is not only the right thing to do for fairness and equality but also essential for building AI systems that are truly accurate, ethical, and beneficial to everyone. Challenges remain, but the findings show that targeted interventions can make a significant difference in promoting gender equality in AI.
References
1 Uctu R, Tuluce NS, Aykac M. Creative destruction and artificial intelligence: the transformation of industries during the sixth wave. J Econ Technol. 2024;2:296-309.
https://doi.org/10.1016/j.ject.2024.09.004
2 Shrestha S, Das S. Exploring gender biases in ML and AI academic research through systematic literature review. Front Artific Intel. 2022;5:976838.
https://doi.org/10.3389/frai.2022.976838
3 Yarger L, Cobb Payton F, Neupane B. Algorithmic equity in the hiring of underrepresented IT job candidates. Online Inform Rev. 2020;44(2):383-95.
https://doi.org/10.1108/OIR-10-2018-0334
4 Thakkar D, Kumar N, Sambasivan N. Towards an AI-powered future that works for vocational workers. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems 2020 (pp. 1-13).
https://doi.org/10.1145/3313831.3376674
5 O’Connor S, Liu H. Gender bias perpetuation and mitigation in AI technologies: challenges and opportunities. AI Soc. 2024;39(4):2045-57.
https://doi.org/10.1007/s00146-023-01675-4
6 Franzoni V. Gender differences and bias in artificial intelligence. In Gender in AI and robotics: the gender challenges from an interdisciplinary perspective 2023 (pp. 27-43). Springer International Publishing.
https://doi.org/10.1007/978-3-031-21606-0_2
7 Buckingham D. Defining digital literacy-What do young people need to know about digital media?. Nord J Digit Liter. 2015;10(Jubileumsnummer):21-35.
https://doi.org/10.18261/ISSN1891-943X-2015-Jubileumsnummer-03
8 Haider J, Sundin O. Information literacy challenges in digital culture: conflicting engagements of trust and doubt. Inform Commun Soc. 2022;25(8):1176-91.
https://doi.org/10.1080/1369118X.2020.1851389
9 Habbal A, Ali MK, Abuzaraida MA. Artificial Intelligence Trust, risk and security management (AI trism): frameworks, applications, challenges and future research directions. Exp Syst Appl. 2024;240:122442.
https://doi.org/10.1016/j.eswa.2023.122442
10 Trajtenberg M. AI as the next GPT: a political-economy perspective. National Bureau of Economic Research; 2018.
https://doi.org/10.3386/w24245
11 Schlicht L, Räker M. A context-specific analysis of ethical principles relevant for AI-assisted decision-making in health care. AI Ethics. 2024;4(4):1251-63.
https://doi.org/10.1007/s43681-023-00324-2
12 Imamov M, Semenikhina N. The impact of the digital revolution on the global economy. Linguist Cult Rev. 2021:968-87.
https://doi.org/10.21744/lingcure.v5nS4.1775
13 Webster C, Ivanov S. Robotics, artificial intelligence, and the evolving nature of work. Springer International Publishing; 2020.
https://doi.org/10.1007/978-3-030-08277-2_8
14 Siddaway AP, Wood AM, Hedges LV. How to do a systematic review: A best practice guide for conducting and reporting narrative reviews, meta-analyses, and meta-syntheses. Ann Rev Psychol. 2019;70(1):747-70.
https://doi.org/10.1146/annurev-psych-010418-102803
15 Baumeister RF, Leary MR. Writing narrative literature reviews. Rev Gen Psychol. 1997;1(3):311-20.
https://doi.org/10.1037/1089-2680.1.3.311
16 Greenhalgh T, Thorne S, Malterud K. Time to challenge the spurious hierarchy of systematic over narrative reviews?. Eur J Clin Invest. 2018;48(6):e12931.
https://doi.org/10.1111/eci.12931
17 Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3(2):77-101.
https://doi.org/10.1191/1478088706qp063oa
18 Pal KK, Piaget K, Zahidi S. Global Gender Gap Report 2024. World Economic Forum; 2024 [cited 2024 Nov 10]. Available from: https://www.weforum.org/publications/global-gender-gap-report-2024/
19 Jones JJ, Amin MR, Kim J, Skiena S. Stereotypical gender associations in language have decreased over time. Sociol Sci. 2020;7:1-35.
https://doi.org/10.15195/v7.a1
20 Shen T, Li J, Bouadjenek MR, Mai Z, Sanner S. Towards understanding and mitigating unintended biases in language model-driven conversational recommendation. Inform Process Manag. 2023;60(1):103139.
https://doi.org/10.1016/j.ipm.2022.103139
21 Schramowski P, Turan C, Andersen N, Rothkopf CA, Kersting K. Large pre-trained language models contain human-like biases of what is right and wrong to do. Nat Mach Intel. 2022;4(3):258-68.
https://doi.org/10.1038/s42256-022-00458-8
22 Charlesworth TE, Yang V, Mann TC, Kurdi B, Banaji MR. Gender stereotypes in natural language: Word embeddings show robust consistency across child and adult language corpora of more than 65 million words. Psychol Sci. 2021;32(2):218-40.
https://doi.org/10.1177/0956797620963619
23 Fazelpour S, Danks D. Algorithmic bias: Senses, sources, solutions. Philos Compass. 2021;16(8):e12760.
https://doi.org/10.1111/phc3.12760
24 Trauth EM. The role of theory in gender and information systems research. Inform Organ. 2013;23(4):277-93.
https://doi.org/10.1016/j.infoandorg.2013.08.003
25 Chen J, Dong H, Wang X, Feng F, Wang M, He X. Bias and debias in recommender system: A survey and future directions. ACM Trans Inform Syst. 2023;41(3):1-39.
https://doi.org/10.1145/3564284
26 World Economic Forum. Why AI is failing the next generation of women. Emerging Technologies; 2019. Available from: https://www.weforum.org/stories/2019/01/ai-artificial-intelligence-failing-next-generation-women-bias/
27 Strengers Y, Kennedy J. The smart wife: Why Siri, Alexa, and other smart home devices need a feminist reboot. MIT Press; 2021.
https://doi.org/10.7551/mitpress/12482.001.0001
28 Costa P, Ribas L. AI becomes her: Discussing gender and artificial intelligence. Technoetic Arts. 2019;17(1-2):171-93.
https://doi.org/10.1386/tear_00014_1
29 Tripp A, Munson B. Perceiving gender while perceiving language: Integrating psycholinguistics and gender theory. Wiley Interdiscip Rev Cogn Sci. 2022;13(2):e1583.
https://doi.org/10.1002/wcs.1583
30 Bullough A, Guelich U, Manolova TS, Schjoedt L. Women’s entrepreneurship and culture: Gender role expectations and identities, societal culture, and the entrepreneurial environment. Small Bus Econ. 2022;58(2):985-96.
https://doi.org/10.1007/s11187-020-00429-6
31 Mehrabi N, Morstatter F, Saxena N, Lerman K, Galstyan A. A survey on bias and fairness in machine learning. ACM Comput Surv. 2021;54(6):1-35.
https://doi.org/10.1145/3457607
32 Barlas P, Kyriakou K, Guest O, Kleanthous S, Otterbacher J. To “see” is to stereotype: image tagging algorithms, gender recognition, and the accuracy-fairness trade-off. Proc ACM Hum-Comput Interact. 2021;4(CSCW3):1-31.
https://doi.org/10.1145/3432931
33 Andrews L, Bucher H. Automating discrimination: AI hiring practices and gender inequality. Cardozo L Rev. 2022;44:145.
34 Frissen R, Adebayo KJ, Nanda R. A machine learning approach to recognize bias and discrimination in job advertisements. AI Soc. 2023;38(2):1025-38.
https://doi.org/10.1007/s00146-022-01574-0
35 Navigli R, Conia S, Ross B. Biases in large language models: origins, inventory, and discussion. ACM J Data Inform Qual. 2023;15(2):1-21.
https://doi.org/10.1145/3597307
36 Donald A, Galanopoulos A, Curry E, Muñoz E, Ullah I, Waskow MA, et al. Bias Detection for customer interaction data: a survey on datasets, methods, and tools. IEEE Access. 2023;11:53703-15.
https://doi.org/10.1109/ACCESS.2023.3276757
37 Torres N. Contrastive adversarial gender debiasing. Nat Lang Process J. 2024;8:100092.
https://doi.org/10.1016/j.nlp.2024.100092
38 Thompson JL, Cassario AL, Vallabha S, Gnall SA, Rice S, Solanki P, et al. Registered report protocol: stress testing predictive models of ideological prejudice. PLoS One. 2024;19(8):e0308397.
https://doi.org/10.1371/journal.pone.0308397
39 Baker RS, Hawn A. Algorithmic bias in education. Int J Artif Intell Educ. 2022:1-41.
https://doi.org/10.35542/osf.io/pbmvz
40 Roopaei M, Horst J, Klaas E, Foster G, Salmon-Stephens TJ, Grunow J. Women in AI: barriers and solutions. In 2021 IEEE World AI IoT Congress (AIIoT) 2021 (pp. 0497-0503). IEEE.
https://doi.org/10.1109/AIIoT52608.2021.9454202
41 Newstead T, Eager B, Wilson S. How AI can perpetuate-or help mitigate-gender bias in leadership. Organ Dyn. 2023;52(4): 100998.
https://doi.org/10.1016/j.orgdyn.2023.100998
42 Manasi A, Panchanadeswaran S, Sours E, Lee SJ. Mirroring the bias: gender and artificial intelligence. Gend Technol Dev. 2022;26(3):295-305.
https://doi.org/10.1080/09718524.2022.2128254
43 Borau S. Deception, discrimination, and objectification: ethical issues of female AI agents. J Bus Ethics. 2024:1-9.
https://doi.org/10.1007/s10551-024-05754-4
44 Thormundsson B. Relative penetration rate of artificial intelligence (AI) skills from 2015 to 2023 worldwide, by gender and region. Statista. 2024.
45 Priyadarshini S, Priyadarshini S. Gender disparity in artificial intelligence: creating awareness of unconscious bias. In Artificial intelligence in forecasting 2024 (pp. 98-112). CRC Press.
https://doi.org/10.1201/9781003399292-7
46 Ceci SJ, Ginther DK, Kahn S, Williams WM. Women in academic science: a changing landscape. Psychol Sci Pub Int. 2014;15(3):75-141.
https://doi.org/10.1177/1529100614541236
47 Ceci SJ, Williams WM. Understanding current causes of women’s underrepresentation in science. Proc Natl Acad Sci. 2011;108(8):3157-62.
https://doi.org/10.1073/pnas.1014871108
48 Spencer SJ, Logel C, Davies PG. Stereotype threat. Ann Rev Psychol. 2016;67(1):415-37.
https://doi.org/10.1146/annurev-psych-073115-103235
49 Hussien OA, Hasanaj K, Kaya A, Jahankhani H, El-Deeb S. Unpacking the double-edged sword: how artificial intelligence shapes hiring process through biased HR data. In Market grooming 2024 (pp. 97-119). Emerald Publishing Limited.
50 Mungham G. Youth in pursuit of itself. In Working class youth culture 2023 (pp. 82-104). Routledge.
https://doi.org/10.4324/9781003460251-5
51 Mindell DA, Reynolds E. The work of the future: building better jobs in an age of intelligent machines. MIT Press; 2023.
52 Harrigan J, Reshef A, Toubal F. The march of the techies: Job polarization within and between firms. Res Policy. 2021;50(7):104008.
https://doi.org/10.1016/j.respol.2020.104008
53 Heilman ME. Gender stereotypes and workplace bias. Res Organ Behav. 2012;32:113-35.
https://doi.org/10.1016/j.riob.2012.11.003
54 Reddick CG, Enriquez R, Harris RJ, Sharma B. Determinants of broadband access and affordability: an analysis of a community survey on the digital divide. Cities. 2020;106:102904.
https://doi.org/10.1016/j.cities.2020.102904
55 Kovaleva Y, Hyrynsalmi S, Saltan A, Happonen A, Kasurinen J. Becoming an entrepreneur: a study of factors with women from the tech sector. Inform Soft Technol. 2023;155:107110.
https://doi.org/10.1016/j.infsof.2022.107110
56 Fayyaz Z, Ebrahimian M, Nawara D, Ibrahim A, Kashef R. Recommendation systems: algorithms, challenges, metrics, and business opportunities. Appl Sci. 2020;10(21):7748.
https://doi.org/10.3390/app10217748
57 Wajcman J, Young E, Fitzmaurice A. The digital revolution: Implications for gender equality and women’s rights 25 years after Beijing. 2020.
58 Nikou S, De Reuver M, Mahboob Kanafi M. Workplace literacy skills-how information and digital literacy affect adoption of digital technology. J Doc. 2022;78(7):371-91.
https://doi.org/10.1108/JD-12-2021-0241
59 Bhatia N, Bhatia S. Changes in gender stereotypes over time: a computational analysis. Psychol Women Q. 2021;45(1):106-25.
https://doi.org/10.1177/0361684320977178
60 Chiu TK. Future research recommendations for transforming higher education with generative AI. Comput Educ Artif Intell. 2024;6:100197.
https://doi.org/10.1016/j.caeai.2023.100197
61 Chen Y, Clayton EW, Novak LL, Anders S, Malin B. Human-centered design to address biases in artificial intelligence. J Med Int Res. 2023;25:e43251.
https://doi.org/10.2196/43251
62 Gupta M, Parra CM, Dennehy D. Questioning racial and gender bias in AI-based recommendations: do espoused national cultural values matter?. Inform Syst Front. 2022;24(5):1465-81.
https://doi.org/10.1007/s10796-021-10156-2
63 Policy OP. Harnessing the green and digital transitions for gender equality: insights from the 2024 OECD forum on gender equality. 2024.
64 Houser KA. Can AI solve the diversity problem in the tech industry: mitigating noise and bias in employment decision-making. Stan Tech L Rev. 2019;22:290.
65 Hall P, Ellis D. A systematic review of socio-technical gender bias in AI algorithms. Online Inform Rev. 2023;47(7):1264-79.
https://doi.org/10.1108/OIR-08-2021-0452
66 Kerr AD. Artificial intelligence, gender, and oppression. In Gender equality 2021 (pp. 54-64). Springer International Publishing.
https://doi.org/10.1007/978-3-319-95687-9_107
67 Tegon R. Toward the 4th Agenda 2030 goal: AI support to executive functions for inclusions. In Handbook of research on teaching with virtual environments and AI 2021 (pp. 591-615). IGI Global.
https://doi.org/10.4018/978-1-7998-7638-0.ch025
68 Tena-Meza S, Suzara M, Alvero A. Coding with purpose: learning AI in rural California. ACM Trans Comput Educ. 2022;22(3):1-8.
https://doi.org/10.1145/3513137
69 Kumar V, Prabha C, Hasan MM. Unlocking gender-based health insights with predictive analytics. In Transforming gender-based healthcare with AI and machine learning 2024 (pp. 59-82). CRC Press.
https://doi.org/10.1201/9781003473435-5
70 Ahn J, Kim J, Sung Y. The effect of gender stereotypes on artificial intelligence recommendations. J Bus Res. 2022;141:50-9.
https://doi.org/10.1016/j.jbusres.2021.12.007
71 Van Berkel N, Goncalves J, Russo D, Hosio S, Skov MB. Effect of information presentation on fairness perceptions of machine learning predictors. In Proceedings of the 2021 CHI conference on human factors in computing systems 2021 (pp. 1-13).
https://doi.org/10.1145/3411764.3445365
72 Gnambs T. The development of gender differences in information and communication technology (ICT) literacy in middle adolescence. Comput Hum Behav. 2021;114:106533.
https://doi.org/10.1016/j.chb.2020.106533
73 Lin B, Kuai J. Automated inequalities: examining the social implications of artificial intelligence in China. In Research handbook on artificial intelligence and communication 2023 (pp. 391-404). Edward Elgar Publishing.
https://doi.org/10.4337/9781803920306.00035
74 Patil SD, Husainy A, Hatte PR. Empowerment of women through education and training in artificial intelligence. In AI tools and applications for women’s safety 2024 (pp. 132-149). IGI Global.
https://doi.org/10.4018/979-8-3693-1435-7.ch008
75 Schiff D. Education for AI, not AI for education: the role of education and ethics in national AI policy strategies. Int J Artif Intell Educ. 2022;32(3):527-63.
https://doi.org/10.1007/s40593-021-00270-2
76 Sebastian AM, Peter D. Artificial intelligence in cancer research: Trends, challenges and future directions. Life. 2022;12(12):1991.
https://doi.org/10.3390/life12121991
77 Khan MS, Umer H, Faruqe F. Artificial intelligence for low income countries. Human Soc Sci Commun. 2024;11(1):1-3.
https://doi.org/10.1057/s41599-024-03947-w
78 Fussy DS, Iddy H, Amani J, Mkimbili ST. Girls’ participation in science education: structural limitations and sustainable alternatives. Int J Sci Educ. 2023;45(14):1141-61.
https://doi.org/10.1080/09500693.2023.2188571
79 Desmond C, Watt K, Naicker S, Behrman J, Richter L. Girls’ schooling is important but insufficient to promote equality for boys and girls in childhood and across the life course. Dev Policy Rev. 2024;42(1):e12738.
https://doi.org/10.1111/dpr.12738
80 Bidwell NJ. Women and the sustainability of rural community networks in the global south. In Proceedings of the 2020 international conference on information and communication technologies and development 2020 (pp. 1-13).
https://doi.org/10.1145/3392561.3394649
81 Paul T, Dutta S. Mobile phones and rural women in South Asia and Africa: a systematic review. Gend Technol Dev. 2023;27(2): 227-49.
https://doi.org/10.1080/09718524.2022.2161127
82 Stewart-Williams S, Halsey LG. Men, women and STEM: Why the differences and what should be done?. Eur J Person. 2021;35(1): 3-9.
https://doi.org/10.1177/0890207020962326
83 Panda LP, Rath KC, Rao NJ, Rao AS. Enhancing organizational ecosystems through gender equity: addressing challenges and embracing opportunities. In Effective technology for gender equity in business and organizations 2024 (pp. 195-226). IGI Global.
https://doi.org/10.4018/979-8-3693-3435-5.ch007
84 Salmi J, D’Addio A. Policies for achieving inclusion in higher education. Policy Rev High Educ. 2021;5(1):47-72.
https://doi.org/10.1080/23322969.2020.1835529
85 Harris B, Dragiewicz M, Woodlock D. Technology, domestic violence advocacy and the sustainable development goals. In The emerald handbook of crime, justice and sustainable development 2020 (pp. 295-313). Emerald Publishing Limited.
https://doi.org/10.1108/978-1-78769-355-520201017
86 Onyeador IN, Hudson SK, Lewis Jr NA. Moving beyond implicit bias training: policy insights for increasing organizational diversity. Policy Insights Behav Brain Sci. 2021;8(1):19-26.
https://doi.org/10.1177/2372732220983840
87 Liu J, Dong C. Understanding the complex adaptive characteristics of cross- regional emergency collaboration in China: a stochastic evolutionary game approach. Fract Fract. 2024;8(2):98.
https://doi.org/10.3390/fractalfract8020098
88 Tinner L, Holman D, Ejegi-Memeh S, Laverty AA. Use of intersectionality theory in interventional health research in high-income countries: a scoping review. Int J Environ Res Public Health. 2023;20(14):6370.
https://doi.org/10.3390/ijerph20146370
89 Ghouse SM, Durrah O, McElwee G. Rural women entrepreneurs in Oman: Problems and opportunities. Int J Entrepreneur Behav Res. 2021;27(7):1674-95.
https://doi.org/10.1108/IJEBR-03-2021-0209
90 Munoko I, Brown-Liburd HL, Vasarhelyi M. The ethical implications of using artificial intelligence in auditing. J Bus Ethics. 2020;167(2): 209-34.
https://doi.org/10.1007/s10551-019-04407-1
91 Etzioni A, Etzioni O. Incorporating ethics into artificial intelligence. J Ethics. 2017;21:403-18.
https://doi.org/10.1007/s10892-017-9252-2
92 Oyeniran CO, Adewusi AO, Adeleke AG, Akwawa LA, Azubuko CF. Ethical AI: addressing bias in machine learning models and software applications. Comput Sci IT Res J. 2022;3(3):115-26.
https://doi.org/10.51594/csitrj.v3i3.1559
93 Nazer LH, Zatarah R, Waldrip S, Ke JX, Moukheiber M, Khanna AK, et al. Bias in artificial intelligence algorithms and recommendations for mitigation. PLOS Digit Health. 2023;2(6):e0000278.
https://doi.org/10.1371/journal.pdig.0000278
94 Giebel C, Gabbay M, Shrestha N, Saldarriaga G, Reilly S, White R, et al. Community-based mental health interventions in low- and middle-income countries: a qualitative study with international experts. Int J Equity Health. 2024;23(1):19.
https://doi.org/10.1186/s12939-024-02106-6
95 Reyes-Alardo LV, Guzmán-Mena L, Cruz R, Munoz D. From digital exclusion to digital inclusion: how is the Dominican republic fostering a digital culture?. In From digital divide to digital inclusion: Challenges, perspectives and trends in the development of digital competences 2024 (pp. 217-241). Springer Nature Singapore.
https://doi.org/10.1007/978-981-99-7645-4_10
96 Wong-Villacres M, Kutay C, Lazem S, Ahmed N, Abad C, Collazos C, et al. Computing and Sustainable Societies. ACM J. 2024;2(1).
https://doi.org/10.1145/3608113
97 World Bank. Bridging gender gaps through the RSR-ADSP Gender Window: insights and lessons from women’s economic empowerment interventions. 2024.
98 Liu D, Bjaalid G, Menichelli E, Sun X. Empowering women in academia: navigating institutional dynamics, gender roles, and personal pursuits among female researchers in Norwegian higher education. J Asian Pub Policy. 2024:1-7.
https://doi.org/10.1080/17516234.2024.2386721
99 Neukam M, Bollinger S. Encouraging creative teams to integrate a sustainable approach to technology. J Bus Res. 2022;150:354-64.
https://doi.org/10.1016/j.jbusres.2022.05.083
100 Siddique S, Haque MA, George R, Gupta KD, Gupta D, Faruk MJ. Survey on machine learning biases and mitigation techniques. Digital. 2023;4(1):1-68.
https://doi.org/10.3390/digital4010001
101 Bommasani R, Hudson DA, Adeli E, Altman R, Arora S, von Arx S, et al. On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258. 2021.