Overview: AI Literacy, Education, and Awareness Building
AI literacy, end-user education, and model explainability are interrelated concepts for developers and stakeholders. AI literacy and end-user education promote the explainability of algorithms by increasing stakeholders' understanding of AI concepts and empowering end-users to engage with AI systems effectively. These educational efforts contribute to creating a culture where transparency and accountability are valued, ultimately enhancing trust and acceptance of AI technologies.
Model Explainability
The explainability of an AI solution impacts both long-term system performance and societal trust in the solution. Understanding how the AI model makes decisions enables stakeholders to assess its efficacy in various scenarios, identify areas for improvement, and refine the system's functionality to meet evolving user needs and expectations.
Model explainability is specific to the AI system and refers to the ability to provide transparent insights into its decision-making processes, enabling stakeholders, including end-users, to understand and interpret the rationale behind its outputs. AI literacy and end-user education contribute to explainability by fostering stakeholders' understanding of AI concepts. Explainability is vital to promoting transparency, accountability, and fairness in AI-driven decision-making across various domains. Specifically, it supports the ongoing function and use of the system by addressing:
- Trust and Understanding: Model explainability helps users, stakeholders, and decision-makers understand how AI models arrive at their conclusions or predictions. This transparency fosters trust in AI systems and their decisions based on their outputs.
- Compliance and Accountability: As the use of AI becomes more prevalent, the regulatory environment around these solutions is developing. By offering clear explanations for AI outcomes, organizations can uphold adherence to regulations concerning data privacy, fairness, and accountability.
- Bias Detection and Mitigation: Transparent AI models allow for the detection and mitigation of biases that may be present in the data or the model itself. Understanding how the model makes decisions facilitates identifying and rectifying biased patterns, leading to fairer and more equitable outcomes.
- Error Diagnosis and Improvement: Explainable AI aids in diagnosing errors or inaccuracies in model predictions. By understanding the “whys” behind certain decisions, developers can improve model performance, enhance robustness, and refine the overall AI system.
- User Acceptance and Adoption: The clarity of AI outputs significantly influences user acceptance and adoption. Model explainability empowers individuals to trust and interact with AI-driven solutions, thereby enhancing user experiences and promoting widespread implementation across diverse stakeholder groups.1
Explainability makes AI decisions understandable to individuals who may not have technical expertise, and developers can achieve it in several ways:
- Global Explanations: Provide insights into the overall behavior of the model by analyzing feature importance, model parameters, or feature contributions across the entire dataset.
- Interpretable Models: Use models that inherently provide transparency, such as decision trees, linear regression, or rule-based systems. These models offer clear rules or features that explain their predictions (see the code sketch after this list).
- Local Explanations: Techniques like LIME (Local Interpretable Model-agnostic Explanations) or SHAP (SHapley Additive exPlanations) generate explanations for individual predictions, making it easier to understand why a specific decision was made.2
- Simplification: Simplify complex models using techniques like feature selection, dimensionality reduction, or model distillation, which create more interpretable versions of the original model while preserving its performance.
- Post-hoc Explanations: Utilize methods like feature importance scores, partial dependence plots, or sensitivity analysis to explain the model's predictions after it has been trained.
- Interactive Visualization: Present model explanations through interactive visualizations that allow users to intuitively explore how different features affect predictions and understand the decision-making process.
- Natural Language Generation: Automatically generate human-readable explanations in natural language to describe the reasoning behind model predictions clearly and understandably.
- Ethical Considerations: Consider ethical implications and fairness metrics when designing and explaining AI models, ensuring that explanations reflect the model's adherence to fairness principles and ethical guidelines.
- Documentation and Communication: Provide comprehensive documentation and communication channels to explain the model's purpose, limitations, and potential biases to stakeholders, promoting trust and transparency in its use.
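To make a couple of these approaches concrete, the sketch below trains an inherently interpretable model (a shallow decision tree whose rules can be printed and read) and then applies a post-hoc, global explanation (permutation feature importance) to the same model. It is a minimal illustration using scikit-learn; the dataset, depth limit, and number of repeats are assumptions chosen for brevity, not recommendations.

```python
# Minimal sketch of two techniques from the list above: an inherently
# interpretable model and a post-hoc, global explanation of its behavior.
# The dataset, features, and hyperparameters are illustrative assumptions only.
from sklearn.datasets import load_breast_cancer
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Interpretable model: a shallow decision tree whose rules can be read directly.
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(export_text(model, feature_names=list(X.columns)))

# Post-hoc global explanation: permutation importance shows which features the
# trained model relies on most across the whole test set.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: t[1], reverse=True)
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

The same post-hoc pattern applies to more complex models: an opaque model can still be probed with techniques like permutation importance even when its internal rules cannot be printed.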
When deciding on methods for model explanation, several access and equity considerations should guide choices. Firstly, the team should prioritize methods that promote transparency and comprehensibility for users of varying technical backgrounds, ensuring equitable access to information about how the AI system functions. These methods could involve, for example, global explanations, interpretable models, and local explanations that provide insights at both the macro and micro levels of model behavior. Secondly, the team should prioritize methods that enhance accessibility for diverse user groups by employing simplification techniques, interactive visualizations, and natural language generation to present explanations in intuitive formats that cater to different learning styles and cognitive abilities.
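As a hedged illustration of presenting an explanation in an accessible format, the sketch below computes a simple local explanation for a single prediction from a linear model (each feature's contribution to the log-odds is its coefficient times its standardized value, which is exact for linear models) and renders the strongest contributions as a plain-language sentence. The dataset, phrasing, and "top three" cutoff are illustrative assumptions, not a prescribed format.

```python
# Illustrative sketch: express a local explanation for one prediction in plain
# language. The dataset, phrasing, and "top 3" cutoff are assumptions only.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
pipe = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X, y)

# For a linear model, each feature's contribution to the log-odds of one
# prediction is simply its coefficient times its (standardized) value.
scaler = pipe.named_steps["standardscaler"]
clf = pipe.named_steps["logisticregression"]
x_scaled = scaler.transform(X.iloc[[0]])[0]
contributions = clf.coef_[0] * x_scaled

# Translate the strongest contributions into a short, readable sentence.
top = sorted(zip(X.columns, contributions), key=lambda t: abs(t[1]), reverse=True)[:3]
factors = ", ".join(f"'{name}' ({value:+.2f} to the log-odds)" for name, value in top)
print(f"The features that most influenced this prediction were: {factors}.")
```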
For more technical considerations for model explainability, visit the GitHub repository.
To guide your decision-making around the explainability of an AI solution, consider the following four questions:3
- What stakeholder groups do the results need to be explained to?
- What needs to be explained to each group of stakeholders?
- Why do the results need to be explained to that group?
- What is the best way to explain the results to each group?
Access this resource for an example of these questions in practice.
For more guidance and support with stakeholder engagement, explore this helpful resource: Stakeholder Engagement Throughout The Development Lifecycle.
AI Literacy
AI literacy and end-user education complement each other in promoting the explainability of algorithms by increasing stakeholders' understanding of AI concepts and empowering end-users to engage with AI systems effectively. AI literacy refers to the level of understanding individuals have about artificial intelligence concepts, technologies, and their implications. It encompasses a broad range of topics, including the fundamentals of AI, its applications across various domains, ethical considerations, and societal impacts. A higher level of AI literacy among stakeholders at an institution, including administrators, faculty, staff, and students, is fundamental to effective solution implementation.
Like most learning processes, AI literacy can be conceptualized along a spectrum. It's beneficial to consider AI literacy in relation to Bloom's taxonomy, a common educational framework that categorizes the various levels of cognitive learning.
To ensure AI literacy, stakeholders need to engage with all levels of Bloom's taxonomy, from remembering and understanding to applying, analyzing, evaluating, and creating, fostering a holistic approach to education and problem-solving. Specifically, stakeholders should have a basic understanding of the functions of AI and how applications use it to produce outputs. With this knowledge, stakeholders can then use and apply AI in everyday life, such as leveraging tools to make their work more efficient or using the outputs of specific AI systems in decision-making. As stakeholders use more AI applications, they need to be able to contextualize those outputs and their interactions within AI ethics concepts like fairness, transparency, bias, and accountability. While many end users will not need to evaluate and create AI solutions, leaders at institutions looking to include AI solutions in their technology strategy will need familiarity with evaluation and creation skills to evaluate solutions, define use cases, and co-design the non-technical elements. In contrast with how Bloom's taxonomy is typically conceptualized and used, stakeholders can evaluate and create without fully understanding AI ethics. Overlooking this knowledge gap could have severe consequences for equitable AI, so it is important to conduct a comprehensive skills analysis across stakeholder groups.
We can think about AI literacy in terms of these key domains:
- Basic AI concepts and terminology
- Ethical and societal considerations
- Data literacy
- Practical knowledge of AI tools and technology
- Applications and potential of AI
- AI problem solving and interpretation
For a more in-depth look at these domains, how they align to the continuum of AI literacy, and where you or your users' literacy needs are, access this self-assessment tool.
Education Campaigns
Part of planning for successfully implementing an AI solution involves defining a strategy for supporting various stakeholder groups in using the system and its outputs in alignment with the equitable use case. This step in the process is essential for the widespread use of an AI system and for the equitable interpretation and use of its outputs. Each group of stakeholders will have unique education and AI literacy needs, including the non-technical members of the development teams who will maintain the AI solution.
End-User Education
End-user education campaigns should be tailored to the stakeholder groups identified earlier in the process, addressing each group's literacy gaps and learning needs. End-user education specifically focuses on educating stakeholders, including non-technical managers monitoring ongoing use and end users of the solution. This education includes teaching end-users how AI systems work, what factors influence their decisions, and how to interpret their outputs, all of which enhances the perceived explainability of the underlying algorithms.
Begin by assessing the current level of AI literacy among end users within the institution by conducting surveys, interviews, or focus groups to identify gaps in knowledge, misconceptions, and areas of interest related to AI. Based on the assessment results and the level of influence and impact these stakeholder groups have (identified during stakeholder mapping), develop customized training materials tailored to the different end-user groups. These materials may include online courses, workshops, webinars, tutorial videos, and written guides covering basic AI concepts, ethical considerations, and practical applications in higher education. Tailoring the approach to the users' influence and impact is important. For example:
- For institutional administrators and faculty who may need to make decisions with the outputs of an AI system, hands-on workshops and live demonstrations will provide practical experience with AI tools and technologies. These workshops may allow end users to interact with AI systems, experiment with data analysis techniques, and explore real-world use cases relevant to their academic or administrative roles.
- For students who may only interact with the outputs of AI systems through the decision-making of administrators, building trust will require basic AI literacy and awareness. Integrating AI literacy education into the curriculum across various disciplines and incorporating relevant topics into courses, seminars, and projects will help students develop critical thinking skills around AI and understand the individual and societal implications of AI. Open-source education materials designed for students are available, like this one from AI4ALL.
When defining a plan to address the learning needs of all stakeholders, it's crucial to consider access and equity. These considerations include language proficiency, technological literacy, and socio-economic status. Develop customized training materials tailored to these specific needs, utilizing clear and jargon-free language, visual aids, and interactive elements to enhance user accessibility. Additionally, consider providing multiple delivery formats such as online modules, in-person workshops, and printed materials to accommodate varying learning preferences and accessibility requirements. Training materials should be easily accessible to all stakeholders, regardless of their physical location or technological resources, and offer remote access options and support for individuals with disabilities.
While offering specific professional development and learning opportunities is valuable, creating an environment that encourages curiosity, critical thinking, and awareness about AI empowers end-users to actively participate in ensuring the AI solution's relevance, fairness, and equity. Promoting a culture of openness and dialogue around AI ensures that challenges with solutions are seen as opportunities for growth and improvement, encouraging continuous learning, collaboration, and innovation rather than halting progress. End-user education should provide access to a centralized, regularly updated repository of resources on AI literacy, including articles, research papers, books, and online materials, that is easily accessible to all end users and reflects the latest developments in the field. It should also facilitate dialogue and discussion through forums, discussion groups, or online communities where end users can share experiences and ask questions about AI. Ongoing support and communication can build trust in the solution as well as an open and constructive environment to highlight and mitigate unintended harm.4
Education for Non-technical Leaders and Managers
Non-technical leaders and managers reading this guide may need to deepen their understanding of AI as stewards of AI solutions and monitors of their equity implications. While this guide provides in-depth coverage of equity considerations throughout the machine learning pipeline, we acknowledge that the specific education needs of this group are nuanced.
For more foundational knowledge of AI, this resource has compiled the best courses for non-technical leaders in AI.
For specific training opportunities for leaders and teams seeking to develop how they design and create AI solutions focusing on equity and mitigating bias, Paritii would be happy to partner with you.
1. Larksuite. (2024). Model explainability in AI. larksuite.com
2. Athar, A. (2020). SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) for model explainability. Analytics Vidhya. medium.com
3. Alteryx. (2023). The Essential Guide to Explainable AI (XAI). alteryx.com
4. FeedbackFruits. (2023). AI in higher education: 8 strategies for institutional leaders. feedbackfruits.com