Beyond the Algorithm: Unpacking AI's Ethical Dilemmas in L&D

Articles on the use of Artificial Intelligence (AI) in Learning and Development (L&D) are numerous; I have written quite a number myself, and in doing so I’ve become very aware of the complex ethical issues that surround the use of AI in L&D. Without doubt, AI offers great potential for personalised learning, efficiency and data-driven insights, as discussed in ‘Is AI going to Accelerate L&D Changes for the Better?‘ and ‘Is Artificial Intelligence the Answer to Addressing Skills Gaps?’ But how much do we hear about AI’s ethical dilemmas in L&D? The answer is ‘not a lot’, which is surprising when it’s something L&D professionals should be acutely aware of and proactively address, both to ensure they use AI responsibly and to ensure its use benefits, rather than harms, staff and the organisation. Last year the CIPD published a guide, Using technology responsibly: Guidance for people professionals, which is certainly worth a read.

So where do we start? Well, firstly we mustn’t lose sight of the principles and ethics that L&D professionals are guided by:

  • An approach that puts all colleagues at the heart of their role and prioritises their needs, goals and diversity
  • A commitment to continuous improvement and development through lifelong learning
  • Integrity and professionalism, treating all learners and colleagues with respect and fairness
  • Building trust-based relationships with colleagues that promote the sharing of knowledge
  • Aligning L&D initiatives with organisational goals and workforce needs
  • Accountability and transparency, taking responsibility for training methods and learning outcomes, and demonstrating the measurable impact of learning programmes
  • Upholding confidentiality and data protection, ensuring that learner privacy and safeguarding of personal and sensitive data is respected
  • Equity, diversity and inclusion; promoting and incorporating inclusive practices, challenging bias, protecting colleagues from discrimination and supporting underrepresented groups

If these principles and ethics are always uppermost in our minds, then integrating AI into the L&D role shouldn’t be a problem. However, challenges can arise when we are presented with an AI solution that promises to improve the quality of our outcomes, enhance efficiency and productivity, facilitate better decision-making and improve the overall employee experience.

Every L&D professional would almost certainly jump at such promises, and why wouldn’t they? I certainly would!

So, what do we need to think about to ensure that when we embrace AI in L&D roles, we don’t experience negative consequences, including a court case?

You’ve probably heard the phrase “results are only as good as the data”, which highlights the basic principle that the quality of the output is directly tied to the quality of the data input. The same applies to AI, because AI models learn from the data they are trained on. If the data they learn from is inaccurate, biased or incomplete, the results will be flawed. This is probably one of the most critical things to remember when using AI.

Only a couple of weeks ago I was doing some research on legislation changes impacting L&D. AI provided me with details of a piece of legislation I wasn’t familiar with that was apparently coming into force this month, which was quite alarming! So, as I do with any AI results, I went away and did some further research, this time on the UK government website.

This told me that the legislation referred to was drawn up by the previous government, but that the current government had shelved the legislation almost a year ago.


Let’s take a look at AI’s use in L&D and the ethical and legal considerations:

Bias and Fairness

Many AI models use an algorithm. An algorithm is a set of step-by-step instructions or rules designed to perform a task or solve a problem. An example of an algorithm used in L&D might be to predict an employee’s future performance, based on their past performance patterns, to inform L&D activities.

The historical data inputs that could be used in this AI model are:

  • Employee ID: Unique identifier
  • Performance Ratings: Quarterly or annual performance review scores (e.g., on a scale of 1-5, or specific categories like “Exceeds Expectations,” “Meets Expectations,” “Needs Improvement”).
  • Performance Metrics: Quantifiable data related to performance (e.g., sales figures, project completion rates, customer satisfaction scores)
  • Tenure: Length of time with the company.
  • Role/Department: Current and past roles.
  • Training/Development History: Participation in training programs.
  • Demographic data: This is where implicit bias can be embedded. If historical performance ratings or metrics were influenced by conscious or unconscious biases against certain demographic groups (e.g., women receiving lower “leadership potential” scores despite similar objective performance), the algorithm will learn these biases.

Once learned, these biases produce AI results that perpetuate discrimination based not on an employee’s capability but on harmful historic inequalities.

For example, if certain groups are historically underrepresented in high-performing roles, the AI model might not have enough data to accurately predict their performance, or it might incorrectly associate their demographic with lower performance due to limited positive examples. AI can not only perpetuate these biases, but also amplify them which can lead to unfair outcomes in learning assessments, training recommendations, or career development opportunities.
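The mechanism above can be illustrated with a deliberately simplified sketch. The data, group labels and ratings below are entirely hypothetical: a naive “predictor” that learns the average past rating per demographic group will simply echo whatever bias was baked into those historic ratings, regardless of an individual’s actual capability.

```python
# Illustrative sketch only (hypothetical data and field names): a model that
# learns from biased historical ratings reproduces that bias in predictions.
from collections import defaultdict

# Hypothetical historical records: (demographic_group, performance_rating 1-5).
# Group B's ratings were systematically depressed by past reviewer bias.
history = [
    ("A", 4), ("A", 5), ("A", 4), ("A", 4),
    ("B", 3), ("B", 2), ("B", 3), ("B", 2),
]

def train(records):
    """'Learn' one number per group: its mean historical rating."""
    totals, counts = defaultdict(float), defaultdict(int)
    for group, rating in records:
        totals[group] += rating
        counts[group] += 1
    return {g: totals[g] / counts[g] for g in totals}

def predict(model, group):
    """'Predict' a new employee's future performance from group alone."""
    return model[group]

model = train(history)
# Two equally capable new hires receive different predictions purely because
# of the group label the model learned from biased history.
print(predict(model, "A"))  # 4.25
print(predict(model, "B"))  # 2.5 - reflects historic bias, not capability
```

Real L&D prediction models are of course far more sophisticated than a per-group average, but the underlying failure mode is the same: if a protected characteristic (or a proxy for one) correlates with biased historical outcomes, the model will reproduce and can amplify that pattern.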

If AI models are not trained on diverse and representative datasets, they may not perform equally well for all learners, potentially disadvantaging certain groups based on race, gender, or preferred learning styles.

This is partly because AI systems often depend on third-party code from external AI suppliers and/or on existing IT systems that integrate with new ones, incorporating cross-organisational workflows, data flows and business processes, some or all of which may contain personal or sensitive data.

For example, a potential risk of using an externally created or in-house AI model is that the personal data of the individuals it was trained on might be inadvertently revealed by the model’s outputs. Alternatively, staff data collected for L&D purposes could be misused for other AI purposes for which staff have not granted permission.

The key message is that L&D professionals need to know what sources an AI model is drawing on and what data it has been trained on, have a good understanding of the AI outputs and their accuracy, and be able to identify whether data protection compliance has been compromised.

Accuracy and Reliability

As we’ve already said, AI-generated outputs, whether advice, suggestions, recommendations or specific content such as training materials, can be inaccurate or misleading. Over time, if such outputs are fed back into training data, the AI model keeps learning from its own inaccurate and misleading results, creating a loop of self-reinforcing errors.

Over time this dilutes the richness of human-derived knowledge, context awareness and innovation. Hence, L&D professionals’ expertise is required to scrutinise all AI outputs. Any AI tools should be rigorously tested and continuously monitored to maintain accuracy and quality in learning experiences.

Human Oversight and Accountability

There is a high risk that L&D professionals, like anyone using AI, rely too heavily on AI-generated outputs without critically assessing or taking accountability for them. When an error is identified, it’s an easy excuse to say “that’s what the AI said” rather than take responsibility. AI does not automatically add credibility to outputs, especially when the data it has learned from hasn’t been checked for accuracy or is contextually inappropriate.

The danger of not being accountable for AI outputs is that there could be an erosion of trust and damage to the L&D professional’s reputation.

Privacy and Data Security

AI in L&D often involves collecting vast amounts of personal and performance data on staff: not just names, addresses, email addresses, job titles and responsibilities, but also:

  • Demographics, e.g. age, gender and ethnicity
  • Education and employment history
  • Online interactions, e.g. search queries, click patterns and app activity
  • Online communication, e.g. emails, customer support transcripts and social media posts
  • Pre and post training assessment results
  • Learning activity data, e.g. course enrolments and completions, learning hours log and training progress
  • Self-evaluation of training
  • Manager skills proficiency feedback
  • Skills gap and training needs analysis
  • Development plans and KPIs
  • Succession plans
  • Mentor programme participation
  • AI insights e.g. learning behaviour, personalised learning recommendations, training return on investment (ROI) and predictive analysis of future skills needs.

The storage of all this data raises obvious concerns about who has access to it and whether it is stored securely, to avoid unauthorised access, unlawful processing, accidental loss and data breaches. AI systems can worsen known security risks and make them more difficult to manage.

Transparency and Rationale

Many AI algorithms are complex, making it difficult to understand how they arrive at specific decisions or recommendations. This lack of transparency can be problematic when AI outputs are used to make crucial L&D decisions, such as providing assessment results or creating career development plans. Flawed outputs erode trust, both among staff and among the L&D professionals acting upon them. In addition, those using AI outputs should be able to understand and justify the rationale behind them; otherwise it’s difficult to challenge or learn from those outputs.

Intellectual Property and Copyright

This is a hot topic globally for many industries. We’ve read in the press about legal cases in which a variety of high-profile copyright holders allege that AI companies are using their work to train often highly lucrative and powerful AI models in a way that is tantamount to theft. In February this year, Thomson Reuters won a major copyright victory when a judge ruled that a competitor’s use of Reuters’ work to train an AI tool was not fair use.

The legal and ethical implications of ownership of AI-generated content are still evolving, with many questions being asked about who owns it. Because AI models are often trained on data that includes copyrighted content, L&D professionals must consider whether their AI outputs contain such content too, as using those outputs may well be non-compliant.

Summary

There is no doubt that AI offers a wealth of opportunities in L&D, and over the next decade we are going to see sophisticated advancements in AI. However, it is absolutely critical that we don’t form an over-reliance on AI without human oversight, which would lead to a loss of human judgment, empathy and contextual understanding. L&D professionals should continue to be the experts they are: monitor AI behaviour, intervene when necessary, understand and be transparent about AI models, and be accountable for their outputs. Ultimately, L&D professionals should continue to be guided by their principles and ethics.

Author: Carolyn Lewis: Business Development and Learning and Development Consultant
Published 12th June 2025

You may also like to take a look at:
The impact of e-Learning Growth on Workplace Learning
Drive Success with Employee’s Collective Intelligence
Learning and Development Strategy Guidance and Templates

Sources:
https://cognota.com/blog/learning-and-development-data-analytics-guide/
https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/how-should-we-assess-security-and-data-minimisation-in-ai
https://pressgazette.co.uk/media_law/ai-fair-use-copyright-thomson-reuters
