The Ethics of AI in Supplier Risk Scoring: Biased Algorithms and Hidden Costs

As AI becomes increasingly embedded in procurement processes, its application in supplier risk scoring promises greater efficiency, broader oversight, and predictive insight. But behind these benefits lies a growing ethical dilemma: can we trust the objectivity of these tools when they are built on opaque data foundations? For procurement leaders operating in a global and ESG-conscious landscape, the answer is far from straightforward.

AI in Risk Scoring: A Black Box?

AI-driven risk scoring systems typically use machine learning to evaluate suppliers on a wide range of factors, from financial health and geopolitical exposure to ESG compliance and reputational risk. These models process structured and unstructured data from third-party providers, internal ERP systems, news sources, sanctions lists, and social media. Yet despite the breadth of data, the models themselves often remain a “black box”, providing risk scores without clear rationale or traceability.

This lack of transparency is a growing concern. In a 2023 report by the World Economic Forum on AI governance, procurement professionals cited “explainability” and “data provenance” as two of the weakest areas in AI adoption across supply chains. Without knowing which data points influenced a decision, buyers may unknowingly rely on outputs that reinforce historical biases or miss emerging risks.
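
To make the “black box” problem concrete, consider the minimal sketch below. It is illustrative only: the features, weights, and training data are invented, and it uses a deliberately simple logistic-regression scorer so that each feature’s contribution to the score can be printed alongside it, exactly the rationale many commercial tools do not expose.

```python
# Minimal sketch: a supplier risk scorer that exposes its rationale.
# All feature names, data, and labels here are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["debt_ratio", "sanctions_hits", "esg_score", "news_sentiment"]

# Toy training data: rows are suppliers, label 1 = historically flagged high-risk.
X = np.array([
    [0.8, 2, 0.3, -0.6],
    [0.2, 0, 0.9,  0.4],
    [0.6, 1, 0.5, -0.1],
    [0.1, 0, 0.8,  0.7],
])
y = np.array([1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

def score_with_rationale(supplier: np.ndarray) -> None:
    """Print the risk score and each feature's contribution to the log-odds."""
    prob = model.predict_proba(supplier.reshape(1, -1))[0, 1]
    contributions = model.coef_[0] * supplier
    print(f"risk score: {prob:.2f}")
    for name, contrib in zip(FEATURES, contributions):
        print(f"  {name}: {contrib:+.2f}")

score_with_rationale(np.array([0.7, 1, 0.4, -0.3]))
```

A buyer reading that output can at least see which signals drove the score; with a black-box model, that visibility has to be demanded from the vendor rather than read off the coefficients.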

Biased Algorithms and International Suppliers

Bias can enter supplier risk scoring algorithms in several ways. First, through historical training data that reflects past procurement decisions, which may have underrepresented suppliers from the Global South or overemphasised financial criteria at the expense of social or environmental performance. Second, through data availability: smaller or non-Western suppliers may lack digital footprints, credit ratings, or news coverage, resulting in risk scores skewed by absence rather than evidence.

For example, an AI model trained primarily on English-language media may fail to capture reputational issues involving suppliers in Latin America, West Africa, or Southeast Asia, where local language press may report critical incidents that remain unindexed. This not only weakens the model’s global risk coverage but also embeds a structural disadvantage for non-Western suppliers.
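
One practical mitigation is to make evidence coverage visible rather than letting absence of data masquerade as a score. The sketch below assumes a hypothetical supplier data model and thresholds; the point is simply that a supplier with three English-language articles should be routed to manual due diligence, not silently scored.

```python
# Sketch: separate "evidence of risk" from "absence of evidence".
# Supplier records and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class MediaCoverage:
    supplier: str
    region: str
    articles_by_language: dict  # language code -> article count

MIN_ARTICLES = 10   # below this, treat any score as low-confidence
MIN_LANGUAGES = 2   # require at least one non-English source

def coverage_status(c: MediaCoverage) -> str:
    total = sum(c.articles_by_language.values())
    languages = len(c.articles_by_language)
    if total < MIN_ARTICLES or languages < MIN_LANGUAGES:
        return "INSUFFICIENT DATA - route to manual due diligence"
    return "OK - score usable"

suppliers = [
    MediaCoverage("Acme GmbH", "EU", {"en": 140, "de": 55}),
    MediaCoverage("Dakar Textiles", "West Africa", {"en": 3}),
]
for s in suppliers:
    print(f"{s.supplier}: {coverage_status(s)}")
```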

A 2022 academic study published in Nature Machine Intelligence found that natural language processing (NLP) models used in corporate ESG screening often exhibited language and geographic bias, particularly against suppliers in non-OECD countries. This bias is rarely visible to end users but may influence key procurement decisions, including supplier exclusion or increased scrutiny.

False Positives and Missed Red Flags

The consequences of algorithmic bias are not just ethical but operational. False positives, where a supplier is flagged as high risk without justification, can result in unnecessary audits, strained relationships, or missed commercial opportunities. Conversely, false negatives, where genuine risks are missed due to incomplete or skewed datasets, can expose buyers to compliance breaches or reputational damage.

In one illustrative case, a European public buyer using an automated risk tool excluded an SME supplier based in Tunisia after it was flagged for “elevated reputational risk.” Manual review revealed that the flag originated from an outdated media report referencing an unrelated company with a similar name, an error arising from poor entity resolution in the AI model’s training data. Such incidents highlight the need for human oversight and due diligence in interpreting AI outputs.
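
The Tunisian case is a textbook entity-resolution failure. The toy comparison below uses fictional company names and Python’s standard difflib to show how easily naive string similarity links a supplier to adverse media about an unrelated firm, and why a similarity threshold alone should never propagate a reputational flag.

```python
# Sketch: why naive name matching produces false positives.
# Company names are fictional; difflib is in Python's standard library.
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

supplier = "Medina Textile Industries"             # the innocent SME
adverse_media_entity = "Medina Textiles Industry"  # unrelated firm in an old report

score = name_similarity(supplier, adverse_media_entity)
print(f"similarity: {score:.2f}")

# A typical cut-off of ~0.85 would wrongly merge these two entities.
if score > 0.85:
    print("MATCHED - supplier inherits the adverse media flag (false positive)")

# Safer: require a second identifier (registration number, address, LEI)
# before propagating any reputational flag between records.
```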

By contrast, AI models have also missed critical human rights violations. In 2021, investigative journalists uncovered that several suppliers in Xinjiang province, flagged as “low risk” by commercial AI risk tools, had ties to forced labour, a red flag not surfaced by the systems due to lack of local data and censorship of digital information.

Efficiency vs. Transparency: A Strategic Trade-Off

The appeal of AI in procurement is undeniable: quicker onboarding, real-time monitoring, and scalable due diligence. Yet these efficiencies can come at the cost of transparency, especially when buyers lack visibility into model logic or are contractually bound to third-party data providers.

This is particularly problematic under evolving regulatory regimes. The EU’s Corporate Sustainability Due Diligence Directive (CSDDD) and Germany’s Supply Chain Due Diligence Act (LkSG) both impose legal obligations for proactive risk identification. If buyers rely on AI tools without understanding their limitations, they risk non-compliance, or worse, ethical blind spots.

Leading organisations are beginning to respond. Some now require that AI-based scoring tools offer explainable outputs aligned with ISO/IEC 42001, the international standard for AI management systems. Others insist on regular audits of training datasets and independent bias testing as part of contractual clauses with AI vendors.
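
Such requirements can also be enforced at integration time. The sketch below assumes a hypothetical scoring-response schema (not any particular vendor’s API) and simply refuses to accept a risk score that arrives without explanation and data-provenance fields, turning the contractual clause into an automated gate.

```python
# Sketch: gate vendor risk scores on explainability metadata.
# The response schema here is hypothetical, not any specific vendor's API.
REQUIRED_FIELDS = {"score", "top_factors", "data_sources", "model_version"}

def accept_score(payload: dict) -> bool:
    """Reject any score that arrives without rationale or provenance."""
    missing = REQUIRED_FIELDS - payload.keys()
    if missing:
        print(f"rejected: missing {sorted(missing)}")
        return False
    if not payload["top_factors"] or not payload["data_sources"]:
        print("rejected: empty explanation or provenance")
        return False
    return True

opaque = {"score": 0.91, "model_version": "2.3"}
explained = {
    "score": 0.91,
    "top_factors": [{"factor": "sanctions_list_hit", "weight": 0.4}],
    "data_sources": ["EU consolidated sanctions list, 2024-03"],
    "model_version": "2.3",
}
print(accept_score(opaque))     # False
print(accept_score(explained))  # True
```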

Towards Ethical AI in Procurement

Embedding ethics into AI-based supplier risk scoring requires more than algorithmic adjustments. It demands a shift in procurement governance, one that blends automation with human judgement, and transparency with accountability.

Practical steps include:

  • Mandating model explainability: Require vendors to document scoring criteria and training data sources.
  • Diversifying data inputs: Prioritise local, multilingual, and independent sources alongside global datasets.
  • Embedding human review: Ensure all high-risk flags are reviewed by procurement officers, particularly for critical categories.
  • Auditing for bias: Conduct regular third-party bias audits and monitor scoring disparities across supplier geographies and profiles (a simple disparity check is sketched after this list).
  • Contractual safeguards: Integrate ethical AI compliance into RFPs and supplier contracts, including audit rights and data traceability clauses.
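
On the auditing point, the sketch below shows one possible starting place: comparing high-risk flag rates across supplier regions. The scores and the 1.5x disparity threshold are illustrative assumptions, and a gap between regions is not proof of bias, but it is the kind of signal a procurement team should investigate.

```python
# Sketch: a simple disparity check on high-risk flag rates by region.
# Scores are illustrative; the 1.5x disparity threshold is an assumption.
from collections import defaultdict

scored_suppliers = [
    {"region": "EU",             "score": 0.22},
    {"region": "EU",             "score": 0.35},
    {"region": "EU",             "score": 0.71},
    {"region": "West Africa",    "score": 0.68},
    {"region": "West Africa",    "score": 0.74},
    {"region": "Southeast Asia", "score": 0.66},
]
HIGH_RISK = 0.6
DISPARITY_RATIO = 1.5

by_region = defaultdict(list)
for s in scored_suppliers:
    by_region[s["region"]].append(s["score"])

flag_rates = {}
for region, scores in by_region.items():
    flagged = sum(score >= HIGH_RISK for score in scores)
    flag_rates[region] = flagged / len(scores)
    print(f"{region}: {flag_rates[region]:.0%} flagged high-risk")

baseline = min(flag_rates.values())
for region, rate in flag_rates.items():
    if baseline > 0 and rate / baseline >= DISPARITY_RATIO:
        print(f"review: {region} flag rate is {rate / baseline:.1f}x the lowest region")
```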

For ESG-conscious procurement leaders, the goal is not to reject AI but to use it responsibly, as a decision support tool, not a substitute for ethical judgement.

Sources:

World Economic Forum (2023). Responsible Use of Artificial Intelligence in Procurement.

Nature Machine Intelligence (2022). Language and geographic bias in ESG screening algorithms.

OECD (2021). State of Implementation of AI Principles.

European Commission (2024). Corporate Sustainability Due Diligence Directive (CSDDD).

If you would like to discuss your requirements, you can arrange a callback here or email info@keystoneprocurement.ie