
How iCIMS built its responsible AI program

August 12, 2024
4 min read

The beginning of iCIMS’ AI journey

iCIMS first brought artificial intelligence into our recruiting software after acquiring Opening.io in 2020. Opening.io applied sound, valid data science practices at the top of the recruiting funnel to tell recruiters which candidates to talk to first and why. Because the Opening.io team developed their AI technology years before regulations began to take shape, they did not have formal AI governance practices or a “Responsible AI” program. Still, they always engineered their technology with ethical principles in mind, ensuring that only relevant data was used for job matching and that potential bias was minimized.

Four years and a lot of innovation later, Opening.io’s original idea has transformed into the award-winning, responsible AI solution known as iCIMS Talent Cloud AI.

 

Using global standards to form iCIMS’ Responsible AI program

Shortly after the acquisition, iCIMS took steps to formalize its AI governance practices and build a Responsible AI program. At the time, there were very few AI-specific standards, frameworks or regulations around which to model an AI program, but there were some published guidelines and principles.

It was reported that the European Union was developing a comprehensive AI regulation closely aligned with the “Ethics Guidelines for Trustworthy AI” developed by the High-Level Expert Group on AI (AIHLEG), an independent expert group set up by the European Commission in 2018.

Working under the assumption that this EU AI regulation would become the global standard – similar to how the EU General Data Protection Regulation (GDPR) became the global standard for privacy and data protection – iCIMS decided to model its AI governance practices around the AIHLEG guidelines.

Around the same time those guidelines were published, OECD.AI published its AI Principles to promote the use of AI that is innovative, trustworthy and respectful of human rights and democratic values. Because these principles aligned nicely with the ones set forth in the AIHLEG guidelines, iCIMS incorporated them into our AI governance practices.

These guidelines and principles are a model for ethical and trustworthy innovation that puts people first, which was (and is) fundamental to iCIMS’ AI journey.

 

Putting best practice into action

iCIMS formalized our AI governance practices by developing internal policies to guide our teams. Our “Artificial Intelligence and Machine Learning Policy” and “Artificial Intelligence and Machine Learning Guidance” documents are the result of this early formalization. The ethical AI standards set forth in our AI policy were also condensed into the iCIMS AI/ML Code of Ethics, which defines the standards around which iCIMS develops its AI program:

  • Human-led
  • Transparent
  • Private and secure
  • Inclusive and fair
  • Technically robust and safe
  • Accountable

iCIMS has also developed an Acceptable Use of AI Policy, which sets forth a governance model and best practices for adopting and implementing generative AI. The intent of this policy is to promote the responsible use of AI at iCIMS and to provide a framework for leveraging the power and benefits of AI to support our business operations while managing the related risks to acceptable levels.

These policies are continually reviewed and updated to align with new and evolving standards, frameworks and regulations, such as the NIST AI Risk Management Framework, the ISO/IEC 42001 AI Management System standard, the EU AI Act and the revised OECD AI Principles.

 

Continuous innovation and accountability

The next phase of building our program involved establishing committees to ensure that our development and use of AI align with our AI/ML Code of Ethics. These committees are made up of personnel from across the business so that multiple interests and viewpoints are considered.

  • The Responsible AI Committee comprises cross-functional personnel representing business departments that use, develop and/or provide relevant advice regarding the responsible use and development of AI. The Responsible AI Committee assesses the impact of all AI developed by iCIMS to ensure that it adheres to ethical AI principles.
  • The AI Governance Committee comprises cross-functional personnel from iCIMS’ AI, Legal and Privacy teams and is responsible for the governance of AI functionality in iCIMS products, including annual bias audits (see the illustrative sketch after this list), Privacy by Design assessments and AI risk assessments.
  • The Generative AI Committee comprises cross-functional department leaders and is responsible for (1) learning about Generative AI and large language models, (2) making recommendations to iCIMS’ executive leadership team and board of directors about iCIMS’ use of Generative AI and the associated risks, (3) developing a governance model for the use of Generative AI, (4) developing the principles, best practices and processes to follow when adopting and implementing Generative AI and (5) defining key performance indicators for monitoring and measuring the impact of iCIMS’ use of Generative AI.
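For readers curious what a bias audit might measure in practice, the sketch below computes per-group selection rates and impact ratios, the kind of adverse-impact check commonly used in audits of automated employment decision tools (for example, under NYC Local Law 144). This is a minimal, hypothetical illustration, not iCIMS’ actual audit methodology; the sample records, group labels and 0.8 threshold are assumptions made only for the example.

```python
from collections import defaultdict

# Hypothetical audit records: (demographic_group, was_selected).
# In a real audit these would come from historical screening outcomes.
records = [
    ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

def impact_ratios(records, threshold=0.8):
    """Compute per-group selection rates and impact ratios.

    The impact ratio is each group's selection rate divided by the
    highest group's selection rate; values below `threshold` (the
    common four-fifths rule of thumb) are flagged for review.
    """
    selected = defaultdict(int)
    total = defaultdict(int)
    for group, was_selected in records:
        total[group] += 1
        selected[group] += int(was_selected)

    rates = {g: selected[g] / total[g] for g in total}
    best = max(rates.values())
    return {
        g: {
            "selection_rate": rate,
            "impact_ratio": rate / best,
            "flagged": (rate / best) < threshold,
        }
        for g, rate in rates.items()
    }

if __name__ == "__main__":
    for group, result in impact_ratios(records).items():
        print(group, result)
```

A flagged ratio does not by itself prove bias; in practice it prompts deeper review of the data, the model and the job-relevance of the features involved, which is the kind of work the committees described above oversee.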

 

Looking toward the future

iCIMS’ AI technology has evolved by leaps and bounds in four short years. We continue to maintain and evolve our Responsible AI program to address the rapidly evolving AI landscape and ensure that we continue to develop and use our technology in a responsible, trustworthy and ethical manner that complies with the applicable global laws.

Stay tuned for our next post that will explore how iCIMS leverages its global privacy program to operationalize its Responsible AI program.

Until then, you can read more about how we’re using AI to help solve recruiting challenges here.


About the author

Bill O'Connor

Bill leads the privacy and regulatory compliance programs at iCIMS. He is an experienced leader and privacy officer with a demonstrated history of providing strategic business and legal advice to manage compliance and legal risk in the areas of global data privacy, information security, artificial intelligence, data governance, commercial transactions, M&A, and corporate regulatory matters.

He builds and leads global privacy and compliance programs, and works with customers, partners, and internal stakeholders to integrate trust, compliance and business strategy. Bill holds a JD with multiple certifications, including the S-CDPO, PLS, FIP, CIPP/E, CIPP/US, CIPM, CIPT and CISSP.
