
Navigating AI in Recruitment: Legal Considerations and Best Practices for Employers

Artificial intelligence (AI) is becoming increasingly integrated into recruiting and hiring processes in Canada and globally, as it is across all business sectors. Canadian employers may be keen to leverage this technology, but in doing so they must navigate a growing list of legal, ethical, and operational challenges.

“While AI offers clear advantages—streamlining recruitment and identifying top candidates—it also carries significant risks, particularly around issues of bias and discrimination, as well as privacy considerations.”

Certain jurisdictions are enacting legislation to address AI concerns, and even where such legislation does not yet exist, existing law still presents potential hurdles. While employers may not wish to fall behind the curve in using AI, they should take proactive steps to ensure their hiring practices comply with both current law and evolving standards.

The Rise of AI in Hiring

From resume screening to video interview analysis, AI tools are rapidly changing the way employers assess job applicants. These systems often rely on historical data to predict a candidate's fit for a role. In doing so, they can improve labour efficiency—no more asking an HR employee to sift through hundreds of paper resumes. Suitable candidates may be identified more quickly, allowing employers to secure valuable talent before it is scooped up by another employer.

But what happens when the historical data relied upon by AI reflects past hiring biases—such as gender imbalances or racial disparities? Evidence suggests that the AI may replicate and even amplify those biases.

Real-world examples underscore these risks. Amazon famously scrapped its experimental AI recruiting tool after discovering, in 2015, that it downgraded resumes containing the word "women's." More recently, studies show generative AI systems like ChatGPT have demonstrated racial and gender bias in candidate scoring, reinforcing longstanding concerns about algorithmic discrimination.

Human Rights Risks and Employer Liability

Beyond reducing the effectiveness of the process itself (a biased AI may actively pass over superior candidates), these tools might inadvertently cause the employer to breach human rights law and open the door to legal liability.

B.C. employers must ensure their hiring processes do not contravene the BC Human Rights Code, which prohibits discrimination in employment based on characteristics such as race, gender, age, and disability.

AI-driven hiring decisions—however efficient—do not exempt employers from these legal duties. If AI systems are not properly monitored, they may produce biased outcomes that expose employers to risk of breach—even if those outcomes were unintentional. This risk is particularly high when there is little to no human involvement in reviewing AI-generated results and decisions.


Privacy Considerations

Employers in B.C. must also ensure compliance with their obligations under privacy laws.

The Personal Information Protection Act (PIPA) governs how private organizations collect, use, and disclose personal information, including employees' personal information. PIPA also applies to how employers collect job applicant information, which naturally includes the data used by AI tools to evaluate candidates.

Generally speaking, an employer is permitted to collect (without express consent) such personal information as is necessary for the purpose of establishing an employment relationship. However, the employer must provide notice to applicants about the purpose of any collection, use or disclosure of such information.

In the case of a job application, the purpose is obviously to assess and potentially hire the candidate. Any collection, use, or disclosure must relate to that purpose. In a traditional recruitment setting, this usually happens organically: applicant information goes to HR and/or a hiring manager and then onward to other decision-makers in the hiring process as necessary. For example, an application for a custodial position probably won't be disclosed to the accounting team, and under PIPA, it really shouldn't be.

However, over-reliance upon AI in the recruitment process could undermine this seemingly obvious expectation, particularly given AI's shortcomings in applying context. Beyond the risk of a PIPA breach, misuse of candidate information could erode morale and confidence in the hiring team among both candidates and current employees.

To reduce these risks, employers should ensure sufficient human oversight in AI-assisted recruitment and hiring decisions, maintain transparency about how applicant data is used, and ensure that internal policies align with privacy law and any other legal obligations.

New Disclosure Requirements in Ontario

To address some of these concerns, Ontario passed Bill 149, Working for Workers Four Act, 2024, making it the first province in Canada to legislate AI transparency in employment. Effective January 1, 2026, Ontario employers will be required to disclose their use of AI in screening, assessing, or selecting job applicants in publicly advertised postings.

“The stated goal of this legislation is to improve transparency and accountability in tech-assisted hiring. Although how the legislation will be applied in practice remains to be seen, it seems likely that other provinces, including B.C., will eventually follow suit with similar laws.”

Federal Legislation on the Horizon

At the federal level, Canada has proposed the Artificial Intelligence and Data Act (AIDA), which would regulate “high-impact” AI systems used in trade and commerce. While the AIDA has not yet passed, let alone gone into force, it does signal a move toward national oversight of AI technologies, including those used in employment. The accompanying Consumer Privacy Protection Act (CPPA) would require organizations to provide plain-language explanations of how automated decision systems are used—further increasing transparency obligations.

Best Practices for Employers

Until more robust legislation is in place, employers should proceed with caution. This should include the following measures:

  • Disclose AI Use: Consider voluntary disclosure to build trust and reduce legal exposure.
  • Review AI Systems for Bias: Use diverse training data and apply tools like regularization or data preprocessing to reduce discriminatory outcomes.
  • Maintain Human Oversight: Ensure a real person reviews decisions flagged by AI, especially those that could affect a candidate’s eligibility.
  • Ensure Privacy Compliance: Align AI tools with PIPA (in BC), the Personal Information Protection and Electronic Documents Act (PIPEDA), and other applicable data protection laws.
  • Seek Legal Advice: Employment counsel can help review AI contracts, update job postings, and mitigate human rights risks.
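The bias-review step above can be made concrete with a simple outcome audit. The sketch below applies the "four-fifths rule," a screening heuristic drawn from U.S. employment-selection guidelines that flags groups whose selection rate falls below 80% of the highest group's rate. This is an illustration only, with hypothetical group names and numbers; it is a rough first-pass check, not a substitute for proper statistical analysis or legal advice.

```python
# Illustrative adverse-impact audit of AI screening outcomes using the
# "four-fifths rule" heuristic. All group labels and counts are hypothetical.

def selection_rates(outcomes):
    """outcomes maps group -> (selected, total); returns selection rate per group."""
    return {group: selected / total for group, (selected, total) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Return True/False per group: does its selection rate reach `threshold`
    times the highest group's rate? False suggests possible adverse impact
    warranting closer (human) review -- not a legal conclusion."""
    rates = selection_rates(outcomes)
    top_rate = max(rates.values())
    return {group: rate / top_rate >= threshold for group, rate in rates.items()}

# Hypothetical example: how many applicants the AI screen advanced, by group.
outcomes = {
    "group_a": (45, 100),  # 45% advanced
    "group_b": (30, 100),  # 30% advanced -> 0.30 / 0.45 ≈ 0.67, below 0.8
}
print(four_fifths_check(outcomes))  # group_b is flagged for review
```

A check like this is cheap to run periodically on an AI tool's outputs and pairs naturally with the human-oversight step: flagged results go to a person, not back into the algorithm.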

Final Thoughts

AI is poised to play a growing role in how Canadian businesses recruit talent—but efficiency cannot come at the cost of equity or compliance. Ontario’s new law may be the first of many steps toward greater accountability across Canada. In the meantime, all employers should adopt transparent, bias-aware practices and remain alert to regulatory developments. When used responsibly, AI can support fairer, more inclusive hiring outcomes. But without the right safeguards, it may do just the opposite.