
Your future career path will likely be determined by AI


The automation of numerous recruitment-related tasks has been growing at a frantic pace in recent years, with recruiters and HR managers increasingly striving to do more with less, whether that's filling vacancies faster, finding better candidates, or doing all of this with less money and manpower than ever.

It's understandable, therefore, that AI and other automation tools are being deployed to help manage these burgeoning expectations.

Data suggests that AI is commonly being deployed across the recruitment process, but most frequently to help recruiters source candidates, screen them, and then nurture possible leads. It’s perhaps not surprising, then, that in a recent survey, 80% of executives said they thought AI was capable of significantly improving the performance and productivity of HR and recruitment teams.

Despite this general optimism, however, Gartner data from a few years ago showed that just 17% of organizations were using AI in HR in some way, a figure expected to grow to 30% by this year. If firms are to make greater use of AI in their recruitment, it’s vital that they do so in an ethical way.

Ethical recruitment

It goes without saying that traditional hiring methods are riddled with problems, whether it's extremely low success rates, selection processes undermined by prejudices and stereotypes, or various other factors that make recruitment today something of a slapdash affair. It's perhaps understandable, therefore, that recruiters hope AI will be able to do much better.

For instance, the use of AI during recruitment obviously has significant potential in terms of time saved, but it might also be able to predict a candidate’s job fit better than more traditional methods. Indeed, it might even be possible for AI to improve key characteristics of the recruitment process, such as its consistency, transparency, and fairness.

These are by no means guaranteed, however, and it’s a topic that Deloitte’s Beena Ammanath focuses on in her recent book Trustworthy AI, which aims to provide “a business guide for navigating trust and ethics in AI.” She outlines a number of steps organizations can take to ensure that when they use AI, they do so in a way that ensures ethics and fairness are adhered to.

Trustworthy AI

  • AI policies and controls – The first step is to ensure that your organization has the right policies and controls specific to AI to avoid any possible biases or discrimination. You should also consider what controls you may need to ensure that whatever algorithms you deploy maintain fairness.
  • Do your algorithms contain discriminatory biases? Algorithms are often only as good as the data they’re fed on, and sadly many corporate datasets are anything but representative of the public. This can result in differential treatment of certain groups that is not justified by any underlying factors. It’s important that you’re able to test your algorithms on a regular basis to uncover any possible biases (a simple check of this kind is sketched after this list).
  • How do you monitor and evaluate your data? We’re all familiar with the maxim of “garbage in, garbage out,” and any deployment of AI lives and dies on the quality of the data it’s trained on. You will want to understand where your data comes from and be able to test whether it’s an accurate and fair representation of the relevant population.
  • What remediation policies are in place? If you manage to detect unfairness in your system, or indeed if someone else detects it for you, what processes are in place to remedy the situation? It’s vital that your stakeholders both inside and outside the organization are confident in the system and trust that the outputs are fair.
  • How would you defend your actions? A commonly used thought process to assess whether something is ethical or not is to imagine it became public knowledge or was broadcast in the media. A similar thought process can occur when assessing the use of AI in your organization. How would you defend the fairness of your system if asked to do so by elected officials, a regulator, a court, or even the general public?
  • Reputational harm - Leading on from this, has your organization thought through potentially worst-case scenarios that could arise from the use of AI in your organization? What possible reputational damage could accrue if the system proves to be biased or unfair?
  • How reliable are third-party developers? There are well-known skills shortages in tech-related domains, so it’s very likely that you will need to rely on third-party partners in some way to develop your system. How reliable are these partners and are there any risks to using them?
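To make the bias testing above concrete, here is a minimal sketch of one recurring check an HR team might run on a screening algorithm's outputs: compare selection rates across applicant groups and flag any group whose rate falls below four-fifths of the best-performing group's rate (the widely used "four-fifths rule" of thumb for adverse impact). The DataFrame columns, group labels, and 0.8 threshold are illustrative assumptions, not anything prescribed by Ammanath's book or any particular vendor.

```python
# Minimal sketch of a recurring fairness check on screening outcomes.
# Assumes a hypothetical pandas DataFrame with one row per applicant,
# a "group" column (e.g. a protected characteristic) and a boolean
# "advanced" column recording whether the algorithm passed them on.
import pandas as pd


def selection_rates(df: pd.DataFrame) -> pd.Series:
    """Share of applicants advanced by the screen, per group."""
    return df.groupby("group")["advanced"].mean()


def disparate_impact_ratios(df: pd.DataFrame) -> pd.Series:
    """Each group's selection rate divided by the highest group's rate.

    Ratios below 0.8 are a common red flag (the "four-fifths rule").
    """
    rates = selection_rates(df)
    return rates / rates.max()


if __name__ == "__main__":
    # Illustrative data only.
    applicants = pd.DataFrame({
        "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
        "advanced": [True, True, True, False, True, False, False, False],
    })
    ratios = disparate_impact_ratios(applicants)
    print(ratios)
    flagged = ratios[ratios < 0.8]
    if not flagged.empty:
        print("Possible adverse impact against:", list(flagged.index))
```

In practice, a check like this would be run on a schedule against real screening outcomes, and any flagged groups would feed into the remediation process described above.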

“Trustworthy AI is built on a variety of characteristics that must all be present to function ethically: fair and impartial; robust and reliable; respectful of privacy; safe and secure; responsible and accountable; and transparent and explainable,” Ammanath explains.

There are undoubtedly many potential benefits to using AI as part of your recruitment toolkit, but there are also some vital considerations to make to ensure that you do so in a fair and accountable way. Ammanath’s guidelines provide a good starting point to help you do just that.


