We now live in a world where AI and machine learning are being applied to more and more decision making. Within recruitment and search, the use of algorithms to pre-screen applicants or CVs will become increasingly common. The challenge is ensuring that unintentional bias or discrimination does not creep into the process. With machine learning, the mechanism may even inadvertently reveal a level of bias that was not previously obvious.

At a simple level, machine learning will automate the selection of candidates by identifying those most likely to succeed based on past successes. If a certain type of candidate has tended to be selected, the algorithm will surface more candidates whose criteria match those previous selections. And whilst most people will convince themselves they are selecting on the basis of objective facts, there is likely to be an underlying bias they are not fully aware of.
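To make the mechanism concrete, the sketch below shows one way such a screen might work: a classifier is trained on past hiring decisions and then ranks new applicants by how closely they resemble previously selected candidates. The data, the meaning of the features and the library choice (scikit-learn) are illustrative assumptions, not a description of any particular vendor's system.

```python
# Minimal sketch of lookalike screening. Any bias in who was
# historically selected is learned by the model and reproduced
# in its ranking of new applicants.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical historical data: each row is a past applicant,
# y = 1 if they were selected by human screeners.
X_past = rng.normal(size=(500, 4))   # e.g. experience, education, ...
y_past = (X_past[:, 0] + 0.5 * X_past[:, 1] > 0).astype(int)

model = LogisticRegression().fit(X_past, y_past)

# Rank a new batch of applicants by predicted "likelihood to succeed" --
# in reality, likelihood of resembling previously selected candidates.
X_new = rng.normal(size=(100, 4))
scores = model.predict_proba(X_new)[:, 1]
top_ten = np.argsort(scores)[::-1][:10]  # the ten closest lookalikes
print(top_ten)
```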

Many news applications now feed readers stories based on what they previously read, polarising their views by steering them towards articles they are most likely to agree with. HR algorithms can do the same: they generate a selection of lookalike candidates, and the selecting managers may not be aware that they are being driven to choose from an ever-narrowing pool. An effective system would occasionally include candidates that run counter to the learned pattern, to avoid excessive filtering, and such systems should be subject to regular checks and audits to understand what criteria are being used. One simple mechanism for this is sketched below.
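One way to implement that occasional counter-bias injection is an epsilon-greedy shortlist: most slots go to the model's top picks, but a fixed fraction is reserved for applicants drawn at random from outside them. The function name, the epsilon value and the interface are assumptions for illustration, not a prescription.

```python
import numpy as np

def shortlist(scores, k, epsilon=0.2, rng=None):
    """Pick k candidates: roughly (1 - epsilon) * k by score, the rest at random."""
    rng = rng or np.random.default_rng()
    n_explore = max(1, int(round(epsilon * k)))
    ranked = np.argsort(scores)[::-1]          # indices, best score first
    top = ranked[: k - n_explore]              # exploit: the model's lookalikes
    # Explore: draw the remaining slots from candidates the model
    # would otherwise have filtered out.
    explore = rng.choice(ranked[k - n_explore:], size=n_explore, replace=False)
    return np.concatenate([top, explore])

# e.g. shortlist(scores, k=10) reserves ~2 of 10 interview slots
# for candidates outside the model's top picks.
```

The same scores, together with the model's learned weights, are also a natural starting point for the regular audits mentioned above: they make explicit which criteria are actually driving the selection.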

Under GDPR, with its increased focus on Automated Decision Making and Profiling and the requirement that EU citizens are not unfairly treated through these processes, systems using algorithms, machine learning and AI are likely to come under increased scrutiny. Ironically, they may themselves be subject to filters that select the systems best at doing the job without bias: machines vetting machines.