
How Employers Misuse Testing to Screen Graduates

November 2019

Looking back over time, employers have tried different approaches to deal with the large volume of graduate applications. In the days when candidates mailed in hard-copy applications (!), most employers embraced “credit grade average” as a screening criterion. Some went a step further, only considering graduates who had attended leading universities. The rationale was that strong academic results equated to better graduates. It also knocked out over a third of applicants.

That changed with the online world. The biggest impact was the introduction of online psychometric testing. Prior to that, testing was used (if at all) at a much later stage in the assessment process. Test results were carefully evaluated by an accredited professional, in conjunction with other candidate data collected during assessments.

With the affordability of low-cost online tests, employers could broaden the use of tests to evaluate graduates. To ensure the tests were used and interpreted correctly, the leading providers required HR users to undergo comprehensive product training, taking anywhere from two to five days.

But with time, things changed. An influx of testing providers, and the competition that followed, led to a relaxation of both the training requirements for HR and the way employers used the tests.

In graduate recruitment, testing became a convenient up-front screening solution. It could replace time-intensive manual screening and online application questionnaires.

But there was a problem. Most employers (even today) had no data correlating a test score with likely success in their organisation. They used minimum scores that were, in effect, nothing more than arbitrary benchmarks. And the test results were evaluated in isolation from any other assessment data. That was wrong.

Employers knew it was flawed. But they rationalised that they didn’t have time to review every application. Testing, they said, was their best alternative.

Testing providers cringe a little when they see how some of their clients use their products. But they say that’s up to the client. In a classic Australian study two years ago, a leading testing provider found that 50% of applicants rejected by its clients at the screening stage possessed the interpersonal skills those employers were desperately seeking. In other words, the tests were rejecting a very large pool of strong candidates, because testing wasn’t being used as intended.

Fast forward to today. Artificial intelligence is having the same impact as the introduction of online testing 15 years ago. From Pymetrics to Predictive Hire to GradSift, these products all offer a more valid way to screen graduates. And new technology brings substantially lower prices. Where online testing reduced the cost of a single psychometric test from $250 to $25 per applicant, an AI product like GradSift averages around $7.

Employers who still use testing to cull might well ask at what cost. Not just the budget, but how many good applicants are rejected because of the way they screen.