OK – so now we have the legal landscape, let's move straight on to what the evidence might tell us about what works and what doesn't in this new world.
Abstract
The folklore about CV screening is, of course, rich. Even in days of yore some data-driven approaches were possible beyond CV gurus telling you "I have screened 20,000 CVs and this is my opinion…". In the AI age this is more relevant than ever. This blog proposes a number of AVOID or OPTIMISE recommendations based on a review of market discussions and analysis.
It then turns to recommending industry practitioners who advise companies on the hiring lifecycle. Rather than rely on "candidate coaches", why not plug in to the people advising companies on how to hire? Ultimately they want to move from need to JD to successful employee, so they will be tuning their strategies with that in mind.
Finally, there are some resources from LinkedIn.
Recommendations
1. Hard knockout questions are evaluated first, before any ranking. [AVOID/OPTIMISE] Right-to-work, security clearance, registration body (NMC, GMC, SRA, ACCA), salary, notice period, location – all evaluated as Boolean filters. A "no" disqualifies before a human sees you. The 1:50 a.m. rejection in Mobley v. Workday was at this layer. Answer carefully – and if your CV says 7 years but you tick "no" on the 5+ years question, you're out anyway.
2. Job titles are normalised against an ontology – match the public taxonomy. [OPTIMISE] Parsers map your title to nodes in ESCO (EU/UK), O*NET, or proprietary skills graphs. "Sr. SWE" only equals "Senior Software Engineer" if the alias is in the taxonomy. Title mismatch is the single biggest reason qualified candidates are downranked, per the HBS / Accenture Hidden Workers: Untapped Talent report (8,720 hidden workers + 2,275 executives surveyed). Use the standard public title in brackets if your employer's internal title is non-standard.
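Mechanically, the knockout layer is just a conjunction of Boolean checks that runs before any ranking model sees the application. A minimal sketch – the field names and thresholds here are hypothetical, not any vendor's actual schema:

```python
# Hypothetical knockout layer: every answer must pass its Boolean test
# before any ranking happens. Field names and thresholds are invented.
KNOCKOUTS = {
    "right_to_work": lambda v: v is True,
    "years_experience_5plus": lambda v: v is True,
    "notice_period_weeks": lambda v: v <= 12,
}

def passes_knockouts(answers: dict) -> bool:
    # One failing answer disqualifies; no human review follows.
    return all(test(answers.get(field)) for field, test in KNOCKOUTS.items())

candidate = {"right_to_work": True,
             "years_experience_5plus": False,   # CV says 7 years, box says no
             "notice_period_weeks": 4}
print(passes_knockouts(candidate))  # False: a single "no" is enough
```

Note how the CV itself never enters this function – which is exactly why the ticked box overrides whatever your experience section says.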
3. Skills count more when evidenced in role, not listed in isolation. [OPTIMISE] Modern parsers (RChilli, Textkernel, Workday Skills Cloud) capture not just the skill name but the section it appeared in, the last-used date, and months of experience inferred from surrounding employment. Embedding rankers weight “demonstrated in role” above “listed in skills section.” Each skill should appear in a list and in a work-experience bullet that shows how you used it.
4. Years of experience is calculated from parsed dates, not your declaration. [BOTH] Mis-parsed dates therefore destroy the signal directly. Confirmed in the Alibaba SmartResume layout-aware parsing paper (arXiv 2510.09722). Use one consistent date format throughout – mixing "Jan 2021–Present", "01/21–04/24" and "March 2020 to date" causes Workday and Textkernel to drop tenure on the inconsistent entries. UK applicants writing DD/MM/YYYY into US-deployed systems sometimes have day/month silently swapped.
5. Certifications need exact-string matches against employer-supplied lists. [OPTIMISE] PRINCE2, AWS Solutions Architect, CIPD Level 5, CFA, PMP – write them as named in the JD, not paraphrased. This is one of the highest-precision signals because employers configure exact certification names. Documented in iCIMS' parsing guidance.
6. Use both the acronym and the expansion for every skill. [OPTIMISE] "ML" without "Machine Learning", "PMP" without "Project Management Professional", "SEO" without "Search Engine Optimization" – you'll miss whichever node the ontology indexed. Both forms cover both cases. Documented in ADP Research's skills-ontology analysis.
7. Use standard section headers – non-standard ones cause whole sections to be skipped. [AVOID] Parsers segment on keyword headers ("Experience", "Education", "Skills", "Certifications"). Creative headers like "Career Journey", "What I Bring", or "Highlights" are not in the dictionary, and the section gets misclassified or dropped entirely. Confirmed in iCIMS and Workday parsing documentation.
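To see why mixed formats hurt: a parser typically tries a fixed list of date patterns in order, and an entry matching none of them contributes zero tenure. A toy illustration – the pattern list is hypothetical, not any real vendor's:

```python
from datetime import datetime

# Hypothetical pattern list a parser might try, in order.
PATTERNS = ["%b %Y", "%m/%y", "%B %Y"]

def parse_date(token: str):
    for p in PATTERNS:
        try:
            return datetime.strptime(token, p)
        except ValueError:
            pass
    return None  # unparsed: the tenure for this entry is silently dropped

print(parse_date("Jan 2021"))    # parsed via "%b %Y"
print(parse_date("March 2020"))  # parsed via "%B %Y"
print(parse_date("to date"))     # None: free-text endings match nothing
```

Any real parser is more forgiving than this, but the failure mode is the same: a format outside its list means that job's dates simply vanish from the tenure calculation.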
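The logic behind "use both forms" is simple set cover: you don't know whether the ontology indexed the acronym or the expansion, so carrying both strings matches either. A sketch with a hypothetical one-node ontology:

```python
# Hypothetical: the ontology may have indexed "ml" or "machine learning";
# the candidate cannot know which.
def cv_hits(cv_text: str, node: str) -> bool:
    # Naive substring match, standing in for ontology lookup.
    return node in cv_text.lower()

cv_safe  = "Built ML (Machine Learning) pipelines in production"
cv_risky = "Built ML pipelines in production"

print(cv_hits(cv_safe, "machine learning"))   # True: both forms present
print(cv_hits(cv_risky, "machine learning"))  # False: indexed node missed
```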
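Section segmentation is typically dictionary-driven: a line matching a known header opens a new section, and everything under an unrecognised header is misfiled. A toy segmenter – the header dictionary is illustrative, not a vendor's actual list:

```python
# Hypothetical header dictionary, standing in for a parser's segmenter.
KNOWN_HEADERS = {"experience", "education", "skills", "certifications"}

def segment(lines):
    sections, current = {}, "unclassified"
    for line in lines:
        key = line.strip().lower()
        if key in KNOWN_HEADERS:
            current = key  # recognised header: a new section starts
        else:
            sections.setdefault(current, []).append(line)
    return sections

cv = ["Career Journey", "Led a team of 6 engineers", "Skills", "Python, SQL"]
out = segment(cv)
print(out)  # the work history landed under "unclassified", not "experience"
```

"Skills" is found and filed correctly; the entire work-history block under "Career Journey" never reaches the experience field at all.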
8. Multi-column layouts and text boxes corrupt parsing. [AVOID] The single highest-impact, fully-empirical formatting gotcha. Parsers flatten columns left-to-right, scrambling chronology. Confirmed in the Alibaba SmartResume paper, the iCIMS parser docs, and parser vendors generally. Use single-column. Always.
9. Image-based PDFs and graphics-only headers are invisible. [AVOID] The parser only sees what OCR can extract – and many enterprise ATS skip OCR for cost reasons or run it at lower confidence. If your contact details are in a graphic banner, they often don't get parsed. Use a text-based PDF or .docx.
10. Headers and footers are stripped on most ATS. [AVOID] Putting your name, email, or phone in the Word header/footer means they may not be parsed. Confirmed in Workday and Greenhouse documentation. Put contact details in the body.
11. Combined-title entries lose promotion history. [AVOID] "Software Engineer, promoted to Senior Software Engineer, 2020–2024" – Workday's parser captures only the first title and applies the full date range to it, costing you both the promotion and the senior tenure. Use separate entries per title with their own date ranges.
12. Don't keyword-stuff a standalone Skills section. [AVOID] Embedding rankers (Workday, iCIMS, SmartRecruiters) penalise skills that appear only in a list and aren't evidenced in any work-experience bullet. Confirmed by Wilson & Caliskan (AIES 2024, arXiv 2407.20371) on Massive Text Embedding ranking models. The "75% rejected by ATS" statistic is internet folklore – HiringThing's debunking traces it back to a defunct vendor with no published methodology.
13. Keyword matching is now embedding-based, not pure TF-IDF. [OPTIMISE] The candidate’s parsed profile and the JD are both vectorised; cosine similarity drives the ranking. Mirror the JD’s terminology and phrasing, not just the keywords. Recruiters typically only review the top 25 ranked CVs, per Jobscan’s 2025 ATS Usage Report (97.8% Fortune 500 ATS use confirmed by reverse-engineering all 500 companies).
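The ranking mechanic reduces to cosine similarity between two vectors. A minimal sketch – the three-dimensional vectors below are hand-made toys, whereas real systems use learned embeddings with hundreds of dimensions:

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product over the product of magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

jd_vec        = [0.9, 0.1, 0.4]  # toy embedding of the job description
cv_mirrored   = [0.8, 0.2, 0.5]  # CV phrased in the JD's own terminology
cv_paraphrase = [0.2, 0.9, 0.1]  # same skills, very different wording

print(cosine(jd_vec, cv_mirrored))    # high: ranked near the top
print(cosine(jd_vec, cv_paraphrase))  # lower: may fall outside the top 25
```

The point of "mirror the JD's phrasing" is precisely that close wording lands the CV vector nearer the JD vector than a correct-but-paraphrased version does.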
14. Education and institution are parsed and sometimes filtered. [BOTH] Degree level (UK levels 4–8), subject, awarding body – all parsed structurally. Some configurations check institution against a list. The Mobley v. Workday complaint specifically alleges "schools attended" was used as a proxy variable for protected characteristics.
15. Employment gaps over three months should be explained in plain text. [AVOID/OPTIMISE] The ICO's AI Tools in Recruitment Audit Outcomes Report (Nov 2024) found tools using gaps as a filter – a discriminatory proxy for maternity, disability, or caring responsibilities. UK law protects you from that filter being used; your practical defence is a one-line plain-text explanation ("2021–2022: full-time caregiving") so the AI categorises the gap correctly rather than penalising it as unexplained.
16. Inferred demographic features are happening – strip the proxies you can. [AVOID] The ICO November 2024 audit found tools inferring gender and ethnicity from candidate names; the ICO ordered it stopped. Wilson & Caliskan (2024) found ~85% preference for white-associated names and 11% for female-associated names across 3M+ comparisons – and near-100% disadvantage for Black male-associated names. You can't change your name, but you can deny the secondary proxies: no DOB, no photo, no marital status, no school-leaver dates that let it infer age, no nationality (just "Right to work: Yes").
17. Location is enforced as postcode-based geofencing. [OPTIMISE] Standard in UK ATS, especially Tribepad (which dominates the public sector β NHS Trusts, BBC, Tesco, ~1 in 7 UK jobseekers via GOV.UK Digital Marketplace) and Eploy for NHS/local-government roles. Configurable radius. Put a real postcode on your CV if you’re within commute of the role; “London” alone may fail a 25-mile filter from a specific office.
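Once the postcode on the CV is geocoded to coordinates, the radius filter is a plain great-circle distance check. A sketch using the haversine formula – the office location, candidate location, and 25-mile radius are all illustrative:

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    # Great-circle distance in miles between two lat/lon points.
    r = 3958.8  # mean Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

office = (51.5074, -0.1278)     # central London (illustrative)
candidate = (51.7520, -1.2577)  # Oxford (illustrative)
within = haversine_miles(*office, *candidate) <= 25
print(within)  # False: Oxford sits outside a 25-mile radius
```

A bare "London" that fails geocoding never reaches this check at all, which is why a real postcode matters.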
18. Application metadata is part of the score. [BOTH] Time-to-complete, source channel (Reed, Indeed, LinkedIn, employee referral), prior application history with the same employer, and whether you completed all knockout questions. Prior rejections at the same company are a strong negative signal in most enterprise configurations β applying repeatedly to the same employer can hurt you more than help.
19. CV / LinkedIn / cover letter must align – recruiters triangulate. [AVOID] LinkedIn Recruiter System Connect (RSC) syncs candidate data between most major ATS and LinkedIn, with email as the primary matching key. RSC doesn't auto-validate dates against your LinkedIn profile, but recruiters do: LinkedIn's Future of Recruiting puts manual cross-checking at 87–92% before interview. AI tools like Eightfold, HiredScore, and Brainner now also flag inconsistencies automatically. Title inflation, stretched dates, and fabricated certifications get caught and create rejection on integrity grounds – not a protected basis you can contest.
20. VERY ESOTERIC, BUT: Know which AI architecture is screening you and tailor accordingly. [OPTIMISE] Greenhouse deliberately doesn't algorithmically rank or auto-disposition (rules-based knockouts only) – narrative quality matters more there. Workday (37%+ Fortune 500 share, dominant in UK FTSE 100) plus its HiredScore acquisition, iCIMS Candidate Ranking, and SmartRecruiters SmartAssistant 2.0 all do embedding-based ranking – parser-friendliness and ontology alignment matter more there. The careers-page privacy notice is required to disclose AI use under UK GDPR Articles 13/14 – read it before you optimise.
A note on what’s not in this list: the “75% of CVs auto-rejected” statistic, the “ATS rejects you in 6 seconds” claim, and the various AI-resume-checker SEO blogs (Resumly, ResumeGyani, Mployee.me, etc.). All circulate widely, none have published methodology, and the actual HBS data shows the rejection problem is about employer configuration (88% of employers say their ATS rejects qualified candidates because the rules are wrong), not about a fixed automated rejection rate.
Practitioners
1. Dr John Sullivan – San Francisco State professor, called "the Michael Jordan of Hiring" by Fast Company. Has spent 40+ years arguing that recruiting should be quantified in dollars of business impact, with metrics tied to the CFO's standards. Best known for the Stunning Performance Differential research – using revenue-per-employee comparisons (Nvidia $5.2M, Apple $2.5M, Dell $965K) to argue that hiring top performers is a balance-sheet event, not an HR cost line. Publishes weekly via the Aggressive Talent Management newsletter; writes regularly for ERE and Dice.
2. Dr Tomas Chamorro-Premuzic – Professor of Business Psychology at UCL and Columbia; Chief Innovation Officer at ManpowerGroup (was Chief Talent Scientist), former CEO of Hogan Assessments, co-founder of Deeper Signals. 150+ peer-reviewed papers and 10 books including The Talent Delusion: Why Data, Not Intuition, Is Key to Unlocking Human Potential and Why Do So Many Incompetent Men Become Leaders?. UK-based for most of his career; advises JP Morgan, Goldman Sachs, HSBC, Unilever, the British Army, the BBC. The closest direct analogue to Sullivan in profile, with much heavier academic weight.
3. Iris Bohnet – Albert Pratt Professor of Business and Government at Harvard Kennedy School; co-director of the Women and Public Policy Program; behavioural economist. What Works: Gender Equality by Design (Harvard University Press, 2016) and Make Work Fair (2025) lay out the empirical case for structured interviews, comparative evaluation, and de-biasing organisations rather than individuals. Special Advisor on the Gender Equality Acceleration Plan to the UN Secretary-General; member of the G7 Gender Equality Advisory Council. Her structured-interview protocol is the methodological backbone of evidence-based hiring redesign.
4. Peter Cappelli – George W. Taylor Professor of Management and Director of Wharton's Center for Human Resources; D.Phil. (Oxford) in Labor Economics; Research Associate at the National Bureau of Economic Research. Talent on Demand (HBS Press, 2008), Why Good People Can't Get Jobs (2012), and ongoing work on automated recruiting. The original critic of the "skills gap" narrative – argued years before the HBS Hidden Workers report that the problem is employer configuration of automated screens, not a shortage of qualified people.
5. Adam Grant – Saul P. Steinberg Professor at Wharton; co-director of Wharton People Analytics; organisational psychologist. Wharton's top-rated professor for seven straight years. ~52,000 academic citations. Has advised Google, Pixar, the NBA, and served on the Defense Innovation Board at the Pentagon. Founded the Optimize Hire pre-employment test based on his I/O psychology research – i.e. he's worked the chain from peer-reviewed research to deployed selection instrument.
6. Boris Groysberg – Richard P. Chapman Professor of Business Administration at Harvard Business School. Chasing Stars: The Myth of Talent and the Portability of Performance (Princeton University Press, 2010) – empirical analysis of 1,000+ Wall Street star analysts vs. 20,000 controls across 400 firms – established that "star" performance is far less portable than firms assume. The single most rigorous published challenge to the "hire stars" mental model. Still actively publishing HBS cases (most recent: skills-first talent management, 2025).
7. Wayne Cascio – Distinguished University Professor Emeritus at CU Denver; past President of the Society for Industrial and Organizational Psychology; PhD in I/O psychology from Rochester. Author of Investing in People: Financial Impact of Human Resource Initiatives (with Boudreau) and Costing Human Resources. The pioneer of utility analysis – i.e. the actual mathematics for working back from "what does an extra unit of job performance cost or earn the firm?" to "what's a hire worth?". Sullivan's dollar-impact framing rests on Cascio's foundations.
The practitioners with credible data
8. Laszlo Bock – Former SVP of People Operations at Google (2006–2016); credited with creating the field of "people analytics". Work Rules! (NYT bestseller, 2015). Founded Humu (nudge engine, acquired 2023); now Advisor at General Catalyst, co-founder/Chairman of Gretel.ai, and founder of the Berkeley Transformative CHRO Leadership Academy. His Project Oxygen (do managers matter?) and Project Aristotle (what makes teams effective?) at Google are referenced everywhere – and unusually, the underlying methodology has been open-sourced.
9. Lou Adler – Founder of The Adler Group; creator of Performance-based Hiring, now in its fourth edition (Hire With Your Head, Wiley, 2021). Has trained 50,000+ recruiters and hiring managers. The methodology is exactly what this post is after: define a job as 5–6 measurable performance objectives first, then design the entire screening, interview, and selection process backwards from those. Validated as legally defensible by Littler. Less "research" than the academics, but the closest match to "what science goes into JDs and how do you reverse-engineer hiring from outcomes."
The market analysts producing primary data at scale
10. Josh Bersin – Founder of The Josh Bersin Company; previously Bersin by Deloitte. The largest single producer of survey-based primary research on hiring practice – the 2025 Systemic HR research ran across 107 HR strategies, 1,000+ companies, 26 million employees, 50+ CHRO interviews, and a LinkedIn dataset of 7.5M HR practitioner profiles. The 2025 Talent Acquisition Revolution report (with AMS) is currently the most-cited industry benchmark on how AI is reshaping hiring. Less peer-reviewed than the academics; far better at quantifying actual practice across the whole market.
One UK addition you should know about
Rob Briner – Professor of Organisational Psychology at Queen Mary University of London; Scientific Director of the Center for Evidence-Based Management (CEBMa). Named the Most Influential HR Thinker by HR Magazine. Spends most of his time arguing that HR uses almost no actual evidence – and methodically dismantling fashionable concepts (employee engagement, NLP, much of performance management) when the underlying science doesn't support them. The closest UK-based analogue to Sullivan's bracing "where's the evidence?" stance, but academic rather than practitioner. If you want a London-based voice with the same instinct for puncturing HR orthodoxy with data, he's it.
LinkedIn
LinkedIn Economic Graph – LinkedIn's research arm. A "digital representation of the global economy" built from over 1 billion members, ~41,000 skills, 67 million companies, and 133,000 schools, in 200+ countries. This is the largest single dataset on hiring patterns, skill demand, and labour-market flows in existence – and unlike most analyst data, it's behavioural (what people actually did) rather than survey (what they say they did). The team partners directly with the World Bank, World Economic Forum, OECD, and national governments – meaning their methodology is publicly defensible.
The two flagship outputs:
Workforce Reports – monthly real-time data on hiring rates, skills gaps, migration, and localised employment trends, broken out by country (including dedicated UK reports). The methodology (the LinkedIn Hiring Rate = members adding a new employer in a given month / total members in that country) is documented and stable across years.
The Future of Recruiting – annual report, currently in its 2025 edition. Combines billions of platform data points with a survey of 1,000+ talent professionals. The 2025 edition is built around AI's effect on Quality of Hire and is the most-cited industry benchmark on how recruiting practice is actually shifting. The PDF is available directly. It introduced LinkedIn's platform-level QoH proxy (combining demand, retention, and internal mobility), which is now widely used as a benchmark.
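The documented Hiring Rate formula is simple enough to state in code; a worked toy example (the numbers below are invented, not LinkedIn's actual figures):

```python
# LinkedIn Hiring Rate, as documented: members who added a new employer
# in a given month, divided by total members in that country.
# Both numbers below are invented for illustration.
new_employer_adds = 120_000
total_members_uk = 35_000_000

hiring_rate = new_employer_adds / total_members_uk
print(f"{hiring_rate:.4%}")  # 0.3429%
```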
The named researchers
Karin Kimbrough – Chief Economist at LinkedIn since 2020; previously Head of Macroeconomic Policy at Goldman Sachs and the Federal Reserve Bank of New York. She's the public voice for the Economic Graph's findings and is regularly cited by the FT, WSJ, and Bloomberg on labour-market questions. Her author page on the Economic Graph is the cleanest single feed of LinkedIn's primary research output. Her recent work focuses specifically on the JD-to-outcomes question – i.e. how skills-based hiring (versus credential-based) actually changes who gets hired and how they perform.
Aneesh Raman – Chief Economic Opportunity Officer at LinkedIn since 2024 (newly created role). Co-author with CEO Ryan Roslansky of Open to Work: How to Get Ahead in the Age of AI (HarperCollins, March 2026). Less of a researcher than Kimbrough, more of a strategic voice on the recruiting process itself – his public position is that "the labour market is perhaps the least transparent, least dynamic, least inclusive market that humans have ever created" and that AI plus skills-based hiring is what fixes it. Worth following because LinkedIn's product strategy follows his framing.
Kory Kantenga, PhD – Senior Economist at LinkedIn, focused on equity and labour-market access. Less senior than Kimbrough but produces some of the most rigorous breakdown work – for example, on how remote/flexible-work policies change the demographic mix of who gets hired.
What LinkedIn data has actually proven about hiring strategy
A few empirically grounded findings from LinkedIn’s own research that are now widely cited (and that hold up under scrutiny):
- Skills-based hiring expands talent pools by orders of magnitude. LinkedIn's own skills-first analysis shows the talent pool expands ~10× when employers filter on skills rather than degrees. This isn't marketing – the underlying member-skill data is the basis.
- Companies whose recruiters use AI-Assisted Messaging are 9% more likely to make a quality hire (Future of Recruiting 2025). Notable because LinkedIn published the negative finding that adoption alone doesn't drive quality – it's the top-quartile users who get the lift.
- 26% of paid LinkedIn job postings in 2024 didn’t require a degree (up from 15% pre-pandemic). The clearest quantitative evidence of the skills-first shift actually happening at the JD level.
- Recruiter triangulation – the 87–92% figure I cited earlier in the CV advice – comes from LinkedIn's own Future of Recruiting survey work.
The honest caveats
LinkedIn’s data has known biases and you should keep them in mind:
- Selection bias: it’s based on people who use LinkedIn. White-collar, English-speaking, professional roles are heavily over-represented; trades, hourly work, public sector, and roles in countries where LinkedIn has thin penetration are under-represented.
- Self-reporting: skills, titles, and tenure are member-declared. The Economic Graph team has done good work on de-duplication and normalisation, but the underlying data is what people put on their profiles.
- Marketing-research overlap: the Future of Recruiting report is genuine research but is also a sales asset for LinkedIn Recruiter and LinkedIn Talent Insights. The findings tend to flatter use of LinkedIn products. Read it alongside Josh Bersin's company research and Aptitude Research for triangulation – between them, the three describe the same market with different incentives, and the consensus between them is what's solid.
- It’s data, not science: this is descriptive (“what’s happening on the platform”) not predictive (“which selection methods produce better employees”). For the latter you still want Schmidt-Hunter, Cascio, Bohnet, Chamorro-Premuzic. LinkedIn tells you what the market is doing; the academics tell you what it should do.
The right way to use LinkedIn's data is as the largest available real-time empirical baseline on hiring practice – and to combine it with the academic validity-of-selection literature to figure out which practices are actually worth adopting.
