The (In)Effectiveness of Psychological Targeting: A Meta‐Analytic Review

Summary

This paper is an end-to-end meta-analytic review assessing whether psychological targeting — inferring Big Five traits from digital footprints and using those in personality‑tailored messaging — actually moves the needle. The authors apply machine-learning evaluation standards (including leakage checks), develop a design taxonomy that isolates true personalization effects, and combine prediction and targeting evidence in a Bayesian hierarchical framework to produce deployment‑relevant benchmarks.

Key empirical takeaways: when studies with data leakage are excluded, trait prediction models achieve a pooled out‑of‑sample correlation of roughly r ≈ 0.23. Most crucially, only the trait × treatment interaction design can identify a genuine targeting increment; restricting to such unbiased designs yields a pooled targeting effect of about r ≈ 0.009 (95% CI includes zero). After attenuation by realistic prediction accuracy, the end‑to‑end impact is essentially negligible (≈ r = 0.002).
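The attenuation arithmetic is easy to check, assuming the end-to-end figure comes from simple multiplicative attenuation of correlations (an assumption on our part; the paper itself uses a Bayesian hierarchical framework that may weight things differently):

```python
# Hedged sketch: if the deployed effect is roughly the product of the
# targeting effect and the trait-prediction accuracy, the reported
# numbers line up.
r_prediction = 0.23    # pooled leakage-controlled prediction accuracy
r_targeting = 0.009    # pooled targeting effect from unbiased designs

r_end_to_end = r_targeting * r_prediction
print(round(r_end_to_end, 3))  # → 0.002
```

In words: even a real (if tiny) targeting effect shrinks further once you route it through imperfect trait predictions, which is why the deployment-relevant benchmark is so close to zero.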

The review finds pervasive methodological problems (data leakage, underpowered models, and many ineligible experimental designs) and concludes that, under current practices and data conditions, psychological microtargeting is a high‑variance, low‑yield strategy. The authors recommend prioritising ad quality, rigorous interaction designs, and stronger reporting standards.

Key Points

  • Data leakage is common: ~42.5% of studies showed clear leakage; only ~40% could be confirmed leakage‑free.
  • Leakage greatly inflates reported prediction performance: unvetted studies pooled r ≈ 0.35 vs leakage‑controlled r ≈ 0.23.
  • Only one study design (trait × treatment interaction with randomised message assignment and measured traits) can identify the true personalization effect without confounding.
  • When restricting to unbiased designs, the pooled psychological targeting effect is near zero (r ≈ 0.009; 95% CI spans zero).
  • End‑to‑end effect after accounting for realistic prediction error is effectively nil (attenuated r ≈ 0.002).
  • Practical implication: improve creative/ad main effects and measurement reliability before betting on psychometric microtargeting.
  • Research guidance: adopt leakage checklists, pre‑register interaction tests, ensure adequate observation‑to‑feature ratios, and report evaluation splits and uncertainty more clearly.
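One concrete leakage guard the guidance above points at is splitting data at the user level, so no person's digital footprints appear in both training and test sets. A minimal stdlib-only sketch (the `user_id` field and `user_level_split` helper are illustrative names, not from the paper):

```python
import random


def user_level_split(records, test_frac=0.2, seed=0):
    """Split records so all rows from one user land in the same
    partition -- a common guard against user-level data leakage."""
    users = sorted({r["user_id"] for r in records})
    rng = random.Random(seed)
    rng.shuffle(users)
    n_test = max(1, int(len(users) * test_frac))
    test_users = set(users[:n_test])
    train = [r for r in records if r["user_id"] not in test_users]
    test = [r for r in records if r["user_id"] in test_users]
    return train, test


# Toy example: three posts from two users
rows = [{"user_id": u, "text": t}
        for u, t in [("a", "post1"), ("a", "post2"), ("b", "post3")]]
train, test = user_level_split(rows, test_frac=0.5)

# No user appears on both sides of the split
assert not ({r["user_id"] for r in train} & {r["user_id"] for r in test})
```

A row-level random split would happily put "post1" in training and "post2" in testing, letting the model memorize user "a" and inflating reported accuracy, which is exactly the kind of leakage the review flags.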

Author take

Punchy: This paper should be required reading for marketers, platform teams, and researchers who assumed psychometric targeting was a magic bullet. It does the hard yards: cleans the evaluation, isolates identification, and shows how tiny — or non‑existent — the real effect is once you control for method. If you care about evidence, read the methods section.

Why should I read this

Short version: if you’ve heard that ads can be magically tuned to your “inner self” using social data, this paper pretty much busts that myth — at least for now. It’s a rigorous, no‑nonsense audit showing prediction is modest and true targeting gains vanish under proper tests. Read it if you make decisions about ad spend, audience modelling or policy on data‑driven persuasion — it’ll save you time and budget by steering you away from hype and toward better creatives and experimental design.

Source

Source: https://onlinelibrary.wiley.com/doi/10.1002/mar.70073?af=R
