Advancing Systematic Literature Reviews With Artificial Intelligence: A TCCM‐Based Synthesis of Over 50 Years of Double Jeopardy Research

Summary

This review demonstrates a practical, reproducible method for improving framework-based systematic literature reviews (SLRs) by combining artificial intelligence (AI) with a human-in-the-loop (HITL) workflow. Using the Theory–Context–Characteristics–Methodology (TCCM) framework to classify and validate knowledge, the authors benchmark several AI platforms for automated coding and report high inter-coder reliability, ranging from 96% for ChatGPT-4o down to 83% for Gemini Advanced. The top-performing AI (Cohen's κ = 0.95) handled the initial coding; discrepancies were then resolved through HITL review. To illustrate the method, the paper synthesises 179 peer-reviewed publications spanning 50 years of research on the Double Jeopardy empirical regularity in marketing. The authors argue that the AI+HITL approach (termed "smart bibliometrics") is faster, cheaper and more scalable than traditional software-only workflows while still meeting standards for reproducibility and transparency.
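For readers unfamiliar with the reliability statistic the paper reports, Cohen's kappa corrects raw agreement for the agreement two coders would reach by chance. The sketch below is not the authors' pipeline; it is a minimal, self-contained illustration of the metric, using hypothetical TCCM-style labels from an AI coder and a human reviewer.

```python
from collections import Counter

def cohen_kappa(coder_a, coder_b):
    """Cohen's kappa: chance-corrected agreement between two coders."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed agreement: fraction of items both coders labelled identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement, from each coder's label marginals.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    labels = set(coder_a) | set(coder_b)
    p_e = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes for six articles (one coder disagrees on article 5).
ai    = ["Theory", "Context", "Theory", "Method", "Context", "Theory"]
human = ["Theory", "Context", "Theory", "Method", "Theory",  "Theory"]
print(round(cohen_kappa(ai, human), 2))  # → 0.71
```

Note how five agreements out of six (83% raw agreement) yield κ = 0.71 once chance is discounted; the paper's κ = 0.95 alongside 96% raw agreement reflects the same distinction between the two figures.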

Key Points

  • The study introduces an AI + human-in-the-loop protocol for framework-based SLRs using the TCCM classification.
  • AI coding dramatically reduces manual effort; inter-coder reliability ranged from 96% (ChatGPT-4o) to 83% (Gemini Advanced).
  • The top AI model achieved Cohen’s Kappa of 0.95 and was used for initial coding with HITL adjudication afterwards.
  • Method demonstrated on a synthesis of 179 articles across 50 years of Double Jeopardy research, producing a transparent, replicable knowledge map.
  • Authors position the method as superior to software-only bibliometrics and suitable for scalable, high-quality literature synthesis in marketing and beyond.

Context and Relevance

Systematic reviews are central to cumulative knowledge, but they are time-consuming and prone to subjective variation in coding. This paper sits at the intersection of two fast-moving trends: the adoption of generative AI tools for research workflows and the demand for more reproducible, transparent review methods. By applying the TCCM framework, the authors both preserve theoretical rigour and show how AI can speed up labour-intensive stages without sacrificing reliability. The approach will be of interest to academics, research teams, and organisations that run regular evidence syntheses, especially those working in marketing, consumer studies and applied social sciences.

Why should I read this?

Short version: if you run literature reviews and hate spending months on manual coding, this paper shows a working shortcut that doesn’t wreck quality. It gives concrete numbers on AI agreement rates, a tidy HITL workflow to fix errors, and a big example (179 papers over 50 years) so you can see how it scales. Worth a quick read if you want to shave time and cost off reviews without giving up replicability.

Author style

Punchy and practical: the authors don’t just theorise — they benchmark tools, report reliability stats, and walk through a real synthesis. If you value rigorous, repeatable reviews, the paper makes a strong case that this hybrid AI+HITL approach is essential reading.

Source

Source: https://onlinelibrary.wiley.com/doi/10.1111/ijcs.70126?af=R
