
Data Science 1 & 2 - Challenging the CRAAP test

  • Writer: Roger Kennett
  • Feb 13
  • 6 min read

Teaching Data Science in NSW science classrooms

Equipping our students to evaluate competing claims to truth has never been more important, and the NSW science modules Data Science 1 & 2 provide a wonderful opportunity for this.

The CRAAP test is a golden oldie still used in many schools and has probably found its way into some Data Science teaching programs. I argue that in this post-AI age, and especially for science education, CRAAP is no longer fit for purpose.


Critiques of CRAAP

  1. Privileges traditional authority:

    • Relying on institutional authority risks accepting claims simply because of who makes them – an appeal to authority (Jaeger-McEnroe, 2025).

  2. Encourages checklist thinking:

    • Students often use it mechanically without investigating beyond the source, making it inadequate for modern internet fact‑checking (Tardiff, 2022).

  3. Outdated for today’s information landscape:

    • It was designed before "the algorithm" dominated people's information feeds and filter bubbles (Bull et al., 2021).

  4. Lacks metacognition:

    • It does not prompt students to reflect on their own biases or worldview, leading to shallow evaluation (Tardiff, 2022).


CCOW improves on CRAAP by pushing students to investigate Credentials, Claims, and Objectives, and to reflect on Worldview. This encourages a deeper level of metacognition (helping students to think about their thinking), which is valuable for bias‑aware inquiry and aligns with NSW expectations that students critically engage with the social dimensions of science.


Yet even CCOW is not optimal for the kind of data and claim evaluation practised by scientists. It does not explicitly require students to run robustness checks on competing models, seek independent replication, robustly challenge their confirmation biases, or design falsification tests. These are core skills in the new Data Science modules and essential for genuine scientific analysis and for judging between competing accounts of reality (especially causality).


As we write new-syllabus programs (Data Science 1 & 2) to equip our students with scientific Data Science skills, I suggest a better place to start might be Carl Sagan's nine crap-detecting criteria.


Carl Sagan's 9 tests for Baloney

  1. Confirm independently

    • A trustworthy claim should be confirmed by others using the same or similar methods.

    • If no one else can reproduce it, treat it as unproven.

  2. Hypotheses - spin more than one

    • Before settling on an explanation, brainstorm ALL plausible hypotheses you can possibly think of.

    • A great place to emphasise what a deeply creative activity science is. Imagining multiple versions of the invisible realities driving what we can observe is a pinnacle of human creativity.

    • Comparing multiple hypotheses stops you from locking onto the first idea that seems to fit.

    • Monitor and challenge the word "theory" being used for "hypothesis" – conflating the two reinforces a common misconception about what a scientific theory is.

  3. Authority - sure, but evidence matters more

    • Expertise is important, but even experts make mistakes.

    • Sound evidence should always matter more than who is making the claim.

  4. Logic - check every step

    • A strong argument works from start to finish without gaps or leaps.

    • Rattle each and every premise in the argument chain – hard!

    • Look closely for hidden assumptions or reasoning steps that don’t follow.

  5. Least assumptions - use Occam’s Razor

    • Prefer the explanation that accounts for the evidence with the fewest added assumptions.

    • Simple doesn’t mean simplistic—it means avoiding unnecessary extras.

  6. Evidence - quantify it where you can

    • Measurable data lets you test and compare hypotheses clearly. Sure, it is true that clouds reflect sunlight and that the Earth is exiting an ice age, but evaluating claims that these refute human-induced climate change science requires quantification!

    • If something can’t be measured at all, it becomes harder to evaluate scientifically.

  7. Negate your favourite hypothesis

    • The hypothesis you prefer is the one you must test the hardest—your own bias is the biggest threat to good reasoning.

    • Feynman reminded scientists that we tend to fool ourselves first, so put extra effort into trying to break the explanation you like most. Confirmation bias is hard-baked into ALL of us, so we need to work extra-hard to overcome it!

  8. Genuine experts debating

    • Reliable ideas survive open discussion among people who understand the evidence – people with expertise. Students would be sceptical of netball coaching advice from someone who has never played netball competitively; it should be the same with claims about cancer, or vaccines...

    • If a claim avoids critique or shuts down questions, that’s a red flag.

  9. Experiment possible? Ask whether the claim is testable

    • A scientific claim must make predictions that could, in principle, be proven wrong.

    • If no possible test can challenge it, the claim isn’t operating within scientific thinking.

    • A great place to bring in Popper's principle of falsification – have students categorise a set of truth-claims as falsifiable or non-falsifiable (horoscopes work nicely here).



Wow, that's a LOT to remember! So I have adjusted Sagan's nine so that they spell

C-H-A-L-L-E-N-G-E.

In reality, I am not sure I could remember what each letter stands for in a week, and I'm concerned about reducing it to a mindless check-box activity anyway, so perhaps the acronym is more to help us recall that there IS a better alternative to CRAAP and CCOW. If it helps our students (and teachers) recall that there IS an approach to testing claims based on the accumulated wisdom of science, at least it turns an unknown-unknown into a known-unknown.


Helping Students Think Like Scientists, Not Box‑Tickers

What makes this approach more helpful than CRAAP or even CCOW is that it reflects how science actually works, not just how we check sources. Instead of asking students to rely on authority cues or tick through a list, it guides them to use the same habits scientists rely on: comparing explanations, checking the strength of the evidence, looking for assumptions, and asking whether a claim could be tested. This connects directly to Popper's (1959) idea that scientific ideas should be open to being proven wrong, and that this is what makes them strong.

It also fits with the broader culture of science described by Merton (1973), where claims are judged by evidence rather than status, where scrutiny from others is encouraged, and where open discussion helps ideas improve. Seen this way, Sagan's criteria give students a supportive structure for evaluating claims while also building their confidence in thinking like scientists. As Helen Longino (1990) reminds us, scientific objectivity grows out of open, constructive critique within diverse communities, which means our role as teachers is not to provide every answer but to help students engage in the kinds of shared reasoning that make scientific knowledge robust.


Reliability – should we use "trustworthiness" instead?

In science, "reliability" has a specific meaning, and I wonder how much we unintentionally confuse students by using this word in the context of evaluating claims and sources. I posit that "trustworthiness" better conveys the goal in this context, and has the valuable extra benefit of reducing confusion when we later ask students to evaluate the reliability of a method. Connect a voltmeter in series and you will gather highly reliable data, but your conclusions will not be valid and your claims, therefore, untrustworthy. Yes, I know the semantics of "reliability" depend on context, but science is difficult enough for students; maybe removing one extra hurdle is something we can consider.
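For a Data Science 1 & 2 activity, the voltmeter point can even be demonstrated with a few lines of simulation. The sketch below is my own hypothetical illustration (the instrument names and numbers are invented): a mis-connected meter produces tightly clustered readings (highly reliable) that are systematically wrong (invalid, and therefore untrustworthy), while a correctly connected but noisy meter scatters around the true value.

```python
import random

def simulate_readings(true_value, bias, noise, n=1000, seed=42):
    """Simulate n instrument readings: a systematic bias (validity problem)
    plus random Gaussian noise (reliability problem)."""
    rng = random.Random(seed)
    return [true_value + bias + rng.gauss(0, noise) for _ in range(n)]

def spread(readings):
    """Standard deviation of the readings: small spread = reliable (consistent)."""
    mean = sum(readings) / len(readings)
    return (sum((r - mean) ** 2 for r in readings) / len(readings)) ** 0.5

true_voltage = 1.5  # the quantity we are actually trying to measure (hypothetical)

# Mis-connected meter: very consistent readings, but systematically wrong.
biased = simulate_readings(true_voltage, bias=0.8, noise=0.01)

# Correctly connected but noisy meter: scattered readings centred on the truth.
noisy = simulate_readings(true_voltage, bias=0.0, noise=0.30)

print(f"biased meter: mean={sum(biased)/len(biased):.2f}, spread={spread(biased):.2f}")
print(f"noisy meter:  mean={sum(noisy)/len(noisy):.2f}, spread={spread(noisy):.2f}")
```

Students can compare the two printouts: the biased meter wins on spread (reliability) but loses badly on accuracy, which is exactly why "reliable" and "trustworthy" should not be treated as synonyms.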

Food for thought... – or for evaluation using whatever acronym you like 😀



References

Bull, A. C., MacMillan, M., & Head, A. J. (2021). Dismantling the evaluation framework. In the Library with the Lead Pipe. https://www.inthelibrarywiththeleadpipe.org/2021/dismantling-evaluation/

Feynman, R. P. (n.d.). Quotes. The Official Site of Richard Feynman. https://richardfeynman.com/about/quote.html

Jaeger-McEnroe, E. (2025). Rethinking authority and bias: Modifying the CRAAP test to promote critical thinking about marginalized information. College & Research Libraries News. https://crln.acrl.org/index.php/crlnews/article/view/26634/34553

Longino, H. E. (1990). Science as social knowledge: Values and objectivity in scientific inquiry. Princeton University Press.

Merton, R. K. (1973). The normative structure of science. In The sociology of science: Theoretical and empirical investigations (pp. 267–278). University of Chicago Press. (Original work published 1942)

Popper, K. (1959). The logic of scientific discovery. Hutchinson.

Sagan, C. (1995). The fine art of baloney detection. In The demon‑haunted world: Science as a candle in the dark. Random House.

Siegel, E. (2026, February 10). Carl Sagan's 9 timeless lessons for detecting baloney. Big Think.

Tardiff, A. B. (2022). Have a CCOW: A CRAAP alternative for the internet age. Journal of Information Literacy, 16(1), 119–130. https://files.eric.ed.gov/fulltext/EJ1347324.pdf
