An Actual Neuroscientist's Guide for Adults Who Can't Science Good

And Who Wanna Learn To Do Other Stuff Good Too, Part 2.

Gidday Cynics,

First, a warm welcome to the new readers who’ve signed up after reading my Webworm guest post “An Insult To Life Itself” on AI. It was… interesting to write. AI is complicated and confusing, but I think it’s best viewed from a few steps back, where it becomes clear that it’s mostly just gas on our cultural garbage fire.


If you’ve read that piece, or my previous Cynic’s Guide piece “A Scientist's Guide To Self-Improvement Science (For Non-Scientists)” you’ll be familiar with Dr Lee Reid. He’s helping me out with a problem I’ve been perplexed by since I started this newsletter: how can normal people tell good advice from bad, or good science from suss?

The last newsletter was a really deep and quite dense dive into stuff like the philosophy of science, but this one is all practical. Here’s how you — whether you’re a layperson with a casual interest in scientific topics, a die-hard gym-bunny, a dedicated psychonaut, a journalist, or just an easily-distracted dilettante like me — can apply some of the tools scientists use to the big claims we’re so used to seeing all over news and social media.

"Galaxy Brain" - an image of a computer-generated person with a bright blue brain emitting rays of light. The person is probably dead.
If your brain looks like this, see a doctor urgently.

Dr Lee “Actual Neuroscientist” Reid’s Guide for Adults Who Can't Science Good And Who Wanna Learn To Do Other Stuff Good Too

Books

Books are not where reputable new science is published. If a book appears to make new claims, or new leaps in understanding of something, leave it on the shelf. If a book aims to make published science understandable, this might be for you... but see if other scientists who work in that area stand by it. What do the quotes say on the back cover? Some examples:

Toss it:

"This book revolutionizes our understanding of..."

"Dr X provides creative insights into..."

"... digs into X to reveal..."

Consider it:

"Does a great job of summarizing..."

"... clear writing style provides an accessible overview"

"... cuts through the jargon with straightforward..."

Peer Reviewed Journal Articles

All reputable new science is published in these. Non-reputable science is as well. These are split into review articles and original findings.

Go straight for the review articles. The author has done the reading for you. Google Scholar and PubMed (health only) are the best places to search.

Find the primary (first-listed) author's bio on Google Scholar. Ask yourself: before this article, did they publish many things on this topic that have citations? If so, it's likely to be a high-quality review. If not, double-check that the bio of the most senior (last-listed) author looks OK.

What's the journal? Journals get ranked. Generally, the better-ranked the journal, the more fierce the peer review. For most niche topics there are fewer than 10 top journals, but hundreds of journals available to publish in. If it's not a Q1 (top 25%) journal for this topic, then abort. You can find Q1 lists online.

Skim read. If it's covering what you want to know, read it again more carefully. If it doesn't have enough depth, take note of some of its citations and look at them.

If there are not enough publications in a new area for a review, this probably means there's not enough evidence to base a financial or life decision on. If you want to move ahead anyway, dig into the original research. Reading too much of this in a day can melt your brain, so getting through it is all about efficiency. There are plenty of guides for this, but most are for new graduate students. Have a read through a guide like that, taking special note of the order to read the article's contents in. As you probably don't have much academic background in the topic, here's some added advice:

  • You're going to need to Google jargon as you go and note down what words mean. That's normal. Don't get too in-depth as some things take a long time to grasp.
  • Recall that articles are broken up into Abstract (a summary), Introduction (background information), Methods, Results (results without interpretation), and Discussion (interpretation of results).
  • Before tackling these, try to first find an "accessible abstract" or "plain language summary" on the article website. Famous articles also sometimes have a commentary that sums them up well.
  • If this is one of the first few articles you’ve read, DO read the introduction. Most articles will provide a mini literature review to get you started.
  • You're not likely to understand the methods section or even much of the results - skim read them at best.

Before trusting what you read, make sure the results have been replicated multiple times by multiple groups. Anything short of that is interesting but frankly inconclusive. Most importantly, look for red flags:

LinkedIn never fails to disappoint. Posts that look like this probably count as big red flags.

Big Red Flags:

  • Authors:

    • Work in industry (check for disclosures), politically-interested institutions, or a non-reputable institution.

    • Are from a non-scientific field like Law or Economics.1

  • Methodological issues:

    • No statistics, or not mentioning the statistics.

  • Misrepresentation

    • Any limitation that seems clear to you as a layperson, and yet is not discussed.

    • The sample size is small - say, 1-10 people - and they make a strong conclusion or advice-like suggestions to the general population.2

    • The study doesn't mention other papers that you know contradict this study.

    • Cherry-picking their own results by only discussing those that support the conclusion.

  • Reputation

    • Not a Q1 journal

    • The article is 5+ years old and it has only been cited 2 - 3 times. It's likely other scientists have simply ignored it. (Note that a high citation count can mean the article is important or it's controversial.)

    • Being rubbished in the media by multiple scientists.

Borderline Red Flags:

  • Authors:

    • Are sponsored by industry.3

    • Are all from a mismatched scientific department, like the Psychology Department when the topic is Cellular Biology.

    • Are fronting a study on thousands of people that does not have an epidemiologist, public health expert, or statistician as the first or second listed author.

    • All lack PhDs. This includes all-MD publications. MDs are very skilled but rarely have equivalent scientific/analysis experience.

  • Methodological issues:

    • Lots of statistical values (e.g. > 10 p-values) when the sample size is not in the thousands.

    • The work relies entirely on the honesty and good memory of people via surveys.4

    • Populations studied do not match the population being compared to. A study on the mental health of Orkney Islanders, or hormones of lobsters (yeah, that's a dig), is unlikely to have much relation to people living a bustling lifestyle in New York.

  • Weak Peer Review:

    • Publishing occurred very quickly after submission5

    • Methods sections seem too short for another scientist to assess the work.

    • Any discussion using words like "groundbreaking". This is rarely true and suggests peer review was weak.

    • Any result that just sounds off, and the authors don't discuss it as such.
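The ">10 p-values" flag comes from the multiple-comparisons problem: run enough tests at the usual p < 0.05 threshold and something will come up "significant" by pure chance. A rough back-of-the-envelope sketch (with made-up, idealized numbers; real studies' tests are rarely fully independent):

```python
# Chance of at least one false positive when running several
# independent tests, each at the usual p < 0.05 threshold.
# Idealized: assumes the tests are independent of one another.
alpha = 0.05    # significance threshold per test
n_tests = 10    # number of comparisons reported

p_false_positive = 1 - (1 - alpha) ** n_tests
print(f"{p_false_positive:.0%}")  # roughly a 40% chance
```

So a paper reporting ten or more p-values on a small sample has a decent chance of at least one spurious "significant" result, unless the authors corrected for it.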

Also, before changing your life based on what you read, there are some real scientific language and statistical gotchas that trip people up:

  • "Significant" means reliable, not "big amount". Things need to be significant and represent a big change or difference to matter.

    • i.e. If someone says a new pillow design results in "significantly more sleep," read that as "reliably more sleep", then ask "how much more?"

    • If someone says their new pillow design gives an extra hour more sleep per night, but this is not significant, take that as meaning that there's no good evidence you'll get that extra hour of sleep.

  • When people talk about risk or odds, look up the exact term they use. A 10% increase in risk can sometimes mean your chance increases by one-in-ten, and sometimes means something else.6

  • Scientific graphs can be more complicated than what is taught in school. Instead of looking at the graph, base your understanding on the text description of results, unless you feel you really understand every squiggle, dot, and bar on that chart.
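Josh's bee example in footnote 6 can be worked through with a few lines of arithmetic. A sketch using those made-up numbers (a baseline risk of 0.007% and a claimed relative increase of 10%):

```python
# Relative vs absolute risk, using the made-up numbers from
# footnote 6: a 0.007% baseline risk and a headline
# "10 percent increase in risk" (relative, not absolute).
baseline_risk = 0.00007        # 0.007% chance of the outcome
relative_increase = 0.10       # "risk increases by 10 percent"

new_risk = baseline_risk * (1 + relative_increase)
absolute_increase = new_risk - baseline_risk

print(f"new risk: {new_risk:.5%}")                    # 0.00770%
print(f"absolute increase: {absolute_increase:.5%}")  # 0.00070%
```

Here the scary-sounding "10% increase" works out to about seven extra cases per million, which is why it pays to ask whether a quoted risk figure is relative or absolute.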


There you go. You now know how, in the words of astronaut Mark Watney, to “science the shit out of this.” You’ll probably note that the methods Lee outlines are often both difficult and time-consuming. Welp, that’s science for you! It’s no wonder that a lie can race around the world when the truth not only takes several months to lace up its boots but first has to go through several cycles of intense peer review on the best ways to tie them.

Thank you for reading The Cynic's Guide To Self-Improvement. This post is free, so if you’ve found it helpful in any way, please share it.

In personal self-improvement journey news, sleep week is going well. Ish. My watch tells me I got 8 hours sleep the night before last, which is a very rare thing. The following day was unusually productive, which might be a clue to how helpful getting more sleep might be for me. Let’s see if I can do it more than once. I’m also getting a lot more exercise than before. Art is still languishing, but I have an idea on how to deal with that. I’ll talk about it next time.

Also, thanks again to the new subscribers. It’s great to have you here — feel free to introduce yourselves in the comments!

— Josh


  1. Josh note: if the author is an economist, don’t walk away. Instead, consider running. Economists are notorious for inflicting themselves on other fields that they (incorrectly) assume they have expertise in. Here is my example of what happens when an anti-vax crank (but still highly-placed!) economist tries their hand at epidemiology. It’s also a good lesson in why “peer reviewed” doesn’t necessarily mean “credible,” and how easily even prestigious journals can be hoodwinked.

  2. Josh note: Small sample sizes are a bigger problem than they might seem. To understand why — and how junk studies are boosted by a credulous media — read this astonishing account of a benevolent hoax perpetrated by a science journalist that fooled news outlets all over the world into reporting on the benefits of a “chocolate diet.”

  3. Josh note: This is a contentious topic so I’ll tread carefully, but industry sponsorship is a big part of the thinking that gifts us not-even-wrong-tier things like “health star ratings” on food, and advertising food as healthy because it’s low-fat, despite the fact it’s stuffed with sugar.

  4. Josh note: This is a big one. For a multitude of reasons, people are often dishonest in surveys, and memories can be notoriously unreliable.

  5. Josh note: Publishing too quickly is a big part of the reason why there’s so much bad COVID science floating around.

  6. Josh note: I see this one trip people up all the time, including me. Let’s make up an example: “Eating bees while pregnant increases the existing risk of birth defects by 10 percent.” Sounds terrifying, right? If that were an overall birth defect increase of 10 percent it’d be terrible. But if it’s increasing an existing risk, which might be tiny — say, 0.007 percent — by only 10 percent, then the actual impact is likely to be sweet fuck all, and you can eat all the bees you like.

    I made that example up. Please do not eat bees. They’re too spicy.