Science, Truth and the Social Sciences

The next time you see an experiment in psychology – or any controversial claim that purports to prove something – don’t concern yourself with whether you support the conclusion. Look at the data, the procedures and the possible motivations of the scientists doing the study. Assume it is wrong. Look for loose threads in the fabric of the science and pull on them. If you don’t find an error, you can ascribe a degree of confidence but still be open to other possibilities.

I’d do this with any kind of unusual “hard science” result as well. Use these principles for anything. Vaccinations, for example. Or the shape of the Earth. Or whether a gay person can be “converted”. Or whether eating large amounts of colloidal silver is good for you, or whether the planet Nibiru is about to destroy us. (Getting a look at the data set behind some of the more “speculative” assertions may be difficult.)

Anecdotes don’t count. A good scientist does not base a conclusion on such a small data set. Nor do the number of awards or degrees someone may have.

Now you are a scientist!

Hi there! I am a scientist.

Any time a physicist performs an experiment and discovers something important, the result has a grueling road ahead of it. Before being published it will be analyzed in fine detail by people looking to poke holes in it. Then the experiment will be duplicated by other physicists to determine its accuracy and correctness. If you claim to have found the Pikachu particle, it isn’t accepted as “truth” until it has been hammered on in every possible way.

Should someone come up with a peer-reviewed proof to the contrary, you are dead in the water unless that study, in turn, is refuted by the same process.

You see, in real science, a hypothesis can never be proven beyond questioning. It can be sustained by experiments that fail to disprove it. Confidence can be greatly enhanced by the accurate predictions it makes, but that doesn’t make it fact. Euclidean geometry was considered an accurate reflection of the world and was confirmed over and over, daily, for millennia – until we became able to measure with sufficient accuracy. And then Einstein upset the apple cart and blew away a lot of Newton at the same time.

Euclid. Close enough to be a useful approximation in everyday life. But still wrong.

Einstein is probably correct as verified by many many experiments trying to prove otherwise. (Mass and energy are convertible, there is a maximum speed for causality and mass-energy curves space and slows time.) Barring equally extraordinary proof to the contrary he will continue to be accepted. But he might not be completely correct, even though your GPS would be dead in the water if he weren’t damned close. Scientists have not yet found a way to unify General Relativity (the theory of astronomical space-time) with Quantum Theory (the theory of subatomic scales) – so we keep theorizing and probing ever deeper. There may be something beyond them.

But if you really want to understand the greater universe, Einstein is the winner. So far.

“Soft” sciences like psychology, sociology, economics, even moderately hard sciences like climatology, don’t work that way. The researchers are rarely disinterested. Very often these realms are deeply intertwined with public policy. Public policy is subject to the political factions involved as well as what kind of money is available from differing factions to research it.

There is also no real control set. No second copy of Earth on which to tweak the CO2 content of the air to see what happens. The best we can do is create a computer model. If the assumptions that go into different models are even slightly off, we can end up with completely different results. To be reliable we need to approach the problem with many different studies and computer models, subject them to high levels of suspicion and scrutiny and then take the most common results of those that survive the process.
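To see how sensitive a model can be to its assumptions, here is a deliberately crude sketch. This is not a real climate model – the forcing and feedback values are invented purely for illustration – but it shows how three runs that differ only slightly in one assumed parameter can diverge widely after a century.

```python
# Toy illustration only: NOT a real climate model. The forcing and
# feedback numbers below are invented to demonstrate sensitivity
# to assumptions, not to project anything real.

def project(feedback, years=100, forcing=0.04, anomaly=0.0):
    """Integrate a crude linear-feedback model one year at a time."""
    for _ in range(years):
        anomaly += forcing + feedback * anomaly
    return anomaly

# Three "models" that differ only slightly in one assumed parameter...
for fb in (0.00, 0.01, 0.02):
    print(f"feedback={fb:.2f} -> anomaly after 100 yr: {project(fb):.2f}")
```

With zero feedback the anomaly grows linearly; a feedback assumption only two hundredths higher roughly triples the century-end result. That is the kind of spread that makes single-model conclusions unreliable.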

Science that cannot be conducted via experiment is conducted by observational studies. We look at the world, take notes and reach conclusions that way. This is the easiest science for bias and observer error to leak into, and such studies are often not replicable.

Observational study that lacks a control set just isn’t as good as experimental science that has one. (As distinct from theoretical science, which is mostly logic and numbers; experiments confirm or deny theories.) I can understand that it is not always possible to have a control set, but scientists working on such problems need to accept that their science is just not as reliable as science that has one. It requires more repetitions and more confirmations, and needs to be attacked from many different directions.


In the social sciences, the models vary wildly from one researcher to the next; the model is based on whatever school of thought you happen to embrace. And say you were studying whether something increases violent behavior: you can’t legally or ethically test it directly. You can’t risk creating more violence or ruining lives. So you set up proxies that may or may not reflect what you are actually looking for. But a proxy for violent behavior isn’t necessarily the same as violence.

The next step ought to be proving causality – or at least a really tight correlation – between the proxy (children being excited after a violent video game) and the behavior (adult assault and murder). This is the step that always gets skipped.

Nobody in high office really cares whether the Pikachu particle exists. If they did, they would fund lots of research that would, amazingly, prove that it did. Their opponents would try to do the opposite. That is what passes for science when politics gets involved. We might never learn anything reliable about poor Pikachu!

What happened to my particle?

OTOH, if you are trying to determine whether violent games increase violent behavior and you have a strong personal feeling about it either way, you will unconsciously design an experiment that confirms your preexisting biases. The observations and the interpretation of the results will again be biased. And the work will likely be paid for by interest groups with the same biases.

When the government attempted to reproduce the results of trials of experimental medicines, it often couldn’t. Science that has a direct profit motive behind it tends to be a bit on the optimistic side. A scientist who is not disinterested produces results that favor an employer’s needs. And the expense of repeating an experiment discourages replication attempts by third parties.

“…since the launch of the registry in 2000, which forced researchers to preregister their methods and outcome measures, the percentage of large heart-disease clinical trials reporting significant positive results plummeted from 57% to a mere 8%”

This does not mean you stop taking your heart medication.
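A toy simulation makes the registry effect above concrete. The setup is invented (a simplified one-sided test on made-up drugs with zero true effect): about 5% of trials come out “significant” by chance alone, yet if only those hits get written up, the published record looks uniformly positive.

```python
import random
import statistics

random.seed(1)

def looks_significant(n=30):
    """One trial of a drug with ZERO true effect, judged by a crude
    one-sided test at roughly the 5% level. Purely illustrative."""
    sample = [random.gauss(0.0, 1.0) for _ in range(n)]
    se = statistics.stdev(sample) / n ** 0.5
    return statistics.mean(sample) / se > 1.645

results = [looks_significant() for _ in range(2000)]
hit_rate = sum(results) / len(results)

# Without a registry, only the "hits" get written up, so every
# published trial is positive even though no drug did anything.
published = [r for r in results if r]
print(f"significant by chance: {hit_rate:.1%}")
print(f"positive among published: {sum(published) / len(published):.0%}")
```

Preregistration forces the other ~95% of trials into the record, which is why the reported positive rate fell so sharply.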

“Imagine a study that examined whether an advertisement for a “color-blind work environment” was reassuring or threatening to African-Americans. We assumed it would make a difference if the study was conducted in, say, Birmingham, Ala., in the 1960s or Atlanta in the 2000s.”

Beware of any article that begins with “scientific study proves”. “Studies” show things are possible or even likely. They don’t prove a particular thing is fact and the lack of a control set renders them less reliable. But people hate to live in the world of greys and probabilities so a headline has to be assertive.

Science that deals in areas of political controversy is automatically suspect, regardless of whether you agree with the outcomes. It happens in many areas of the various psychological and social sciences because how to control or change human behavior is a fundamental aspect of any political doctrine. It is far more vulnerable to observer bias. It is vulnerable to bias from what the faction in control of funding is willing to fund. It is vulnerable to the preexisting beliefs of those who enter a field.

This is what happened when psychologists tried to replicate 100 previously published findings

If you want to publish you’d best have something positively proven or something surprising. That’s what journals look for, so that’s what researchers provide. Editors also like having their biases confirmed.

The more sensitive the context of the question, the less reliable that body of study is. Very sensitive issues like race and gender tend to be less replicable. That’s bad science. If three scientists doing the same thing come up with different results, you can’t have confidence in any of them. And it has to be exactly the same methodology or it is neither a confirmation nor a refutation; doing things differently just adds variables, and nothing is learned. One scientist doing something and reaching a conclusion warrants no confidence at all until it has been subjected to peer review and replicated by others.

Do not doubt that people of a particular persuasion are more likely to enter a profession that is already a comfortable fit to their belief system. (People who don’t agree go elsewhere.) This creates an echo chamber. It profoundly affects the conclusions that “experts” in the field draw.

The null hypothesis regarding testing a fertilizer.

One tests a hypothesis by trying to prove it wrong: you assume the null hypothesis – the hypothesis that what you see isn’t really caused by what your hypothesis suggests – and see whether the data are strong enough to reject it.
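The fertilizer example can be made concrete with a permutation test. The yield numbers below are invented for illustration: under the null hypothesis the “fertilized” labels mean nothing, so we shuffle them and ask how often chance alone produces a gap as large as the one observed.

```python
import random
import statistics

# Hypothetical crop yields (made-up numbers) for plots with and
# without fertilizer.
treated = [21.3, 22.1, 19.8, 23.4, 22.7, 21.9, 20.5, 22.8]
control = [19.9, 20.4, 18.7, 21.0, 19.5, 20.8, 19.2, 20.1]

observed = statistics.mean(treated) - statistics.mean(control)

# Permutation test: if the fertilizer does nothing (the null), the
# labels are arbitrary, so reshuffle them and count how often a
# random split produces a difference at least as large as observed.
random.seed(0)
pooled = treated + control
trials = 10_000
count = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:8]) - statistics.mean(pooled[8:])
    if diff >= observed:
        count += 1

p_value = count / trials
print(f"observed difference: {observed:.2f}, p ~ {p_value:.4f}")
```

A small p-value means chance alone rarely explains the gap; note that it still does not *prove* the fertilizer works, it only makes the null hypothesis hard to sustain.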

If you love a hypothesis, this is not easy to do. If the powers that be won’t fund it, it may be impossible. Simple as that. Even a hard science can be crippled or abused under the right conditions. Genetics was used to justify Hitler and most of the Western eugenics movement. Its polar opposite, Lysenkoism, may well have killed just as many in the Soviet Union by famine.

Now think about how politics has messed up the science of climatology and the debate over climate change. Hardly a respectful and objective discussion of a difference in opinion. Truth – or as close as we can get to it – cannot easily triumph in an intensely political environment.

See what I’ve done here? I’ve linked to several different articles of varying scientific merit that support my proposition. It is up to you, the reader, to see if they do and if another explanation could be proposed that fits the data.

Very few people go into psychology if they don’t believe psychology can provide answers to problems of human behavior. If a particular behavior does not have a psychological solution, it will be a very long time before anyone in the field notices. In the meantime, many untested hypotheses will be advanced to doctrine status, conclusions will be reached, and those conclusions will be implemented on real human beings.

So we have the behaviorists and the cognitive behaviorists, Freud and Jung and Adler, and a host of other schools, some of which may offer insights and few of which bother with statistics – let alone testing against the null hypothesis or using large sample sizes or control groups. (Cognitive-behavioral therapy seems to be the best of the lot in this respect.)

Geneticists are finally coming online with genetic markers for certain psychological conditions – and where there are a few there could be many more. But we must ever be on guard against slipping off into eugenics.

Psychopharmacology seems to be driven by the notion that pills are the way to fix our unhappy brain waves. They seem to have statistics to support some of their ideas but the underlying biological science is weak. Current pills are shotguns where a scalpel would be better.

All these different psychological principles remind me of the parable of the blind men and the elephant. They had never encountered an elephant before, but each was now feeling a different piece. Each had an idea of what his small piece of the elephant was like, but once separated into different doctrines they could not agree, since each thought his own experience was the totality.

Hopefully, psychology can become useful in a scientific way. It has never been so in the past, being much closer to religion.


  1. consumerbrains

    Neuroscientist here (primarily cognitive and neuroeconomics) and I agree with your premise. Do you think that part of the issue of misunderstandings in the research practices in the social sciences may be related to a lack of scientific literacy?

