Psychobabble: Bad Science on IQ

By Keir Liddle

A recent study of IQ has gained a great deal of attention in the press. Its authors make the rather grandiose claim that they have finally debunked the concept of IQ. Needless to say, they haven’t.

The paper, available here, by Adam Hampshire, Roger R. Highfield, Beth Parkin and Adrian M. Owen explores the “controversial” topic of IQ, asking “what makes one person more intellectually able than another? Can the entire distribution of human intelligence be accounted for by just one general factor?” Highfield’s mind is clearly made up, as you can see from interviews he has given, but how fair is his analysis? Is it Nobel-prize-winning research that debunks the work of hundreds of researchers, or simply oversold and overhyped science PR?

Before we continue, it seems prudent to explain what IQ is and how it came about. An intelligence quotient, or IQ, is a score derived from one of several standardized tests designed to assess intelligence. The many different kinds of IQ tests use a wide variety of methods. Some tests are visual, some are verbal, some use only abstract-reasoning problems, and some concentrate on arithmetic, spatial imagery, reading, vocabulary, memory or general knowledge. Well-known modern IQ tests include Raven’s Progressive Matrices, the Wechsler Adult Intelligence Scale, the Wechsler Intelligence Scale for Children, the Stanford-Binet, the Woodcock-Johnson Tests of Cognitive Abilities, and the Kaufman Assessment Battery for Children.

The psychologist Charles Spearman performed the first formal factor analysis of correlations between such tests in 1904. He found that a single common factor explained the positive correlations among tests, which he labeled g, alongside a large number of narrow task-specific ability factors. Today’s factor models of intelligence typically represent cognitive abilities as a three-level hierarchy: a large number of narrow factors at the bottom, a handful of broad, more general factors at the intermediate level, and, at the apex, a single factor, referred to as the g factor, which represents the variance common to all cognitive tasks.

It is important to note that the g factor explanation arose from the data: it was not a definition imposed from above or constructed by psychologists, a misunderstanding common in debates about IQ. g was discovered using factor analysis, a statistical method used to describe variability among observed, correlated variables in terms of a potentially lower number of unobserved variables called factors. Consider the (in)famous example oft quoted to show that correlation is not causation: the price of rum and the number of Presbyterian ministers. We would probably find similarly strong positive correlations between other consumable goods and occupations. Were we to put these all into a dataset and perform a factor analysis, we would likely uncover a latent variable (not in our model) that explained all these correlations. In this case the latent variable would be the economy: a buoyant economy means more people can be employed, including ministers, and more people can afford to purchase consumables like rum.
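To make the rum-and-ministers example concrete, here is a minimal sketch in Python (my own illustration, not anything from the paper; the variable names and loadings are invented, and scikit-learn’s FactorAnalysis is just one convenient implementation). It simulates a hypothetical latent “economy” driving several observed variables, then recovers a single common factor from their correlations:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(42)
n = 5000

# Hypothetical latent variable: the strength of the economy.
economy = rng.normal(size=n)

# Each observed variable "loads" on the latent economy plus its own noise;
# none of them causes any of the others.
rum_price = 0.8 * economy + rng.normal(scale=0.6, size=n)
ministers = 0.7 * economy + rng.normal(scale=0.7, size=n)
tea_sales = 0.6 * economy + rng.normal(scale=0.8, size=n)
teachers  = 0.7 * economy + rng.normal(scale=0.7, size=n)

X = np.column_stack([rum_price, ministers, tea_sales, teachers])

# All pairwise correlations come out positive...
print(np.round(np.corrcoef(X, rowvar=False), 2))

# ...and a one-factor model recovers a single common factor: the "economy".
fa = FactorAnalysis(n_components=1).fit(X)
print(np.round(fa.components_, 2))  # each variable's loading on that factor
```

Note that the factor analysis never “sees” the economy variable; it infers a single latent factor purely from the pattern of positive correlations, which is exactly how g was arrived at.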

In the case of intelligence, the g factor is our “buoyant economy”: it is a statistical factor that explains variation across a vast number of cognitive tests, suggesting there is an underlying reason that some people are better at a range of tasks than others.

The much-hyped coverage of the paper by Hampshire et al. has seemingly mistaken the g factor for proof of a single cognitive system or brain area that controls or determines someone’s intelligence. In this sense the coverage of Hampshire et al.’s paper, and indeed the paper itself, are founded on a straw man, and the hype dedicated to their findings is largely undeserved, particularly when a reading of the paper’s results shows that the three factors uncovered by the researchers are all positively correlated with each other. In a sense the paper is less a “debunking” of IQ and more an exploration of Godfrey Thomson’s 1916 ‘bonds model’, which is pretty much statistically indistinguishable from Spearman’s.
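The point about positive correlations is worth spelling out: whenever a set of factors all correlate positively, a higher-order general factor can be extracted from them. A minimal sketch, using made-up illustrative correlations rather than the values actually reported by Hampshire et al., shows the idea:

```python
import numpy as np

# Hypothetical positive correlations among three cognitive factors
# (illustrative numbers only, NOT the paper's reported values).
R = np.array([[1.0, 0.4, 0.3],
              [0.4, 1.0, 0.5],
              [0.3, 0.5, 1.0]])

# np.linalg.eigvalsh returns eigenvalues in ascending order; reverse them.
eigvals = np.linalg.eigvalsh(R)[::-1]

# Share of total variance carried by each component.
print(np.round(eigvals / eigvals.sum(), 2))  # -> [0.6  0.24 0.16]
```

One dominant component carrying around 60% of the variance is precisely the signature of a common higher-order factor, which is why three positively correlated factors are no refutation of g.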

In short, the claims that the study debunks IQ are very overstated. The study suggests that a three-factor model may offer a better explanation, but the three factors it distinguishes are also positively correlated. Charitably, the paper could have made the case that the explanatory power of the g factor is less than was previously thought, and that their three-factor model is more grounded in biological and neurological structures. But it doesn’t debunk IQ, not by a long chalk.

Claims that the paper presents the largest-ever study of IQ are similarly dubious. The authors of this paper, which looked at a cohort of over 1 million Swedish men, might wish to point out that, no matter what someone’s IQ is, most people would be able to assess that 110,000 is less than 1,000,000.

If anyone is interested in the study of IQ, I recommend this paper as a decent starting point.
