West vs Asia education rankings are misleading

by MacGregor Campbell

MacGregor Campbell is a science and technology journalist, MIT graduate, and Correspondent at New Scientist Magazine.

Western schoolchildren are routinely outperformed by their Asian peers, but worrying about it is pointless.

MATHEMATICS and science are as essential to modern economies as coal was to the industrial revolution. So when the results of international tests show Western schoolchildren lagging behind their peers in countries like Singapore and Japan, alarm bells start ringing.

The latest results to cause consternation are from a comparison of mathematical and scientific knowledge called TIMSS, or Trends in International Mathematics and Science Study. This is given every four years to 9-10-year-olds and 13-14-year-olds from more than 50 countries.

The results, released last month, show that students from the UK, US and Australia continue to perform disappointingly. In maths, for example, English, American and Australian 13-year-olds were outperformed by their peers in South Korea, Singapore, Chinese Taipei, Hong Kong, Japan and Russia. It was a similar story in science.

Cue much wailing and gnashing of teeth. US secretary of education Arne Duncan lamented that “a number of nations are out-educating us today… If we as a nation don’t turn that around, those nations will soon be outcompeting us in a knowledge-based, global economy.”

Australia’s Education Standards Institute director Kevin Donnelly said the results proved that the country’s education system had gone “pear-shaped”.

However, there are reasons to think that such worries are misplaced.

First of all, although the results are not world-beating, they are far from terrible. All three countries scored above the international average, and better than many other developed nations. For the US at least, they continue a trend of long-term improvement. In the first international mathematics survey, conducted in 1964, the US finished second from bottom.

Second, the common-sense connection between test scores and future economic success doesn’t necessarily hold up. For developed nations, there is scant evidence that TIMSS rankings correlate with measures of prosperity or future success. The same holds for a similar test, the Programme for International Student Assessment (PISA).

In 2008, Christopher Tienken, then at Rutgers University in New Jersey, compared 1995 TIMSS scores with the 2006 Growth Competitiveness Index. This index was devised by the World Economic Forum to measure a nation’s future economic health. Tienken found that for developed countries there was no statistically significant relationship (International Journal of Education Policy & Leadership, vol 3, no 4).

Tienken, now at Seton Hall University in South Orange, New Jersey, has since done a similar analysis of the 2003 PISA mathematics rankings and two measures of economic success: per-capita GDP in 2010, and the 2010-2011 Growth Competitiveness Index. The study, to be published in April, again found no statistically significant relationship.

These findings make TIMSS and PISA rankings seem irrelevant. But it could be worse than that. In many cases, high test scores correlate with economic failure.

Japanese students, for example, have always been near the top of the TIMSS rankings. You might expect those high-flying students to be driving a high-flying economy. Yet the Japanese economy stagnated throughout the 1990s and 2000s.

There may be no causal connection, but the same negative correlation is seen elsewhere.

In 2007, Keith Baker of the US Department of Education made a rough comparison between the 1964 mathematics scores and several measures of national success decades later.

Baker found negative relationships between mathematics rankings and numerous measures of prosperity and well-being: 2002 per-capita wealth, economic growth from 1992 to 2002 and the UN’s Quality of Life Index. Countries scoring well on the tests were also less democratic. Baker concluded that league tables of international success are “worthless” (Phi Delta Kappan, vol 89, p 101).

A more recent analysis of 23 countries found a significant negative relationship between 2009 PISA scores and ranking on the Global Entrepreneurship Monitor’s measure of perceived entrepreneurial capabilities. This counts the percentage of people in a country who feel confident that they could start a business.

With so many indicators showing a negative relationship, perhaps we need to reconsider how we interpret success – or failure – on international education scores. “If we believe that these tests actually tell us how well a kid or a country is doing, and then we hold people accountable for that, those people are going to focus on what’s most likely to be tested, and they’re going to cut out everything else,” says Tienken.

This is especially relevant to the UK, where the education secretary Michael Gove has justified some of his controversial reforms by referring to the country’s performance on the international educational stage.

We might instead consider that in a global economy, where the answers to almost any standard question are a few smartphone taps away, skills like creativity and initiative will be the true drivers of prosperity. Traits like these cannot be measured easily by tests. When testing consumes precious educational time, focus and money, they get squeezed out.

“Standardised tests reward the ability to find answers to pre-existing questions, but finding the question is more important,” says Yong Zhao, an education researcher at the University of Oregon in Eugene who found the negative relationship between PISA scores and entrepreneurship.

We must, of course, continue to promote the importance of mathematics and science, but fixating on international tests as a way to achieve this could prove counterproductive.
