Captain Cyborg: Computers are alive, like bats or cows

Self-harming attention-seeker Kevin Warwick has admitted to snooping on the public in a previous life. Warwick made the creepy confession on Radio 4, recalling an earlier job as a GPO engineer:

“I remember taking ten different calls and plugging them all together; one call would continue, the other nine would listen in. Then I’d patch everything back again.”

In a 30-minute interview with Michael Buerk, Warwick compared his cat-chipping operation a decade ago to Yuri Gagarin’s first space flight. They were both scientific pioneers.
Continue reading “Captain Cyborg: Computers are alive, like bats or cows”

Greatest Living Briton gets £30m for ‘web science’

As an alliance of the desperate, this one takes some beating. The Greatest Living Briton (Sir Tim Berners-Lee) has been thrown £30m of taxpayers’ money for a new institute to research “web science”.

Meanwhile the Prime Minister waxed lyrical today about the semantic web – how “data” would replace files, with machine speaking unto machine in a cybernetic paradise.

It’s really a confluence of two groups of people with a shared interest in bureaucracy.

Computer Science is no longer about creating graduates who can solve engineering challenges, but about generating work for the academics themselves. The core expertise of a CompSci department today is writing funding applications. And the Holy Grail for these paper chasers is a blank cheque for work which can be conducted without scrutiny for years to come. With its endless committees defining standards (eg, “ontologies”, “folksonomies”) that no one will ever use, the “Semantic Web” fits the bill perfectly.
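
For readers who have never met an “ontology”: the Semantic Web’s basic unit is the machine-readable triple – subject, predicate, object – and an ontology is a vocabulary for writing them. A minimal sketch using Python’s rdflib library (the data here is ours, purely illustrative, not anything the institute has published):

```python
from rdflib import Graph, Literal, Namespace

# An "ontology" in miniature: assertions encoded as machine-readable triples.
EX = Namespace("http://example.org/")

g = Graph()
g.add((EX.WebScienceInstitute, EX.fundedBy, EX.HMTreasury))
g.add((EX.WebScienceInstitute, EX.grantAmount, Literal("30000000 GBP")))

# Serialise to Turtle, one of the W3C's Semantic Web formats
print(g.serialize(format="turtle"))
```

Whether anyone outside a standards committee will ever write such triples is, of course, the question at issue.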

Of course, most web data is personal communication that happens to have been recorded. Most of the rest is spam, generated by robots, or cut-and-paste material ‘curated’ by the unemployed or poor graduates – another form of spam, really. The enterprise is doomed. But nobody’s told the political class.

Continue reading “Greatest Living Briton gets £30m for ‘web science’”

Mystic Met Office abandons long range forecasts

[Image: tea leaves]

The Met Office has confirmed it is to abandon long range weather forecasts, finally acknowledging criticism. The most recent forecasts were so inaccurate that even the BBC is considering an alternative supplier, such as AccuWeather, after 88 years of continuous service from the 1,700-strong MoD unit.

The Mystic Met predicted a barbecue summer for 2009; the third washout in a row, with the wettest July since 1914, duly followed. A mild winter was then given a high probability, only for the UK to suffer its coldest winter for 30 years. Yet Met Office staff received performance-related pay bonuses worth more than £12m over five years, it was revealed last week in response to a Parliamentary question.
Continue reading “Mystic Met Office abandons long range forecasts”

Nu Lab’s favourite boffin

New Labour’s favourite boffin has lost her job – for a very New Labour reason – and has responded with a classically New Labour riposte.

Oxford neuroscientist Susan Greenfield was made redundant from her post as Director of the Royal Institution after failing to balance the books. The full-time post itself is being abolished. In return, the Life Peer and WiReD UK magazine star is suing the science charity for sex discrimination.

Greenfield’s £22m refurbishment of the Institution’s HQ left it £3m in the red, and it had to sell property to balance the books. The refurbishment added a new café bar and restaurant at Albemarle Street.

Read more at The Register…

The BBC, Thermageddon, and a Giant Snake

[Image: a giant snake]

Listeners to BBC World Service’s Science in Action programme got a nasty surprise last week. In the midst of a discussion about a giant snake fossil, a scientist dropped this bombshell:

“The Planet has heated and cooled repeatedly throughout its history. What we’re doing is the rate at which we’re heating the planet is many orders of magnitude faster than any natural process – and is moving too fast for natural systems to respond.”

Hearing this, I did what any normal person would do: grabbed all the bags of frozen peas I could find in the ice compartment of my refrigerator, and hunkered down behind the sofa to wait for Thermageddon.

Hours passed. My life flashed before my eyes a few times, and a few times more. But then I noticed that the house was still there, and so was the neighbourhood. And so was I!

Continue reading “The BBC, Thermageddon, and a Giant Snake”

Climate Models vs. Reality: Anton Wylie

[Image: Climate models vs reality]

Climate models appear to be missing an atmospheric ingredient, a new study suggests.

December’s issue of the International Journal of Climatology from the Royal Meteorological Society contains a study of computer models used in climate forecasting. The study is by joint authors Douglass, Christy, Pearson, and Singer – of whom only the third is not entitled to the prefix Professor.

Their topic is the discrepancy between troposphere observations from 1979 to 2004 and what computer models have to say about temperature trends over the same period. They focus on tropical latitudes between 30 degrees north and south (mostly 20 degrees N and S) because, they write, “much of the Earth’s global mean temperature variability originates in the tropics”; even so, the authors crunched through an unprecedented amount of historical and computational data in making their comparison.

For observational data they make use of ten different data sets, including ground and atmospheric readings at different heights.

On the modelling side, they use the 22 computer models which participated in the IPCC-sponsored Program for Climate Model Diagnosis and Intercomparison. Some models were run several times, producing a total of 67 realisations of temperature trends. The IPCC is the United Nations’ Intergovernmental Panel on Climate Change, which published its Fourth Assessment Report [PDF, 7.8MB] earlier this year. Its model comparison program uses a common set of forcing factors.

Notable in the paper is its generosity when calculating the statistical uncertainty of the model data. In aggregating the models, the uncertainty is derived by plugging the number 22 into the maths, rather than 67. Using 67 would confine the margin of error closer to the average trend – with the implication of making it harder to reconcile any discrepancy with the observations. In addition, when they plot and compare the observational and computed data, they also double this error interval.
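
The paper’s exact statistics aren’t reproduced here, but the intuition is simple: the standard error of an ensemble mean typically scales as 1/√N, so a larger N claims a tighter uncertainty band. A minimal sketch of that scaling (the 0.1°C/decade spread is our assumption for illustration, not a figure from the paper):

```python
import math

sigma = 0.1  # assumed spread of modelled trends, in C/decade (illustrative only)

# Standard error of the mean scales as 1/sqrt(N): counting 67 realisations
# instead of 22 models would almost halve the claimed uncertainty band.
for n in (22, 67):
    sem = sigma / math.sqrt(n)
    print(f"N={n}: standard error = {sem:.4f} C/decade")
```

Run as-is, this prints roughly 0.021 for N=22 against 0.012 for N=67 – which is why choosing 22 is the generous option for the models.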

So to the burning question: on their analysis, does the uncertainty in the observations overlap with the results of the models? If yes, then the models are supported by the observations of the last 30 years, and they could be useful predictors of future temperature and climate trends.

…Read more at The Register.

With Horizon, the BBC abandons science

creepy

BBC TV’s venerable science flagship, Horizon, has had a rough ride as it tries to gain a new audience. It’s been accused of “dumbing down”. That’s nothing new – it’s a criticism often levelled at it during its 42-year life.

But instead of re-examining its approach, the series’ producers have taken the bold step of abandoning science altogether. This week’s film, “Human v2.0”, could have been made for the Bravo Channel by the Church of Scientology. The subject at hand – augmenting the brain with machinery – was potentially promising, and the underlying question – “what makes a human?” – is as fascinating as ever. Nor is the field short of distinguished scientists, such as Roger Penrose, or philosophers, such as Mary Midgley, who’ve made strong contributions.

Yet Horizon unearthed four cranks who believed that, thanks to computers, mankind was on the verge of transcending the physical altogether, and creating “God-like” machines.

“To those in the know,” intoned the narrator, “this moment has a name.” (We warned you it was cult-like, but it gets worse).

It’s not hard to find cranks – the BBC could just as readily have found advocates of the view that the earth rests on a ring of turtles – and in science, yesterday’s heresy often becomes today’s orthodoxy. But it gets there through a well-established, rigorous process – not through unsupported assertions, confusions, and errors a five-year-old could unpick.
Continue reading “With Horizon, the BBC abandons science”

Junk science – the oil of the new web

There’s a case to be made that James Surowiecki’s The Wisdom of Crowds is the most influential book of the decade – The Selfish Gene for the noughties. Both have something else in common: the title of each book is profoundly misleading. Crowds aren’t wise, nor can genes be selfish – any more, as one critic famously wrote, than atoms can be jealous.

Just as the young polemicist Dawkins paved the way for the social Darwinism of the Reagan and Thatcher years, Surowiecki’s discussion of futures markets and “collective intelligence” provides the flimsy premise for a spending splurge on junk technology. It’s the common thread that unites several of the disparate “Web 2.0” start-ups we wrote about yesterday, in our must-read roundup.

Both authors were the catalyst for entire schools of junk science – yet both can justifiably claim to have been misrepresented to some degree. While Surowiecki is clearly as bewitched by “collective intelligence” as Dawkins was by a gene-eyed view of evolution, he also warns that the crowd only picks winners in very specific circumstances, where the collective guesswork acts as a kind of risk hedging. If these factors aren’t present, the market falls victim to the inevitable: gaming.

But even when this appears to work, so what? Seth Finkelstein notes that in some situations, throwing darts at a dartboard produces excellent results. Citing the Wall Street Journal Dartboard Contest, he writes,

“People are fascinated by ways in which data-mining seems to represent some sort of over-mind. But sometimes there’s no deep meaning at all. Dartboards are competitive with individual money managers – but nobody talks about the ‘wisdom of darts’.”

And today, Canadian hockey fans are rejoicing in the return of Maggie the Macaque. The simian outperformed the experts in predicting the results of key games during the 2003 season. Could it be Maggie’s diet of crabs, or could it be – “The Wisdom of Monkeys”?

One need only look at the composition of the internet to understand why the “Wisdom of Crowds” will never apply: the internet isn’t representative of society, and even amongst this whiter-than-white sample, only a self-selecting few have any interest in participating in a given pseudo-market.

While The Wisdom of Crowds was self-consciously written with the purpose of restoring the public’s faith in the market after the dotcom bubble burst – it was titled after Charles Mackay’s Extraordinary Popular Delusions and the Madness of Crowds – it’s had the opposite effect.

The self-selecting nature of participation in computer networks simply amplifies groupthink. Facts that don’t fit the belief are discarded. The consequences abound, wherever you look.

The great Wikipedia experiment is already over, says Nick Carr, the inevitable result of an open editing policy.

He cites what may prove to be the 21st century’s equivalent of the 1948 newspaper headline “DEWEY DEFEATS TRUMAN”: Time magazine’s declaration that,

“everyone predicted that [Wikipedia’s] mob rule would lead to chaos. Instead it has led to what may prove to be the most powerful industrial model of the 21st century: peer production. Wikipedia is proof that it works, and Jimmy Wales is its prophet.”

Praise be!

But to buy into this world view, one must disregard all evidence to the contrary. Veteran Wikipedia administrator ‘Skippy’ of Wikitruth.info – a site strangely absent from Wikipedia’s “sum of all human knowledge” – mailed us his summary yesterday:

“Wikipedia is proof that an encyclopedia that ‘anyone can edit’ doesn’t mesh with the reality of human nature.”

The Village Voice recently delivered a harsher verdict:

“No true believer in the democratic promise of the Web can fail to gladden at the very mention of this grand experiment – the universal encyclopedia ‘anyone can edit’!—or fail to have noticed, by now, what a fucked-up little mockery of that promise it can sometimes be.”

It’s no surprise to discover that Time magazine’s puff piece was written by WiReD magazine editor Chris “Long Tail” Anderson. Three years ago, on the occasion of an Intel-sponsored edition of his publication, Anderson bet your reporter that by today Wi-Fi chipsets would outsell GSM or CDMA chipsets – he was then in the grip of the religious mania about Wi-Fi. His prediction has fallen short by around a billion units.

(If you want faith-based economic theory, Anderson’s your man.)

We’ve written about groupthink on so many occasions – particularly after the collapse of the Howard Dean presidential run – that we won’t bore you with repetition. But a golden rule of internet companies is that the more faith they place in the “new wisdom of the web”, the more inevitable their demise.

For Google, which buys into the junk science more than any other Silicon Valley company, this is very bad news indeed. The “democracy of the web” was short-lived, and the company devotes most of its brainpower not to developing new products, but to trying to rescue its search engine from “Grey Goo”. Faith-based junk science can be a real handicap.

Where does all this affect us? Wherever the bad ideas they advocate waste money and resources. For those of us who want better technology, the mini splurge of capital investment in fatuous companies is more than troubling. A dollar spent on a doomed web site is a dollar that could have been spent on solving some real, overdue infrastructural problems.

Seth Finkelstein points out an immediate consequence which is already taking place. The Wisdom of Crowds gained such traction on the net because of the net’s cultural distrust of expertise. This stops where the net stops, however – it’s hard to envisage even the most militant Wikipedia fan choosing to be operated on by an amateur heart surgeon. But it has accelerated the process of deskilling, and the new flood of cheap (but wise!) amateur labour promises to depress wages even further.

The media – and Time is a great example – espouse the rosy view that our public networks are in rude health. I’m confident this utopian view carries little weight with a public frustrated by pop-ups, viruses and spam.

So to return to our original question: if the public so wilfully buys into sloppy thinking, are the authors themselves responsible? In the case of both Dawkins and Surowiecki, who mistitled their books, they may protest too much.

People more drunk at weekends, researchers discover

[Image: a parody from 2000]

It’s open season on Wikipedia these days. The project’s culture of hatred for experts and expertise has become the subject of widespread ridicule. Nick Carr christened it “the cult of the amateur”.

But what has professional academia done for us lately? Here’s a study from the University of Amsterdam to ponder.

New Scientist reports that researchers led by Professor Maarten de Rijke at the Informatics Institute have been recording the words used by bloggers, in an attempt to find interesting or unusual patterns. What revelations did the team’s MoodViews software unearth?

The team discovered that the LiveJournal label “drunk” becomes increasingly popular each weekend. And around Valentine’s Day, “there is a spike in the numbers of bloggers who use the labels ‘loved’ or ‘flirty’, but also an increase in the number who report feeling ‘lonely’.”

It gets better.

The team also noticed that on the weekend of the publication of the most recent Harry Potter book, bloggers used “words like ‘Harry’, ‘Potter’, ‘shop’ and ‘book’,” PhD student Gilad Mishne reveals.
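
MoodViews’ internals aren’t described in the article, but the underlying technique is ordinary frequency counting: tally the labels per day and flag days that jump above the trailing average. A minimal sketch of that idea (the counts and threshold are invented for illustration):

```python
from statistics import mean, stdev

# Invented daily counts of the "drunk" label, Mon-Sun over two weeks;
# note the weekend jumps.
counts = [14, 12, 15, 13, 16, 41, 45,
          13, 15, 14, 12, 17, 44, 47]

def find_spikes(counts, window=7, threshold=1.0):
    """Flag days whose count exceeds the trailing mean by `threshold` standard deviations."""
    spikes = []
    for i in range(window, len(counts)):
        trailing = counts[i - window:i]
        mu, sigma = mean(trailing), stdev(trailing)
        if sigma and counts[i] > mu + threshold * sigma:
            spikes.append(i)
    return spikes

print(find_spikes(counts))  # flags the second weekend's days
```

Nothing about the method is exotic – which, as the rest of this piece argues, is rather the point.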

This work really should put the Nobel Prize Committee on red alert. Alongside the existing scientific prizes for Chemistry, Physics, and Physiology or Medicine, the Laureate Committee should design a new category for the “Bleeding Obvious”, or the “Dying Ridiculous”.

More seriously, let’s look at what this episode teaches us.

Two things are immediately obvious: Mishne’s study was considered worthy of academic funding, and it was considered worthy of an article in a popular science magazine.

The study doesn’t tell us anything we didn’t know before – unless you’re surprised by the revelation that people get more drunk at weekends, or that people talk about Harry Potter books more when a new Harry Potter book goes on sale. The study is really considered funding-worthy and newsworthy because of what’s unsaid: the implication that the aggregation of internet chatter will reveal some new epistemological truth.
Continue reading “People more drunk at weekends, researchers discover”

Nature journal cooked Wikipedia study

[Image: They want to believe, too]

Nature magazine has some tough questions to answer after it let its Wikipedia fetish get the better of its responsibility to report science. The Encyclopedia Britannica has published a devastating response to Nature‘s December comparison of Wikipedia and Britannica, and accuses the journal of misrepresenting its own evidence.

Where the evidence didn’t fit, says Britannica, Nature‘s news team just made it up. Britannica has called on the journal to repudiate the report.

Independent experts were sent 50 unattributed articles drawn from both Wikipedia and Britannica, and the journal claimed that Britannica turned up 123 “errors” to Wikipedia’s 162 – roughly 2.5 errors per Britannica article against 3.2 for Wikipedia.

But Nature sent only misleading fragments of some Britannica articles to the reviewers, sent extracts of the children’s version and Britannica’s “book of the year” to others, and in one case simply stitched together bits from different articles and inserted its own material, passing it off as a single Britannica entry.

Nice “Mash-Up” – but bad science.

Continue reading “Nature journal cooked Wikipedia study”