Marketing Lens

A close-up look at marketing ...

The social media lie detector

By EMMA COAKES | Published 24th Feb 2014

Researchers at the University of Sheffield have been given an EU grant to develop a tool to measure the truthfulness of statements made in social media.

According to a news release, the tool will "classify online rumours into four types: speculation, such as whether interest rates might rise; controversy, as over the MMR vaccine; misinformation, where something untrue is spread unwittingly; and disinformation, where it's done with malicious intent".

They also intend that it will be able to "automatically categorise sources to assess their authority, such as news outlets, individual journalists, experts, potential eye witnesses, members of the public or automated bots. It will also look for a history and background, to help spot where Twitter accounts have been created purely to spread false information."
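Purely as an illustration of the kind of categories the release describes, here is a minimal sketch in Python of how those rumour types and source types might be represented. The names and the toy classification rule are my own invention; nothing here comes from the Sheffield project itself.

```python
from enum import Enum
from dataclasses import dataclass

class RumourType(Enum):
    SPECULATION = "speculation"        # e.g. whether interest rates might rise
    CONTROVERSY = "controversy"        # e.g. the MMR vaccine debate
    MISINFORMATION = "misinformation"  # something untrue spread unwittingly
    DISINFORMATION = "disinformation"  # something untrue spread with malicious intent

class SourceType(Enum):
    NEWS_OUTLET = "news outlet"
    JOURNALIST = "individual journalist"
    EXPERT = "expert"
    EYEWITNESS = "potential eyewitness"
    PUBLIC = "member of the public"
    BOT = "automated bot"

@dataclass
class Post:
    text: str
    author_handle: str
    source_type: SourceType  # in practice this would itself have to be inferred

def classify_rumour(post: Post) -> RumourType:
    """Placeholder only: a real system would use trained language models,
    not keyword rules, to assign one of the four rumour types."""
    text = post.text.lower()
    if "might" in text or "could" in text:
        return RumourType.SPECULATION
    return RumourType.MISINFORMATION  # crude default, for illustration only
```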

I am not going to speculate about whether the system will be reliable; we don't yet know what the technology will look like. I could guess that it will rely on some sort of semantic algorithm, but perhaps they have more than that up their sleeve.

Google's algorithm is, as we know, fairly sophisticated and takes around 200 factors into account. I was interested to see that Sheffield's tool is going to look at history and assess what type of 'witness report' the information is based on (eyewitness, expert and so on). In other words, social media profiles are going to be given a reputation score, which is what Google has been doing with websites for a long time now.
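To make the parallel concrete, a reputation score of that kind might, in its crudest possible form, look something like the sketch below. The signals and weights are entirely hypothetical; Google, and presumably Sheffield, will use far richer models.

```python
from dataclasses import dataclass

@dataclass
class Profile:
    account_age_days: int
    followers: int
    following: int
    posts_flagged_false: int
    posts_total: int

def reputation_score(p: Profile) -> float:
    """Toy reputation score in [0, 1]: older accounts with a reasonable
    follower/following balance and few flagged posts score higher.
    The signals and weights are invented for illustration."""
    age = min(p.account_age_days / 365, 1.0)             # cap at one year
    balance = p.followers / (p.followers + p.following + 1)
    accuracy = 1 - (p.posts_flagged_false / max(p.posts_total, 1))
    return round(0.3 * age + 0.2 * balance + 0.5 * accuracy, 3)

# Example: a week-old account that mostly spreads flagged material scores poorly.
print(reputation_score(Profile(7, 12, 900, 40, 50)))
```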

One significant problem with the online world is that it is based on a self-publishing model. There are no editors to check quality before material is published. The unfortunate end result is that the vast majority of content is rubbish or, being generous, neither useful nor helpful.

Google's mission, quite rightly, is to put quality content at the top of search results, and it does this by pushing poor-quality material down.

What if the University of Sheffield succeeds in doing the same for social media? Instead of a stream of rubbish and content that is neither useful nor helpful, you would have only credible information.

Many will question whether an algorithm can do that. The most effective way of doing it is for humans to make their own judgments about the quality of a source. Google, though, can measure whether web pages are read and, even more impressively, factors such as how far down a page a reader scrolls.

The tools for users to filter their social media streams already exist: on Twitter they're called Lists, and on Google+ they're called Circles. I'm watching this story with interest, because maybe Sheffield can come up with something innovative that adds to the game. The worst-case scenario is that they produce something which replicates what Lists and Circles are already achieving, but does so less effectively.

Luckily it's only EU money which is ... hang on a minute ... it's mine!
