I blogged a few months ago about the differences between academic and real-world research. I referred indirectly to two pretty textbook-ish terms in that blog: iteration and triangulation.
It goes without saying that the technological paradigm shifts of the last 30 years have had, and continue to have, a massive impact on the way we live our lives. No part of our day-to-day existence remains untouched by the digital revolution.
A lot of the research we’ve been doing this year has touched upon cloud computing in one way or another. This made me think: what exactly is cloud computing? I, for one, struggle to define it in just a sentence or two.
Of course it depends on how you look at it, but, for me, one of the very few upsides to the current financial doom and gloom is that business decisions are being placed under ever more scrutiny. Gone are the days when companies could base their strategies on hunches and whims; gone are the days of commissioning research for research's sake.
Because I have a lot of time on my hands (joke), I've spent two and a half years of weekends studying towards an MSc in research methods. I'm hardly the first person to draw attention to the different worlds that academic and commercial researchers inhabit, but since I live in both of them, here's how I'd summarise the main differences:
A regular party piece at industry conferences for the last few years has been one or other of the major web panel providers demonstrating that their sample is consistent with equivalent samples drawn using traditional (telephone or face-to-face) methods.
1,966,514,816… far too many digits for my qualitative brain to process! One billion, nine hundred and sixty-six million, five hundred and fourteen thousand, eight hundred and sixteen: that's how many people are connected to the internet worldwide.
I came across a great website recently – Flowing Data. It pulls together examples of innovative, interesting and effective ways of visualising data, making it required reading for researchers and marketers whose job it is to neatly summarise and interpret large quantities of data.