Recently I received a coupon to use Google’s survey research capability. I would get $75 off the cost of a survey. For one question among 200 respondents, $75 off meant I would be paying $25. Who could argue with that? And I thought: time to get a new career!
Well, it’s not all that clear-cut. Google’s surveys only allow you to run one question, plus one overall screener, for each respondent. You can run multiple questions in the same project, but each question will be asked of a different set of respondents, and you can’t crosstab results – you can’t find out, for example, whether people on a given drug are more satisfied with their physician. I didn’t see the value in running more than one question if I was paying for it out of pocket, especially if I couldn’t crosstab it. For the questions I look to answer, I would have to run my own survey using more traditional methods. Frankly, it won’t cost me all that much money anyway – maybe about $5,000 in operational expenses like patient sample and programming, plus my time writing the survey and analyzing it.
Google’s sampling is based on what they call a “surveywall”: respondents answer the screener and a single survey question in order to gain access to premium content on selected websites.
Well, I tested Google, and for $25 I got my 200 respondents as promised. I had thrown a wrench in the works by screening to see if I could get ‘doctors’. Over 6,500 people were screened, and 4.4% qualified themselves as doctors. That incidence translates into about eight million physicians in the general population – several times the roughly one million physicians actually practicing in the US – so obviously a number of respondents are fibbing about their profession. This isn’t surprising, since I didn’t ask for an ME number or other verifiable identifier.
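The back-of-envelope behind that eight-million figure can be sketched in a few lines. The population size here is my own assumption (roughly 185 million US adults online), not a number from Google:

```python
# Back-of-envelope check on the screener incidence.
# ONLINE_ADULTS is an assumed figure for US adults reachable
# online -- an illustration, not a number from the survey.
ONLINE_ADULTS = 185_000_000

screened = 6_500        # respondents who saw the screener
qualified_rate = 0.044  # 4.4% said they were doctors

implied_doctors = ONLINE_ADULTS * qualified_rate
print(f"Implied physicians: {implied_doctors / 1e6:.1f} million")
# On the order of eight million -- far above the actual count of
# practicing US physicians, so many respondents are misreporting.
```

Vary the assumed population and the conclusion doesn’t change: at any plausible size, 4.4% implies several times more doctors than actually exist.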
Google uses “inferred demographics” to understand characteristics like age, gender, and income. Inference is based on IP address and DoubleClick cookie information, which together are supposed to reveal behavioral patterns from which demographics can be inferred. The inferred demographics don’t cut it for me – not a single respondent in my data makes more than $100k per year. If that’s true, then very few, if any, of the respondents are doctors – essentially all physicians earn more than $100k – meaning they are all lying about their profession. If it’s not true, then what good are the inferred demographics?
Setting aside sampling method and demographic inference, the Google consumer research format has a lot of structural limitations. For a researcher with knowledge of the field and training in research methods, it may be hard to accept them.
- There are only 12 question types to choose from when writing a Google survey. Three of them are open-ended questions. Six involve respondents assessing a picture (logo, ad…). Three are scale questions, and all of those allow only a 5-point scale. So if you like to use any other size scale… well, no luck here.
- I’ve already mentioned that respondents can be asked only one question each, which destroys any possibility of in-depth analysis. For a single-punch question, only five responses will be shown to any respondent. When your list has six responses, five of the six are randomly shown to each respondent. All six responses get covered, but each only by a partial sample.
- Multi-punch questions can only have five responses total. If there are more than five drugs in your category and you want to see all of the ones people have been on in the past, well, you just can’t. An ‘other (specify)’ option is never available, so you can’t learn what people think if it falls outside the five responses you pre-programmed: you can include ‘other’ as one of your choices, but if respondents select it, you’ll never know what it means.
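The partial-coverage effect of that 5-of-6 rotation is easy to simulate. This is a sketch of my understanding of the mechanism, not Google’s actual rotation code; the response labels and sample size are illustrative:

```python
import random

responses = ["A", "B", "C", "D", "E", "F"]  # six answer options
SHOWN = 5  # only five are displayed to any one respondent

# Simulate 200 respondents, each seeing a random 5-of-6 subset.
random.seed(0)
counts = {r: 0 for r in responses}
for _ in range(200):
    for r in random.sample(responses, SHOWN):
        counts[r] += 1

# Each option is shown to about 5/6 of respondents (~167 of 200),
# so every answer is covered, but only by a partial sample.
print(counts)
```

The practical upshot: results for each option rest on a smaller effective base than your nominal sample size, which widens the margin of error on every answer.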
I am not an enemy of Google. I was Director for Advertising Research at DoubleClick, which is now a division of Google. I am a devoted user of many Google products – their online music service, their photo storage service, Google Docs/Drive – and I own an Android phone and two Android tablets.
In a 2011 survey of corporate market researchers, Cambiar Consulting found that one in five thought Google could be the leading research company by 2020. I am not trying to bash Google here. But for now at least, Google has missed the boat on this research product. The people leading the Google research initiative are software engineers, not researchers. They got the engineering down, as usual, but they missed the behavioral science aspect of the model.
I can think of one possible use for this survey tool. Ad agencies sometimes approach me with a quick-deadline, low-budget need for a short survey to feed into a pitch to a new or existing client. This might be the perfect tool for that: for a couple hundred bucks you get your answers. However, when I ran my survey it took nearly two weeks for fielding to complete, so the quick turnaround agencies on deadline need isn’t necessarily covered.
If you’re looking for relatively quick-turnaround research that gives you insightful feedback on a problem, is designed by serious professionals, and can help impact your bottom line, there are a lot of companies out there to help you – Stone Arabia Consultants among them.