Feb 08, 2006: Surveys to Assess Search Experience

A colleague asks:

I'm looking for a list of survey questions that can be used to help assess users' search experiences. I need guidance on how to develop a formal, more quantifiable survey.

At first glance, this seems to be such a sweet, innocent question. And if you know of any good examples, please post some links below. I couldn't come up with any off the top of my head.

Beneath the surface lies some gnarly, messy stuff that information scientists have been arguing about for years. What constitutes search success? Finding stuff that answers your question, right? Yes, but not all questions are alike. At one extreme, we've got known-item searches, where there's a "right" answer to your question (e.g., "what was George W. Bush's birthplace?"); you've just got to find it. At the other extreme, we're often searching when we're not even quite sure what our question is, or how to articulate it in words (e.g., "what am I going to do this weekend?").

I've bitched about this subject before. So it might be a good time to ask long-timers out there: is this point catching on? It used to be the source of many headaches per year for me, especially when dealing with folks from the database world. But I haven't worked with them in a long while. Have you?

Comment: Peter (Feb 8, 2006)

Perhaps adjust the (lately very popular) "would you recommend this to a friend?" question like this:

"If a friend called you right now, and they were looking for the same thing you were looking for, would you recommend to them to use the search engine? (You can't tell them what words to use.)"

Followed up by a text field asking for general comments.

Something like that.
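A minimal sketch, in Python, of how responses to a question like that might be tallied; the field names and the yes/no encoding are illustrative assumptions, not anything Peter specified:

```python
# Sketch: tally a yes/no "would you recommend this search?" question
# plus a free-text comment field. Field names are illustrative assumptions.
responses = [
    {"would_recommend": True,  "comment": "Found it on the first try."},
    {"would_recommend": False, "comment": "Too many irrelevant results."},
    {"would_recommend": True,  "comment": ""},
]

recommend_rate = sum(r["would_recommend"] for r in responses) / len(responses)
comments = [r["comment"] for r in responses if r["comment"]]

print(f"Would-recommend rate: {recommend_rate:.0%}")  # 67% for this toy data
for c in comments:
    print("-", c)
```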

Comment: Jennifer Whalen (Feb 8, 2006)

I'd also be very keen to hear what others have done. We've used surveys but kept them extremely short to get the best response rate. We've asked users to rate overall satisfaction with search as well as with different elements such as relevancy and display of results (scale of 1-5). We also ask 1-2 open-ended questions in hopes of insightful commentary (e.g., what one thing would you change about how search works, or give us an example of a recent search you did and whether you found what you needed). We've supplemented this with user interviews and quantitative analysis of the search logs. For us the key is comparing survey results over time, but this also gets difficult given that expectations are ever increasing: satisfaction could stay constant even when your search is improving. Still looking for the perfect metric for search success. What have others found to be effective? Please feel free to drop me a line; I'd love to compare notes!
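A rough sketch of the over-time comparison Jennifer describes, assuming each response is stored as a (survey wave, 1-5 rating) pair; the data layout is an assumption:

```python
# Sketch: compare mean 1-5 satisfaction ratings across survey waves.
# The (wave, rating) layout and wave labels are illustrative assumptions.
from collections import defaultdict
from statistics import mean

responses = [
    ("2005-Q3", 3), ("2005-Q3", 4), ("2005-Q3", 2),
    ("2005-Q4", 4), ("2005-Q4", 4), ("2005-Q4", 3),
]

by_wave = defaultdict(list)
for wave, rating in responses:
    by_wave[wave].append(rating)

for wave in sorted(by_wave):
    ratings = by_wave[wave]
    print(f"{wave}: mean satisfaction {mean(ratings):.2f} (n={len(ratings)})")
```

Jennifer's caveat still applies to anything computed this way: a flat trend line may reflect rising expectations rather than stagnant search quality.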

Comment: Jennifer Whalen (Feb 8, 2006)

Addendum: My email address is jewhalen@deloitte.com

Comment: RTodd (Feb 8, 2006)

I suppose you could survey the search environment from three different perspectives: a survey for the producers of the content, one from the librarian's point of view, and one on the end user's experience. I assume from the note that your friend is most concerned with the end-user angle. However, I think of it as three intersecting circles: P (Producers), L (Librarians), and U (Users). The ideal would be fifth-level maturity in all three. I'd also be willing to bet that they all correlate strongly with each other, since how could you have a great user experience (U) but really crappy search metadata (L)? As far as questions for the end user, and without knowing the dependent, independent, and control variables…

1. Was the search interface intuitive?
2. Was the search request and result set easy to navigate?
3. How confident/satisfied are you with the result set?
4. Was the result set relevant to the search topic?
5. Did you find what you were looking for, and how many objects did you have to review?
6. Was the search function easy to find?
7. Was an advanced search available? Boolean? Instructions?
8. How often do you access the search feature?
9. Is there a softlink (Classification or Taxonomy) option available?
10. How much time did you spend reviewing results before giving up or entering new search phrases?

Ok, I'll stop at ten. You could add many more on Trust, Usability, and Performance. (One way to encode a list like this for tallying is sketched below.)
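A minimal sketch of pairing each question with a response type so raw answers can be summarized consistently; the response types and the shortened wordings are assumptions:

```python
# Sketch: pair each survey question with a response type so raw answers
# can be summarized consistently. Types and wordings are assumptions.
QUESTIONS = [
    ("Was the search interface intuitive?",              "yes_no"),
    ("Was the result set relevant to the search topic?", "likert_1_5"),
    ("How many objects did you review?",                 "count"),
    ("Was the search function easy to find?",            "yes_no"),
]

def summarize(answers, qtype):
    """Reduce one question's raw answers to a single summary figure."""
    if qtype == "yes_no":
        return f"{sum(answers) / len(answers):.0%} answered yes"
    return f"mean {sum(answers) / len(answers):.1f}"

for (question, qtype), answers in zip(QUESTIONS, [
    [True, True, False], [4, 3, 5], [12, 3, 7], [True, False, False],
]):
    print(f"{question} -> {summarize(answers, qtype)}")
```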

Comment: James Robertson (Feb 8, 2006)

Hi Lou, as it happens, our new "Improving Intranet Search" report comes out in the next 24-48 hours. Maybe we're cheating, but this is what we said about surveys:

Compared to the other techniques, surveys are a much less effective option. While they can be used to reach a large number of staff, it is difficult to construct questions that gain meaningful information about search behaviour.

Most problematically, staff will find it extremely difficult to recollect or describe how (or how often) they search, or the types of information they were looking for.

Without this information, anything collected during a survey is unreliable at best, or misleading at worst. For this reason, the use of surveys is not recommended as a method of understanding search behaviour.

---

In general, we would always recommend field research (interviews, contextual inquiry, workplace observation)...

Comment: Alexandra Proserpio (Feb 10, 2006)

We hold annual user interviews (tailored and open-ended questions), categorize search feedback (to help prioritize future development projects), and analyze search logs. For us, developing a formal search survey has been a challenge. We let users search up to nine repositories (e.g., internal websites, photos, A-Z Index) via a tabbed user interface (yes, like G). We can only affect the quality of results in two of the nine repositories, so user satisfaction could vary depending on which repository's results they view and/or the quality of data in a particular repository. Has anyone developed a search survey for a mixed search repository environment? Feel free to email me. Thanks!
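One possible angle on the mixed-repository problem, sketched under the assumption that each survey response can be tagged with the tab or repository the user was searching; the repository names here are illustrative:

```python
# Sketch: break satisfaction out per repository in a tabbed, multi-repository
# search. Repository names and the tagging scheme are illustrative assumptions.
from collections import defaultdict
from statistics import mean

responses = [
    {"repository": "internal_websites", "rating": 4},
    {"repository": "photos",            "rating": 2},
    {"repository": "a_z_index",         "rating": 5},
    {"repository": "internal_websites", "rating": 3},
]

by_repo = defaultdict(list)
for r in responses:
    by_repo[r["repository"]].append(r["rating"])

for repo, ratings in sorted(by_repo.items()):
    print(f"{repo}: mean {mean(ratings):.2f} (n={len(ratings)})")
```

Breaking satisfaction out this way would at least keep the two repositories whose result quality you can actually affect separable from the other seven.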

Comment: Alexandra (Feb 10, 2006)

my email is amprose@sandia.gov

Comment: Lou (Feb 19, 2006)

I'm with James; I'm dubious as to the value of surveys. I like Jen's and Alexandra's approach; if you're going to use them, do so in combination with other evaluation methods. But really, expectations are always changing, making measurement a huge challenge. And it still comes down to the lack of a single perfect metric for retrieval success. I'm not optimistic one will suddenly appear, although if you're in a commerce environment, you needn't worry about this issue; you've already got your metrics in place.
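For the commerce case, a rough sketch of the sort of built-in metric this implies: search-to-purchase conversion computed from session logs. The event names and session layout are assumptions:

```python
# Sketch: search-to-purchase conversion from session event logs.
# Event names and the per-session event lists are illustrative assumptions.
sessions = [
    ["search", "view_item", "purchase"],
    ["search", "view_item"],
    ["search"],
    ["browse", "purchase"],
]

search_sessions = [s for s in sessions if "search" in s]
converted = [s for s in search_sessions if "purchase" in s]
print(f"Search conversion: {len(converted) / len(search_sessions):.0%}")  # 33%
```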

Speaking of which, here is a link to the Apple Store's search success survey:

http://www.apple.com/r/store/search/

Other than question #4, I'm not sure the responses are really of much value to Apple.

Comments are now closed for this entry.

Comment spam has forced me to close comment functionality for older entries. However, if you have something vital to add concerning this entry (or its associated comments), please email your sage insights to me (lou [at] louisrosenfeld dot com). I'll make sure your comments are added to the conversation. Sorry for the inconvenience.
