I saw a talk last night by Jared Spool, and if you ever have the chance to see him speak, by all means do it (he was hilarious). It was pretty good, presenting some of his user testing data that followed people using e-commerce sites. My only problem with the talk was his presentation of results as scientific fact, oftentimes with conclusions like &#8220;42% of users couldn&#8217;t purchase something when they tried to.&#8221; Sure, it&#8217;s possible to show two significant digits based on the data you collected, but is it really reliable? I know usability testing isn&#8217;t an exercise in pure experimental design and absolute findings; getting a general sense of what works and what doesn&#8217;t is enough to improve a site. But presenting to an academic audience at a university made me expect more.
That&#8217;s not to say there aren&#8217;t real, science-with-a-capital-S usability studies being done. This one from Kansas, for instance, blows away everything I&#8217;ve ever read of Jakob&#8217;s (as his work is most often based on his own speculation and back-of-the-envelope calculations). The Kansas study provides real data, real results, and practical findings that anyone can apply to future projects. [thanks to the Veen for the Kansas link]