Well, that’s an option….

This week we are evaluating different usability testing options for use on … dum dum dum… mobile devices.  To say that this poses some unique challenges would be a colossal understatement.  Designing a usability test is tough enough when the choice is between a usability lab (yes, there are still such places) and remote usability testing via software of some sort.  Now we are throwing in devices that are by their nature (and name) mobile… the complications and various options are enough to make your head spin (or, if you’re the nervous sort, make your stomach churn).

After about the fifth website / how-to guide I looked at this week, I discovered some vitally important facts that are obvious in retrospect.  Mobile devices are in a constant state of change (as with most tech toys).  What this means for usability testers is that the test design you come up with this week may not work next week… or even later this week on a different device.  So cross your fingers that no “critical updates” come out between your test design and your test date.

Also, from every site, source, and UX expert, a similar trouble-causing phrase was uttered (or written): there is no existing usability testing platform for mobile devices that is equal to what exists for desktops and laptops.  Well, as it turns out, there are a few, but they don’t quite do everything we need them to do yet, and they cost a lot ($$$$).  So in designing this mobile usability test I feel like I’m choosing the least of many evils.  Not a good starting place, in my opinion.

Hopefully by next week’s post / completed test design assignment I’ll have better news and some new insights into how to go about doing this.

Quick bit about presenting study findings.

This week I created a presentation of the findings from the remote usability test (the one I discussed a bit in my last post).  As usual, things did not go as easily as I thought they would.  Having done usability reports before, I thought it would be easier to do a quick summary presentation (via the often-dreaded PowerPoint) than to create a full report… not quite the case.  In a presentation you have to be quick and to the point, and still cover all the bases without overwhelming the audience with too much information.  It took me quite a while to sum up my findings and refine my presentation (slides & script) to achieve this goal.

Even after all that work I still found places I could improve.  Then again, that’s the point of taking classes, right?  🙂

Tips:
1.  Practice your script (if you have one) a lot before the presentation… even if it is just for an audio / video recording.  The more prepped you are, the fewer tongue-twisting moments you will have in front of the audience, and the less editing you will have to do for any recording you choose to make.  (Bonus: if a live audience is involved, it also leaves you better prepared for questions.)

2. Practice your public speaking techniques too.  I know I have the bad habit of breaking eye contact with my audience far too often for comfort.  Again, see tip #1.  Your best bet is to practice with your team members / friends before the actual presentation and ask for any constructive criticism they can give you.  Tell them not to hold back the negatives, as those are what you need to work on.  I actually had a boss in the past who joined Toastmasters just to work on this, and it worked!

I hope these tips help you a bit too.

Happy Easter. 🙂

Remote usability testing, Loop11, and the orange page.

Hello again everyone.

This week, for a class assignment, I created a remote usability test for weather.com using Loop11.  It was an interesting experience, and I learned a lot that I feel like sharing.

1.  Be prepared for things to go wrong!  I discovered that setting up the tasks and questions in Loop11 is easy, and even changing the order and adding questions (before making the test live) is as simple as drag and drop.  But apparently some websites have embedded code or other add-ons (particularly Google Maps, in my case) that can cause problems while the tests are being run.  So come next week (tomorrow), when I study all the results of my test, I’ll get to see just how badly this affected the test.

2. Don’t give away the answer in the question.  I was considering a task asking folks to find the weather at Disney World, and then I realized that Disney World is already a suggested search shown in weather.com’s search box… oops.  In my opinion, tasks should have at least a little challenge to them, to get folks to try different ways of getting to the answer.

3. Taking other people’s usability tests can shed some light on the pros and cons of your own test design.  I learned a lot of interesting tips and styles by taking all of my classmates’ usability tests (for various websites).  Some of them put the demographic / screening questions at the end of the test, others included far more open-ended question boxes, and some developed interesting and challenging questions for their chosen sites that I would never have thought of testing.  In other words, become an online survey / usability test / questionnaire taker.  The more you take, the more you see, and the more you learn about what works and what doesn’t.

4.  The orange page in this week’s readings for my class has a simple, straightforward, and accurate statement written on it that is great for any usability professional to remember: “Shut up. Listen. Watch. … And take good notes.” (1)  Orange page, huge font, simple statement, easy to remember, and absolutely vital for UXD folks to keep in mind.  🙂

(1) Bolt, N., & Tulathimutte, T. (2010). Remote Research: Real Users, Real Time, Real Research. Rosenfeld Media.

Thanks for reading; that’s all for today, folks.

Usability II begins with the battle of Ethnio vs. The Turk.

This week marks the beginning of a brand new class (Usability II), and all the adventures that go along with it.  The class started off as normal, with an intro lecture, assigned readings, and websites to peruse at our leisure.  One of the latter was Ethnio’s list of remote tools, which I found has pretty good (if sometimes expensive) tools for conducting remote usability studies.  It’s always nice to add a few new tools to my UXD toolbox.

Then came this week’s assignment: decide which of two products was better for recruiting usability test participants, Ethnio vs. Amazon’s Mechanical Turk.  This wasn’t much of a challenge after a mere glance over each product’s website.  Ethnio was the clear choice, but I decided to dig a bit deeper and see if maybe I’d missed some important tidbit in the Turk’s favor.  The answer: not that I could find.  The Mechanical Turk is indeed designed for remote work and testing; however, it’s so broad in scope and variable in purpose that you kind of lose track of the specific trees in this massive forest.  Not to mention that I had to dig through six pages of explanatory text before I found a single small paragraph stating that you can indeed recruit test participants via the Turk.  That isn’t a good sign when you are hoping to use the product to conduct a usability test and its own instructions are so unusable.  Ethnio held its win hands down.  Not only is the product’s singular purpose to recruit, screen, and schedule test participants, it even offered me a test participant survey to take upon entering the site.

The only potential downside a fellow classmate of mine discovered is that Ethnio’s participant survey form may appear unprofessional to some, as it is a fill-in-the-blank style form rather than a classic survey question form.  Guess that will be round two of this challenge for me to figure out.

If anyone reading this has actually used either the Mechanical Turk or Ethnio, I’d be interested in hearing (or reading) about your experiences.  🙂

Reporting Usability Test Results

The final project for my Usability I class was a usability test report.  In other words, I had to watch test participant videos, note their comments and difficulties during their tasks, and then find a sensible way to report these observations to others.  As it turns out, watching and analyzing videos is a very … very… time-consuming task.  I knew it would take a while to pull out the important bits, but I had no idea it could take as long as it did (about 2 hours per 20-minute video, roughly a 6:1 ratio).  Then again, I’m new to this field and not practiced, so maybe it just took me longer than most.
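
To put that multiplier in budgeting terms, here is a quick back-of-the-envelope sketch in Python.  The 6:1 ratio is just my own experience from this one project, so treat it as an assumption rather than a rule:

```python
# Rough analysis-time estimate: ~2 hours of analysis per 20 minutes of
# video, i.e. a 6x real-time multiplier (my experience; yours may vary).
MULTIPLIER = 6

def analysis_hours(num_videos: int, video_minutes: int) -> float:
    """Estimated hours needed to analyze num_videos of video_minutes each."""
    return num_videos * video_minutes * MULTIPLIER / 60

# Example: a five-participant study with 20-minute sessions.
print(analysis_hours(5, 20))  # -> 10.0 hours of analysis
```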

Honestly, though, that was the hardest part of this report, because once you have all that information it’s really easy to pull together a report explaining the things that jumped right out at you and screamed “fix me now!”.  Since those “fix me now” items are the ones that can easily be handed to the folks who can actually fix them, it’s really easy (and important) to include them in the report.

Some things I learned, and feel I should share, about this process:

1.  Know where your pause button is!
This makes it easy to note the time stamps of important quotes or events in your videos.
2. Take screenshots as you go.
While the video is paused they can easily be pasted into a single document for later editing. (Be sure to note which video and which time stamp each one came from.)
3.  Taking down exact quotes while they are being spoken takes practice, or a lot of pausing and rewinding.
I’ll have to use a speech-to-text program next time.  (I’ll be sure to let you all know how that goes.)

I hope this helps my fellow newbies in UXD a bit.  🙂

Meditating on Moderating

This week I conducted a usability test (recorded for study) and I learned a few things about myself along the way.

1.  Moderating (hosting) a usability test can be a bit nerve-racking for both the test participant and the moderator.  This showed up, at least to me, in the speed and volume of my talking during the intro. (Apparently I speak loudly and fast when nervous.)  Oddly, I don’t think this is because of public speaking, as the test taker is a friend I’ve known for years.  I think for me it was more the newness of the process, with a canned script.  When reading aloud (even something I’ve read many times before) I always get nervous that I’m going to mess it up.  Guess I should practice reading for audiences a bit more to help overcome this trait.

2.  Probe for more in-depth answers when possible.  Yes-or-no answers are fine and all, but they don’t really get to the heart of any usability problems that may be occurring.  Personally, I don’t think I asked enough follow-up questions.  The information the participant shared did answer the task questions as written, but I don’t think I really got much useful feedback when the (admittedly few) problems did occur.  I need to be better prepared with follow-up questions for each task in the future.

3. Be prepared for the distractions of life.  We’ve all been intently focused, really in the zone to get things done, when the phone rings, or the cat pounces on you, or the dog decides that now is the time to go out for a walk, ________ (feel free to insert your most common distraction into this list).  My challenge in this process was not laughing out loud at the antics the participant’s family members were using, just off camera, to try to distract us.  At least we all had a great sense of humor about the whole process.  So I guess this leads to two ideas: first, when possible, use a more controlled environment than someone’s living room; second, always be prepared for distractions… they will occur.

To answer some specific questions for my class:

  • What happened during your session that surprised you?  =  See #3.
  • Were you better or worse than you thought you would be?  =  About where I thought I’d be, but I see room for improvement.
  • Were you able to remain unbiased?  =  I think I did pretty well at this… we’ll see what my group thinks later.
  • Did you let the participant speak?  =  Yes… though I possibly didn’t prompt enough for more.

All in all this was a fun assignment, and a useful experience.

Numbers Aren’t Everything…But They Can Help.

Numbers, numbers everywhere.  Everywhere you look there are numbers claiming all kinds of things, from the number of hits a web page gets (hit counters) to the number of hamburgers McDonald’s says it has sold.  This week I had an assignment to choose one kind of numbers-based (quantitative) usability metric and figure out the pros, cons, and possible reporting styles for it.  I chose “number of clicks,” which, in my opinion, can be measured in two ways: first, the number of hits a page gets (clicks to it); second, the number of clicks it takes to drill through a website to get to a certain part of the site.  Either bit of numerical info can be very important to know: one gauges traffic, the other gauges site depth and ease of navigation.  Either way these numbers can prove useful, and as any website designer will tell you, they have.
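
To make the two interpretations concrete, here is a minimal sketch in Python.  The click log, its field layout, and the page names are all made up for illustration; a real study would pull this from an analytics export or test-platform log:

```python
from collections import Counter

# Hypothetical click log: each entry is (session_id, page_visited).
click_log = [
    ("s1", "home"), ("s1", "products"), ("s1", "contact"),
    ("s2", "home"), ("s2", "contact"),
    ("s3", "home"), ("s3", "products"), ("s3", "specials"), ("s3", "contact"),
]

# Interpretation 1: traffic -- how many clicks (hits) each page received.
hits = Counter(page for _, page in click_log)
print("Hits per page:", dict(hits))

# Interpretation 2: depth -- how many clicks a session needed to reach
# a target page (here, "contact"), or None if it never got there.
def clicks_to_reach(log, session, target):
    pages = [page for sid, page in log if sid == session]
    return pages.index(target) + 1 if target in pages else None

for session in sorted({sid for sid, _ in click_log}):
    print(session, "->", clicks_to_reach(click_log, session, "contact"))
```

Note that the same page can look great on one metric and terrible on the other, which is where things get interesting.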

But, with a capital B, as any statistician will tell you, numbers lie.  For example, if the numbers tell me that my contact page is getting 3000 hits a day, I may think everyone is looking to contact me. “Hurray!”  But if everyone is getting to my contact page only because some poor website navigation misdirected them there, my number of clicks / hit counts aren’t giving me the whole picture, and I may not realize the problem for quite some time.  This is bad for usability, which makes it bad for me. “Boo!”

When all was said and done, the conclusion I came to about numbers and usability is this: numbers (quantitative data) should always be paired with user-based information (qualitative data); that way we get the whole picture.  As a bonus, with the two kinds of data we can easily confirm or disprove our findings.

Screeners & Task development – Benefits of a team effort.

This week I got to work with a couple of great classmates on a project to develop a participant screener (a way to weed out folks who are not in the target bracket for product testing) and a task list (a set of things for product testers to do).  I found it very challenging to develop the screener questions so that we got the answers we needed without giving the potential test participant any hints as to what we were actually looking for.  For example, rather than asking directly about the activity you are recruiting for, you might bury it in a list: “Which of the following have you done in the past month?”  This is because we don’t want people answering with what they think we want to hear, based on previous questions, just to become a test taker.  Most usability test participants are paid for their time, and we don’t want “pros” involved.

Working on a team was also interesting, and frankly it was amazing how similar many of our initial questions and tasks were.  This is great for team morale but not so hot for developing a broad range of questions and tasks *chuckles*.  We did manage to achieve our goals, in case you were wondering.  By working together we were able to fill in each other’s gaps, strengthen our wording, improve our task and question order, and hopefully develop a much stronger final product.

I’d happily work with them both again any time.

Thanks Marc & Terry! 🙂

Usability Thoughts & Assignment

This week we learned about the differences between in-production usability tests and end-of-production usability tests (called formative and summative tests, respectively, by usability folks).  This is an interesting distinction, as it makes you stop and think about when you should use each test type, and the benefits or drawbacks of each depending on the situation.  Which, when I stop and think about it, is almost like conducting a usability test on usability test techniques.

This week’s assignment was also interesting, so I’ve decided to include it here.  Our fictional audience for the project was the CEO and CTO of an online pizza company.  I attempted to make the report as usable for them as possible while still making my case for which usability test type to use.  Needless to say, you have to think a lot about usability when you want to be a usability professional. 🙂

summative-vs-formative-presentation (PDF)

What I really discovered during this project, I think, is that if you have the time, budget, and ability, you should do both kinds of usability tests… or more.  The more information you can gather on usability, the better you can tune your product to serve your customers, and that is always a plus.

Usability I – Week 1 – Thoughts

This week in my usability class we learned about usability tests, with some specifics on the number of participants to involve.  Some of what we learned was, frankly, fascinating (in my opinion), and other parts seemed to contradict each other.

I’ll start with a bit of context for clarity’s sake.  If you have ever bothered to fill out one of those pop-up surveys that many corporate websites seem to have, then you’ve participated in a usability (or market research) study.  When you see websites asking for feedback on their new design, they are conducting a usability study, and they want you to volunteer some time and participate.  Realizing this connection, I can now say I’ve been an unwitting test participant more times than I can count.

[Graph: Increase in proportion of usability problems found as a function of number of users tested]

Graph from: Nielsen, J. (2000, March 19). Why You Only Need to Test with 5 Users. Retrieved from http://www.nngroup.com/articles/why-you-only-need-to-test-with-5-users/

What I learned from some of my readings this week (Jakob Nielsen’s “Why You Only Need to Test with 5 Users”) was that these companies (usually) don’t need to involve many people at all.  On average, a mere six (yes, 6) participants will find about 90% of the problems with any product, whether it be a website, app, or physical product.  This concept is amazing to me, because most research in the past usually involved hundreds if not thousands of participants to get results, but for finding usability problems a mere six or so will do.
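
For the curious, the curve in the graph above comes from a simple formula given in Nielsen’s article: the share of problems found by n users is 1 − (1 − L)^n, where L is the proportion of problems a single user uncovers (about 31% in Nielsen’s data).  A few lines of Python reproduce the numbers:

```python
# Nielsen's problem-discovery formula: share found = 1 - (1 - L)^n,
# with L ~= 0.31 (average share of problems found by a single user).
L = 0.31

for n in range(1, 10):
    found = 1 - (1 - L) ** n
    print(f"{n} users -> {found:.0%} of problems found")

# Output: 5 users find ~84%, 6 users ~89% -- close to the "90% with six"
# figure mentioned above.
```

Note the built-in assumption here: every user is equally likely to hit each problem.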

This idea does have its competition though.
“In a recent study, we decided to put the widely held belief that “eight users is enough” to the test. …
When we tested the site with 18 users, we identified 247 total obstacles-to-purchase. Contrary to our expectations, we saw new usability problems throughout the testing sessions. In fact, we saw more than five new obstacles for each user we tested.
Equally important, we found many serious problems for the first time with some of our later users. What was even more surprising to us was that repeat usability problems did not increase as testing progressed.” (Perfetti & Landesman: “Eight is not enough.”)

Needless to say, this is certainly a contradiction of ideas that needs to be noted, at least in my opinion, by myself and other new recruits to the usability field.

Looking forward to learning and sharing more with you all, and more than likely learning from you also.