The UX version of connect the dots.

Last week and this week I learned a lot (whole lots!) about eye-tracking and how to read the results such equipment produces.  I even learned how eyes focus and move, which helps in understanding the “gaze plots” the eye-tracking software generates.  But to put it a bit more basically than my textbook did: it’s sort of a productive version of connect the dots.  Each time you stop and look at a point on the screen you generate a dot (a fixation point, if you want the jargon) on the map the software builds.  Each dot is numbered in order of creation, and is often (not always) sized based on how long you looked at that point (bigger = more time spent).  The dots are connected by lines, and the whole mess of dots and lines forms a pattern which, when overlaid on an image of the website / app screen, shows the reader exactly where the test participant was looking, when, and for how long.  That just screams useful information to me; not to mention being a bit techno-creepy mind-reading-ish…then again, mind reading would certainly help in developing usable designs, wouldn’t it?
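If you like seeing ideas as code, here’s a minimal sketch of how a gaze plot gets assembled from eye-tracker output. This is purely illustrative: the fixation data format and function names are my own assumptions, not any real eye-tracking SDK.

```python
# Illustrative sketch: fixations are (x, y, duration_ms) tuples --
# where the eye stopped on screen and for how long (names are hypothetical).

def build_gaze_plot(fixations, min_radius=5, px_per_100ms=2):
    """Turn raw fixations into numbered, duration-scaled dots plus
    the line segments (saccades) that connect them in viewing order."""
    dots = []
    for order, (x, y, dur) in enumerate(fixations, start=1):
        # Bigger dot = longer look: radius grows with fixation duration.
        radius = min_radius + (dur / 100) * px_per_100ms
        dots.append({"n": order, "x": x, "y": y, "r": radius})
    # Consecutive dots are joined by lines, giving the "connect the dots" path.
    lines = [((a["x"], a["y"]), (b["x"], b["y"]))
             for a, b in zip(dots, dots[1:])]
    return dots, lines

fixations = [(120, 80, 300), (400, 90, 700), (380, 310, 150)]
dots, lines = build_gaze_plot(fixations)
print(dots[1]["r"])   # 19.0 -- the 700 ms fixation gets the biggest dot
print(len(lines))     # 2   -- three dots yield two connecting lines
```

Overlay those numbered circles and lines on a screenshot and you have the gaze plot described above.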

Heat maps, fixation cluster charts, opacity plots, gaze point plots / maps, participant video & audio recordings, moderator comments & tasks.  No matter how you look at it, eye-tracking studies generate mountains of data to go through.  These mountains are useful in so many ways to your usability study…just don’t get caught in a landslide. Develop a good file naming & organizational system, and then stick to it no matter what.  You don’t want to realize halfway through your study that you’ve mixed up the videos from one participant with the gaze point maps from another…or worse yet, have nothing match up.
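One way to enforce that “stick to it no matter what” rule is to make the naming scheme checkable. The scheme below (participant ID, session date, artifact type) is just one hypothetical example; the point is that a few lines of code can flag a stray file before the mix-up spreads.

```python
import re

# Hypothetical naming scheme: P<participant>_<YYYY-MM-DD>_<artifact>.<ext>
# e.g. P03_2014-04-12_gazeplot.png -- the exact scheme matters less
# than applying it to every file the study produces.
PATTERN = re.compile(
    r"^P(\d{2})_(\d{4}-\d{2}-\d{2})_(video|audio|gazeplot|heatmap|notes)\.\w+$"
)

def check_name(filename):
    """Return (participant, date, kind) if the name fits the scheme, else None."""
    m = PATTERN.match(filename)
    return m.groups() if m else None

print(check_name("P03_2014-04-12_gazeplot.png"))  # ('03', '2014-04-12', 'gazeplot')
print(check_name("participant3 final(2).png"))     # None -- rename before filing it
```

Run something like this over the study folder after each session and mismatched files surface immediately instead of halfway through analysis.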

Either way, if you get the chance to preview, or better yet use, an eye-tracking system for anything, jump at it.  Even just seeing one in action may give you ideas on how to improve your next usability study.

Have fun connecting the dots between participant data and usable design.

Eyes are windows into our choices?

This week we are learning more about eye-tracking studies, and how they can give insights into how usable a website (or app) is.  Even our teacher’s lecture was given via a video recording of him using an eye-tracking system.  Getting to see a live video of how this system works was enlightening.  You can actually see everywhere a person looks, follow their view path, and in the end see a map of where their highest points of focus were.  If that alone can’t sell you on the benefits of eye-tracking, I don’t know what could. (Mind you, the equipment price is still a bit steep…so that’s a drawback.)

To put it another way: if you are a website / app developer and you could get inside your customers’ heads as they worked their way through your product…wouldn’t you want to?  That is exactly the ability eye-tracking systems give you.  Mind you, you have to interpret the results, which I’m sure takes a bit of practice, but the potential benefits make it worth it (at least in my opinion). All that being said, I think I’ve got my work cut out for me in learning how to understand the results from such studies. Then again, there is always something new to learn. 🙂

Intertwining parts of my reality.

For those that read my last post, you know that I was weighing options for mobile usability testing.  (If you didn’t read last week’s post…well, now you know. *smile*)  My assignment is now complete, and to say I gained some astounding insight into the world of mobile UX (user experience…in case you didn’t know the acronym) testing would be a total exaggeration.  What I learned was simply this: usability testing is a relatively new idea and the tools are still changing.  Toss in mobile devices, which are just under a decade (ish) old, and you are mixing two things together that most people weren’t ready to see mixed.  Oops, we did, we aren’t alone, and we want better results!  I’m in total agreement with Jennifer Aldrich’s assessment of the current state of usability testing tools. Check her “Mobile Usability Testing Tools” post to see what I mean.  Jennifer’s last paragraph certainly describes what I would love to see in a UX testing tool.

To add an interesting twist to my reality, this week my library hosted a “Technology Petting Zoo” from the state library.  We had tablets of various makes and sizes, e-readers (both Kindles & Nooks), and we even got to see a 3D printer in action.  So here I am trying to study testing mobile devices for usability while at the same time helping folks play with some for the first time.  Usability testing, meet usability issue discovery!

I can truly say that at least it was an educational week.

PS.  3D printers are a bit slow but really cool. 🙂

Well that’s an option….

This week we are evaluating different usability testing options to be used on…dum dum dum…mobile devices.  To say that this poses some unique challenges would be a colossal understatement.  Usability test design is tough enough when choosing between an in-person usability lab (yes, there are still such places) and remote usability testing via software of some sort. Now we are throwing in devices that are by their nature (and name) mobile…the complications and various options are enough to make your head spin (or if you’re the nervous sort, your stomach churn).

After about the fifth website / how-to guide I looked at this week, I discovered some vitally important facts…that are obvious in retrospect.  Mobile devices are in a constant state of change (as with most tech toys).  What this means to usability testers is that the test design you come up with this week may not work next week…or even later this week on a different device.  So cross your fingers that no “critical updates” come out between your test design and your test date.

Also, from every site, source, and UX expert, a similar trouble-causing phrase was uttered (or written): there is no existing usability testing platform for mobile devices that is equal to what exists for desktops / laptops.  Well, as it turns out there are a few that exist, but they don’t quite do everything we need them to do yet, and they cost a lot ($$$$).  So in designing this mobile usability test I feel like I’m choosing the least of many evils.  Not a good starting place, in my opinion.

Hopefully by next week’s post / completed test design assignment I’ll have better news and some new insights into how to go about doing this.

Quick bit about presenting study findings.

This week I created a presentation of the findings I discovered during the remote usability test (the one I discussed a bit in my last post).  As usual, things did not go as easily as I thought they would.  Having done usability reports before, I thought it would be easier to do a quick summary presentation (via the often dreaded PowerPoint) than to create a full report…not quite the case.  In a presentation you have to be quick, to the point, and still cover all the bases without overwhelming the audience members with too much information.  It took me quite a while to sum up my findings and refine my presentation (slides & script) in order to achieve this goal.

Even after all that work I still found places I could improve.  Then again, that’s the point of taking classes, right?  🙂  Here are two tips from the experience:

1.  Practice your script (if you have one) a lot before the presentation…even if it is just for an audio / video recording.  The more you have this prepped, the fewer tongue-twisting moments you will have in front of the audience, and the less editing you will have to do for any recording you choose to make.  (Bonus: if living audience members are involved, it also allows you to be better prepared for questions.)

2. Practice your public speaking techniques too.  I know I have the bad habit of breaking eye contact with my audience far too often for comfort.  Again, see tip #1.  Your best bet is to practice with your team members / friends before the actual presentation and ask for any constructive criticism they can give you.  Tell them not to hold back the negatives, as that is what you need to work on.  I actually had a boss in the past who joined Toastmasters just to work on this, and it worked!

I hope these tips help you a bit too.

Happy Easter. 🙂

Remote usability testing, Loop 11, and the orange page.

Hello again everyone.

This week I had the experience of creating a remote usability test for a class assignment using the Loop11 program.  This was an interesting experience, and I learned a lot that I feel like sharing.

1.  Be prepared for things to go wrong!  I discovered that setting up the tasks and questions in Loop11 is easy, and even changing the order and adding questions (before making the test live) is as simple as drag and drop.  But apparently some websites have embedded code or other add-on bits (particularly Google Maps, in my case) that may cause problems when the tests are being run.  So come next week (tomorrow), when I study all the results of my test, I’ll get to see just how badly this affected things.

2. Don’t give away the answer in the question.  I was considering having a task for folks to find the weather in Disney World, and then I realized that Disney World is a suggested search already shown in the site’s search box…oops.  In my opinion, tasks should have at least a little challenge to them, to get folks to try different ways of getting to the answer.

3. Taking other people’s usability tests can shed some light on the pros and cons of your own test design.  I learned a lot of interesting tips and styles by taking all of my classmates’ usability tests (for various websites).  Some of them put the demographic / screening questions at the end of the test, others included far more open-ended question boxes, and some developed interesting and challenging questions for their chosen sites that I would never have thought of testing.  In other words: become an online survey / usability test / questionnaire taker.  The more you take, the more you see, and the more you learn what works and what doesn’t.

4.  The orange page in this week’s readings for my class has a simple, straightforward, and accurate statement written on it that is great for any user experience person to remember: “Shut up. Listen. Watch.  …   And take good notes.” (1)  Orange page, huge font, simple statement, easy to remember, and absolutely vital for UXD folks to keep in mind.  🙂

(1) Bolt, Nate, and Tony Tulathimutte. Remote Research: Real Users, Real Time, Real Research. Rosenfeld Media, 2010.

Thanks for reading; that’s all for today, folks.

Usability II begins with the battle of Ethnio vs. The Turk.

This week marks the beginning of a brand new class (Usability II), and all the adventures that go along with that.  The class started off as normal with an intro lecture, assigned readings, and websites to peruse at our leisure.  One of them was “Ethnio’s list of remote tools,” which I found has pretty good (if sometimes expensive) tools for conducting remote usability studies.  It’s always nice to add a few new tools to my UXD toolbox.

Then came this week’s assignment: decide which of two products was best for recruiting usability test participants, Ethnio vs. Amazon’s Mechanical Turk.  This wasn’t a challenge after a mere glance over each product’s website.  Ethnio was the clear choice, but I decided to dig a bit deeper and see if maybe I’d missed some important tidbit in the Turk’s favor.  The answer: not that I could find.  I found that the Mechanical Turk was indeed designed for remote work & testing; however, it’s so broad in scope, and variable in purpose, that you kind of lose track of the specific trees in this massive forest.  Not to mention that I had to dig down through six pages of explanatory text before I found a sole small paragraph hidden there stating that you can indeed recruit test participants via the Turk. This isn’t a good sign when the instructions are so unusable and you are hoping to use the product to conduct a usability test.  Ethnio held its win hands down.  Not only is the product’s singular purpose to recruit, screen, and schedule test participants, but it even offered me a test participant survey to take upon entering the site.

The only potential downside a fellow classmate of mine discovered is that Ethnio’s participant survey form may appear unprofessional to some, as it is a fill-in-the-blank style form vs. a classic survey question form.  Guess that will be round two of this challenge for me to figure out.

If anyone reading this has actually used either the Mechanical Turk or Ethnio I’d be interested in hearing (or reading) about your experiences.  🙂

Reporting Usability Test Results

The final project for my Usability I class was a usability test report.  In other words, I had to watch test participant videos, note their comments and difficulties during their tasks, and then find a sensible way to report these observations to others. As it turns out, watching and analyzing videos is a very…very…time-consuming task.  I knew it would take a while to pull out the important bits, but I had no idea it could take as long as it did (about 2 hours per 20-minute video).  Then again, I’m new to this field and not practiced, so maybe it just took me longer than most.

Honestly, though, that was the hardest part of this report, because once you have all that information it’s really easy to pull together a report explaining the things that just jumped right out at you and screamed “fix me now!”  Since those “fix me now” items are the ones that can easily be handed to the folks who can actually fix them, it’s really easy (and important) to add them to the report.

Some things I learned, and feel I should share about this process.

1.  Know where your pause button is!
This makes it easy to take notes of the time stamp in your videos for important quotes or events.
2. Take screenshots as you go.
They can be easily pasted into a single document for later editing while the video is paused. (Be sure to note which video & time stamp each came from.)
3.  Trying to take down exact quotes while they are being spoken takes practice, or a lot of pausing & rewinding.
I’ll have to use a speech-to-text program next time.  (I’ll be sure to let you all know how that goes.)

I hope this helps my fellow newbies in UXD a bit.  🙂

Meditating on Moderating

This week I conducted a usability test (recorded for study) and I learned a few things about myself along the way.

1.  Moderating (hosting) a usability test can be a bit nerve-racking for both the test participant and the moderator.  This showed up, at least to me, in the speed of my talking in the intro, and in my volume. (Apparently I speak loudly and quickly when nervous.) Oddly, I don’t think this is because of public speaking, as the test taker is a friend I’ve known for years.  I think for me it was more the newness of the process, with a canned script.  When reading aloud (even something I’ve read many times before) I always get nervous that I’m going to mess it up.  Guess I should practice reading for audiences a bit more to help overcome this trait.

2.  Probe for more in-depth answers when possible.  Yes-or-no answers are fine and all, but they don’t really get to the heart of any usability problems that may be occurring.  Personally, I don’t think I asked enough follow-up questions.  The information the participant shared did answer the task questions as written, but I don’t think I really got much useful feedback when the (admittedly few) problems did occur.  I need to be better prepared with follow-up questions for each task in the future.

3. Be prepared for the distractions of life.  We’ve all been intently focused, really in the zone to get things done, when the phone rings, or the cat pounces on you, or the dog decides that now is the time to go out for a walk…________ (feel free to insert your most common distraction into this list).  My distraction in this process was trying not to laugh out loud at the antics the participant’s family members were using to distract us from just off camera.  At least we all had a great sense of humor about the whole process.  So I guess this leads to two ideas: first, when possible use a more controlled environment than someone’s living room; second, always be prepared for distractions…they will occur.

To answer some specific questions for my class:

  • What happened during your session that surprised you?  =  See #3
  • Were you better or worse than you thought you would be?  =  About where I thought I’d be, but I see room for improvement.
  • Were you able to remain unbiased?  =  I think I did pretty well at this…we’ll see what my group thinks later.
  • Did you let the participant speak?  =  Yes…though I possibly didn’t prompt enough for more.

All in all this was a fun assignment, and a useful experience.

Numbers Aren’t Everything…But They Can Help.

Numbers, numbers everywhere.  Everywhere you look there are numbers claiming all kinds of things, from the number of hits a web page gets (hit counters) to the number of hamburgers McDonald’s says it has sold.  This week I had an assignment to choose one kind of number-based (quantitative) usability metric and figure out the pros, cons, and possible reporting styles for it.  I chose “number of clicks,” which, in my opinion, can be seen in two ways: first, the number of hits a page gets…clicks to it; or second, the number of clicks it takes to drill through a website to get to a certain bit of the site.  Either bit of numerical info can be very important to know…one gauges traffic, the other gauges site depth & ease of navigation.  Either way these numbers can prove useful, and as any website designer will tell you, they have.
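Both flavors of “number of clicks” can come out of the same click log. Here’s a small sketch of that idea; the log format and function names are made up for illustration, not from any real analytics tool.

```python
from collections import Counter

# Illustrative click log: (session_id, page) events in the order clicked.
log = [
    ("s1", "home"), ("s1", "products"), ("s1", "contact"),
    ("s2", "home"), ("s2", "contact"),
]

# Metric 1: hits per page (traffic) -- the classic hit counter.
hits = Counter(page for _, page in log)

# Metric 2: clicks needed to reach a target page within a session --
# a rough stand-in for site depth / ease of navigation.
def clicks_to_reach(log, session, target):
    pages = [p for s, p in log if s == session]
    return pages.index(target) + 1 if target in pages else None

print(hits["contact"])                        # 2 -- two sessions hit the page
print(clicks_to_reach(log, "s1", "contact"))  # 3 -- third click of session s1
```

Note that metric 1 alone can’t tell you *why* a page got its hits, which is exactly the trap described next.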

But, with a capital B, as any statistician will tell you: numbers lie.  For example, if the numbers tell me that my contact page is getting 3,000 hits a day, I may think everyone is looking to contact me. “Hurray!”  But if everyone is getting to my contact page only because some poor website navigation misdirected them there…my number of clicks / hit counts aren’t giving me the whole picture, and I may not realize the problem for quite some time.  This is bad for usability, which makes it bad for me. “Boo!”

When all was said and done, the conclusion I came to in dealing with numbers and usability is this: numbers (quantitative data) should always be paired with user-based (qualitative) information; this way we get the whole picture.  As a bonus, with the two kinds of data we can easily confirm or disprove our findings.