Tuesday, September 2, 2008

What is Good Quality elearning?


This post is written by Taruna Goel.

How many times do you wonder if the elearning course you are making is of good quality? All the time? Most of the time? At least sometimes?

I am sure there are enough metrics and checklists with the Reviewing/Quality Assurance/Testing teams that also give you the quality figures - defects/hour is the norm. The general belief is Zero Defects = Good Quality: the more defects, the poorer the quality.

But what is a defect? Is it those grammatical mistakes? Is it the text-graphic mismatch? Is it the two-pixel shift? Or is it something more? Are we focusing on what really matters?

Here's an interesting article about what quality is, along with a list of guidelines on how to evaluate it.

"We tend to judge quality only from the perspective of our own domain. Consider the views of all the stakeholders: the training manager; the designer/developer; the system administrator/IT manager that will host the application; and, of course, the end users. In some cases quality measures are of no concern to one stakeholder while of considerable importance to another. Learner-centered design would propose that you make all decisions exclusively for the learners' benefit. Yet, all stakeholders must be partners if success is to be achieved. Since development and delivery are a team effort, one must weigh all viewpoints on what constitutes quality. "

The article/site provides a list of factors (quality measures) that you can use to evaluate elearning. Since the list contains 22 guidelines, you can assign weightages and create a scorecard of the quality measures that matter most to you!

For each factor, you enter a score (on a scale of 1-5), multiply it by its weightage, and obtain an adjusted score. Add up the adjusted scores for all factors to obtain a total score. Use this total score to compare one elearning course with another or, better still, develop a target score for your team and evaluate against your goal!
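The weighted-scorecard arithmetic described above can be sketched in a few lines of Python. The factor names, weightages, and ratings below are made up for illustration; they are not taken from the article's 22 guidelines.

```python
def scorecard(ratings, weightages):
    """Multiply each 1-5 rating by its factor's weightage and sum the results."""
    assert ratings.keys() == weightages.keys(), "every factor needs both a rating and a weightage"
    return sum(ratings[factor] * weightages[factor] for factor in ratings)

# Hypothetical factors and weightages - substitute the quality measures
# that matter most to your team.
weightages = {"learner-centred design": 3, "effective media use": 2, "usability": 2}
ratings = {"learner-centred design": 4, "effective media use": 3, "usability": 5}

total = scorecard(ratings, weightages)                        # 4*3 + 3*2 + 5*2 = 28
max_score = scorecard({f: 5 for f in weightages}, weightages)  # best possible = 35
print(f"Course scored {total} out of {max_score}")
```

Comparing `total` against a team-agreed target score (rather than against zero defects) is exactly the shift in perspective the post suggests.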

This is a fresh perspective on quality - it may still involve the much-dreaded 'rejections' - but this time, you reject, because YOU evaluate the quality of your course before the Reviewing/Quality Assurance/Testing/Customer teams do!

5 comments:

Manish Mohan on January 8, 2010 at 2:53 PM said...

Submitted on 2008/09/02 at 7:49pm

Reminds me of two earlier posts here: Who is the boss - QC or Client? by Anamika and Who is your Customer - Client of the End User? by Sonali.

Extending the concept explained above, this post reinforces the concept of the "Quality Plan" used in our organization. The purpose of the 'QPlan' is to identify all the stakeholders and each stakeholder's success factors and critical-to-quality parameters. And if done correctly, it will cover the 22 factors and more. And the developers wouldn't get hassled when grammatical mistakes or two-pixel shifts are pointed out, coz hey, those are also stakeholders' parameters... although we may argue that the two-pixel shift may have a lower weightage than some other parameters.

Taruna Goel on January 8, 2010 at 2:53 PM said...

Submitted on 2008/09/02 at 9:48pm

Thanks for your comment Manish. Yes, I agree. The Quality Plan is a multifaceted tool that allows us to view the ‘expected quality’ from a variety of perspectives.

What I found interesting in this article was the focus on the more ‘implied’ or implicit requirements of an (e)learning course. For example, is the course learner-centred, does it use media effectively, does it demonstrate good usability etc.

While we usually have a customer requirement saying something like "the course should be SCORM 1.2-compliant", and we then add this requirement to the Q-plan and ensure it is met, there are other implied requirements - the ones that customers expect but don't verbalise as often. For example, the course should meet its objectives. We hardly ever document such requirements in the Q-plan though they are core to the quality of the course that we make. We assume that if we follow our design philosophy and development process, this requirement will be met. Maybe, but maybe not. Or we check such items during some phases (design/storyboarding) but not necessarily during Quality Assurance testing. Maybe we need to look at what 'Quality Assurance' really is and then include appropriate parameters of evaluation accordingly.

Sandipan said...

Submitted on 2008/09/06 at 8:00pm

Thanks for this highly thought-provoking post, Taruna. Yes, I do agree with you that we (e-learning professionals) need to have a fresh look at what "quality" actually means. For me, it's about meeting the implied but sole objective of any training: to help the learners learn.

While a spelling error (which may happen in even the best of books ever written) or a "two-pixel shift" may be a little irritating, it is NOT a defect that will seriously hamper learning. However, learning will not happen AT ALL if the instruction becomes too complex for the learner to understand. I think one way to test a course is to show it to a diverse group of people unconnected to the e-learning industry, who meet the basic criteria for taking the course, and have them take a cursory glance at it. Then you will have an unbiased opinion about the course. Am I talking about something like beta-testing?

Taruna Goel on January 8, 2010 at 2:54 PM said...

Submitted on 2008/09/07 at 7:19pm

Thanks for your comment, Sandipan. Yes, the role of the beta test is quite critical though often overlooked, especially in the context of WBTs. The much-needed UAT phase is never planned! To take the first step, we need to try to put ourselves in the learners' shoes and then design and develop the learning program. Knowing your audience and understanding what they want is the first step towards quality.

Chaitali said...

Submitted on 2008/12/18 at 1:47pm

A very thought-provoking post. This question has often arisen in my mind when a course of mine has headed towards the QA/QC department. Who are we intending the course for? Are we designing the course so that our clients (who mostly commission it because they need a learning program online to meet their projected training budgets) are satisfied that there are no typos or "2-pixel" shifts in the images? Or are we directing the design towards the intended learner?

As a designer, my first and foremost concern should be my learners: are they getting what they are looking for? Have I managed to teach them or provide them the information/knowledge they were looking for? Yes, coming from a language background, a typo is a serious mistake, but the point is whether it has an impact on the learning. If the typo is in an important term crucial to the learning, then yes, it is a seriously important mistake, or "issue" as we would state it to be "politically correct"! I remember in one of my previous courses the term was "HSA" but Microsoft Word constantly kept correcting it to "HAS", which, if overlooked, would actually have impacted the learning.

But coming back to my point, is a typo, some pixel shift, or a different colour really important to my learner? Most of the time, the quality-check people are not the ones with any background in ID or graphics. They come purely from a technical or language background and typically look for technical mistakes, whether in writing or development, and point them out as errors. In such cases, a beta test becomes very crucial, where a part of the actual intended audience is exposed to the training; their feedback is important as it gives us an idea of how effective our course is. I think what we mostly concentrate on is how efficient our course is, not how effective! Look forward to more thoughts on this.
