Tuesday, March 11, 2008

Who is the boss: QC or the client?


Two types of courses emerge in e-learning practice:

  • Zero Defect courses that pass the QC test but are entirely rejected by the client.
  • Courses that fail the QC test plan but are accepted by the client with minor concerns.

Which situation is better? I am sure neither is ideal, but both are real and practical.

This generally happens with legacy clients and projects, where the role of the instructional designer seems very restricted. Templates and standards related to graphics, language, and QC are already established, and for good reasons neither the project team nor the client is ready for change. In such cases, an ID can innovate only in the techniques used to introduce content and in the assessments designed to achieve the defined objectives. For a content developer who has worked on such projects for a long time, the standards related to formatting, graphics, and QC are not a big challenge. In most cases, these experienced content developers produce Zero Defect courses at QC testing.
Yet I have seen such courses rejected outright by the client. Each rejected course leaves the client with a bad experience and is, of course, demotivating for the entire team that worked on it. I have also seen courses that fail the QC test plan be accepted by the client with only minor concerns.

This leads to two interrelated questions:

How relevant and successful is the QC test plan?
In which situation do you think the ID should be held responsible for the rejection?


While debating these issues, I have come across two very strong opinions:

The first opinion says the ID in the first situation should not be held responsible, as the course passed the QC test with zero defects. The ID followed all the defined standards religiously.

The second opinion supports the ID in the second situation: the ID should not be held responsible for the QC defects. The ID has done complete justice to the role by understanding the content and designing assessments and activities that meet the client's expectations. By this view, following defined standards is secondary to understanding the content and creating a course that lives up to the client's expectations. The QC test plan lays out standards as basic, general rules to maintain consistency across the entire project, whereas understanding the content of each new course and designing it to the client's expectations is a bigger challenge and responsibility, which in this case was handled successfully.

I am leaving this debate open to all; I would like to know your interpretations and opinions before I conclude.

5 comments:

Sandipan on March 12, 2008 at 9:24 AM said...

Thanks for posting such a relevant topic, Anamika! As for me, I would any day go for the second option: "Courses that fail the QC test plan but are accepted by the client with minor concerns."

I think ID is all about creating a better learning experience, not a ZD course. In fact, the very term "ZD" is relative. "ZD" in what sense? It's high time we properly defined the term "defect" as applied to e-learning. Let's admit that a defect in an e-learning course is not the same as a defect in a software application. Any day, I would rather design an e-learning course that makes learning easier for the end user, even if that course contains QC defects in the form of standards violations.


The day we align our QC test plan to check for instructional shortcomings rather than cosmetic issues such as standards violations, this dilemma will cease to exist. And for this, I strongly believe that the best QC person would be an ID.

Anonymous said...

I agree with Sandipan.

QC these days seems more about ATD (attention to detail) than about instructional design. Attention to standards and detail is important, but it is not the most important thing. And when we give too much importance to QC reports, we neglect to show appreciation for people who are applying innovative and creative solutions but failing the test of whether or not a link should be underlined, whether or not a Note should be bold, and whether or not there should be a comma before "and"...

Moon on March 13, 2008 at 4:10 PM said...

Nice point, Anamika, but you must remember that when a course reaches QC, it is assumed that all ID, learning strategy, customer requirement, and aesthetic issues have already been addressed.
The purpose of QC is to ensure that no hygiene issues are left out in our attempt to address the key requirements. QC typically checks for functionality show-stoppers, progress-hindering issues such as page sequencing and simulation errors, and inconsistencies that irritate learners and damage credibility.
Now, it is up to you either to lump these together as QC fetishes, mere 'ATD' issues, or to look at the QC report as a symptom of a greater malady.
To give you an example, typos in my son's NCERT curriculum make me doubt the credibility of the courseware, no matter how well researched the content might be.
And when his history book says "Eiffel Tower" under a picture that is a shoddily modified Qutub Minar, most of my faith in our education system is lost.
When I see a badly embossed Nike logo on a T-shirt, I know that it's a fake. Clearly, someone did not invest sufficiently, or even care enough, to ensure that the product is of good quality.
All these instances are clubbed as standards, or, as we say, 'ATD'. Now wouldn't you want your well-thought-out strategies, learning enhancements and all, to play out in a Zero Defect environment that adds to the credibility of your learning product?

Anamika B on March 14, 2008 at 9:09 AM said...

The purpose of QC is to ensure that no hygiene issues are left out in our attempt to address the key requirements. In other words, a course with a Zero Defect QC report assures the client's acceptance of the standards and hygiene aspects.

QC does not have any set parameters or criteria for checking the accuracy of instructional design strategies. In other words, validating the accuracy of the instructional design is outside QC's scope.
This leads to another question: What are the parameters of good instructional design? Or, what are the best practices in instructional design that ensure a good learning experience?
How should these be checked and measured?

Anonymous said...

Interesting discussion!!
One thing I would like to clarify is that we, in our usage, have made Quality Control (QC) synonymous with the test center (TC).

It is important to understand what QC means before I put my point forward.

QC is a system for verifying and maintaining a desired level of quality in a product or process by careful planning, use of proper equipment, continued inspection, and corrective action as required (dictionary definition). Given this, you will be able to relate to the fact that activities such as ID reviews, editorial reviews, and SME reviews at different stages of the development process are planned to control the quality of the final product. So QC does not mean just checking hygiene issues.

Each QC role has its specific focus area. During reviews, an ID reviewer keeps instructional design and the learner experience in mind, while an SME focuses on content accuracy. TC, too, is not supposed to check for hygiene issues; it has a bigger role in life (read: the development process), which is user validation. It is unfortunate that we use TC to catch all the ATD issues in our work.

A strategy that doesn't work for the learner, inaccurate content, and grammatical errors are all issues that impact learning. Therefore, in my view, the key expectation from any QC role should be to enhance the overall experience of using a product, not merely to weed out ATD issues.
