Using TBA to Design a TEI

Using the TBA Guidelines to Design a Technology-Enhanced Item (TEI): A Hypothetical Case Study 

This blog article by Belinda Brunner, Assessment Systems Corporation, and Cynthia Parshall, PhD, Touchstone Consulting, is part of a series supporting the ITC/ATP Guidelines for Technology-Based Assessment. These authors contributed to Chapter 1 of the Guidelines.

I used to play Wii Sports, the video simulation game, and I became quite good at Wii Tennis.

But I’m a horrible tennis player in the ‘real world.’

In assessment terms, the simulated game is not a good measure of tennis-playing ability: it didn’t assess the full breadth of the construct needed to play tennis well. For instance, Wii Tennis didn’t measure the skills needed to move across a tennis court. And swinging the Wii remote is not a good analogue for the skill required to actually swing a racket.

The same challenges arise when trying to assess skills in less than fully authentic ways on high-stakes assessments. And yet it can be advantageous, and perhaps even necessary, for some licensure and certification assessments to measure practical skills.

Technology-enhanced items (TEIs) have become an attractive format for measuring such skills. TEIs show promise, but they need to be well designed to avoid certain risks. These assessments can be affected by construct underrepresentation (CU), meaning the test doesn’t adequately assess all aspects of the targeted construct, and, in particular, by construct-irrelevant variance (CIV), the introduction of extraneous variables that may affect assessment outcomes. For example, using a computer mouse as a substitute for an action that in the real world uses a different instrument can introduce CIV for test takers who are less adept at using a mouse.

(Hypothetical) Case Study in Phlebotomy 

Let’s take a look at the design stage of a hypothetical new test to assess phlebotomy skills. Phlebotomists are the healthcare professionals who draw blood from patients. Our case study will walk through a well-crafted TEI design process and highlight how this process benefits from the Guidelines for Technology-Based Assessment.

In our purely hypothetical case, a job task analysis was conducted to identify the tasks performed by phlebotomists, along with the knowledge, skills, and abilities needed to perform those tasks. While some of the knowledge needed for this job role may be adequately tested using more traditional item types, such as multiple choice, drawing blood is, of course, a hands-on skill. The test sponsor therefore decided to explore technology-enhanced items to more fully assess the skill component, and to use the Guidelines for Technology-Based Assessment to ensure well-designed TEIs.

The Guidelines have a section specifically on TEIs. These TEI guidelines start with Guideline 1.5, which recommends that the development of TEIs begin with an analysis of construct needs. In the phlebotomy case study, this guideline was addressed by a team of subject matter experts (SMEs) who reviewed the test blueprint to identify tasks that could not be adequately tested using text-based item types alone. With this step, the test sponsor was able to focus the use of TEIs on constructs that cannot easily be tested through more traditional item types.

In this hypothetical situation, the SMEs identified that the actual task of drawing blood, so central to a phlebotomist’s role, could not be adequately assessed using a text-based item type. The SMEs used brainstorming and storyboarding techniques to identify potential item types that would more fully assess the skill involved in drawing blood. The test sponsor then conducted a cost-benefit analysis (recommended in Guideline 1.7) on each storyboard idea. Costs considered in this analysis included factors such as implementation costs, the complexity of authoring and delivering each item type, the number of measurement opportunities provided by the item versus the time needed to respond to it, and whether the item type could be used for multiple areas of the test blueprint.
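
To make those trade-offs concrete, a comparison like this can be organized as a simple weighted-scoring exercise. The sketch below is purely illustrative: the factor names, weights, candidate item types, and ratings are our own assumptions for this hypothetical case, not something prescribed by the Guidelines.

```python
# Hypothetical weighted cost-benefit comparison of storyboarded TEI ideas.
# Factor names, weights, and ratings are illustrative assumptions only.

FACTORS = {
    "implementation_cost": -0.30,    # higher cost rating lowers the score
    "authoring_complexity": -0.20,   # harder to author/deliver lowers the score
    "measurement_per_minute": 0.30,  # more measurement per response minute raises it
    "blueprint_reuse": 0.20,         # usable across multiple blueprint areas raises it
}

# Ratings on a 1-5 scale gathered from SMEs and the test sponsor (hypothetical).
storyboards = {
    "mouse_based_needle_simulation": {
        "implementation_cost": 4, "authoring_complexity": 4,
        "measurement_per_minute": 5, "blueprint_reuse": 4,
    },
    "video_with_hotspot_questions": {
        "implementation_cost": 2, "authoring_complexity": 2,
        "measurement_per_minute": 2, "blueprint_reuse": 3,
    },
}

def weighted_score(ratings: dict) -> float:
    """Combine the factor ratings for one storyboard into a single comparison score."""
    return sum(FACTORS[name] * value for name, value in ratings.items())

for name, ratings in sorted(storyboards.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{name}: {weighted_score(ratings):+.2f}")
```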

Based upon this analysis, the sponsor decided to explore how a computer mouse might be used to simulate inserting a needle to draw blood. A key issue to address in this exploration was how to ensure that construct-irrelevant factors, such as the degree of familiarity with using a mouse, were minimized as much as possible (in accordance with Guideline 1.6). To achieve this, the test sponsor followed a systematic process for designing and developing the simulated item type. Key parts of the process were the development of prototypes and the use of iterative, user-centered research. This research used the TEI prototypes and think-aloud protocols to refine the item type and the instructions provided to test takers on how to interact with and respond to the items (Guideline 1.8).
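
One practical way to watch for CIV during prototyping is to have the prototype record both the clinically relevant response (for example, insertion site and angle relative to the target vein) and interaction measures that mostly reflect mouse handling (for example, cursor travel and the number of corrections). The sketch below is a minimal, hypothetical illustration of that separation; the field names, tolerances, and thresholds are our assumptions, not part of the Guidelines.

```python
from dataclasses import dataclass

@dataclass
class NeedleAttempt:
    """One simulated needle insertion recorded by a hypothetical prototype."""
    insertion_offset_mm: float   # distance from the target vein (construct-relevant)
    insertion_angle_deg: float   # angle of insertion (construct-relevant)
    cursor_travel_px: int        # total mouse movement (largely construct-irrelevant)
    corrections: int             # re-positioning actions before inserting (largely irrelevant)
    seconds_to_complete: float

def construct_relevant_score(a: NeedleAttempt) -> int:
    """Score only what the blueprint targets: insertion site and angle, not mouse dexterity."""
    site_ok = a.insertion_offset_mm <= 2.0            # illustrative tolerance
    angle_ok = 15.0 <= a.insertion_angle_deg <= 30.0  # illustrative acceptable range
    return int(site_ok) + int(angle_ok)

def flag_possible_civ(a: NeedleAttempt) -> bool:
    """Flag attempts where mouse handling, rather than phlebotomy skill, may drive the outcome."""
    return a.corrections > 5 or a.cursor_travel_px > 20_000

attempt = NeedleAttempt(1.4, 22.0, 6_500, 2, 41.0)
print(construct_relevant_score(attempt), flag_possible_civ(attempt))
```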

Using a think-aloud protocol, a small sample of phlebotomists was asked to interact with the prototype items and verbalize their thoughts while doing so. The think-aloud method is a cost-effective and simple way to gain user feedback, and getting this type of feedback during the item development phase ultimately saves time and expense. Although pilot testing is still necessary (Guideline 1.13), addressing issues earlier in the development process means a greater opportunity for success when the item type is piloted. This ‘pre-emptive’ research also means that if an item type proves infeasible, the problem can be identified early, so that development efforts can be redirected toward other item types.
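
Findings from each think-aloud round can be coded and tallied in a lightweight way to show whether the issues being surfaced concern the construct itself or the interface, which in turn feeds the go/no-go decision described next. The categories, severities, and counts below are hypothetical examples, not data from an actual study.

```python
from collections import Counter

# Hypothetical coded observations from one round of think-aloud sessions.
# Each tuple is (issue_category, severity) assigned by the research team.
observations = [
    ("unclear_instructions", "high"),
    ("mouse_control_difficulty", "medium"),
    ("unclear_instructions", "medium"),
    ("unrealistic_vein_graphic", "low"),
    ("mouse_control_difficulty", "high"),
]

by_category = Counter(cat for cat, _ in observations)
high_severity = Counter(cat for cat, sev in observations if sev == "high")

print("All issues:", dict(by_category))
print("High-severity issues to address before the next prototype round:", dict(high_severity))
```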

Once the rounds of iterative prototyping and usability testing were completed, a go/no-go decision was made about whether to continue the development process with pilot testing. 

Drawing on SME input and the knowledge gained from the usability studies, tutorials and practice materials (Guideline 1.14) and on-screen instructions (Guideline 1.15) were written. SME input was particularly helpful in creating item-writing guidelines (Guideline 1.11).

Conclusion

Although this was a hypothetical case study, it is based upon our experience implementing TEIs for a variety of assessment programs. The design stage for new item types is just the first step; we haven’t addressed the development stage (see Guidelines 1.9, 1.10, 1.12, 1.16, and 1.17). But laying the groundwork for TEI implementation through a systematic design process is an all-important first step. Careful planning, implementation, and evaluation should also be carried out throughout the development process. Using the TBA Guidelines will help guide both the design and the development stages, leading to sound item types and successful items.

The Guidelines may be downloaded at no charge from ATP’s website: www.testpublishers.org/atp-white-papers

 
