- May 11
Top 10 Avoidable Mistakes Made by Credentialing Programs
1. Weak Value Proposition – Before creating a new credentialing program, it's critical to conduct market research to ensure that the program targets the right audience, has value for all stakeholders (e.g., the sponsoring organization, certificants, employers, public/consumers) and offers value beyond, or distinct from, established certification programs for the occupation.
2. Weak Business Model – A credentialing program is a business and should be run like one. Your program is subject to the forces of supply and demand, competition and external factors like legislation and regional, national and global events. Use the right forecasting model, budget realistically and treat your testing program like a long-term business investment.
3. No Clear Roadmap – To develop a new program as efficiently as possible, make sure there's a shared understanding among stakeholders about the purpose of the program and target audience for the credential. There must also be agreement about the structure and interrelationship between different certification tracks, the interpretation of test scores and the definition of the minimally qualified candidate. Eligibility requirements and recertification requirements should also be determined and agreed upon. Changing focus or direction after test development has begun can be expensive in terms of money, time, resources and lost momentum.
4. Unclear Definition of Content Domain – A job task analysis is required for the legal defensibility of a testing program. It is also essential for defining the body of knowledge to be measured on the exam. A good job analysis (and the test blueprint that it informs) should clearly define the exam content for item writers and reviewers so that a valid determination can be made from the exam scores regarding minimal competence in the certified occupation or job role.
5. Not Following Professional Testing Standards – To ensure that the certification program is high quality and earns credibility among stakeholders, it's critical to be aware of and follow professional testing standards throughout all phases of the initial exam development.
Common examples of mistakes in this area include programs that develop an exam without a job-task analysis or test blueprint and programs that follow professional standards for test-development activities, but then set an arbitrary cut score. Additionally, bringing programs up to professional standards after they are operational is costly and inefficient.
6. Beta Testing – Identifying and recruiting beta test takers who are not representative of the exam's target audience can skew the standard of competence for the exam (i.e., cut score). Frequently, new certifications make the mistake of recruiting established, experienced professionals for the beta-test administration, thus making the exam appear easier than it would be for candidates with less experience. This can cause lower future pass rates and bring the validity of the exam pass/fail decisions into question.
7. Recruiting the Wrong SMEs – If the SMEs recruited to write and review test items are not representative of the target audience for the certification (e.g., do not have experience performing the job, have too much experience or have expertise that is too specialized), the exam items developed can be too difficult. The items can also have too narrow a focus to be fair and effective for the target audience. Additionally, the dynamics of the SME group (e.g., dominant group member, personality conflicts among SMEs) can impact the quality of the exam items when all members do not have an equal opportunity to contribute to the item-review process.
8. Maintenance Plan Not Implemented – Once created, the certification exams and program need to be maintained. A sufficient budget is required for annual exam maintenance activities like item writing and review, exam form revision and item analysis. Provisions should also be made for reviewing cut score determination and equating. A new job task analysis should be performed approximately every five years.
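The item analysis mentioned above can be illustrated with a small sketch using classical test theory statistics: item difficulty (the proportion of candidates answering correctly, or p-value) and point-biserial discrimination (the correlation between an item score and the total score). The function name and data here are hypothetical, and the discrimination uses the uncorrected total score for simplicity.

```python
# Hypothetical sketch of a classical item analysis, one of the annual
# exam-maintenance activities described above. Data and names are
# illustrative only.
from statistics import mean, pstdev

def item_statistics(responses):
    """Compute difficulty (p-value) and point-biserial discrimination
    for each item. `responses` is a list of candidate score vectors,
    each a list of 0/1 item scores."""
    n_items = len(responses[0])
    totals = [sum(r) for r in responses]  # each candidate's total score
    stats = []
    for i in range(n_items):
        item = [r[i] for r in responses]
        p = mean(item)  # difficulty: proportion answering correctly
        # Point-biserial: Pearson correlation of item score with total
        # score (uncorrected, i.e., the item itself is included in the
        # total -- a common simplification).
        sd_total = pstdev(totals)
        sd_item = pstdev(item)
        if sd_total == 0 or sd_item == 0:
            rpb = 0.0  # no variance, correlation undefined
        else:
            cov = mean(x * t for x, t in zip(item, totals)) - p * mean(totals)
            rpb = cov / (sd_item * sd_total)
        stats.append({"difficulty": round(p, 2), "discrimination": round(rpb, 2)})
    return stats

# Example: four candidates, three items (1 = correct, 0 = incorrect)
results = item_statistics([[1, 1, 0], [1, 0, 0], [1, 1, 1], [0, 0, 0]])
```

Items with very high or low difficulty, or with near-zero (or negative) discrimination, are the ones flagged for SME review or retirement during maintenance.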
9. Using the Wrong Tools – There are many tools available to streamline test-development and maintenance activities. These include the ability for item writers to enter items directly into an online item bank and an audit trail of edits. Historical information should be tracked with each item, alongside real-time access to information on exams and candidates and the ability to update exams or items quickly. Some of these tools have the added benefit of increasing test security.
10. Insufficient Test-Security Measures – Security should be incorporated into every step of test-development and test-administration activities. Security measures include, but are not limited to, non-disclosure agreements from anyone with access to test materials, such as SMEs, staff, contractors, volunteers and candidates. Access to test materials should be restricted to only the material that is needed, for only the period of time that it is needed. For example, SMEs should only have access to items that are being reviewed during the review phase.