Conducting a national assessment is a massive task involving a number of activities, and it requires a dedicated and well-coordinated research team to implement successfully. National assessment is by nature not cheap, so the exercise should not be undertaken unless all those concerned agree that its results are likely to have an impact on policymaking. Thus, a strong government commitment to its implementation is required at the outset. Failure to secure this commitment may result in a great deal of useless information being collected at high opportunity cost. The purpose of the national assessment determines how the results will be reported and used; consequently, the purpose and intended uses of the results are central to decisions about a number of subsequent activities.
A fundamental decision is what information is to be collected, and at what level of education. When the survey is conducted for the first time, the number of subjects included should be kept small. Once the assessment is established, the number of subjects may be increased and the information collected may be extended to other aspects of the education system.
All learners at the selected level form what is called the survey population. Not all of them will participate in the survey, however, because of cost and logistical constraints. The best and most cost-effective practice is to take a representative sample from the population. It is important that every learner in the target population, irrespective of ability, background, ethnicity or gender, stands an equal chance of being included in the sample.
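The "equal chance of inclusion" principle can be illustrated with a minimal sketch of simple random sampling. This is illustrative only: the learner IDs and sample size below are hypothetical, and real national assessments typically use more elaborate designs such as stratified or cluster sampling.

```python
import random

def simple_random_sample(population, sample_size, seed=None):
    """Draw a sample in which every member of the population has an
    equal chance of being selected (simple random sampling)."""
    rng = random.Random(seed)  # seeded for a reproducible draw
    return rng.sample(population, sample_size)

# Hypothetical survey population of 5,000 learner IDs.
learners = [f"learner_{i:04d}" for i in range(1, 5001)]

# Select 200 learners without replacement.
chosen = simple_random_sample(learners, 200, seed=42)
print(len(chosen))  # 200
```

Because `random.sample` draws without replacement and treats every element alike, no learner's ability, background, ethnicity or gender influences their chance of selection.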
There are a number of techniques for achieving a representative sample, but we will not go into those technicalities here. The same sampling logic applies to test content: an examination is itself only a representative sample of the subject content that has been learnt, so learners are never tested on everything covered in their programme. This is why it is important to include continuous assessment marks in determining learners' final grades: doing so enhances the validity of the assessment.
Tests administered in a national assessment could be paper-and-pencil tests, particularly selected-response items such as multiple-choice, matching, or true/false items. They could also be constructed-response items or performance items. Constructed-response items require learners to create their own responses or products rather than choose a response from an enumerated set. Performance items, on the other hand, cover a wide range of assessment tasks that are product- or process-based and designed to emulate the real-life contexts or conditions in which specific knowledge or skills are actually applied.
Each of these formats assesses a different set of knowledge and skills, so all of them are worth including in national assessment tests. However, deciding on the mix of formats and how much of each to include depends largely on the purpose and intended uses of the test; the time available for testing, scoring, and report preparation; and cost implications.
Developing test items that give accurate information about the state of the education system is not an easy task. The tests must be reliable, fair, free from bias, and valid. To ensure this, development should involve a range of stakeholders, and the tests should then be piloted and field tested. Piloting is done with a smaller sample to check whether items are working as originally intended and to refine procedures. Field testing, on the other hand, administers the camera-ready tests to a much bigger sample, more or less in a real-world environment, to check for practicability. In addition to the standardized tests, questionnaires are administered to learners, parents, teachers and school administrators to collect background information about the learning environment; this helps to explain learners' achievement and how best they can be helped to attain more.
Yes, it’s possible!