Understanding Different Types of Scoring Systems
Whenever you take a psychometric test, either as part of a selection process or as a practice exercise, you will usually see your results presented as numerical scores. These may be raw scores, standard scores, percentile scores, Z-scores, T-scores or Stens.
Raw Scores
Raw scores refer to your unadjusted score: for example, the number of items answered correctly in an aptitude or ability test. Some assessment tools, such as personality questionnaires, have no right or wrong answers; in that case, the raw score may represent the number of positive responses for a particular personality trait. Raw scores by themselves are not very useful. If you are told that you scored 40 out of 50 in a verbal aptitude test, this is largely meaningless unless you know where your score lies relative to the scores of other people. Raw scores need to be converted into standard scores or percentiles, which provide exactly this kind of context.
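The point about context can be sketched in a few lines of Python. The comparison group below is invented purely for illustration; real percentile norms come from the test publisher's standardisation sample.

```python
# Sketch: why a raw score alone is meaningless. A score of 40/50 only
# gains meaning relative to a comparison group. The group scores below
# are simulated, not real norm data.
import random

random.seed(1)
# Hypothetical raw scores of 1,000 other candidates on the same 50-item test
group_scores = [min(50, max(0, round(random.gauss(35, 6)))) for _ in range(1000)]

def percentile_rank(raw, group):
    """Percentage of the comparison group scoring below the given raw score."""
    return 100.0 * sum(s < raw for s in group) / len(group)

print(f"A raw score of 40 beats about {percentile_rank(40, group_scores):.0f}% of the group")
```

The same raw score of 40 would yield a very different percentile against a stronger or weaker comparison group, which is exactly why norms matter.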
How Scores are Distributed
Many human characteristics are distributed throughout the population in a pattern known as the normal curve or bell curve. This curve describes a distribution where most individuals cluster near the average and progressively fewer individuals are found the further from the average you go in each direction.
The illustration above shows the relative heights of a large group of people. As you can see, a large number of individual cases cluster in the middle of the curve and as the extremes are approached, fewer and fewer cases exist, indicating that progressively fewer individuals are very short or very tall. The results of aptitude and ability tests also show this normal distribution if a large and representative sample of the population is used.
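This clustering pattern can be reproduced with simulated data. The mean of 170 cm and standard deviation of 10 cm below are illustrative values, not population statistics.

```python
# Sketch of the bell-curve pattern: simulated "heights" cluster near the
# average, with very few cases at the extremes. Mean and SD are assumed
# values chosen for illustration.
import random

random.seed(0)
heights = [random.gauss(170, 10) for _ in range(10_000)]

near_average = sum(165 <= h <= 175 for h in heights)     # within 5 cm of the mean
very_extreme = sum(h < 140 or h > 200 for h in heights)  # 3+ SDs from the mean

print(f"Within 5 cm of the mean: {near_average}")
print(f"More than 30 cm from the mean: {very_extreme}")
```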
Mean and Standard Deviation
There are two characteristics of a normal distribution that you need to understand. The first is the mean, or average; the second is the standard deviation, a measure of the variability of the distribution. Test publishers usually assign an arbitrary number to represent the mean when they convert raw scores to standard scores. Test X and Test Y are two tests with different standard-score means.
In this illustration Test X has a mean of 200 and Test Y has a mean of 100. If an individual got a score of 100 on Test X, that person did very poorly. However, a score of 100 on Test Y would be an average score.
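The Test X / Test Y comparison is easy to express as a Z-score calculation. The means of 200 and 100 come from the example above; the standard deviations of 20 and 15 are assumptions added purely so the sketch runs.

```python
# Sketch: the same numerical score means very different things on tests
# with different standard-score means. SDs of 20 and 15 are assumed
# for illustration only.
def z_score(score, mean, sd):
    """Distance of a score from the mean, in standard-deviation units."""
    return (score - mean) / sd

print(z_score(100, mean=200, sd=20))  # Test X: 5 SDs below the mean -- very poor
print(z_score(100, mean=100, sd=15))  # Test Y: exactly average
```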
The standard deviation is the most commonly used measure of variability. It is used to describe the distribution of scores around the mean.
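For a concrete sense of what the standard deviation measures, here is a short sketch using Python's standard library; the scores themselves are made up.

```python
# Sketch: mean and standard deviation of a small set of test scores
# (invented data). pstdev gives the population standard deviation --
# the average spread of scores around the mean.
import statistics

scores = [42, 38, 45, 50, 35, 40, 44, 39, 47, 41]
mean = statistics.mean(scores)
sd = statistics.pstdev(scores)
print(f"mean={mean:.1f}, sd={sd:.2f}")  # mean=42.1, sd=4.25
```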
The value of the standard deviation varies directly with the spread of the test scores: if the spread is large, the standard deviation is large. One standard deviation either side of the mean (plus and minus) includes approximately 68% of test takers' scores; two standard deviations include approximately 95%.
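The 68%/95% rule can be verified empirically on simulated scores. The mean of 100 and standard deviation of 15 below are arbitrary choices for the simulation.

```python
# Sketch: checking the 68% / 95% rule on simulated, normally distributed
# scores (assumed mean 100, SD 15).
import random

random.seed(2)
scores = [random.gauss(100, 15) for _ in range(100_000)]

within_1sd = sum(85 <= s <= 115 for s in scores) / len(scores)
within_2sd = sum(70 <= s <= 130 for s in scores) / len(scores)

print(f"Within 1 SD of the mean: {within_1sd:.1%}")  # roughly 68%
print(f"Within 2 SD of the mean: {within_2sd:.1%}")  # roughly 95%
```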