What's Next? (WN) - Updated Norms as of 29 January 2024

This table presents the equivalencies between raw scores and standard scores for adults. These equivalencies are derived from the relationship established through Z-score equating between the Jouve-Cerebrals Crystallized Educational Scale (JCCES) Crystallized Educational Index (CEI) and the 'What's Next?' (WN) Test. Z-score equating is a statistical method used to standardize scores from different tests onto a common scale (Kolen & Brennan, 2014). Each score is transformed into a Z-score, the number of standard deviations it lies from the mean of its distribution, which allows scores from different tests to be compared directly.
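As a minimal sketch, Z-score equating reduces to a linear transformation: standardize a raw score on one test's scale, then re-express it on the other's. The means and standard deviations below are hypothetical placeholders for illustration only; the actual JCCES CEI and WN norming statistics are not reported in this note.

```python
# Hypothetical summary statistics (illustrative only; not the published norms).
CEI_MEAN, CEI_SD = 100.0, 15.0
WN_MEAN, WN_SD = 20.0, 10.0

def equate_wn_to_cei(wn_raw: float) -> float:
    """Map a WN raw score onto the CEI scale via Z-score (linear) equating."""
    z = (wn_raw - WN_MEAN) / WN_SD   # standardize on the WN distribution
    return CEI_MEAN + z * CEI_SD     # re-express on the CEI distribution
```

Under these placeholder values, a WN raw score one standard deviation above the WN mean maps to a CEI of 115; the transformation preserves each score's relative standing, which is exactly what the method assumes is comparable across the two tests.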

The correlation of .87 (N = 22) between the JCCES CEI and the WN Test highlights a strong linear relationship in their measurement of cognitive abilities, making Z-score equating an appropriate method for establishing score equivalencies. However, there are limitations to this method that must be acknowledged (Dorans, Pommerich, & Holland, 2007).

Z-score equating assumes that scores on both tests are normally distributed. If this assumption is not met, the equating may not accurately reflect the relationship between the tests; in particular, a skewed score distribution on either test could produce inaccurate equated scores.

Another limitation concerns the sample size on which the correlation is based. While a correlation of .87 is indicative of a strong relationship, it rests on a relatively small sample (N = 22). This raises questions about the generalizability of the equating to the entire population of test-takers (Thorndike, 2010).

Additionally, Z-score equating does not account for potential differences in difficulty level or content between the JCCES CEI and the WN Test. If one test is inherently more difficult or covers different aspects of cognitive ability, equated scores might not fully capture these nuances.

Despite these limitations, Z-score equating provides a valuable tool for comparing scores across different tests, especially when direct comparisons are necessary, as in the case of the JCCES CEI and the WN Test.

Standard scores quantify an individual's cognitive abilities relative to a normative population. Following the Stanford–Binet Fifth Edition (SB5) classification (Roid, 2003), the scale has a mean of 100 and a standard deviation of 15. The classifications for the WN Test, based on its correlation with the JCCES CEI, are as follows: scores of 140 and above are categorized as 'Very gifted or highly advanced'; 130 to 139, 'Gifted or very advanced'; 120 to 129, 'Superior'; 110 to 119, 'High average'; and 100 to 109, 'Average'.

Table 1
Correspondence between Standard Scores (Mean = 100, SD = 15) and WN Test Raw Scores

Standard Score (JCCES CEI)    WN Raw Score    Qualitative Description
140+                          45+             Very gifted or highly advanced
130–139                       34–44           Gifted or very advanced
120–129                       23–33           Superior
110–119                       12–22           High average
100–109                       1–11            Average

Note. The presented equivalency between the WN Test scores and the JCCES CEI, with a correlation coefficient of .87 (N = 22), serves as a preliminary guide. Users are advised to interpret these equivalencies cautiously, as they are subject to modification based on future data collection and analysis.
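The raw-score bands in Table 1 amount to a simple lookup. The sketch below encodes them as a function (the function name is mine, not part of the WN materials); raw scores below 1 fall outside the table and are rejected rather than guessed at.

```python
def classify_wn_raw(raw: int) -> tuple[str, str]:
    """Return (standard-score band, qualitative description) for a WN raw
    score, following the bands in Table 1."""
    bands = [
        (45, "140+", "Very gifted or highly advanced"),
        (34, "130-139", "Gifted or very advanced"),
        (23, "120-129", "Superior"),
        (12, "110-119", "High average"),
        (1,  "100-109", "Average"),
    ]
    for cutoff, band, label in bands:
        if raw >= cutoff:
            return band, label
    raise ValueError("Raw scores below 1 are not covered by Table 1.")
```

For example, a raw score of 40 falls in the 34–44 band and is classified 'Gifted or very advanced'.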

The 'What's Next?' (WN) Test exhibits notable internal consistency, as evidenced by a Guttman's Lambda-6 coefficient of .96, based on a sample of 85 participants. This level of reliability supports confident interpretation of the test's scores and is a necessary, though not sufficient, condition for its validity (Allen & Yen, 2002).

As outlined by Guttman (1945), Lambda-6 serves as an index of internal consistency, akin to Cronbach's Alpha. It evaluates the extent to which a test uniformly measures a specific construct. In the context of the WN Test, a Lambda-6 of .96 indicates a high degree of consistency among test items in assessing cognitive capabilities, mirroring the principles established by Cronbach (1951).
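Lambda-6 is computed from how well each item is predicted by the remaining items: it equals one minus the ratio of summed item error variances to total test variance, where each item's error variance is its variance times one minus its squared multiple correlation (SMC) with the other items. A sketch of that computation, assuming a person-by-item score matrix (the original WN item data are not reproduced here):

```python
import numpy as np

def guttman_lambda6(items: np.ndarray) -> float:
    """Guttman's Lambda-6 for an (n_persons, n_items) score matrix."""
    cov = np.cov(items, rowvar=False)
    corr = np.corrcoef(items, rowvar=False)
    # SMC of each item with the remaining items, obtained from the
    # diagonal of the inverse correlation matrix.
    smc = 1.0 - 1.0 / np.diag(np.linalg.inv(corr))
    item_error_var = np.diag(cov) * (1.0 - smc)
    total_var = cov.sum()  # variance of the total score
    return 1.0 - item_error_var.sum() / total_var
```

Because each item's error variance is estimated after regressing it on all other items, Lambda-6 is typically at least as large as Cronbach's Alpha for the same data, which is one reason it is reported alongside Alpha.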

This robust reliability of the WN Test has several important implications:

1) Score Interpretation Confidence: The high Lambda-6 score underlines the reliability of the WN Test scores, allowing for confident interpretation of an individual's cognitive abilities.

2) Test Consistency Across Samples: The WN Test's reliability, as indicated by its Lambda-6 score, suggests consistent test performance across diverse demographic samples, enhancing its generalizability (Anastasi & Urbina, 1997).

3) Utility in High-Stakes Contexts: Given the test's high internal consistency, it becomes a reliable tool for critical decision-making processes, such as educational placements or professional assessments (Messick, 1989).

4) Comparative Validity: The high internal consistency of the WN Test, paired with its .87 correlation with the JCCES CEI (N = 22), supports the comparability and interpretative value of scores across these measures.

References

Allen, M. J., & Yen, W. M. (2002). Introduction to Measurement Theory. Waveland Press.

Anastasi, A., & Urbina, S. (1997). Psychological Testing (7th ed.). Prentice Hall.

Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16(3), 297-334. https://doi.org/10.1007/BF02310555

Dorans, N. J., Pommerich, M., & Holland, P. W. (2007). Linking and Aligning Scores and Scales. Springer. https://doi.org/10.1007/978-0-387-49771-6

Guttman, L. (1945). A basis for analyzing test-retest reliability. Psychometrika, 10(4), 255-282. https://doi.org/10.1007/BF02288892

Kolen, M. J., & Brennan, R. L. (2014). Test Equating, Scaling, and Linking: Methods and Practices (3rd ed.). Springer. https://doi.org/10.1007/978-1-4939-0317-7

Messick, S. (1989). Validity. In R. L. Linn (Ed.), Educational Measurement (3rd ed.). American Council on Education and Macmillan.

Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric Theory (3rd ed.). McGraw-Hill.

Roid, G. H. (2003). Stanford-Binet Intelligence Scales, Fifth Edition. Riverside Publishing.

Thorndike, R. L. (2010). Measurement and Evaluation in Psychology and Education (8th ed.). Pearson.