As an LD educational consultant, I work with students who have learning or attentional challenges. Most of these students have neuropsychological testing that measures cognitive, academic, and attentional skills. I always ask to review these test reports, and I always brace myself before reading them.
Why? While the testing results that I receive are typically based on well-regarded tests (e.g., WISC-V, WIAT, Woodcock-Johnson), they are explained in vastly different ways based on the assessor’s training and personal style.
Much like medical tests, such as x-rays or blood work, well-done neuropsychological or educational tests can uncover why a student struggles—and, alternatively, reveal areas of significant strength. When the testing is done well and the report is thoughtfully written, it reads like a well-crafted story. The student comes to life as their strengths and weaknesses are conveyed in meaningful, logical ways. The reader, even without special education training, can learn who this student is as a learner and what type of specific support or remediation would be most beneficial if needed.
Unfortunately, not all assessments fall into this category. Sometimes I wonder whether the person administering the tests truly has the intended audience in mind: namely, the teachers, parents, tutors, and school admissions professionals. We know some readers of these reports only glance through key scores and then head to the summary, while others read every word. In either case, readers can be frustrated if the narrative is too technical or too brief. In general, simply describing what a certain test measures is not enough. The reader needs the whole story—specifically, how the student performed on the tests.
For instance, troublingly low scores on key tests are sometimes summed up with a quick line such as “lower than expected score.” The reader sees the low percentile or standard score and knows the score is concerning. However, they might not know why. To illustrate the importance of providing more context about the student’s performance, consider the processing speed subtests (Coding and Symbol Search) on the WISC-V. Low scores on these two subtests could signal a serious lag in how quickly a student can process and “output” information. However, low scores could also be due to challenges with fine motor skills or attention. The best reports describe, in easy-to-understand language, how the student approached the tasks. In this case, did they work too quickly and make careless mistakes (indicating attentional factors), or did they grasp their pencil awkwardly and use great effort to form each symbol? The latter indicates possible fine motor issues. These details help the reader understand what, exactly, made the score low.
I always take a deep breath before looking at the word decoding and passage reading tests, too. It’s important that the tester provide specific examples of the types of words or questions that were difficult. Sometimes students read single words and passages quite smoothly but struggle when asked about what they read, which could signal weak reading and listening comprehension skills. If the student masters inferential or predictive questions but can’t recall the main character, that might instead signal attentional weaknesses.
A higher-than-expected reading comprehension score (when a student’s decoding skills are weak) points to a student who can build a picture of the passage in their mind even while struggling to decode the actual words. The tester should say something to this effect, not simply make the obvious statement that “passage comprehension was above average.”
The Issue with Percentiles
Keep in mind that scores from roughly the 15th to the 84th percentile will be reported as “average,” including low average and high average. This is a wide range! Students with scores in the average range, especially those with high cognitive potential, may still need support. And if the student comes from a community of highly educated parents, “average” scores for that peer group might be closer to the 70th percentile or higher. The 50th-percentile midpoint is based, as we know, on a national sample of students from all parts of the US.
Also, on the all-important WISC-V, there can sometimes be concerning percentile differences across the 10 primary subtests and the index scores they form. For example, a student might score in the superior range (91st percentile) on the verbal comprehension index (VCI) but in the low average range (9th percentile) on the processing speed index (PSI). It is critical that the student’s performance on both PSI subtests be detailed.
How to Get Better Evaluations
If testing has not been done yet, always ask the neuropsychologist (directly or through your clients) to report, in detail, the type or pattern of errors the student made. Gently suggest easy-to-understand language for the write-up—too many technical terms and the benefits of a strong assessment can be lost. Whether the testing is conducted privately or through a public school team, you should request that understandable details of the student’s performance be provided. Keep in mind that these are professionals who might bristle if your request is too forceful. It’s helpful to provide a specific example from a well-done report of what you are requesting, keeping your tone light and collegial.
If the testing is less than a year old, it can be helpful to contact the neuropsychologist and ask specifically about the student’s performance wherever scores are lower than expected. The neuropsychologist should still have notes from the testing sessions.
For example, with tests that measure fluency (timed conditions), you can ask them to report: Was the student slow but accurate? Inattentive and/or moving too quickly? Also, if processing speed might be a concern, they should include a measure of processing speed that does not require a pencil. This could help determine whether the slow speed is due to weaker output or to poor fine motor skills.
For tests of decoding and reading comprehension: If a student gets lower-than-expected scores on decoding or passage reading, you need to know exactly why.
Here are some reasons for lower-than-expected reading scores:
• Poor accuracy (decoding) skills: trouble reading words accurately. You need to know exactly what was read inaccurately (e.g., phonologically regular words or irregular words that follow no rules, such as reading “cattle” for castle or “none” for known).
• Weak “fluency” skills: the student reads the words accurately but has a very slow reading pace.
• Poor accuracy and fluency: both accuracy and rate of decoding are compromised; this is sometimes referred to as a double-deficit profile of dyslexia.
• Poor attentional skills: the student misreads or skips smaller words or skips sentences.
• Weak ocular-motor skills: rare, but usually due to a lack of convergence when the eyes must coordinate to track print from left to right.
A Boston-based school admissions director recently told me she was troubled by the “bad” testing she was seeing from potential candidates. The reports didn’t allow her team to clearly understand each student’s strengths and weaknesses, so if they accepted the student, they were unsure what type of specific support was needed. A simple request for more detailed reporting in non-technical language can yield far more insightful and valuable reports. It can make all the difference in helping meet each student’s learning needs.
By Carol A. Kinlan, MEd, MBA, IECA (MA)