Friday, September 25, 2020

Where's the evidence for "evidence based" interventions and approaches?

Following up on a question about so-called evidence-based strategies, I went down the "validity" rabbit hole. The question had to do with the validity of the Woodcock-Johnson assessment when given to autistic students. I read the section of the WJ technical manual that details the creation of the instrument. It is a general overview and discussion; it does not mention the sample used to "norm" the instruments.

I found this reference that describes the sample used to create the "norm": https://www.ux1.eiu.edu/~glcanivez/Adobe%20pdf/Publications-Papers/Canivez%20(2017)%20WJ%20IV%20Review.pdf 

Missing from the discussion of the demographics is any language specifying that those whom we would currently describe as neuro-diverse, those from the SPED population, or disabled people more broadly were included in the sample. Reading the discussion of how the "norm" was created, this population would have had to be delimited and noted as a covariate (CV) in the process. It wasn't.

There's also no discussion of the results applying across populations. They created a model of "the middle of the US" to build their norm. How many standard deviations (SDs) from the median is my population? We don't know. As usual, it wasn't factored in.
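To make the concern concrete, here is a minimal sketch of how norm-referenced scoring works in general. The numbers are invented for illustration, not WJ's actual norming tables or scoring procedure: the point is that a standard score only means something if the examinee is drawn from the same distribution as the norming sample.

```python
def standard_score(raw, norm_mean, norm_sd):
    """Convert a raw score to a standard score (mean 100, SD 15)
    relative to the norming sample's distribution."""
    z = (raw - norm_mean) / norm_sd  # SDs from the norm group's mean
    return 100 + 15 * z

# Hypothetical norming sample ("the middle of the US"): mean 50, SD 10.
print(standard_score(50, norm_mean=50, norm_sd=10))  # 100.0 -> "average"

# If an examinee's own population happens to be centered elsewhere
# (say, a typical raw score of 40 for that group), the same formula
# still judges them against the norm group's distribution:
print(standard_score(40, norm_mean=50, norm_sd=10))  # 85.0 -> "below average"
# ...with no evidence that the norm generalizes to that population at all.
```

The second call illustrates the problem: the arithmetic runs fine, but the interpretation silently assumes the examinee belongs to the norming distribution, which is exactly what was never established for the populations discussed here.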

Given the above, there is ample evidence that the WJ is valid in assessing those within the normal distribution of the human population (the middle), but little to no evidence that it is valid or reliable in assessing those outside of that population. This paper outlines why: Keen, D., Webster, A., & Ridley, G. (2016). How well are children with autism spectrum disorder doing academically at school? An overview of the literature. Autism: The International Journal of Research and Practice, 20(3), 276. doi:10.1177/1362361315580962. An interesting takeaway from that paper: in one of the studies examined, the WJ was administered to autistic students within a Gifted and Talented Education program / general ed setting, while the others had the WJ (III and IV) administered to an unspecified population: "Of the 19 studies reviewed, eight did not provide any information on educational enrollment or placement of participants." The authors add: "The lack of studies involving participants in the adolescent years was disturbing, although it is consistent with a more general trend that has identified a lack of research relating to outcomes, strengths and needs of people with ASD in adolescence and adulthood."

Jennifer Kurth seems to be the only researcher focused on this gap, the lack of studies in this area. In Kurth, J. A., Hagiwara, M., Enyart, M., & Zagona, A. (2017). Inclusion of students with significant disabilities in SWPBS evaluation tools. Education and Training in Autism and Developmental Disabilities, 52(4), 383-392, the authors examine the exclusion of disabled students from interventions, a discussion I've had recently in the other spaces of my life.

Summary: there's no real evidence that these evidence-based practices work for the population of learners in the special education space ... and ample evidence that they don't work and are quite harmful. Nevertheless, the establishment often argues that this is the system as it is and that we must deliver these assessments in order to ___ fill in the blank ___. Given my background, it's hard to take the activist hat off.

In all of this, I'm reminded of a quote from an old professor of mine at the University of Toronto: "Do we assess what we value, or value what we assess?" The WJ brings that quote to the front of my mind, seemingly valuing the assessment process and the data it produces with little consideration of the validity of the results or the harm caused by its administration.

Now ... take any "evidence-based" assessment or intervention. Do your own research. Find the sample and the discussion of the tests of validity and reliability. Look into this data and see if there's any mention of the inclusion of the neuro-diverse community. I'll wager that you'll find nothing. We weren't included in the sample. And ... if we weren't included in the sample, the results can't be applied to our unique population. What does this mean? There's no evidence to support many of these approaches being used on us. None. Zero. Zip. Nada.

Thus, when "experts" use this term, my radar activates. I can use my university library access to quickly assess their claims. 99% of the time, I've found that their use of the term "evidence based" is a logical fallacy: a false appeal to authority. They either hope we won't check ... or, worse, they haven't checked themselves and are simply parroting something they've heard or read.

I'll be revisiting this theme often, exposing the many ways that this term is used to marginalize our voices and sideline our advocacy. Stay tuned ... or join me. :)

2 comments:

  1. Stumbled upon your Instagram and found this highly interesting post. I'm studying to be an Ed. Diag currently and will be taking your advice to heart.

