A humorous and educational look at speech pathology.


Follow-up on Critical Reasoning Post re: CNN Article About AAC and iOS

Hey all,
I promised that I’d post an update if I heard back from AssistiveWare about the figures cited in the CNN article, and I heard back this afternoon. Short version: AssistiveWare’s white paper was misquoted, and their survey was deliberately designed as a self-selecting report from those interested in AAC on iOS (and was clearly reported by AssistiveWare as such). David at AssistiveWare was also very helpful in providing a reference to their actual survey, which, from my reading, describes its methods well and reports figures far more consistent with my experience and other consultations.

I hope this provides a fair picture of what CNN reported versus what AssistiveWare actually found, and that it emphasizes the importance of critical thinking when reading news reports on AAC and speech pathology (or on any topic, really).

John

Better Speech and Hearing Month: Speech Pathology and Critical Reasoning

Update: I heard back from AssistiveWare about a day after I e-mailed them. In brief, CNN misquoted their report, and the accurate report (which presents a survey with methods clearly described and results that make sense) can be found here.

As speech pathologists, we hear about the need for good evidence and data tracking on a consistent, if not constant, basis. As we review data, we develop a sense for when it is inconsistent or unreliable. For example, if a student taking a standardized language test scores 40 points higher on the expressive portion of the assessment than on the receptive portion, something is likely off, because the student is supposedly expressing language they’re unable to understand (in my experience, this frequently points to a culturally biased assessment, environmental interference with testing, or the presence of an attention disorder, although I always get the latter verified through other disciplines).

Ideally, we should apply that same “that doesn’t follow” instinct to research and other data we encounter related to our profession and others. I recently came across such a case reading this article about iPads and autism. The article makes a number of vague and questionable implications about the “revolution” in AAC the iPad created (to be fair, the iPad significantly advanced access and interface design, but the media sometimes seems to present the iPad as having invented AAC, or mentions the existence of previous devices only in paragraph 10), but I found the following statement particularly interesting:

“David Niemeijer, founder and CEO of Amsterdam-based AssistiveWare, creator of Proloquo2Go, said that 90% of AAC users use an iPad for communication, and more than 25% use an iPhone or iPod Touch, according to the company’s surveys.”

These numbers, coming from the CEO of a prominent company in speech-generating AAC software, seemed inflated, especially since Medicaid (at least in Michigan) isn’t paying for iPads, iPhones, or iPod Touches. In my five years working in schools and home care, I’ve encountered two clients who use either device – I’ve personally (with my admittedly limited sample size) seen more kids using GoTalks than iPads.

Thinking about the issue a little more deeply (and only a little – I’m blogging, not doing peer-reviewed research), the following questions about the figures emerged:

1. Did this get reported accurately? Reporters mishear (or “mishear”) executives all the time, especially when a certain figure would look impressive in an article, and verifying that the figures were accurately reported is the first step when analyzing the information.

2. Do the figures even fit together? Even assuming that everyone using AAC who does not use an iPad uses an iPhone or iPod Touch for communication (thereby minimizing the overlap between the figures), more than 15% of AAC users would have to use both an iPad and an iPod Touch/iPhone for communication – itself a pretty extraordinary figure (see the quick arithmetic sketch after this list).

3. How are the terms defined – in particular, what’s meant by “AAC” and what’s meant by “for communication”? Does the AAC definition include low-tech solutions like communication boards or a pen and paper? Does “for communication” refer specifically to using the iOS device as a speech-generating device, or would it also count if, say, someone used a communication board or GoTalk to generate the message but used FaceTime to send that message to its recipient?

4. What populations were included in these surveys? With the article being about children, did the surveys include populations that have different communication needs and different levels of technological familiarity (thinking in particular of adults with aphasia or cerebral palsy, who may not be appropriate for any high-tech AAC solution)?

5. How was the data collected for this survey? A few variables are critical here: who was asked (a parent and an SLP may have different definitions of what AAC is, and different response rates); how respondents were selected (public health enrollment lists? AssistiveWare’s mailing list? Names out of a hat?); and the manner of contact and response (physical mail-in surveys get responses at a different rate, and from different demographics, than phone or web surveys). All three play a significant role in shaping the data collected (the toy simulation after this list shows how much the sampling frame alone can matter).
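
To make the arithmetic in question 2 concrete, here’s a minimal sketch in Python. The 90% and 25% figures come straight from the article as quoted; the minimum-overlap calculation is plain inclusion-exclusion, nothing specific to AssistiveWare’s actual data:

```python
# Figures as quoted in the CNN article, as percentages of AAC users.
ipad_pct = 90.0          # "90% of AAC users use an iPad for communication"
iphone_ipod_pct = 25.0   # "more than 25% use an iPhone or iPod Touch"

# The two groups together can't exceed 100% of AAC users, so by
# inclusion-exclusion the overlap (people in both groups) is at least:
min_overlap_pct = max(0.0, ipad_pct + iphone_ipod_pct - 100.0)

print(f"At least {min_overlap_pct:.0f}% of AAC users would have to use "
      f"both an iPad and an iPhone/iPod Touch for communication.")
```

That’s where the 15% in question 2 comes from: even under the most charitable reading, more than 15% of all AAC users would need to own and use two separate iOS devices for communication.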

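And on question 5, the sampling frame alone can dominate the result. Here’s a toy simulation of a self-selecting survey; every number in it is invented purely for illustration and says nothing about AssistiveWare’s actual methods:

```python
import random

random.seed(42)

# Hypothetical population of AAC users; assume (for illustration only)
# that 20% of them actually use an iPad for communication.
POPULATION = 100_000
TRUE_IPAD_RATE = 0.20
uses_ipad = [random.random() < TRUE_IPAD_RATE for _ in range(POPULATION)]

# Self-selecting web survey: suppose iPad users (reached through, say, an
# iOS app maker's mailing list) respond far more often than everyone else.
RESPOND_IF_IPAD = 0.30
RESPOND_OTHERWISE = 0.02
responses = [u for u in uses_ipad
             if random.random() < (RESPOND_IF_IPAD if u else RESPOND_OTHERWISE)]

print(f"True iPad rate in population:  {TRUE_IPAD_RATE:.0%}")
print(f"Rate among survey respondents: {sum(responses) / len(responses):.0%}")
# With these made-up numbers, the respondent rate lands near 80% --
# roughly a 4x inflation driven entirely by who chose to answer.
```

The point isn’t that this is what happened here; it’s that who gets asked and who bothers to answer can move a headline figure this far on their own, which is exactly why the survey’s methods matter.
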
I don’t know the answer to any of these questions. I contacted AssistiveWare to ask, but since that was about fifteen minutes before writing this post, I’m unsurprised that I haven’t yet gotten a response (although if I get one, I’ll be sure to discuss it here). In the end, this isn’t about AssistiveWare or the CNN article specifically; Mr. Niemeijer may well have carefully collected data supporting the extraordinary figure that was presented, and more power to him if he does. What this is about is the speech-language pathologist’s role and responsibility, as a trained specialist in the field, to first identify information and claims that don’t fit well with our experience, then apply critical reasoning to work out why our data doesn’t mesh with those claims, and, if appropriate, modify either the data-collection methods behind the claim or our own practices to improve outcomes for our patients and students. That cycle of data collection, analysis, and modification is central to providing evidence-based practice (which, as others have recently pointed out, is not the same as research-based practice).

Further, in a culture where the media reports the information that sells best rather than the information that informs best – or, to be fair, in a field where analysis and outcomes can change with new research or new investigations of old research (I’m looking at Mr. Wakefield and his “groundbreaking” research linking autism and vaccines as an example) – identifying popular news within our scope of practice, critically analyzing its claims, and educating those we serve is a critical service. If we don’t take the lead in this, who else will?

As always, the comments field is open. I’d love to hear what others think.

John