Working with Q-Interactive

Written By: W. Howard Buddin Jr., Ph.D.
Published On: 12/18/2013

Making Candles

I’ve had several ideas over the past few years about how I might shoehorn some, or all, of a battery of tests onto my computer. Most of these involved scanning stimulus book pages and rendering them as high resolution images for display on a laptop. This, I thought, would be especially useful for tests with items that are presented for only a few seconds before the next item is shown (e.g., the TOMM).

I wanted to do things this way largely because I was frustrated with administering tests in a manner that had effectively remained unchanged for literally decades (or centuries, for that matter). It felt like a case of arrested development and stagnation in the field of neuropsychology. Other healthcare disciplines seem to have surpassed us by implementing technology to improve patient care. In fact, I can’t remember the last time my PCP wasn’t actively using a tablet to record pertinent medical information and review past records while in the room with me.

If not in actuality, then at least in pretense, my primary healthcare doc seems to be keeping up and moving forward with the latest advances in technology. I like that; maybe it’s just me, though.

The question, then, was what in the world was keeping neuropsychological test development stuck in the past?

Love, Hate

I am of two minds when it comes to neuropsychology’s relationship with the publishing industry.

  1. I appreciate the infrastructure that they bring to the table. Like it or not, large-scale production to meet the needs of an industry isn’t free. The resources needed for test development, publication of texts, journals, etc., and their subsequent mass distribution, are far in excess of what even a committed group can reasonably muster.[1]
  2. I dislike the relationship between our field and the publishing industry at large. We are dependent upon them to the point that we almost cannot exist without them.

Despite my feelings in either direction, I have to say that I was excited when Pearson decided to develop tablet-administered versions of some of their/our most commonly used tests.

This Is Not Your Grandparents’ Neuropsychological Assessment

Late last year, Pearson (aka Pearson Education, Inc., aka Pearson Clinical, aka PsychCorp) released the Q-Interactive platform. Briefly, Q-Interactive is “a comprehensive digital platform” that allows clinicians to create and manage patient data and batteries of testing instruments. With this release, Pearson both instituted a sea change in the way we administer tests and laid out a roadmap for what they see as the future of assessment. You can learn more about Q-Interactive’s features by visiting the product landing page.

Naturally, I wanted to learn more about the platform and tests, so to the Internet I went. Interestingly, there was almost no information about the Q-Interactive platform outside of the official product pages. This is probably due, at least partially, to a few factors that are common to new, niche-market releases (e.g., the newness of the platform and a seemingly low rate of adoption by clinicians). Despite the lack of third-party reviews, I decided to march forward anyway and signed up for the 30-day trial.

Initial Steps: Hardware and Software Requirements

Please note: this article cannot cover all of the various aspects of Q-Interactive, so I’ll focus on what stood out most to me, in addition to covering the most obvious, major factors of this paradigm-changing (yet pleasantly familiar) system.

The Hardware

I needed two iPads before signing up for the trial. One trip to my local Apple Store and just under $1,000 later, I was set to go.[2]

The price of admission (roughly $900 for two of Apple’s second-generation iPads and two iPad Smart Cases) paled in comparison to the collective cost of purchasing several of the testing kits that Q-Interactive’s product pool covers. This is an amazing boon for people like me (only months into private practice): I was able to get up and running with a very respectable arsenal of tests at the ready for relatively little money up front.
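As a quick sanity check on the “just under $1,000” figure, the itemized prices from footnote 2 work out as follows. This is a minimal sketch: the 7% sales-tax rate is a placeholder I chose for illustration, since the actual rate isn’t given in the article.

```python
# Back-of-the-envelope tally of the hardware outlay described above.
IPAD_PRICE = 399.00   # second-generation iPad, each (per footnote 2)
CASE_PRICE = 49.00    # Apple Smart Case, each (per footnote 2)
TAX_RATE = 0.07       # hypothetical sales-tax rate; varies by locality

subtotal = 2 * IPAD_PRICE + 2 * CASE_PRICE   # 896.0 before tax
total = round(subtotal * (1 + TAX_RATE), 2)  # 958.72 at the assumed rate

print(subtotal, total)  # 896.0 958.72 — i.e., just under $1,000
```

At any plausible tax rate, the pre-tax $896 lands the final bill a little shy of the $1,000 mark, which matches the figure quoted above.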

As a side note, I observed that Pearson is doing an objectively bad job marketing this idea, if not the Q-Interactive platform as a whole. One of the more recent emails I received touted “Goodbye stopwatches” as though this should be a major selling point of adopting Q-Interactive. Saying goodbye to stopwatches isn’t really a great value-added proposition, at least not for me. There are many other things I care about, like reliability, validity, ease of administration, security, and more. Enough about that, though – let’s get back to the hardware.

Saying goodbye to stopwatches: a value-added proposition?

Q-Interactive runs on any second-generation or newer iPad and is compatible with iOS 7, Apple’s most recent release of its mobile operating system. I have Q-Interactive on a pair of iPad 2s running iOS 7, and things seem to be fine. More recently, Pearson announced that the iPad mini and iPad Air can be used as well, with the former restricted to clinician use only.

Pearson also says that you need a stylus for administration. I have found that this is true only if you plan on making behavioral observations within the application interface; a “notes” button on each screen/page quickly launches a full-screen canvas on which one can take said notes. That is very helpful if you’re going to record verbatim responses for each verbal item, but it is, again, not a necessity, per se. For me, writing legibly with pen and paper is already a tenuous proposition; it turns out that writing on the iPad produces poorly formed glyphs that are an order of magnitude worse. Your mileage may vary in that department, but one thing is clear: you do not need a stylus to administer the Q-Interactive tests. None that I encountered, at least.

The Software and Interfaces

The Q-Interactive app is fairly well thought out for a first generation product. The experiences I’ve had to date have been positive, overall, and I’m far more impressed than not. Interestingly, on one of the occasions that I spoke with Pearson’s technical support, I asked if they would register me for any future beta-testing. The rep informed me that they had performed no beta testing with actual, real neuropsychologists using their product. I was dubious; he re-asserted: no public beta testing. This was shocking to me, since pretty much any software developer will want to test their product before its public release, and they do this by recruiting a select group of individuals and asking them to try out the product and (hopefully) help catch any remaining bugs.[3]

Speaking of bugs…

I have, on a few occasions, received an error message during the post-evaluation scoring of Symbol Search and Coding. Basically, since you cannot score these two subtests during the administration[4], you return later to add in the scores that you calculated by hand. The error message tells me that I am running out of memory on my iPad and that the operation cannot be performed. The popup dialog offers a “solution” (accessing the function from one of the other interface menus), but it doesn’t work. That it didn’t work was not surprising to me, given the way that iOS manages memory and applications. In short, it’s damn near impossible to “run out of memory” on an iOS device, particularly one that is brand new, has no applications running in the background, and has almost no other software installed.

I recently contacted Pearson’s technical support about the error message and received a somewhat lukewarm, if ultimately reassuring, response that “there’s always a fix.” I have sent in the error logs from my iPad as the tech requested, and I am actually pretty sure that they’ll call back. I have been impressed with their customer service to date – their reps have been attentive, helpful and, dare I say, friendly. As for fixing the problem: force quitting the app, as you would any other iOS 7 app, has worked for me 100% of the time.[5]

One of the nicest aspects of the Q-Interactive experience is the on-the-fly scoring: tests that are administered using only the iPads are fully scored by the time the end of the subtest is reached. The benefits of this should be clear to most, but they’re worth outlining anyway.

First, the potential error associated with hand scoring is reduced substantially. There is still room for error, to be sure, as you can easily select the incorrect score for an item; however, the likelihood of such an error is small, and there is the opportunity to review all scores after you have administered a subtest.

A second benefit is the reduction in scoring time. Scoring something like a WAIS-IV by hand isn’t particularly difficult, but it does take time, and you quickly reach a point where scoring any faster is either not possible or invites mistakes. Since the tests are scored as you go, scoring time is reduced to the time needed to score, e.g., Coding and Symbol Search on the WAIS-IV, or Visual Reproduction on the WMS-IV.

The final, substantial benefit comes with the scoring output. Once scoring is complete, you must “remove” (a poorly chosen word relative to what actually takes place) the battery of tests from the iPad and then log into qiactive.com to download the results. I can see this being a pain point for some, but there really is no other way they could deliver the results in a persistent, secure fashion. Besides, it’s not that hard, and you have to log into the site to manage patient information anyway. The output can be brief (just the primary Indexes and Scaled Scores) or extensive (the Kitchen Sink). The latter provides scores and data that are not possible to obtain using the traditional, hand-scored approach or the scoring assistant software. I actually used one data point, total time of administration for a WISC-IV, in a recent feedback session and report. I knew the testing took a long time (it ended up being 93 minutes, some of which I allowed for gathering qualitative, observational data), but I didn’t realize it had taken quite that long.

Bonus Round

Pearson sweetens the pot in two ways when you sign up and pay for the annual license.

  1. They send you a “starter kit” with packets of protocols, response booklets, and other supplies like templates, blocks, etc. The package is substantial and generous, but if you are thinking about taking the plunge, think quickly: the starter kits are free only through the end of 2013. Tick, tock.
  2. You don’t pay for any of the subtests you give for the first 30 days after subscribing and paying the annual license. That means you get 30 days of free administrations during the trial period and another 30 days after your initial annual commitment.
Templates and sundry supplies
Protocols and response booklets

The (Lack of) Normative Data

The biggest, most egregious problem with Q-Interactive is the utter lack of new, updated normative data for any of the tests. Pearson self-published some equivalency studies that seek to demonstrate evidence of construct validity, but these are clearly biased and have shamefully small n’s to boot. They currently have six of these equivalency studies available for download on their research page. It’s far too much information to review here, so please review these studies for yourself.

I’ve so far administered the Wechsler Adult Intelligence Scale-IV (WAIS-IV), the Wechsler Memory Scale-IV (WMS-IV), the Wechsler Intelligence Scale for Children-IV (WISC-IV), the Wechsler Individual Achievement Test-III (WIAT-III), portions of the Delis-Kaplan Executive Function System (D-KEFS), and the California Verbal Learning Test-II (CVLT-II). At the end of the day, I can tell you that I believe these tests are measuring their respective constructs just as validly as their traditional counterparts. My gut feeling, though, has yet to be borne out by real data. Paul Meehl taught us all a lesson about that many years ago, so I will not rely on my own beliefs quite yet.

Summary and First Impressions

The Q-Interactive system represents a needed update to a dated and tired method of test administration. It offers a way for neuropsychologists to streamline their practice by minimizing the materials needed to administer tests, and it affords mobility, as opposed to the customary practice of testing at a single location. The tests are scored almost immediately, and the results can be downloaded for archiving or databasing. The upfront costs are low compared to the same number of tests purchased in their traditional, paper-pencil-protocol formats. Importantly, there is another cost, although it’s not of the monetary ilk: questionable validity, mostly due to the lack of true normative data.

So ends this gross overview of the Q-Interactive system. If you have the means (i.e., two iPads), sign up for the trial; see what you think. Based on my experiences thus far, it’s unlikely that I’ll ever turn back – it would feel like eschewing light bulbs for making candles.


  1. At least with respect to present standard operating procedures. The way things are done now is not the way things must be done going forward.  ↩

  2. A pair of second-generation iPads @ $399/ea., two Apple Smart Cases @ $49/ea., and sales tax. The Smart Cases were, in hindsight, a poorly chosen product. They do what they’re supposed to do, but in the testing environment you’ll want something that can securely prop up the iPad at various angles, which is where the Smart Cases come up short.  ↩

  3. And then I found this: “The clinicians who participated in our public beta agree: The launch of Q-interactive truly represents a groundbreaking turning point for interactive clinical assessments,” said Linda Gerardi, director of Q-interactive at Pearson… So, the developer, who develops the applications for Pearson that need developing, was either (1) not completely forthright, (2) somehow misinformed into believing that beta testers were all “in house,” or (3) patently out of the loop. You cannot be a software developer, whose job is to refine a product during beta testing, and not know where and from whom bug reports are emanating. It is literally impossible.  ↩

  4. Assuming that you are properly scoring each item, all of the subtest Scaled Scores are generated in real-time, and you can see the SS upon completion of each subtest.  ↩

  5. By the way, if you are having the same problem with Q-Interactive and this fix works for you, please make sure to ascribe credit to the appropriate party: this guy, not Pearson.  ↩

© 2012 - 2018 Neuropsych Now


Licensed under a Creative Commons Attribution-NC-ND 4.0 International License
