meeting follow-up


John Williams

Jun 12, 2017, 4:03:11 PM
to spm-2-proje...@googlegroups.com, Kailey Bax

Hi all,

 

Nice to talk to you all this morning. Here are the final Adult items for your records.

 

Here also is a brief list of follow-ups based on the agenda items we covered while I was on the call:

 

1.       T-score range categories: We discussed changing the labels for the T-score ranges from what we used in prior versions (Typical, Some Problems, Definite Dysfunction) to something else (such as Typical, Atypical, Very Atypical). We left it that I would do some research to find out the most common ways of defining those categories across the industry. Our new Product Support Specialist, Kailey Bax, is going to help out with this, so I’ve copied her here.

2.       Adult form names: We discussed a few options for the non-self-report adult form (informant report; family and friends report; additional reporter) and landed for the time being on “Other Report,” which is general enough to take in the various settings and raters that users may encounter, while also paralleling the wording of Self-Report. Kailey can also give us a heads-up on any other options that we may have overlooked.

3.       DIF scores: As I look more closely at the forms, I see that we could modify them so that the user can generate DIF scores for each sensory category.

4.       Vulnerability scores: We’ll look for a way to make use of the vulnerability data. I’m not sure what that might look like until we see the data.

 

Next steps:

 

1.       While you all continue to work on the case studies, I will continue pre-writing the manual chapters. When I get through Chapter 3 (Interpretation), I will send that to you as your next major task, as that chapter will probably be your most extensive content contribution to the manual.

2.       Related to number 4 above, I’m thinking now that it may be a good idea for you all to review and approve the vulnerability categories sooner rather than later. Originally I had thought we should do it after the final items had been selected to save you from having to validate all of the extra items that will not make it into the final scales. But given that you identified some errors in the labels, I think we should do it before we select the final items, for two reasons: one, in case we want to make any item selection decisions based on the representations of vulnerabilities; and two, so that we don’t have to wait for the vulnerability labels to get started on those analyses.

 

Let me know if I missed anything, and if you agree with #2 in the next steps.

 

Talk to you soon…

 

John

 

 

 

John C. Williams, PhD

Senior Project Director

Licensed Clinical Psychologist

 

d 424.201.8869

t  800.648.8857 or 424.201.8800

f  424.201.6950

 

625 Alaska Avenue, Torrance, CA 90503

 

www.wpspublish.com

 

SPM-2_Adult_standardization_forms.xlsx

ateachabout

Jun 13, 2017, 10:52:09 AM
to spm-2-proje...@googlegroups.com, Kailey Bax

Hi John,
Thank you for the detailed follow-up and for the Adult form.

Re the vulnerabilities:
As I mentioned on yesterday's call, Rick and I have reviewed all the new items and vulnerabilities for each of the forms, to organize them in preparation for the Quick Tips.

I will share them with our SPM author team so we are all in agreement before sending them to you. 

By when will you need them? 

Thanks,

Diana




Diana Henry, MS, OTR/ L, FAOTA 
Henry Occupational Therapy





John Williams

Jun 13, 2017, 1:44:08 PM
to spm-2-proje...@googlegroups.com

Thanks, Diana:

 

There is no particular urgency right now for the forms, so whenever you can comfortably get them done would be just fine for me.

 

I know it’s a lot of work to check all 1,500 (or so) items across every form. Please also remove every instance of what I have called “multi” and pick a single vulnerability – I know this may be a judgment call on your part as to which vulnerability is most prominent, but we need a single label for each, both to facilitate the statistical analyses and to demonstrate content validity.

 

Let me know if you have any questions or concerns as you get into it.

 

And thank you!

 

John


John Williams

Jun 14, 2017, 8:05:55 PM
to spm-2-proje...@googlegroups.com, Kailey Bax

Hi all,

 

Kailey did a nice job reviewing various publishers’ T-score range labels (see list below). Although nearly all instruments pair a T-score range (e.g., T = 65–75) with an interpretation (usually a paragraph or more, similar to SPM and SPM-P), not every instrument adds a descriptive label to each range (e.g., “above average”). Of those that do label the ranges with descriptive words, some are more useful than others as models for SPM-2. This is because some measures (like SPM-2 and other pathology scales) care most about deviations above the mean, whereas other measures (like adaptive behavior scales) care most about deviations below the mean. Still others care as much about high scores as about low scores (IQ measures, for example). So not every example on Kailey’s list will map perfectly onto SPM-2, but the list gives us a good sense of the industry standard.

 

So here are my current thoughts about what to call the ranges:

·         Although it would have been fine to keep them as they are (primarily because continuity is something that users value quite a lot), if you hate them then it’s OK to change them, as long as we explain somewhere why we did, and what effect, if any, it should have on users’ interpretation of scores.

·         I’m not sure Typical, Atypical, and Very Atypical will work so well, mainly because “typical” and “atypical” are most often used as binary categories, so it’s not clear what “very atypical” would mean, and it could confuse users as well as the end recipients of the assessment results (parents, etc.).

·         An alternative based on Kailey’s research could be something as simple as Typical Range, Elevated Range, and Very Elevated Range.

·         A third idea is to use terms that are already used as the primary descriptors inside the interpretive passage for the SPM-P ranges, such as Typical, Mild-to-Moderate Difficulties, and Significant Sensory Processing Problem.

 

Hopefully that gives you all some new ideas to think about.

 

:)

 

John

 

 

From: Kailey Bax
Sent: Monday, June 12, 2017 3:01 PM
To: John Williams <jwil...@wpspublish.com>
Subject: RE: meeting follow-up

 

MHS assessments:

CBRS – low, low average, average, high average, elevated, very elevated

ASRS – average, slightly elevated, elevated, very elevated

CDI-2 – average or lower, high average, elevated, very elevated

 

Pearson:

MMPI-2 – below average, average, above average

SCL-90-R: below normative mean, average, above average

BASC-3: low extreme, sig. below average, average, sig. above average, upper extreme

 

WPS

PCRI – very low, low, average (anything above 40T is considered average)

CDRS-R – uses interpretive descriptions rather than score classifications, such as “uncommon, possible engagement in denial” and “possible depressive disorder, further evaluation may be needed.”

 

Other

STAXI – low moderate, low, moderate, high

 

ASEBA

ASEBA assessments give only the classifications Borderline (67–70T) and Clinical (T > 70), as anything below 67T is considered normal.
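As an illustration, the ASEBA cutoffs above can be expressed as a simple classification function. This is a hypothetical sketch only – the function name and the treatment of boundary scores are assumptions, not part of any published scoring software:

```python
def aseba_label(t_score):
    """Classify a T-score using the ASEBA cutoffs described above.

    Scores below 67T are Normal, 67-70T is Borderline, and anything
    above 70T is Clinical. Boundary handling (67 and 70 inclusive in
    Borderline) is an assumption for illustration.
    """
    if t_score > 70:
        return "Clinical"
    if t_score >= 67:
        return "Borderline"
    return "Normal"
```

Whatever labels we land on for SPM-2, the scoring logic would take this same shape: a small set of non-overlapping T-score ranges, each mapped to one descriptive label.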
