Use of the MMPI-2-RF in Veterans Affairs Clinic Settings


Fan_of_Meehl

Greetings,

I would like to survey the opinions, experiences, pitfalls, and advice of professional colleagues who have tried to utilize objective personality/psychopathology testing (e.g., the MMPI-2-RF or PAI) in clinical (treatment) settings at the VA.

 
I'll get the discussion started with some thoughts/observations and experiences in this area...

Specifically, I wish to generate productive discussion about how best to handle results that clearly indicate an overreporting response bias (F-r > 120, Fp-r > 100, other failed validity indices), how to provide feedback to patients, and how to write up the results in the chart. Just a little bit of context first. I do not consider myself a 'crusader' for detecting malingering in the clinical population I work with. If I were, I wouldn't have lasted a month, and I've been working with veterans in this context now for many years (knock on wood). Truth be told, I absolutely hate the clinical scenarios that arise in this setting where I feel compelled to administer objective testing--because, due to several factors (which I'll try to outline below), I feel I have no other ethical choice if I am to make any sense of the clinical history, responses to interview questions, and observations up to that point in the assessment/diagnosis process. I do not routinely utilize objective testing with patients. I only resort to it when I feel I have no other choice and really need the information to make sense of the case.

For example: someone presents to a PTSD specialty clinic because 'other people told me to come here' but is unable, despite repeated efforts, to elaborate meaningfully on why they chose to present for treatment. I clarify, ad nauseam, the nature of the services (psychotherapy treatment) that the clinic is offering, trying to have a conversation with the client about whether they actually want said services (e.g., active treatment, homework, self-monitoring, setting goals for cognitive/behavioral change, skill building, etc.). I make it clear what we are not here to do (the Compensation and Pension and disability/service-connection process). I get the 'run-around.' They say, 'people tell me I'm crazy.' Well, do you think you're crazy? What do you think they mean by that? Where are you having problems? What are your specific symptoms? How often do they occur? Etc., etc. (you know the drill). The clear-as-day subtext is that they are there to 'pick up' their PTSD diagnosis for purposes of service connection and circle high numbers on symptom self-report questionnaires (PCL-5), and I'm just supposed to, you know (wink, wink), put the puzzle pieces together myself (it ain't hard) and just 'give the diagnosis' that they want.

But I ain't doing that just to do that. To me, that is one of those clear 'lines in the sand' that I just won't cross. I will not lie TO my patients, and I will not intentionally lie FOR my patients, either. And it isn't based on some moral 'crusader' mindset, either. Patients present (when they legitimately present) for psychotherapy because they are having problems that--fundamentally--are the result of self-deception (lying to themselves) on some level, no matter what the diagnosis. It would be iatrogenic (harm-inducing) in the extreme if I were to create a case formulation and treatment plan based on a lie that I am constructing (i.e., giving a diagnosis that I don't believe actually applies to them) in order to avoid conflict or to lower the anxiety I have about not lying in that organizational setting. I also believe it's a very slippery slope. If you, as a professional or as a person, make it a habit of constructing lies in order to survive or thrive in an organization, then, in the end, it is going to eat you alive (if you have a conscience) and cause stress/burnout.
So--and I'm 'confessing' a bit here--I have made a deal with the Devil, so to speak, and, in my clinical role (treatment provision), I have basically decided that if an artful malingerer wishes to make up compelling facts about a nonexistent trauma history and present a coherent self-report of PTSD symptoms, then I am okay accepting that self-report and making a diagnosis and treatment plan consistent with that clinical presentation--entirely based on self-report as it may be, and entirely plausible and internally consistent as it has been presented to me. No problem. I am clear that I am not in a forensic role where my main duty is to ferret out 'the truth' and 'catch' people who are (intentionally or otherwise) inaccurately portraying their military, trauma, or symptom histories. That's how I survive in the VA setting as a clinician.

However, there are limits to this. There are certain scenarios where I can't even make the presentation make sense, and I have to resort to objective testing to clarify the picture. I think it's perfectly reasonable (from the 'naive' professional perspective of simply trying to perform a competent psychological evaluation) to resort, in those circumstances, to objective personality/psychopathology assessment with an instrument such as the MMPI-2-RF (which I think the literature--along with my personal clinical experience--has demonstrated to have far superior validity scales to something like the PAI in this population). If the protocol is valid, then great. If the validity scales are 'suggestive of possible overreporting,' then fine; I can finesse that from the standpoint of a treating clinician, proceed with tentative or provisional diagnoses, and implement a treatment plan that is likely to be helpful to the veteran. However, when (as is disturbingly often the case) the validity scales are so high as to clearly indicate overreporting (e.g., F-r > 120, Fp-r > 100, other validity scales also majorly elevated (> 100)), this presents a situation that must be handled very carefully in the VA organizational and healthcare environment.

How do you handle that? I have heard it said (and I, perhaps naively, even believe this) that 'in a highly complex, dangerous situation, your best bet for survival is to cleave to the truth as best you can and act very carefully and deliberately' (or something to that effect). So the approach is basically to not say what you cannot say, but also, of course, not to imply or state that you think 'malingering,' per se, is going on (i.e., stipulating motivation on the part of the client). And of course there are ways of writing a brief paragraph in the chart note indicating the concerns about protocol invalidity and the cloud of uncertainty that this casts on data based on self-report. But, of course, in clinical contexts like VA outpatient MH settings, basically everything is based on self-report. So I guess you could consider 'diagnoses' like 'No Diagnosis,' 'Unspecified Mental Disorder' (or whatever the exact wording is these days), or 'Other Specified Trauma- and Stressor-Related Disorder,' or diagnose less complex (than PTSD) diagnostic entities that the patient endorses, such as 'Insomnia Disorder' or 'Unspecified Depressive (or Anxiety) Disorder,' and then case-formulate and treat those clinical syndromes with straightforward cognitive-behavioral interventions such as relaxation training, sleep hygiene, behavioral activation, etc.

Given that it is a treatment context, after (carefully) sharing feedback with the veteran that the test results--along with the other sources of data, including the intensive clinical interviewing--do not really cohere into a clear diagnostic picture, I suppose you can re-engage the veteran along the lines of: now, what specific symptoms are causing you trouble, and what say we use some basic skills-building to address them, complete with self-monitoring, worksheets, and cognitive/behavioral change strategies (that require effort on your part), to see whether they can be helpful to you? If they meaningfully engage in active treatment efforts, then great. If they (as I would predict to be highly likely) basically passively drop out of therapy with you, then also fine.
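To make the validity-scale triage above concrete, here is a rough sketch of the decision points I'm describing (purely illustrative: the 'possible overreporting' band and the additional scales listed are my shorthand for the sketch, not values pulled from the test manual):

Code:
# Rough, illustrative triage of MMPI-2-RF overreporting indicators using the
# T-score cutoffs discussed above. The 'possible overreporting' band and the
# additional scales listed are assumptions for the sketch, not manual values.

def triage_overreporting(t_scores):
    """t_scores: dict of validity-scale T-scores, e.g. {'F-r': 118, 'Fp-r': 92}."""
    f_r = t_scores.get("F-r", 0)
    fp_r = t_scores.get("Fp-r", 0)
    others = [t_scores.get(s, 0) for s in ("Fs", "FBS-r", "RBS")]

    if f_r > 120 or fp_r > 100 or any(t > 100 for t in others):
        return "clear overreporting: interpret substantive scales with extreme caution"
    if f_r >= 100 or fp_r >= 80:
        return "possible overreporting: proceed with tentative/provisional diagnoses"
    return "no clear evidence of overreporting on these indicators"

print(triage_overreporting({"F-r": 125, "Fp-r": 105, "Fs": 110}))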

But then...(and here's where things get really interesting)...they lodge a complaint, or they present to a different clinician who is just fine 'connecting the nonexistent dots' to give the veteran what 'they want' (a PTSD diagnosis)--and this, in my observation, is extremely common in the organization. Well, whatever. I guess this is one of those situations where, when push comes to shove, we have to decide whether we have any integrity as a profession at all. Sigh. Okay. Bring it on, I guess.

I'll stop rambling and see what other people have to say about their experiences with these sorts of situations. I have surveyed the published literature and I can't find anything (so far) that even addresses this issue. That's odd, since a) I think it is pretty widely understood (and taught) that the most reliable/valid psychological evaluations are conducted using multi-method techniques gathering data from chart review, interview, observation, symptom self-report, AND objective personality/psychopathology assessment instruments, and b) VA psychologists are ostensibly there to perform competent professional evaluations that facilitate effective treatment plans. The silence of the field is deafening on this issue. It's really hard to figure out--even hypothetically--how you would address the issue competently and truthfully with interns/trainees other than 'we don't do that,' because reasons (furtively looking over your shoulder and patting down the intern for recording devices). I jest...but not really.
 
I believe I should comment here. This is a major area of my work, and a large number of the publications related to military and veteran use of the PAI and MMPI are mine. For disclosure's sake, I have received and currently receive research support from Pearson, the University of Minnesota Press, and PAR, and I am also on the advisory board for PAR for the PAI. I'll start by giving some general impressions of the state of the science around response bias detection within Veteran and Active-Duty personnel, with an explicit focus on Veterans. That said, the problems are the same in both, and measurement error in AD can result in difficult conclusions for Veterans because of the way C&P evaluations can access military evaluation records.

The traditional metrics of effect (e.g., d and g) tend to favor the MMPI slightly. This is true across contexts and not just with Veterans, although I will note that the patterns of scale effectiveness are not stable and vary greatly in Veteran populations. I suspect some of this stems from differences in response format (true-false versus Likert), and that adaptations in method (e.g., IRT) will more or less equalize the effects. There have historically been areas missing on the PAI (e.g., cognitive response scales) which have been available and excellent on the MMPI. Recent research has expanded to create those for the PAI, making the measurement domain coverage more equal. I do think the MMPI has better scales on average, but partly as a function of several PAI scales still being used despite a lack of evidence for their effectiveness (e.g., RDF). This reminds me of the MMPI-2 scales (e.g., Ds) that stuck around for no good reason until the RF/3. Head-to-head comparisons (e.g., Tylicki et al., 2021) don't show major differences in effect (a small difference of .19 in Cohen's d -- I didn't convert to Cohen's q, so that's a rough eyeball), and, again, I suspect this reflects response variability differences rather than scale effectiveness.
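Since I keep throwing d and g around, here's a quick sketch of how those are computed from two criterion groups (standard formulas; the inputs are toy numbers, not data from any of the studies above):

Code:
# Cohen's d (pooled SD) and Hedges' g (small-sample correction) from two
# criterion groups. Toy numbers only.
import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / sp

def hedges_g(d, n1, n2):
    return d * (1 - 3 / (4 * (n1 + n2) - 9))

d = cohens_d(m1=95, s1=15, n1=60, m2=75, s2=15, n2=60)  # e.g., validity-scale T-scores
print(round(d, 2), round(hedges_g(d, 60, 60), 2))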

This lack of stable effects across Veteran samples is likely a result of there being so few criterion-grouped studies (k = 4), their low total sample size (~n = 300), and the difficulty of handling the disability issues associated with C&P. One of the major issues I see facing our ability to make concrete interpretations is that, without further study, making a distinction between malingering and pathogenic distress is difficult. If you look at invalidity rates across stop code (clinical setting), we see substantial variability in rates. If you look at the relationship between disability status and failure rates on the MMPI, the PAI, and stand-alone PVTs and SVTs, you see a substantial relationship. This pattern of moderation makes interpretation difficult, and although a robust literature on standard moderators (e.g., age, education) exists within the assessment literature, I don't see the same level of meta-analyzed (or even available and sufficient for such analysis) knowledge for specific Veteran factors. We also lack a comparison group for C&P evaluations, which further complicates the evaluation process. The lack of knowledge about effective diagnosis and decision making extends beyond just the validity scales. Questions about the influence of service era on pathology, for instance, are rarely examined, and this seems critical to any sort of interpretation. Even when I've examined it, it has largely been limited to a single clinical setting (e.g., PCT) at a single VA, and we know clinics vary in validity and substantive scale patterns.
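To show the kind of breakdown I mean, here's the shape of the analysis (the counts are made up; 'records' stands in for whatever administration-level dataset you have):

Code:
# Illustrative only: protocol invalidity rates broken out by clinic stop code
# and by disability-claim status. All rows are fabricated placeholders.
from collections import defaultdict

records = [
    {"stop_code": "PCT", "disability_claim": True,  "invalid": True},
    {"stop_code": "PCT", "disability_claim": False, "invalid": False},
    {"stop_code": "GMH", "disability_claim": True,  "invalid": False},
    # ...one row per administration
]

def rate_by(rows, key):
    tallies = defaultdict(lambda: [0, 0])        # value -> [n_invalid, n_total]
    for r in rows:
        tallies[r[key]][0] += int(r["invalid"])
        tallies[r[key]][1] += 1
    return {k: n_inv / n_tot for k, (n_inv, n_tot) in tallies.items()}

print(rate_by(records, "stop_code"))
print(rate_by(records, "disability_claim"))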

So, to your question about what to do with the data when they are invalid: I tend to conclude only that I am unable to make determinations, for any of a number of reasons (e.g., malingering, extreme distress, etc.), unless there is clear evidence of a change that is behaviorally disproportionate. Unfortunately for SVTs, there aren't the same detection strategies available as there are for PVTs, which offer strong probabilistic statements based on incongruence in the data suggestive of malingering (e.g., 'performance worse than what would be expected of someone with severe dementia despite no reported ADL difficulties'). I don't think this is entirely because we 'can't' develop those strategies, but because the study of response validity is fairly limited, especially in the realm of theory generation. There are tons of cross-validations (I publish them constantly), but that doesn't always translate into moving the needle on methods. The last major SVT development, to my eyes, was the RBS, which used PVT criteria to identify items and which was subsequently adapted as the CBS for the PAI. I also discount some scales entirely (e.g., the PAI's NIM) because, in my eyes, they are particularly useless due to their association with stress/distress.
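For what it's worth, the RBS-style strategy looks roughly like this in caricature: rank candidate self-report items by their association with an external PVT pass/fail criterion and keep the strongest. This is a loose sketch of the idea with toy data, not the actual derivation analyses:

Code:
# Loose sketch of deriving an RBS-like scale: rank items by their association
# with an external PVT pass/fail criterion. Toy data; not the real analyses.
from statistics import correlation   # Python 3.10+

def rank_items_by_pvt_association(item_matrix, pvt_failed, top_k=5):
    """item_matrix: {item_id: [0/1 endorsements]}; pvt_failed: [0/1 per examinee]."""
    scored = {
        item: correlation(responses, pvt_failed)   # phi coefficient for 0/1 vectors
        for item, responses in item_matrix.items()
    }
    return sorted(scored, key=scored.get, reverse=True)[:top_k]

items = {"i1": [1, 1, 0, 1, 0, 0], "i2": [0, 1, 0, 0, 1, 0], "i3": [1, 1, 1, 0, 0, 0]}
print(rank_items_by_pvt_association(items, [1, 1, 1, 0, 0, 0], top_k=2))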

Here is my short perspective. When there are 4 MMPI-2-RF studies on Veterans and 3 on Active-Duty personnel, I'm skeptical that we have the data we need to make the types of conclusions we are expected to make. These studies use retrospective designs, and we don't even know how much the different criterion measures impact the outcome metrics (e.g., WMT vs. MSVT vs. TOMM vs. M-FAST). We know each differs in sensitivity/specificity, but not what this means for study design and group determination. This, yet again, makes it difficult for us to move the needle of decision making in a manner that I feel comfortable being "confident" about.
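One way to see why the choice of criterion matters: if the reference measure has imperfect sensitivity/specificity, the criterion groups are contaminated and the observed effect shrinks. A back-of-the-envelope sketch (all inputs hypothetical, and it simplifies by treating within-group SDs as 1, so it understates the extra variance from mixing):

Code:
# Back-of-the-envelope: attenuation of an observed group difference when the
# criterion test used to form groups misclassifies cases. Hypothetical inputs.

def observed_d(true_d, base_rate, sens, spec):
    p_fail = base_rate * sens + (1 - base_rate) * (1 - spec)
    ppv = base_rate * sens / p_fail                  # true feigners among 'failures'
    npv = (1 - base_rate) * spec / (1 - p_fail)      # credible cases among 'passes'
    mean_fail = ppv * true_d                         # mixture means (SDs treated as 1)
    mean_pass = (1 - npv) * true_d
    return mean_fail - mean_pass

print(observed_d(true_d=1.2, base_rate=0.30, sens=0.60, spec=0.95))  # ~0.82, down from 1.2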

Note: any typos or weird sentences are likely due to my rambling, pain-med-induced thoughts, so hopefully this makes sense.
 
They want the PTSD diagnosis for forensic purposes. It’s still forensic.

IIRC, the DSM and its associated editorial texts say about 30% of diagnoses should be Unspecified.
 
Thank you for your detailed and thoughtful response. I feel that I definitely need to zoom in on and better understand a lot of the published literature (especially the work referred to in your post). I should also say that I am considering the utility (or lack thereof) of utilizing objective testing earlier in the assessment/intervention process--where I conceptualize the assessment --> hypothesis generation/testing --> intervention --> {back to assessment, etc.} sequence as a paradigmatically continuous and iterative process for generating and refining my clinical case formulation and intervention efforts with the patient over time. If, after the initial chart review, intake interviewing, symptom self-report, and observation, the clinical picture is especially fuzzy, low-resolution, or vague/unclear...then of course I am entertaining a good number of possibilities, such as: a) a need to increase rapport, the therapeutic relationship, and trust; b) avoidance/dissociative elements which are core to the symptom experience of those with bona fide trauma- and stressor-related disorders; c) lack of understanding of the interview/assessment process, or understandable wariness with regard to it; and, of course, d) any number of other possible explanations, including the unmentionable possibility of intentional or unintentional distortion of the symptomatic/clinical reality.

One of the reasons I am considering using the objective instruments (PAI/MMPI) at that stage is in hopes of the validity scales NOT being 'blown out of the water' and actually, you know, getting some clinically valid/reliable information that can contribute positively to the continued enterprise of hypothesis generation/testing, diagnosis/ case formulation, therapeutic relationship building, and psychoeducation/ socialization to the psychotherapy process in collaboration with the veteran who is ostensibly presenting for clinical care. In my mind, even just getting your input on this has been very helpful--starting a conversation at the intersection of the 'basic' scientific/psychometric literature from experts and the practical, ethical, and procedural considerations that a 'rank-and-file' clinician needs to navigate in order to meaningfully utilize measures such as the MMPI/PAI in clinical settings at VA.

Thank you for the back and forth. Just in the process of doing this I am exploring my own thinking of how to approach these situations and I find it extremely useful.
 
I have so many thoughts, and I'll likely come back and add more later. I'm excited to be talking in Austin at the Society for Personality Assessment later this month, along with Les, about future needs in assessment. A big part of that will be my thoughts on what we need for better validity detection.

I've considered writing a book to guide VA clinicians in the use of these measures and to synthesize the literature for them, specific to this population. What's your sense of the hunger for something like that?
 
Pretty voracious, actually, lol.

I'm passionate about trying to be the best clinician I can be using science to help veterans. I'd probably pre-order a book like that without thinking twice.
 
That was my sense locally as well. I'll chat with publishers then. I wanted to wait post tenure. It's been on my mind for 5 years or so lol.
 
I think that the topic of the ethical and competent clinical use of objective assessment instruments (MMPI/PAI) in the role of treating clinician (as opposed to C&P examiner) is an incredibly under-developed area at VA, and one where guidance is sorely needed. I had a brief stint as a full-time VA C&P examiner and had a lot of productive and informative exchanges on the C&P listserv with other examiners during that time, but the issues encountered in the role of treatment provider at VA are a whole 'nother kettle-o'-fish. Although there is a lot of, frankly, 'lip service' at the organizational level about the importance of 'ethics,' it is clear that there are some major lacunae there in terms of mental health practice--some 'black hole' territories that we just 'do not speak of' outside of behind closed doors. I think it does the field and the veterans a grave disservice. Case in point...

I have had more than one former special-forces veteran with a long career of service and bona fide, severe PTSD symptoms who initially presented to care clearly under-reporting symptomatology and resisting/fighting the idea that they might have PTSD as a diagnosis. Only by focusing first on developing rapport, trying to demonstrate true caring/integrity in the context of the therapeutic relationship, and, over time, making it clear that I was going to cleave to the truth (or at least what I believed to be true) even in the face of their palpable anger/irritation with me (e.g., for suggesting that the diagnosis of PTSD might be appropriate for them) did we finally reach a point where they were able to accept that they had PTSD and engage in effective treatment. They confided in me later that the main reason they 'fought' acceptance of the diagnosis for so long and so hard (and delayed accessing and benefiting from treatment) was the horrendous stigma that 'PTSD' had in their special-forces/combat-arms communities: they knew so many people who were committing fraud (people they knew or knew of personally), and they had come to equate the diagnosis, basically, with fraudulent presentation of psychological illness for benefits.

I believe that when our courage fails as psychologists to at least try to do the right thing with respect to these issues/controversies we actually (albeit incrementally and gradually and indirectly) do harm to veterans. That's going to be a controversial statement, but it is my position.
 
I'm gonna bookmark this and come back when I can more fully digest what's been written.

For now, I want to add the perspective of a former VA acute inpatient psychologist, because the only two places in the VA where I think assessment can realistically happen are neuropsych/gero and inpatient, due to grid/access demands--yet the need is there, especially where SMI might be relevant.

I tried to administer a PAI and do some assessment with every new admit who didn't have a clear presentation (detox, housing, stressors leading to SI, etc.) and wasn't already extremely well known psychiatrically, because nobody is given the proper outpatient time to do good assessment in the VA. And even when providers do have some time, they may not have the skills to properly assess for things like bipolar disorder (social workers, NPs, psychiatrists/psychologists with poor training).

Beyond that, record keeping sucks for this. I have come to love CPRS but a downside is that info gets lost.

Whenever I completed an assessment, I would make sure to update the problem list and add a comment on that problem to see my assessment note on date x, and hope that future providers would take it into consideration--especially for those veterans who need to be properly medicated to avoid cycling in and out of the hospital and needlessly suffering in their personal lives.
 
It is really frustrating that no one at a 'higher' level thinks to do a 'project' like surveying practicing providers and simply asking for input on what would be most useful and efficient for them to have in their charting system.

I'd love to have a 'tab' in CPRS for the 'case formulation' that could be quickly, efficiently, and cleanly edited and kept up to date. There is SO much 'note bloat': people copy and paste their entire intakes (and some of them copy/paste all their past progress notes) into each progress note, cumulatively, to the point where I get frustrated reading the last progress note and trying to sift through all the historical info to find what is 'fresh' and relevant to the last clinical contact with the veteran.

It would also be cool to have customized templates with something like a 'checklist' of your most commonly used interventions (as a provider), so you could simply check off different treatment elements--informed consent, socialization to the psychotherapy process, etc.--as you complete them with clients in a particular encounter (with the date automatically inserted), and then have an easily accessible 'tab' or list to click on right before the next therapy session to see what you've already covered (or skills already taught), to immediately prepare you for what to focus on in the session. Perhaps (and I'm dreamin' here), when we move to more of a 'process-based' psychotherapy approach (a customized therapy approach based on individualized case formulation), there may be some demand for such a thing.
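Just to make the checklist idea concrete, a toy mockup of the data such a template would capture (nothing to do with CPRS itself; everything here is made up):

Code:
# Toy mockup of an 'interventions covered' checklist with auto-dated entries.
# Purely illustrative; no relation to any actual CPRS feature.
from datetime import date

covered = {}   # intervention name -> date first covered

def mark_covered(intervention, on=None):
    covered.setdefault(intervention, on or date.today())

def prep_for_next_session(planned):
    done = {i: d for i, d in covered.items() if i in planned}
    todo = [i for i in planned if i not in covered]
    return done, todo

mark_covered("informed consent")
mark_covered("socialization to psychotherapy")
print(prep_for_next_session(["informed consent", "sleep hygiene", "behavioral activation"]))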
 

One of the things that separates Clinical psychologists from counseling psych is the focus on treating diagnoses, not life difficulties. Pretty much everyone would agree that diagnosis must be objectively based. Even outside of legal concerns, the law expects us to differentiate self-report from the truth (e.g., determining whether a self-report is delusional). Many settings like us to evaluate the reality of self-report when it is necessary. Some of those same settings also try to tell us that we are incapable of evaluating the reality of self-report when it is uncomfortable. This conflicting message seems to be based upon administrators being uncomfortable with patients being upset. I refuse to accept the conflicting ideas. Either I am capable of determining that someone is not Jesus, or we need to call the Vatican. Anytime I am confronted with that idea, I present the options (e.g., "Tell me specifically when I can determine whether someone's report is reality. If everyone is always telling the truth, then when are we discharging everyone with schizophrenia, substance abuse, and every dementia patient who says their memory is fine?").

In my limited clinical work, I find that patients are amenable to an explanation of diagnosis-based treatment ("We need to determine what the diagnosis is so we can treat it. My profession can get really technical about the meaning of things. For example, sometimes patients tell me they have a memory problem when they really mean they can't remember the words for things. They're telling the truth, but that problem is professionally very different from what professionals would call a memory problem."). If someone complains, I again will force the question: Are you telling me to treat something inappropriately? Who is going to accept liability for any harm caused by applying a treatment not indicated for the diagnosis? I will need that in writing.
 
One of the things that separates Clinical psychologists from counseling psych is the focus on treating diagnoses, not life difficulties.
The data do not support this distinction. It may have held at one point, but that's no longer the case. Anecdotes may support it, but data don't, and haven't for a very long time.
 
Yeah, the distinction between clinical and counseling is long dead. But I threw it in there to head off some things.
 
Some very good points/observations here.

I think it is yet another example of the problem in these settings of attempts to dissociate authority from responsibility.

And I agree. If I have (under my license) the responsibility to make reliable/valid diagnoses and treatment plans by following standards of care/practice in the field (i.e., doing a comprehensive evaluation consisting of at least chart review, interview/observation, collection of 'self-report' symptom questionnaires, and objective assessment data), then it should follow that I also have the authority to make the final call regarding which diagnosis or case formulation best fits the data--especially when my write-up of the case clearly articulates a logical interpretation process attempting to reconcile all of the data from all of the aforementioned sources, and especially when it is easy to demonstrate that I am clearly operating within published and widely agreed-upon standards of care/practice in the field.

I have noticed over the course of my career in mental health (and general health) practice settings the 'long, slow march' to incrementally strip away the authority from providers whilst trying to add additional layers of responsibility/ accountability and monitoring (to a tedious and ridiculous degree) by non-licensed, non-competent, and untrained (or lesser trained) personnel who, in truth, have absolutely no legal responsibility for the outcomes at the end of the day. It's very frustrating.

What's even more frustrating is when the majority of providers get away with (hell, even get accolades and 'kudos' for) what I would consider to be negligent/incompetent 'evals' (e.g., just slinging a PCL-5 (if that) and 'diagnosing' PTSD without even a word or a sentence about trauma history), as long as 'the veteran is happy.' Meanwhile, putting forth the 'extra' effort to properly diagnose and treat people involves a whole lot more time/effort/discomfort and 'risk' of administrative fallout. And then, when trainees get involved and bring up basic crap that we all learned in graduate school (and that is, in fact, valid), the organization tries to ignore the issues or sweep them under the rug. I mean, I understand that it's always been this way in the field to an extent...it just seems like things are getting worse and worse as time goes on.
 
I think it is yet another example of the problem in these settings of attempts to dissociate authority from responsibility.

Not really. It's hypocrisy at a systemic level. There are self-contradicting instructions. If it were about responsibility, you could ask for someone to resolve the contradiction; but you'll get in trouble for noticing that fact. If it were about authority, you could go to the head of psychology and ask; but you'll get in trouble for saying anything.

The instructions say they value your professional services, but they also give the uneducated the ability to dictate how you practice. They are telling you, "You can't tell if someone is telling the truth...except for some parts of the clinical exam, when it benefits us." They are also telling you, "It is unacceptable to upset people...except when it benefits us, in acute settings, or drug seeking, or, or, or..."

Self-contradicting instructions literally cannot be followed. Realistically, you can practice ethically and note that there are administrative issues (e.g., "While I have been presented evidence documenting that the defendant had a 4.0 average, I have been instructed that I may not include this fact in my professional consideration."). It accepts the requirements while getting the issue on the record.
 
Unfortunately for SVTs, there aren't the same detection strategies available as there are for PVTs, which offer strong probabilistic statements based on incongruence in the data suggestive of malingering (e.g., 'performance worse than what would be expected of someone with severe dementia despite no reported ADL difficulties').
I've previously used the Morel Emotional Numbing Test (MENT) because, as a psychiatric PVT, it allows more definitive statements to be made about PTSD rule-out. My bias in contributing to this thread is that of someone whose only experience within the VA has been in neuropsychology.

One of the reasons I am considering using the objective instruments (PAI/MMPI) at that stage is in hopes of the validity scales NOT being 'blown out of the water' and actually, you know, getting some clinically valid/reliable information that can contribute positively to the continued enterprise of hypothesis generation/testing, diagnosis/ case formulation, therapeutic relationship building, and psychoeducation/ socialization to the psychotherapy process in collaboration with the veteran who is ostensibly presenting for clinical care.
With this in mind, anecdotally, I've gotten fewer flagrantly invalid PAIs at the VA than MMPIs, which, as JAG mentioned, I attribute to the polytomous vs. dichotomous scaling differences between the instruments.
 

Just as an aside, the MENT has godawful sensitivity. It will catch only the most blatant of malingerers.
 
I was surprised, again anecdotally, by how many failures I got on it -- if I had to estimate... I would say somewhere around 20%? This was in a concussion clinic, FWIW, so maybe I shouldn't have been surprised.
 
Yeah, you're probably missing an additional 20%.
 
I'll take 50% detection of symptom feigning for 15ish minutes of administration and scoring.
 

Just depends on what your setting is and how comfortable you may be with getting drilled with that if you get called as a fact witness. In a concussion clinic, people like me are reviewing your work frequently, and it's our job to point out how the measures used do not justify the conclusions that you made.
 
Choosing not to take the "you" in this personally. :)
 
I'll take 50% detection of symptom feigning for 15ish minutes of administration and scoring.
Just guess then
 
Choosing not to take the "you" in this personally. :)

It was a general "you" in the sense of concussion clinics. But, if that's your only attempt at SVT/PVT use in an eval with a high degree of somatization and or/malingering, as well as high rates of litigation, then yes, you are included in the "you" as well.
 

I'm gonna take this opportunity to seize on your expertise: what do you think of the PAI with the veteran population? That's what I mostly use, because I just don't find the MMPI-2-RF that useful.
 

Personally, while I find the face validity of the PAI diagnoses helpful, the validity metrics are kind of garbage in my experience. In settings where we've used it and had other PVTs/SVTs in the mix, the PAI is rarely failed except by the most obvious malingerers/invalid batteries.
 
Just guess then
LOL -- The point is that I'm not going to scoff at a tool that, in 15 minutes of administration and scoring, allows me to accurately identify (i.e., not just randomly guess at) a substantial portion of the PTSD dissimulators in my clinic. This point assumes that a passed MENT is a minimum bar for interpretation of other PTSD indicators, while a failed MENT reflects probable dissimulation.

In other words, a passed MENT by no means equals any kind of psychiatric diagnosis, just like a passed TOMM or MSVT by no means equals any kind of cognitive diagnosis -- a MENT failure, though, is a handy data point to have when ruling out PTSD, especially in a setting that (a) incentivizes certain diagnoses and (b) isn't particularly friendly to unpopular clinical opinions (*cough* the VA *cough*). I've found the utility of the MENT to vary setting to setting, the VA being one setting where it was particularly useful... I've found less utility for it in other clinical, non-VA settings, FWIW.
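To put rough numbers on that trade-off (hypothetical values, not actual MENT operating characteristics): with modest sensitivity but decent specificity, a failure can carry strong positive predictive value even though a pass doesn't rule much out.

Code:
# PPV/NPV from sensitivity, specificity, and base rate. Hypothetical numbers,
# not published MENT statistics.

def ppv_npv(sens, spec, base_rate):
    ppv = sens * base_rate / (sens * base_rate + (1 - spec) * (1 - base_rate))
    npv = spec * (1 - base_rate) / (spec * (1 - base_rate) + (1 - sens) * base_rate)
    return ppv, npv

print(ppv_npv(sens=0.50, spec=0.95, base_rate=0.30))   # -> (~0.81, ~0.82)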
 

If it helps, we purchased the MENT for that reason.
 
One particular issue with utilizing the MENT in the context of tx provision in an outpatient VA clinic is that the core design...
I would consider revising this post to omit specifics on the structure of the test...
 
Without getting too specific... My understanding is that the developer of the MENT consulted with VA legal about the instrument's wording during its initial development, and they were OK with the wording and structure of the instrument.
 
To keep it vague... VA Legal isn't the issue. The issue is administrators, who could REALLY take issue with certain aspects of the design, and with the thought process of including it as part of a standard outpatient assessment for psychotherapy services. In that respect it seems similar to certain neuropsych tests, and I could possibly see it being used in C & P evals... I just think it would be an extremely hard sell to the non-psychologist folks who generally run mental health services (at least at many VAs). Heck, I expect pushback for using the MMPI/PAI.
 

To be fair to the VA, I always used PVTs/SVTs in my assessments in the VA with zero pushback.
 
But 15 minutes does NOT do this. Or am I on another planet here?
 
You can administer this specific instrument that I'm describing in probably 10 minutes and score it in another 5 minutes... If someone "fails" it, I would most likely interpret that as evidence of probable dissimulation. If they "pass" it, then I would look to other indicators of PTSD and diagnosis validity, including more sensitive indicators, embedded SVTs, etc.

When I've used this specific PVT (which isn't perfect or magical by any means), I've viewed it as a minimum that must be met for me to move further in my PTSD differential. I agree with the critiques mentioned by others above that, in a lot of settings, this instrument lacks the sensitivity to be useful in anything other than the most flagrant cases of dissimulation... That said, I've found it very useful in select clinics/settings.

Unfortunately, SVTs don't have access to the same detection strategies that PVTs do, which allow strong probabilistic statements about incongruence in the data suggestive of malingering (e.g., performance worse than what would be expected of someone with severe dementia despite no reported ADL difficulties).

My biggest "oh, this is cool" feeling about this instrument is that it's a psychiatric PVT (as opposed to SVT), which I've never seen before. As JAG mentioned (^^^) earlier, there are differences in interpretation of PVT and SVT failures.
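To illustrate one of those PVT-style detection strategies (significantly-below-chance responding on a forced-choice format; a generic example, not tied to any particular instrument's norms): a random guesser on a two-alternative, 50-item test should land near 25 correct, so scores well below that are hard to explain without deliberate wrong answering.

```python
# Why significantly-below-chance responding on a forced-choice PVT supports a strong
# probabilistic statement: a pure guesser on a 2-alternative, 50-item test follows
# Binomial(50, 0.5), so scores well below 25 are very unlikely without deliberately
# choosing wrong answers. Generic illustration; not any specific instrument's norms.
from math import comb

def prob_at_or_below(score, n_items=50, p_chance=0.5):
    """P(X <= score) for X ~ Binomial(n_items, p_chance)."""
    return sum(comb(n_items, k) * p_chance**k * (1 - p_chance)**(n_items - k)
               for k in range(score + 1))

for score in (25, 20, 15):
    print(f"{score}/50 or worse by guessing alone: p = {prob_at_or_below(score):.4f}")
```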
 
The fact that a rather LARGE percentage of veterans presenting for outpatient care would fail symptom validity indices is the biggest 'open secret' in the organization among those who routinely see veterans for (attempted) psychotherapy. The fact that a LARGE percentage of the empirical studies out there on PTSD as a diagnosis are performed on this patient population is another 'open secret' in that, in the opinion of many, this literature and its findings should often be taken with the proverbial 'grain of salt.' Same for the 'mTBI'/ concussion literature.
 
I'm gonna take this opportunity to seize on your expertise: what do you think of the PAI with the veteran population? That's what I mostly use, because I just don't find the MMPI-2-RF that useful.
The scale design and structure lay out clearer mappings to pathology, but the logit functions are underdeveloped. The validity scales, for reasons above (re: Likert response format), are generally slightly less effective; some are trash (RDS). I prefer the reading level/wording of the PAI.
The PAI doesn't have the same extensive research base, but I'm not convinced that makes its studies (when contrasted apples to apples) worse. There are some notable gaps relative to the RF/3, such as comparison groups. Neither instrument has extensive military- or veteran-specific studies on validity or diagnosis. Bellet et al. did a great diagnostic study for PTSD, similar to Sellbom et al.

I'll have more to say later this month.
 
Personally, while I find the face validity of the PAI diagnostic scales helpful, the validity indicators are kind of garbage in my experience. In settings where we've used it and had other PVTs/SVTs in the mix, the PAI validity scales are rarely failed by anyone but the most obvious malingerers/invalid batteries.
Have you checked out the CBS? I've had good results with it.
 
The Cambridge Brain Sciences battery stuff? Not yet. I haven't reviewed the lit on that one to see how it lines up with more traditional testing yet.
Nah, the Cognitive Bias Scale (CBS) and Cognitive Bias Scale of Scales (CB-SOS). They started coming out in 2019 and have been cross-validated a number of times against various PVTs and SVTs. The CBS was developed using RBS methods, and the CB-SOS follows a scale-based scoring approach to test an easier way for clinicians to incorporate a cognitive symptom set. The CBS compares pretty closely to the RBS (Tylicki et al.), with variation likely a function of Likert vs. true/false response formats. I haven't seen a head-to-head on the CB-SOS scales yet. The SOS approach reminds me somewhat of the Gaines et al. (2013) MFI over-reporting scale; that one also seems to outperform most standard PAI scales.

CBS/CB-SOS lit

That's all of it to date.

Below is the Gaines et al. article for the MFI if you don't know that one.
Gaines, M. V., Giles, C. L., & Morgan, R. D. (2013). The detection of feigning using multiple PAI scale elevations: A new index. Assessment, 20(4), 437-447.
 

Thanks, I'll bookmark these for review when I get some time.
 
Just re-reading this thread (lots of stuff to take in and consider)...

"There are some notable gaps relative to the RF/3, such as comparison groups. Neither are extensive in military or veteran specific studies on validity on diagnosis.'

So... the ten-million-dollar question is...

If (a) it is empirically clear (based on everything that I've seen in the literature and with my own eyes as a clinician) that base rates of validity-indicator failure are high in this population, and (b) we don't have 'extensive' studies on this widespread (and, I would argue, clinically and forensically important) reality... why do we think that is, and what can be done about it?
 
Probably multiple potential reasons, just as with invalidity in other contexts. But we can't discount the potential impact of various external incentives and the rates of success of the behavior(s) in attaining the desired goal(s).

As for what to do about it, again, these are complex questions from both a systemic and an individual clinician's perspective. Short answer: as a clinician, I discussed invalid findings with patients in an open, non-confrontational manner, reviewed what that meant in terms of the data and its utility, and went through recommendations based on factors that I suspected might be contributing to the various forces at play in any individual evaluation.

There's an article by Carone et al. (2010) that reviews a model for providing feedback on invalid neuropsych results, which can also be applied to invalid psych testing data. And there was a recent survey by Martin and Schroeder (2021) showing that, while most neuropsychologists do review/discuss invalid findings in feedback, there was a lot of variability in the specific approach.

Unless I misread/misunderstood your questions, which is entirely possible.
 
Just re-reading this thread (lots of stuff to take in and consider)...

"There are some notable gaps relative to the RF/3, such as comparison groups. Neither instrument has extensive military- or veteran-specific studies on validity or diagnosis."

So... the ten-million-dollar question is...

If (a) it is empirically clear (based on everything that I've seen in the literature and with my own eyes as a clinician) that base rates of validity-indicator failure are high in this population, and (b) we don't have 'extensive' studies on this widespread (and, I would argue, clinically and forensically important) reality... why do we think that is, and what can be done about it?
What can we do about it? Honestly? Research. We don't know the moderators or influences well enough to take any real steps now. The studies are far too limited and vary too much by context (Ingram et al., 2019; Glen et al., 2002). We need streamlined research and guidance. We don't have it.

1. VA mechanisms for funding non-VA researchers, or even allowing them to be part of the process, are extremely difficult. Depending on the VISN, it can be impossible, which means that most research faculty who could advance this work are not involved. Funding buys time for non-VA researchers, which also means more of them and greater diversity. MIRECCs are limited in their focus and don't serve this purpose. The flagships focus on some areas, like TBI (e.g., Minneapolis; see the great work by Jacob Finn), but not others (e.g., validity, for which there are no more than 5 studies in Veterans). The 'no more than' is imprecise because it varies by scale inclusion and is limited in settings.

2. Funding. Funding. Funding. Assessment funding is non-existent, which means it's not a priority for university faculty, because grants are the game. Given how much the government spends on disability, improving those evaluations could save money at small expense.

3. The data are there. We could have everything we want based on VINCI and the MH suite and how it is stored. Access is the issue, not data. So, returning to #1 and #2, we just need people who can get the data and get to publish it.

Heck, DoD/VA funding should require better assessments than the PHQ/GAD at intake to help with diagnostic testing. There is a need, and that's an easy step. It isn't a treatment outcome, so it's not the primary focus of the millions given out, but it is such a critical need, and it gives better context about who is being treated, that it seems reasonable to me.

I am working to create a comparison group for CNP evaluations with the MMPI-2-RF right now. I suspect it will be under review by late this year, with publications early next year. This will add incrementally to the existing comparison groups I've done, and the study should also include some PVT/SVT failure work with external criteria, as well as self-report relationships and diagnostic group studies, depending on the final data pull.
 
Probably multiple potential reasons, just as with invalidity in other contexts. But we can't discount the potential impact of various external incentives and the rates of success of the behavior(s) in attaining the desired goal(s).

As for what to do about it, again, these are complex questions from both a systemic and an individual clinician's perspective. Short answer: as a clinician, I discussed invalid findings with patients in an open, non-confrontational manner, reviewed what that meant in terms of the data and its utility, and went through recommendations based on factors that I suspected might be contributing to the various forces at play in any individual evaluation.

There's an article by Carone et al. (2010) that reviews a model for providing feedback on invalid neuropsych results, which can also be applied to invalid psych testing data. And there was a recent survey by Martin and Schroeder (2021) showing that, while most neuropsychologists do review/discuss invalid findings in feedback, there was a lot of variability in the specific approach.

Unless I misread/misunderstood your questions, which is entirely possible.
Not at all...thank you for your response. It is exactly what I was looking for. I'll definitely be checking out those articles.

And I guess I was also wanting some clarification of perspectives (and thoughts) on the interpretation of elevated (especially VERY elevated) validity indices on the MMPI in veteran populations. I mean, I understand the caution against interpreting moderately high elevations on F-r (or even Fp-r, to a point) as overreporting rather than legitimate severe psychopathology in veterans with PTSD, and I would never do that. But when F-r is >= 120, Fp-r is > 110, and a couple of other validity scales are > 100, combined with factors such as the following:

(a) despite convincingly demonstrating a reliable ability to circle 3's and 4's on the PCL-5, the veteran cannot meaningfully elaborate on any specifics surrounding those endorsements upon follow-up clinical interviewing; and
(b) upon attempts by the examiner to clarify the nature of endorsed critical items (indicating psychotic/bizarre symptom experiences on the MMPI), the veteran inevitably 'explains away' the endorsement in interview, recasting it as normative and non-pathological; moreover, when the interviewer CAN get the veteran to provide any specifics on severity, duration, and impact of symptoms, the information provided indicates mild symptoms with minimal impact (or the veteran is excessively vague or does not respond) --

In such situations (which I am encountering frequently in a clinical context), the extreme elevations on the validity scales do not appear at all likely to be due to the presence of genuine severe psychopathology in the context of bona fide PTSD. The far more likely interpretation is over-reporting across measures.
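Purely to make that concrete for myself, here's how those converging indicators could be written down as an explicit rule of thumb (the thresholds simply echo the values described above; they're not an official interpretive guideline, and nothing like this substitutes for the full clinical picture):

```python
# Illustration only: the converging indicators above written as an explicit flag.
# The thresholds just echo the values discussed in this thread (F-r >= 120, Fp-r > 110,
# other validity scales > 100); they are not an official interpretive guideline, and no
# rule of thumb substitutes for clinical judgment.
from dataclasses import dataclass

@dataclass
class IntakeData:
    f_r: int                           # F-r T-score
    fp_r: int                          # Fp-r T-score
    other_validity_over_100: int       # count of additional validity scales with T > 100
    elaborates_on_pcl5: bool           # can give specifics behind high PCL-5 endorsements
    critical_items_corroborated: bool  # interview supports endorsed critical items

def probable_overreporting(d: IntakeData) -> bool:
    extreme_scores = d.f_r >= 120 and d.fp_r > 110 and d.other_validity_over_100 >= 2
    uncorroborated = not d.elaborates_on_pcl5 and not d.critical_items_corroborated
    return extreme_scores and uncorroborated

print(probable_overreporting(IntakeData(120, 112, 2, False, False)))  # True
```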
 
What can we do about it? Honestly? Research. We don't know the moderators or influences well enough to take any real steps now. The studies are far too limited and vary too much by context (Ingram et al., 2019; Glen et al., 2002). We need streamlined research and guidance. We don't have it.

1. VA mechanisms for funding non-VA researchers, or even allowing them to be part of the process, are extremely difficult. Depending on the VISN, it can be impossible, which means that most research faculty who could advance this work are not involved. Funding buys time for non-VA researchers, which also means more of them and greater diversity. MIRECCs are limited in their focus and don't serve this purpose. The flagships focus on some areas, like TBI (e.g., Minneapolis; see the great work by Jacob Finn), but not others (e.g., validity, for which there are no more than 5 studies in Veterans). The 'no more than' is imprecise because it varies by scale inclusion and is limited in settings.

2. Funding. Funding. Funding. Assessment funding is non-existent, which means it's not a priority for university faculty, because grants are the game. Given how much the government spends on disability, improving those evaluations could save money at small expense.

3. The data are there. We could have everything we want based on VINCI and the MH suite and how it is stored. Access is the issue, not data. So, returning to #1 and #2, we just need people who can get the data and get to publish it.

Heck, DoD/VA funding should require better assessments than the PHQ/GAD at intake to help with diagnostic testing. There is a need, and that's an easy step. It isn't a treatment outcome, so it's not the primary focus of the millions given out, but it is such a critical need, and it gives better context about who is being treated, that it seems reasonable to me.

I am working to create a comparison group for CNP evaluations with the MMPI-2-RF right now. I suspect it will be under review by late this year, with publications early next year. This will add incrementally to the existing comparison groups I've done, and the study should also include some PVT/SVT failure work with external criteria, as well as self-report relationships and diagnostic group studies, depending on the final data pull.
Thank you for sharing your thoughts on this. Exactly the type of discussion I think is needed.
 
Not at all...thank you for your response. It is exactly what I was looking for. I'll definitely be checking out those articles.

And I guess I was also wanting some clarification of perspectives (and thoughts) on the interpretation of elevated (especially VERY elevated) validity indices on the MMPI in veteran populations. I mean, I understand the caution against interpreting moderately high elevations on F-r (or even Fp-r, to a point) as overreporting rather than legitimate severe psychopathology in veterans with PTSD, and I would never do that. But when F-r is >= 120, Fp-r is > 110, and a couple of other validity scales are > 100, combined with factors such as the following:

(a) despite convincingly demonstrating a reliable ability to circle 3's and 4's on the PCL-5, the veteran cannot meaningfully elaborate on any specifics surrounding those endorsements upon follow-up clinical interviewing; and
(b) upon attempts by the examiner to clarify the nature of endorsed critical items (indicating psychotic/bizarre symptom experiences on the MMPI), the veteran inevitably 'explains away' the endorsement in interview, recasting it as normative and non-pathological; moreover, when the interviewer CAN get the veteran to provide any specifics on severity, duration, and impact of symptoms, the information provided indicates mild symptoms with minimal impact (or the veteran is excessively vague or does not respond) --

In such situations (which I am encountering frequently in a clinical context), the extreme elevations on the validity scales do not appear at all likely to be due to the presence of genuine severe psychopathology in the context of bona fide PTSD. The far more likely interpretation is over-reporting across measures.

At least in non-VA samples, the "cry for help" profile is not really supported by the literature, especially at very high values. For Fp-r especially, past a certain point the profile is just invalid in almost all situations.
 