PhD/PsyD Just a thread to post the weirdest/wackiest/dumbest mental health-related stuff you come across in (social) media...

Thanks to @psycho1391 for the idea! I'll kick us off with a recent comment on r/therapists in which a therapist said that drugs don't cause drug addiction.

My favorites are always when they bash CBT and then praise DBT, ACT, etc. My ABSOLUTE favorite was someone bashing CBT as a treatment approach for OCD and then praising ERP.
 
Someone in the r/therapists sub told me that CBT is gaslighting because it teaches people to harshly judge themselves for having "bad" thoughts. So I responded that their understanding was incorrect--that CBT teaches people to appraise whether their thoughts are true and adaptive, without assigning a value of "good" or "bad" and without judgment for having had the thoughts in question. They then responded that maybe my definition of CBT was like that, but that not everyone agrees that's how CBT is meant to be implemented, and asked whether I was unwilling to consider other definitions of CBT as equally valid to mine. So I told them that I was indeed deeply reluctant to redefine a well-structured, thoroughly defined therapy modality with decades of theoretical and empirical validation, and that maybe they should consider that the "alternate" definitions are just flat incorrect. I received no response, so I assume they felt I was gaslighting them and wanted to avoid further manipulation from me *sarcasm*
 
I also recently had an LMHC tell me that midlevels already take "way more stats and methods than they wish they had to" and that the only difference between MSW programs and psych master's programs is that the former don't have the "rigorous" training in stats and methods. This coming from someone with an LMHC license in California, which means they likely have a master's in clinical psychology (that's the main method of LMHC licensure there). Anyway, my internal response to the "way more stats and methods than they wish they had to" line was to think "ONE CLASS? That's WAY MORE than you want to take?" Lol. I also couldn't help but lmao at the claim that MSW programs are just psych programs without stats. It's a completely different field with a wholly different set of basic coursework and knowledge base.
 
The pervasiveness of wacky beliefs among master's-level practitioners is probably directly related to the lack of education about how to analyze evidence.
Classes on statistics and methods help, analyzing studies helps more, but actually trying to do good research and realizing how incredibly difficult it can be to achieve reliable, correct results is truly edifying.
 

They also don't get how research funding works. I had someone call an article suspect because it was "VA funded" (really, the funding statement just said the author was supported through the Office of Academic Affiliations) and because the VA supposedly has an agenda to push time-limited treatment.
 
So, so many things incorrect about this statement. I'm guessing this person never worked at a VA. But they're probably doing C&P evals. A patient once told me (so take with a grain of salt) that their psychologist told them the VA (i.e., me) would deny they'd had a brain injury back in Vietnam (they hadn't) because the VA had treatment for it but didn't want to take on the cost of offering it to people like him. This particular patient had vascular dementia.

The thoughts people have about the VA.
 

Haha, yes, I had the exact same thought: tell me you've never worked for the VA without telling me
 
To be fair, one of the first things I ask when analyzing a paper is "What did the author, or funder, want to find?" - the study will tend to be sensitive for what they want, and specific for what they don't. Government sources of funding are not categorically less prone to this than industry ones.

For example, the CATIE trial, a government-funded landmark psychiatric study on antipsychotics, was almost explicitly designed to show that old, cheap antipsychotics are as good as or better than new, expensive ones, so the government wouldn't need to spend as much on the meds. It was an awful study - for example, it was testing ~5,000 hypotheses but used 0.05 as its p cutoff. It is interesting because it failed to find what it wanted, showing that olanzapine (a new antipsychotic) was superior on one measure but that otherwise new and old were about the same. However, even that finding is suspect because of how terrible the study was. Overall, a massive waste of taxpayer funds.

In this case, upper administration at the VA wants what they are currently doing (time-limited treatment) to be best or at least equal, because if the findings were otherwise that would be troublesome to them.

EDIT: I was incorrect - they did adjust p to 0.017, which is somewhat less wrong, but still far from what they should have adjusted it to (roughly 0.00001).
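For anyone who wants the arithmetic behind that edit, here's a back-of-the-envelope sketch in Python (a toy calculation assuming independent tests, which trial outcomes never strictly are; the ~5,000 figure is my recollection from above, and the 0.00001 is just the Bonferroni-style cutoff 0.05/5000):

```python
# Family-wise error rate (FWER) when each of m comparisons is tested at a
# per-test threshold alpha, assuming independence: FWER = 1 - (1 - alpha)^m.
m = 5000  # approximate number of comparisons claimed above

for alpha in (0.05, 0.017, 0.05 / m):
    fwer = 1 - (1 - alpha) ** m
    print(f"per-test alpha = {alpha:.2g} -> P(at least one false positive) ~ {fwer:.3f}")

# per-test alpha = 0.05  -> P(at least one false positive) ~ 1.000
# per-test alpha = 0.017 -> P(at least one false positive) ~ 1.000
# per-test alpha = 1e-05 -> P(at least one false positive) ~ 0.049
```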
 
One of my supervisors is a major author on the CATIE trial. I've read the main findings several times, and this is not at all an accurate account of the main statistical methods. I can't speak to their motivations for running the study, but the methods are absolutely more robust than this gives them credit for being, and they didn't just report findings of p < .05 as significant without using methods of correction for multiple comparisons (here's the NEJM publication). They used a Hochberg correction in at least one analysis, which controls Type I error while being less conservative than the usual Bonferroni correction. Certainly there were some among the crew who were skeptical of the claim that second-generation antipsychotics are more effective at treating schizophrenia than first-generation antipsychotics, and the trial did largely fail to demonstrate significant differences in most cases. But, having spent significant time in the psychotic-disorders arena and been exposed to a good deal of the psychopharmacological literature on the issue, I can say these findings have been replicated. It just is not the case that newer antipsychotics are more effective than earlier ones--they largely avoid the major issue of potential anticholinergic side effects often seen in older drugs, but remain equivalent in incidence of extrapyramidal s/e. The one area in which the literature consistently finds a potential differential efficacy for second-gens is in treatment of BP1 d/o with psychosis.
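Since the correction method is doing a lot of the work in this dispute, here's a minimal sketch of how a Hochberg step-up procedure differs from a plain Bonferroni cutoff (toy p-values of my own invention, not anything from the trial):

```python
def bonferroni(pvals, alpha=0.05):
    # Reject H_i iff p_i <= alpha / m: simplest FWER control, most conservative.
    m = len(pvals)
    return [p <= alpha / m for p in pvals]

def hochberg(pvals, alpha=0.05):
    # Step-up: sort p-values ascending, find the largest rank k with
    # p_(k) <= alpha / (m - k + 1), then reject the k smallest p-values.
    # Controls FWER under independence/positive dependence, with more power.
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= alpha / (m - rank + 1):
            k = rank  # remember the largest qualifying rank
    reject = [False] * m
    for i in order[:k]:
        reject[i] = True
    return reject

pvals = [0.010, 0.013, 0.014, 0.200]
print(bonferroni(pvals))  # [True, False, False, False] -- per-test cutoff 0.0125
print(hochberg(pvals))    # [True, True, True, False]   -- same FWER target, more rejections
```

Both procedures target the same familywise error rate; Hochberg just spends it less wastefully, which is the sense in which it's a principled (not lax) choice.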
 

I am going to point out that you do not understand the CATIE trial. The only conclusion it drew was that perphenazine and the second-generation antipsychotics did not separate from each other on any measure, other than that patients stuck with olanzapine longer (olanzapine alone among the second-generation antipsychotics). That, and that olanzapine had the worst metabolic side effects.

My point is less about whether the study was wrong in its conclusions, and more about whether those conclusions could be relied upon. Many studies have been right by accident. For example, the first comparative study of lithium vs. valproic acid for bipolar concluded that the latter was just as effective as the former...but failed to mention that neither drug had separated from placebo. Something obviously went wrong with the study, and they were deceptive in how they stated their conclusions, but they did happen to be right that valproic acid is an excellent choice for bipolar mania.

Also, my apologies - it has been a few years since I presented on the study and I do not have it on hand. You are correct, they did not use 0.05...they used 0.017. That is not an adequate correction for the thousands of comparisons they were making. There were numerous serious issues with the study. One that particularly sticks with me is why they used perphenazine:

"Although haloperidol is the first-generation agent most commonly used for comparison, we chose to use perphenazine because of its lower potency and moderate side-effect profile.(31)"

So, to compare against second-generation antipsychotics as a class, and extrapolate to the properties of first-generations as a class, they chose the antipsychotic most likely to be similar to second-generation antipsychotics and best tolerated...because the government wanted to show that first-generation >= second-generation.

The reason this particular thing stuck out to me...is that they were factually wrong about the properties of perphenazine. It is higher potency than haloperidol, and in fact the second-highest potency of all antipsychotics. This stuck out all the more because that citation goes to an article by one of the authors whose only mention of perphenazine is: "Although we can never know what the results of past studies would have been had prophylactic anticholinergics been used, we can look forward to results of important new studies, such as the National Institute of Mental Health-funded Clinical Antipsychotic Trials of Intervention Effectiveness (CATIE) (51) schizophrenia trial that uses trilafon as the first-generation-antipsychotic comparator, a medication with less tendency to cause extrapyramidal side effects than haloperidol." That reference leads to the paper on CATIE's design and development, which states "Perphenazine was selected as the conventional medication because it is a midpotency medication with only a moderate incidence of EPS (relative to high-potency medications and other midpotency medications) and sedation (relative to low-potency medications)."

So they referenced themselves referencing themselves for a rather important and wrong piece of information. I'm sure the original incorrect data for perphenazine potency is somewhere in the references to the original paper, but there are dozens of them and, at this point in studying the trial, I found my motivation sapped.
 
I don't know how you concluded I don't understand the study. What you're describing as the conclusions is in line with what I was saying; I just painted with broad strokes for brevity. And I guess you're allowed to think the corrective methods are insufficient, but I'd wonder why you think the several highly qualified biostatisticians on the study considered them appropriate. I think you're misinterpreting the p-value correction, for one thing. Also, you're absolutely incorrect about the relative potency of perphenazine versus haloperidol. Haloperidol is the higher-potency antipsychotic of the two and is the standard comparator drug for lifetime neuroleptic dose. So I'm not sure what you're getting at with that particular critique.
 
I would not be particularly surprised if there were a reasonable explanation, but I do not suppose that there is one until it has been provided, and no such explanation appears in either the results paper or the design paper. It might be somewhere, but the onus is on these highly qualified biostatisticians to make it available. I do not give highly qualified biostatisticians the benefit of the doubt when they are employed by drug companies, and I see no reason to do so for government-employed ones either.

I do apologize for using strong language about your mentor's work; much of my negativity comes from the frustration and dismay I have had with this and other studies. I respect the work they did (and do), but I am disheartened by how much of my field appears to rest on a less-than-reliable foundation, particularly given how vulnerable the affected patients are.

......

I think this exchange is a good illustration of the point I made earlier: it is hard to reliably know things, and we know surprisingly little. This discussion is about a psychiatric study of a large number of patients with a typically reliably diagnosed condition, involving only one type of binary intervention, designed by very smart, capable people...and one can still make a compelling argument that it does not establish reliable knowledge.

Contrast with how much more challenging it is to do research in psychology, how much more operator-dependent interventions are, and how easy it is to do serious harm without realizing it...you folks have it rough.

......

I think I've forgotten where I was going with this, and that I am displacing from weird/wacky/dumb mental health stuff at the VA. Sorry if I was confrontational.
 
No worries at all; the CATIE trial was done well before my time. I'm also not as versed in it as I ought to be.
 
One time, on a social media site, someone told me that “raven therapy” existed. Another time, a training director said that a student wore “fashion sweatpants” to a professional setting, and I had to learn that drop-seat sweatpants existed. Then I asked some psychiatrists what they use for a diagnosis when they prescribe benzos for patients who get freaked out in MRIs, and they all lost their minds. Another time, I told some tennis coach that graduating from an online program was not the same thing as going to a real program, and he threatened to sue me. And this other time…
 
Isn't there a notorious/infamous "dissertation" from one of the mill-ish PsyD programs that had something to do with ravens?
 
To be fair, one of the first things I ask when analyzing a paper is "What did the author, or funder, want to find?" - the study will tend to be sensitive for what they want, and specific for what they don't. Government sources of funding are not categorically less prone to this than industry ones.

For example, the CATIE trial, a government-funded landmark psychiatric study on antipsychotics, was almost explicitly designed to show that old, cheap antipsychotics are as good as or better than new, expensive ones, so the government wouldn't need to spend as much on the meds. It was an awful study - for example, it was testing ~5,000 hypotheses but used 0.05 as its p cutoff. It is interesting because it failed to find what it wanted, showing that olanzapine (a new antipsychotic) was superior on one measure but that otherwise new and old were about the same. However, even that finding is suspect because of how terrible the study was. Overall, a massive waste of taxpayer funds.

In this case, upper administration at the VA wants what they are currently doing (time-limited treatment) to be best or at least equal, because if the findings were otherwise that would be troublesome to them.

EDIT: I was incorrect - they did adjust p to 0.017, which is somewhat less wrong, but still far from what they should have adjusted it to (roughly 0.00001).

I agree that funding is an important part of evaluating research, but the person doing so also needs to understand how to interpret funding statements. An OAA statement isn't the same as research funding... and I would disagree that the uppermost VA leadership, up to and including the OAA, has motivation to show time-limited treatment works.
 

Was involved in a good amount of research (grant-funded and not) throughout my VA training and staff career, though not as much as folks in the more research-heavy positions, but I can't remember upper-level leadership ever getting involved in our research or even commenting on it. Also know several research-only/mostly colleagues in the VA who have never said anything about pressure to publish a certain "narrative." Definitely agree that there is no motivation to only release certain kinds of research. Also find it laughable that people think the VA is trying to push time-limited treatment when, as you and others have commented, in reality Vets get permaservice if they want it.
 
Right? The VA would love nothing more than research showing that lifetime attendance of process groups is the best treatment for PTSD
 
I feel like veterans would love this and Congress would love the opposite. Not sure the VA cares either way.
 
As long as they can make a stupid metric out of the result, the VA is happy.
 

No no, that is not true. The VA would need to make at least two conflicting metrics: one at the central office level and one at the hospital level. Then issue 1,300 pages of memorandums filled with jargon to explain them, and become collectively shocked when they are implemented incorrectly everywhere. That is the VA way!
 
There's a new "why do people hate CBT?" thread and ohhhh man, the misinformation. I'm getting downvotes left and right.

At this point I want to make an Anti-CBT Bingo Card: Shallow, only symptom reduction, doesn't work for trauma, doesn't work for childhood or repeated trauma, doesn't work for "deeper" issues, invalidating, gaslighting, only shown more effective in research because it lends itself to research designs, only shown to be effective in research because of study or sample bias, capitalist, encourages "masking," too "top down," doesn't incorporate somatic stuff, improvements are only temporary and don't last.... anything I'm forgetting?
 
Best for clients who lack psychological sophistication
 

My one criticism of CBT: it works poorly on "Fox News" Syndrome, as those folks seem to have a hard time formulating more realistic thoughts.
 
I've had reliable, dramatic, and enduring success with patients who have repeated, severe childhood trauma and very "deep" issues by using a therapy approach that is often much shallower and more symptom-focused than CBT (or even than the image these folks have of "CBT"). Operator skill can be quite decisive, and it is a bad workman who blames his tools.
I suspect these people hate CBT not just because they are bad at it, but because they are bad at therapy.
 
Nightmarishly, they also think they’re great at it. Bc the patients come in, vent about the problem of the week, and feel better in a kind of cathartic way. I suspect these same therapists eagerly triangulate against anyone else the patient chooses to name as a problem. Bc the patient identifies them as the problem and their perception has to be validated, right?

When I taught basic therapy skills grad classes, I often used vignettes in which the person playing the patient was selective about disclosures (e.g., the patient complains about a breakup, while the vignette explains that the underlying story is that the patient was not actually dating the other person and is semi-obsessed with them; rarely did anyone pick up on challenging the story discrepancies). It took a lot to break some people out of automatically believing everything the patient says is true. I suspect many of these folks never broke out of that.
 

"But if I challenge the patient in any way, I'll be gaslighting them!"
 
A lot of anti-CBT folks are really just anti-anything structured that requires them to do more than just chat naturally (or even just be quiet or parrot back everything the client says). They just see "therapist" as a personality style or ingrained ability ("I was the one everyone talked to about their problems in high school") versus an occupation that requires actual mastery and application of specific skills. I always took a bit of perverse pleasure (and a little bit of heat from some of the faculty) in grad school when we inevitably went around the table and had to "share" (always "share", never "tell" or "say"!) what brought us to a doctoral program in psychology. You got lots of "I've always been the one people talked to" and a few "I was inspired by my own therapy experiences." People weren't overly impressed when I said "I have no other marketable skills."
 

Yes, they also place a high value on insight when, IIRC, research shows increasing insight doesn't really do much.
 
Right. I mean, that's why REBT/CBT was developed: folks realized that insight alone isn't enough to create and maintain change.
 
It took a lot to break some people out of automatically believing everything the patient says is true. I suspect many of these folks never broke out of that.
Two ideas that help learners with this:

- It is reasonable to assume that the patient is being honest to their experience; it is not reasonable to assume that this experience accords with reality. Discerning the difference is a skill, and choosing what to do in light of it is an art (a crucial one if psychosis is a possibility).

- Brains are not truth-finding organs; they are behavior-controlling ones. The brain has no compunctions about lying to itself to maintain what it sees as a more optimal course of behavior.

These can also be quite useful in therapy itself - I've used the latter quite explicitly numerous times (e.g., in tx for trauma-related guilt).
 
Changing gears slightly: has anyone else noticed how little people on those subs know about legal and financial issues? It's pretty much a guarantee that multiple people will advocate for clearly illegal courses of action when it comes to billing/legal/reporting questions.
 
A lot of anti-CBT folks are really just anti-anything structured that requires them to do more than just chat naturally (or even just be quiet or parrot back everything the client says). They just see "therapist" as a personality style or ingrained ability ("I was the one everyone talked to about their problems in high school") versus an occupation that requires actual mastery and application of specific skills. I always took a bit of perverse pleasure (and a little bit of heat from some of the faculty) in grad school when we inevitably went around the table and had to "share" (always "share", never "tell" or "say"!) what brought us to a doctoral program in psychology. You got lots of "I've always been the one people talked to" and a few "I was inspired by my own therapy experiences." People weren't overly impressed when I said "I have no other marketable skills."
I think for those who treat PTSD, being confided in by patients sharing naturally about their trauma can be quite flattering to the ego ("look how good I am that they are trusting me with this dark stuff")...whereas actually doing structured therapy and getting to the experiences and emotions that the patient is avoiding is too painful, and they might not be able to abstract beyond "I'm a bad therapist, this is hurting them" nor be sufficiently skilled to safely and effectively navigate the treatment (for therapist and patient).
 

YES, exactly. People don't want to cause their patients distress, which--while understandable--misses that evoking distress is a fundamental component of effective PTSD treatment. I also think it's hard for people not to see trauma as something really deep, impenetrable, and unchangeable. Kind of like people who think that trauma will always cause psychological problems, because how could it not? It's trauma!
 
Somewhat similar to how providers don't want to "withhold" a diagnosis that a patient wants (or give one they don't want). Many people who go into MH professions (in my experience) are very non-confrontational/confrontation-averse by nature, particularly outside of physicians. They probably hoped they'd get to avoid that by going into a helping profession.
 
I've been pondering my comment above about having "no other marketable skills." One of the good things I've learned over the years is that you can't just be a technician. There is a lot of room (and necessity) for nuance, relationship building, social skills, marketing and salesmanship, and plain old not being a jerk when implementing a more structured therapeutic protocol like CBT (or, in the case of what I mostly do, standardized testing). If the "insight lovers" were trained appropriately, they would learn that it's not a matter of doing the CBT protocol -or- building relationships, collaborating/negotiating with clients, showing insight, etc., but rather a matter of doing both effectively. I'd challenge anyone to effectively implement CBT without doing a lot of that other stuff. Neither is sufficient on its own. The training goes both ways: the CBT folks need to acknowledge the other stuff that goes into successful outcomes, and the interaction between that stuff and adherence to a protocol.
 
And we do a relatively crappy job of teaching confrontation as a skill, including how to modulate the level of confrontation to achieve maximum client benefit. It's all those "looser", read-the-room skills that are hard to quantify, if not qualify.
 
Not quite the same vein as previous posts, but... had a mom in our clinic who would only let her child drink Fiji and one other brand of bottled water because, she very adamantly asserted, the whatever-whatevers in all other water cause/contribute to autism.
 
ASD rates are pretty high, but that would send them out of the atmosphere if true.
 
One time, on a social media site, someone told me that “raven therapy” existed. Another time, a training director said that a student wore “fashion sweatpants” to a professional setting, and I had to learn that drop-seat sweatpants existed. Then I asked some psychiatrists what they use for a diagnosis when they prescribe benzos for patients who get freaked out in MRIs, and they all lost their minds. Another time, I told some tennis coach that graduating from an online program was not the same thing as going to a real program, and he threatened to sue me. And this other time…


Not to revive that particular argument, but we lost our minds in the sense that we told you that, as an actual matter of fact and practice, we generally do not specify a diagnosis for this purpose. The medication is being used in a non-specific way, and the context you posited was specifically an inpatient setting. You found this answer inadequate, but unfortunately reality is often disappointing.
 
Changing gears slightly: has anyone else noticed how little people on those subs know about legal and financial issues? It's pretty much a guarantee that multiple people will advocate for clearly illegal courses of action when it comes to billing/legal/reporting questions.
✅ Saw some say animal abuse is mandated to be reported bc “they’re the same as us.”
 
Gonna add one to the Bingo card: anecdotal evidence of someone getting cured by EMDR/IFS/somatic therapy/angel therapy/raven therapy after "years of CBT."

Or people citing their own therapy experience from when they were patients. Sigh.
 

Yeah, that seems to be extremely prevalent. "Brainspotting worked for me, so I spent thousands of dollars getting trained in it! And CBT is wrong, and you're wrong because you're biased, colonialism, gaslighting, late stage capitalism, and.....reasons! And I'm not possibly biased at all despite what we know about cognitive dissonance and confirmation bias!"
 
While we’re all at this party, what’s everyone’s opinion on the oft-used line that “the best therapists are those who have had/still have therapists of their own”? I know lots of MSW and counseling programs encourage or even require students to have therapy during their training, but I’m not convinced it’s necessary for being a good therapist. I can see how it helps us understand the power dynamics of being a patient, but not much beyond that.
 
Anecdotally: I know many, many excellent therapists who, to the best of my knowledge, never participated in and/or weren't actively participating in their own therapy.
 