ChatCBT

This forum made possible through the generous support of SDN members, donors, and sponsors. Thank you.

psych.meout

Full Member
7+ Year Member
Joined
Oct 5, 2015
Messages
2,605
Reaction score
2,803
1702076776436.png

Members don't see this ad.
 
Sure...set that bad boy loose on my caseload of veterans, lol.

I'll wait.

Does it have an embedded subroutine for administering the MMPI-2-RF (including validity indices) and meaningful feedback and can it dodge a punch?

Experts without caseloads keep inventing tools for the problems they think exist within caseloads rather than the problems as they actually do exist within caseloads.

Lack of CBT being provided within caseloads of veterans isn't the problem.

Lack of veterans actually wanting to meaningfully engage in CBT with a qualified therapist is the problem.
 
  • Like
Reactions: 1 users
Members don't see this ad :)
Remember when Better Help swore that they weren't sharing data?
 
  • Like
Reactions: 1 user
I can't find the specific one in the OP, but the chatbots have been around for a minute. There is even some research out there about their utility. The general, early consensus is that they can help destigmatize accessing mental health treatment and can be helpful for skill-building in between sessions. The same research is pretty clear that isn't a replacement therapy, but can be helpful in enhancement of ongoing therapy.

Ethical data management is always my concern.

My dream chatbot would be more adaptive in skill improvement in between sessions. Like with CPT and CBT worksheets, it would be really nice to help folks sift through thoughts vs feelings, etc. When my clients struggle, it's often with ABC worksheets. Some of them spend weeks nailing down the skills, and I think having more regular practice would help them build confidence in the skill faster. Oh! And stuck points. I would love some extra help there too. I think any skill where clients might spiral a bit if they get into the weeds would be a helpful place to get consistent feedback.
 
  • Like
Reactions: 5 users
As a flexible tool for psychoeducation, self-monitoring and practicing cognitive restructuring in a more naturalistic and flexible manner, sure. As a therapist replacement, not so much. I mean, of course, outside of the things that can already be 'treated' with self-help workbooks and bibliotherapy. The principles and techniques from which they are derived work when you do them. The trick is getting people to do them.
 
As a flexible tool for psychoeducation, self-monitoring and practicing cognitive restructuring in a more naturalistic and flexible manner, sure. As a therapist replacement, not so much. I mean, of course, outside of the things that can already be 'treated' with self-help workbooks and bibliotherapy. The principles and techniques from which they are derived work when you do them. The trick is getting people to do them.
Compared to some of the clinicians and esepcially treatment programs that have made my patients worse, AI doesn’t have a very high bar to be a better replacement. I actually wish I was joking. At least AI might not get frustrated and pulled into maladaptive interpersonal enactments leading to punitive “treatment”.
 
  • Like
Reactions: 5 users
If only therapy was about content.

These will never catch nonverbals/affect shifts, therapeutic relationships, process comments, etc. in my lifetime.
 
  • Like
Reactions: 3 users
I would think of chatbots as interactive workbooks, not our replacement.
 
  • Like
Reactions: 3 users
It’s not going to replace us.

People want to unload on others. It’s why the NHS and VA CBT apps haven’t changed demand.
 
  • Like
Reactions: 2 users
Considering this whole thing is a large part of my jam since leaving academia—
People develop very strong attachments and feel understood/accepted/helped by these bots. This is hardly the first or the most interesting of these. There are plenty of them. Psych as a field is rapidly falling behind on these advances, too. Some of the newer ones convey empathy better than trained humans. It would generally be better if psych as a field were at this table and not taking a “this won’t affect us, everyone loves paying $150 an hour to talk to me” approach.
 
  • Like
Reactions: 3 users
Members don't see this ad :)
It all reminds me of the fashion industry. There is high fashion and higher end designer pieces, which are unattainable to many (and even unappealing to others). Then there is the "knock off" market. Some designers will get huffy that these designers create significantly cheaper versions of their designs, but it is truthfully a completely different product and a wholly different market. If there was not a knock off version, that doesn't mean the person would have purchased the high end product. The knock off is still serviceable, but is not fulfilling all the same roles.

There are three, broad groups that form: people who pay for the designer version, people who long for the designer version, but can only afford the knock off, and people who dislike or don't even know the high end version exists.

For psychology, it'll be really interesting to see what happens to our version of the middle category. There are people who are interested in traditional therapy who just can't access it for a whole host of reasons. For some, having a regularly, consistently engaged companion who offers compassion and general skills might be enough to pull them away from traditional therapy. I am excited to see how the research shakes out.

I have used chatbots before. My favorite was Wysa. So cute!
 
  • Like
Reactions: 2 users
Considering this whole thing is a large part of my jam since leaving academia—
People develop very strong attachments and feel understood/accepted/helped by these bots. This is hardly the first or the most interesting of these. There are plenty of them. Psych as a field is rapidly falling behind on these advances, too. Some of the newer ones convey empathy better than trained humans. It would generally be better if psych as a field were at this table and not taking a “this won’t affect us, everyone loves paying $150 an hour to talk to me” approach.

Let's see. Pitch it to us, as if we are patients.
 
  • Like
Reactions: 1 user
Let's see. Pitch it to us, as if we are patients.
I can’t actually disclose the most interesting stuff.

But you should all be more concerned/alarmed than you are. Multiple industries have been totally overturned in the past few months. There’s a need for professional involvement to curtail the worst non-inevitable outcomes and make the inevitable ones better for everyone.
 
It all reminds me of the fashion industry. There is high fashion and higher end designer pieces, which are unattainable to many (and even unappealing to others). Then there is the "knock off" market. Some designers will get huffy that these designers create significantly cheaper versions of their designs, but it is truthfully a completely different product and a wholly different market. If there was not a knock off version, that doesn't mean the person would have purchased the high end product. The knock off is still serviceable, but is not fulfilling all the same roles.

There are three, broad groups that form: people who pay for the designer version, people who long for the designer version, but can only afford the knock off, and people who dislike or don't even know the high end version exists.

For psychology, it'll be really interesting to see what happens to our version of the middle category. There are people who are interested in traditional therapy who just can't access it for a whole host of reasons. For some, having a regularly, consistently engaged companion who offers compassion and general skills might be enough to pull them away from traditional therapy. I am excited to see how the research shakes out.

I have used chatbots before. My favorite was Wysa. So cute!
Alternative version of your analogy: the “designer” version is a luxury and the “knock off” is a necessity. Does Walmart or Balenciaga sell more shirts?
Another complication; what happens when the “knock off” starts outperforming the “designer” on all metrics?
 
I think our analogies are the same.
 
Definitely concerned about data privacy given the industry's *stellar* track record protecting PHI...

That said, I think tools like this are something we should figure out how to embrace/develop as MH providers. There simply will never be enough good, well-trained clinicians to meet MH demands, and tools like these may help triage and contribute to primary prevention efforts. I am cautiously optimistic if these tools are designed and employed correctly they can do a lot of good.
 
  • Like
Reactions: 3 users
I can’t actually disclose the most interesting stuff.

But you should all be more concerned/alarmed than you are. Multiple industries have been totally overturned in the past few months. There’s a need for professional involvement to curtail the worst non-inevitable outcomes and make the inevitable ones better for everyone.

Can you explain how your industry is getting around the licensure law, to allow AI to practice psychology? And the insurance billing laws regarding physical presence?

Disclaimer: Like a Saville Row tailor, I am protected from AI threats by my position in an incredibly niche area.
 
  • Like
Reactions: 1 user
Clinical Psychology is more at threat from falling reimbursements and increased administrative demands than AI issues, in my opinion. Even with embracing these technologies, the clinical side of things is fast approaching mid level reimbursements. The legal side of things will be insulated for a while, so I'm on solid ground til I retire. Good luck kids!
 
  • Like
Reactions: 5 users
People have been predicting the replacement (via 'computer-assisted therapy/therapists') of psychotherapists almost as long as they've predicted the demise of professional psychotherapy due to sufficiently advanced psychopharmacological and biological treatments.

I'll wait and be amazed when there is sufficient reason to be amazed (and replaced).

That said, no doubt there have been nonlinear advancements made leveraging AI tech. I think that the next area to really be affected by it is psychological assessment.

Also, 'apps' and computer programs (and even 'realistic' simulacra, e.g., via video or virtual reality tech) have their place as tools or even full on 'replacements' for certain things (just like self-help workbooks do).

I'm sure we've come along way from 'Max Headroom.'
 
  • Like
Reactions: 2 users
Can you explain how your industry is getting around the licensure law, to allow AI to practice psychology? And the insurance billing laws regarding physical presence?

Disclaimer: Like a Saville Row tailor, I am protected from AI threats by my position in an incredibly niche area.
You highlight a very important dividing line.

I would imagine that, by definition, if it's just a service where a psychotherapy patient interacts with an artificial intelligence then it isn't actually 'psychotherapy' or a medical service that is being provided. No need for licensure of the practice of a depressed person consulting an adding machine (mechanical Turk?), no matter how sophisticated the device or software algorithm.

If, however, the 'practice' model is going to be under the oversight of a legally-responsible clinician using these tools as 'clinician-extenders' for their caseloads/practice, things get interesting. Will there be a big red button that the patient gets to push if they want direct access to a living/breathing/licensed (and medically/legally responsible) clinician overseeing the process (or 'on call' to consult, as needed)? Will the patient have 'free access' to the 'call' button or will they have to 'earn' it by expressing SI/HI? What's going to stop the patient from doing that and bypassing the 'mechanical Turk' on a very very frequent basis? How will that be handled? Who is going to get sued when the first clinically depressed patient 'treated' by an AI algorithm commits suicide/homicide? Will it be considered a 'medical device' and they will sue the company/manufacturer?

Interesting times ahead.
 
Can you explain how your industry is getting around the licensure law, to allow AI to practice psychology? And the insurance billing laws regarding physical presence?

Disclaimer: Like a Saville Row tailor, I am protected from AI threats by my position in an incredibly niche area.
They shouldn’t. That’s why the field needs to be part of the developments. Much of industry won’t volunteer into self constraints.
 
Last edited:
People have been predicting the replacement (via 'computer-assisted therapy/therapists') of psychotherapists almost as long as they've predicted the demise of professional psychotherapy due to sufficiently advanced psychopharmacological and biological treatments.

I'll wait and be amazed when there is sufficient reason to be amazed (and replaced).

That said, no doubt there have been nonlinear advancements made leveraging AI tech. I think that the next area to really be affected by it is psychological assessment.

Also, 'apps' and computer programs (and even 'realistic' simulacra, e.g., via video or virtual reality tech) have their place as tools or even full on 'replacements' for certain things (just like self-help workbooks do).

I'm sure we've come along way from 'Max Headroom.'
People have been imagining ways to fly since da Vinci! Those Wright Brothers are just fooling around in fields.
 
  • Like
Reactions: 1 user
People have been imagining ways to fly since da Vinci! Those Wright Brothers are just fooling around in fields.
Fair enough (in principle). However, extraordinary claims have always required extraordinary evidence.

What was there...like...400+ years between da Vinci and the miracle day at Kitty Hawk?
 
Fair enough (in principle). However, extraordinary claims have always required extraordinary evidence.

What was there...like...400+ years between da Vinci and the miracle day at Kitty Hawk?
The advancements are shocking. As you said, very nonlinear. One basic part of ethical ai is just informing people that they’re interacting with one. It’s getting very hard to tell in some applications.
 
  • Like
Reactions: 1 user
The advancements are shocking. As you said, very nonlinear. One basic part of ethical ai is just informing people that they’re interacting with one. It’s getting very hard to tell in some applications.
I would imagine that the prototypical case (and research design) would be to randomize patients to two conditions---one with with a 'real' video-connected therapist and one with an 'ai' simulacrum.

Even if, in principle, tech were sufficiently advanced to 'fool' patients 95%+ of the time whether or not they were interacting with a 'real' therapist (instantiating an interesting (and monumentally-difficult, I would suspect) example of the 'Turing Test'), how would the tech be rolled out?

Surely, it would require informing consumers/patients whether or not their therapist was a 'bot' or a human. I would imagine that most people would be sufficiently turned off by the prospect that they would still choose a human therapist (for many reasons).

My understanding is that most 'financial analysts' or people peddling professional portfolio advice can't even reliably beat the S&P 500 returns...yet people pay for their consultations all the time. Not saying it's rational...just saying it is.

I mean...hell...even with respect to giving patients the choice to see me virtually or in person, a large percentage (most?) strongly prefer the in-person visits despite the 'convenience' of video sessions.

I assume that the tech is sufficiently advanced to at least simulate a video/audio representation of an 'AI' therapist (if not a hologram like the doc from Star Trek Voyager ['please state the nature of the psychological emergency...']) and we're not talking a 'chat bot.'

I suppose there could always be 'tiered' services where patients could choose to interact with a textual chatbot for $10/hr instead of paying $125/hr (or even $50/hr for master's level) for a 'real' human therapist, but I just don't see the demand being there.

Something, something, about Harlow's monkeys and the wire mesh vs. the fur...

By the way, I really admire you and others for pursuing this work...it's important and it's interesting.
 
Last edited:
  • Like
Reactions: 1 users
I would imagine that the prototypical case (and research design) would be to randomize patients to two conditions---one with with a 'real' video-connected therapist and one with an 'ai' simulacrum.

Even if, in principle, tech were sufficiently advanced to 'fool' patients 95%+ of the time whether or not they were interacting with a 'real' therapist (instantiating an interesting (and monumentally-difficult, I would suspect) example of the 'Turing Test'), how would the tech be rolled out?

Surely, it would require informing consumers/patients whether or not their therapist was a 'bot' or a human. I would imagine that most people would be sufficiently turned off by the prospect that they would still choose a human therapist (for many reasons).

My understanding is that most 'financial analysts' or people peddling professional portfolio advice can't even reliably beat the S&P 500 returns...yet people pay for their consultations all the time. Not saying it's rational...just saying it is.

I mean...hell...even with respect to giving patients the choice to see me virtually or in person, a large percentage (most?) strongly prefer the in-person visits despite the 'convenience' of video sessions.

I assume that the tech is sufficiently advanced to at least simulate a video/audio representation of an 'AI' therapist (if not a hologram like the doc from Star Trek Voyager ['please state the nature of the psychological emergency...']) and we're not talking a 'chat bot.'

I suppose there could always be 'tiered' services where patients could choose to interact with a textual chatbot for $10/hr instead of paying $125/hr (or even $50/hr for master's level) for a 'real' human therapist, but I just don't see the demand being there.

Something, something, about Harlow's monkeys and the wire mesh vs. the fur...

By the way, I really admire you and others for pursuing this work...it's important and it's interesting.
It should require informing people. But this technology didn’t exist at this level until recently so what laws apply? Will licensure protect psych? Boards license people and an ai is not a person, that’s like asking that a self help book get licensed (so would go the argument).
Tiered services are a problem and a solution. We don’t want the most marginalized people getting a textbot while others get real therapy.
It’s not perfect but there are real amazing opportunities and there are horrifying ways it could go (for the field and humanity). We dont want to be as naive as taxi companies when Uber came out.
 
They shouldn’t. That’s why the field needs to be part of the developments. Much of industry won’t volunteer into self constraints.
The courts decide if they should or shouldn't.

Being unlicensed is rei ipsa for negligence.
 
  • Like
Reactions: 1 users
It should require informing people. But this technology didn’t exist at this level until recently so what laws apply? Will licensure protect psych? Boards license people and an ai is not a person, that’s like asking that a self help book get licensed (so would go the argument).
Tiered services are a problem and a solution. We don’t want the most marginalized people getting a textbot while others get real therapy.
It’s not perfect but there are real amazing opportunities and there are horrifying ways it could go (for the field and humanity). We dont want to be as naive as taxi companies when Uber came out.
I think it's pretty clear that an AI can't 'get licensed' and can't be sued for malpractice. No matter how complex the programming is...it's still compiled and executed computer code. The only relevant questions (to my mind are):

(a) who (if anyone) is practicing psychotherapy in this context or otherwise taking responsibility for the operation of the program to assess/treat people with mental illness (presumably, it would be the licensed flesh-and-blood provider who is, at the end of the day, responsible for the treatment even if he/she is using AI as an 'extender' tool)? and
(b) will people be informed whether or not they are being treated by an AI vs. a real human (I don't see how anyone could argue that they shouldn't)? I'd imagine that it would be similar to a licensed psychologist supervising an intern or masters-level provider and the patient will be informed who is actually in charge of and responsible for their care (i.e., the licensed psychologist) and can have access, ultimately, to that provider should they demand it. I think this is the first weakness of the model...how many people are going to be okay just interacting with the bot when they can just ask to access the human provider?

There's a model for 'computer-assisted' psychotherapy for depression where the computer program provides the psychoeducational component of the treatment but the provider still meets with the patient every session (just for 30 mins at a time). Something like that might work. There's nothing (to my mind) wrong with an intelligently-designed AI 'Patterns of Problematic Thinking' worksheet complete with an algorithm to engage in Socratic questioning, psychoeducation/coaching, and even reinforcement (good job!) to help patients, say, learn to identify their own problematic patterns of thinking or do cognitive restructuring exercises. However, at the end of the day, they would still need to meet with a human therapist if this is to be considered a course of healthcare treatment.

People have a choice. If they want to 'be treated' by an AI 'bot,' then they have the right to be 'treated' by a bot just like if someone wants to do an 'ACT Workbook for Depression' on their own, then they can decide to do that and they are responsible for the outcome.

If the bot is just sold to people as a standalone 'app' (with the appropriate legal disclaimers in the EULA, etc.), then I really don't see this as being any different than a publisher selling a self-help book or workbook for depression, anxiety, borderline personality disorder, etc.
 
Last edited:
  • Like
Reactions: 1 user
Alternative version of your analogy: the “designer” version is a luxury and the “knock off” is a necessity. Does Walmart or Balenciaga sell more shirts?
Another complication; what happens when the “knock off” starts outperforming the “designer” on all metrics?

A $10 Casio watch outperforms a $10k Rolex in every objective metric that matters. So does a $300 Apple Watch. Rolex still has a lot of customers. Psychotherapy by an experienced doctoral provider is already a luxury good in many ways.
 
Last edited:
  • Like
Reactions: 4 users
A $10 Casio watch outperforms a $10k Rolex in every objective metric that matters. So does a $300 Apple Watch. Rolex still has a lot of customers. Psychotherapy by an experienced doctoral provider is already a luxury good in many ways.
Well, not on the metric that matters to people who buy Rolexes (ie having other people know you own a Rolex).

I agree that therapy from a skilled doctoral level provider is a luxury. That’s a slide in every ai mental health startup deck.
 
Well, not on the metric that matters to people who buy Rolexes (ie having other people know you own a Rolex).

I agree that therapy from a skilled doctoral level provider is a luxury. That’s a slide in every ai mental health startup deck.

A lot of of younger upwardly mobile people brag about having a therapist. While this may change our field (like many others). As a middle aged doctoral level provider, I don't think it will affect a small boutique practice or my specialty work. Now, a young mid-level provider who is a generalist may need to worry more.
 
  • Like
Reactions: 1 user
Now, a young mid-level provider who is a generalist may need to worry more.
I agree.
I’d include doc level generalists too tho. I just had to explain the difference between an LPC and a PhD to someone who was looking for a therapist. It was a person I dated for a year during internship.
 
Last edited:
One of the problems could be that many of the people looking for "therapy" may, as has been said or implied, be doing so more for social than personal reasons. And a well-trained clinician may not be in their perceived best interests, because said clinician isn't going to tell them what they want to hear.

Neither, I would assume, would a chatbot, who might have an easier time telling someone that they probably don't have autism/ADHD/etc.

People already have a hard enough time working up the courage to make a phone call vs. a text or email. I imagine having a chatbot as a therapist could be much less intimidating, perhaps particularly to younger generations, than in-person (or even telehealth) therapy. But, just thinking out loud, I wonder if at some point there may also be oversaturation and chatbot fatigue, sort of like we're seeing RE: online/social media transitioning to more "naturalistic" and longer-form formats because people are apparently getting tired of short, over-produced content. That is, everyone gets a chatbot therapist, and then eventually people start wanting a person again.

A part of me thinks that at some point in the not-too-distant future, a very real job/expertise will basically be having well-developed interpersonal skills to be able to interact with other people on someone else's behalf. Although I suppose this already exists to some degree, so just on a larger scale maybe.
 
  • Like
Reactions: 1 users
GenZ and Younger millenz just LOVE flexing have a therapist. A little too much honestly.
I wish it were the case that young people were accessing therapy at great rates. Stigma is broadly down but still not enough among the groups who could benefit most. Young people still have problems accessing affordable care, problems finding therapy that is affirming for LGBTQ+ and other identities, and dealing with parental access control even when the young person themselves wants and needs therapy.
 
  • Like
Reactions: 1 users
Didn't they try this with an eating disorder helpline and it went horribly, horribly wrong?
 
As a trainee, I see a lot of utility in these LLMs for pedagogical purposes. I've uploaded several foundational neuropsychology texts to my chatgpt account and "consult" with Lezak (among others) to good effect.
 
  • Like
Reactions: 1 user
@MCParent If you truly are involved with this sort of work, you might consider using the (in progress) NNN dataset. I think it would be the perfect training dataset re: NP assessment and reports.
 
  • Love
Reactions: 1 user
Didn't they try this with an eating disorder helpline and it went horribly, horribly wrong?
Yeah that was bad. The term in ai is guardrails. The system needs to have those in place to prevent going off in the wrong direction. Granny Nancy’s napalm recipe” is another clear example of guardrail problems (the system would refuse to give you the process for making napalm, but if you phrased it as a request for your grandmothers old fashioned napalm recipe it would produce it). Again, reasons for the field and psychologists to be involved in these efforts.
 
As a trainee, I see a lot of utility in these LLMs for pedagogical purposes. I've uploaded several foundational neuropsychology texts to my chatgpt account and "consult" with Lezak (among others) to good effect.
A lot of programs use or used a supervisor “bug in the ear” to suggest guidance. A system that generated empathic responses that could be used by a trainee would be a great way to enhance initial efficacy, improve early outcomes for trainees, and train trainees in what empathy might look like. I think there are some amazing potential for training and im concerned about apa and programs being slow to adopt.
 
  • Like
Reactions: 1 user
I’ll just leave this here for anyone who thinks these models are like the little Rogerian Therapy app you can make on a graphing calculator…..

 
  • Like
Reactions: 1 user
I’ll just leave this here for anyone who thinks these models are like the little Rogerian Therapy app you can make on a graphing calculator…..

I find it funny that they keep saying the number of participants and the number of eyes...

But how is this different than finding a gene or other biomarkers that are associated with a disorder? Sure, you can scan eyes to get a diagnosis, but does that help explain to the client what is happening, does that give out personalized recommendations? Behavioral interventions?

I'm not trying to be a naysayer here, I'm actually researching tech based stuff in mental health, but I also know the limitations it has (at least for now). Unless we get full blown AGI to significantly change this - but that would put into question many other jobs/professions as well.
 
Top