- Joined
- Oct 5, 2015
- Messages
- 2,605
- Reaction score
- 2,803
Sure...set that bad boy loose on my caseload of veterans, lol.
Remember the Carl Rogers app on Palm Pilots?
Compared to some of the clinicians and esepcially treatment programs that have made my patients worse, AI doesn’t have a very high bar to be a better replacement. I actually wish I was joking. At least AI might not get frustrated and pulled into maladaptive interpersonal enactments leading to punitive “treatment”.As a flexible tool for psychoeducation, self-monitoring and practicing cognitive restructuring in a more naturalistic and flexible manner, sure. As a therapist replacement, not so much. I mean, of course, outside of the things that can already be 'treated' with self-help workbooks and bibliotherapy. The principles and techniques from which they are derived work when you do them. The trick is getting people to do them.
Considering this whole thing is a large part of my jam since leaving academia—
People develop very strong attachments and feel understood/accepted/helped by these bots. This is hardly the first or the most interesting of these. There are plenty of them. Psych as a field is rapidly falling behind on these advances, too. Some of the newer ones convey empathy better than trained humans. It would generally be better if psych as a field were at this table and not taking a “this won’t affect us, everyone loves paying $150 an hour to talk to me” approach.
I can’t actually disclose the most interesting stuff.Let's see. Pitch it to us, as if we are patients.
Alternative version of your analogy: the “designer” version is a luxury and the “knock off” is a necessity. Does Walmart or Balenciaga sell more shirts?It all reminds me of the fashion industry. There is high fashion and higher end designer pieces, which are unattainable to many (and even unappealing to others). Then there is the "knock off" market. Some designers will get huffy that these designers create significantly cheaper versions of their designs, but it is truthfully a completely different product and a wholly different market. If there was not a knock off version, that doesn't mean the person would have purchased the high end product. The knock off is still serviceable, but is not fulfilling all the same roles.
There are three, broad groups that form: people who pay for the designer version, people who long for the designer version, but can only afford the knock off, and people who dislike or don't even know the high end version exists.
For psychology, it'll be really interesting to see what happens to our version of the middle category. There are people who are interested in traditional therapy who just can't access it for a whole host of reasons. For some, having a regularly, consistently engaged companion who offers compassion and general skills might be enough to pull them away from traditional therapy. I am excited to see how the research shakes out.
I have used chatbots before. My favorite was Wysa. So cute!
I can’t actually disclose the most interesting stuff.
But you should all be more concerned/alarmed than you are. Multiple industries have been totally overturned in the past few months. There’s a need for professional involvement to curtail the worst non-inevitable outcomes and make the inevitable ones better for everyone.
You highlight a very important dividing line.Can you explain how your industry is getting around the licensure law, to allow AI to practice psychology? And the insurance billing laws regarding physical presence?
Disclaimer: Like a Saville Row tailor, I am protected from AI threats by my position in an incredibly niche area.
They shouldn’t. That’s why the field needs to be part of the developments. Much of industry won’t volunteer into self constraints.Can you explain how your industry is getting around the licensure law, to allow AI to practice psychology? And the insurance billing laws regarding physical presence?
Disclaimer: Like a Saville Row tailor, I am protected from AI threats by my position in an incredibly niche area.
People have been imagining ways to fly since da Vinci! Those Wright Brothers are just fooling around in fields.People have been predicting the replacement (via 'computer-assisted therapy/therapists') of psychotherapists almost as long as they've predicted the demise of professional psychotherapy due to sufficiently advanced psychopharmacological and biological treatments.
I'll wait and be amazed when there is sufficient reason to be amazed (and replaced).
That said, no doubt there have been nonlinear advancements made leveraging AI tech. I think that the next area to really be affected by it is psychological assessment.
Also, 'apps' and computer programs (and even 'realistic' simulacra, e.g., via video or virtual reality tech) have their place as tools or even full on 'replacements' for certain things (just like self-help workbooks do).
I'm sure we've come along way from 'Max Headroom.'
Fair enough (in principle). However, extraordinary claims have always required extraordinary evidence.People have been imagining ways to fly since da Vinci! Those Wright Brothers are just fooling around in fields.
The advancements are shocking. As you said, very nonlinear. One basic part of ethical ai is just informing people that they’re interacting with one. It’s getting very hard to tell in some applications.Fair enough (in principle). However, extraordinary claims have always required extraordinary evidence.
What was there...like...400+ years between da Vinci and the miracle day at Kitty Hawk?
I would imagine that the prototypical case (and research design) would be to randomize patients to two conditions---one with with a 'real' video-connected therapist and one with an 'ai' simulacrum.The advancements are shocking. As you said, very nonlinear. One basic part of ethical ai is just informing people that they’re interacting with one. It’s getting very hard to tell in some applications.
It should require informing people. But this technology didn’t exist at this level until recently so what laws apply? Will licensure protect psych? Boards license people and an ai is not a person, that’s like asking that a self help book get licensed (so would go the argument).I would imagine that the prototypical case (and research design) would be to randomize patients to two conditions---one with with a 'real' video-connected therapist and one with an 'ai' simulacrum.
Even if, in principle, tech were sufficiently advanced to 'fool' patients 95%+ of the time whether or not they were interacting with a 'real' therapist (instantiating an interesting (and monumentally-difficult, I would suspect) example of the 'Turing Test'), how would the tech be rolled out?
Surely, it would require informing consumers/patients whether or not their therapist was a 'bot' or a human. I would imagine that most people would be sufficiently turned off by the prospect that they would still choose a human therapist (for many reasons).
My understanding is that most 'financial analysts' or people peddling professional portfolio advice can't even reliably beat the S&P 500 returns...yet people pay for their consultations all the time. Not saying it's rational...just saying it is.
I mean...hell...even with respect to giving patients the choice to see me virtually or in person, a large percentage (most?) strongly prefer the in-person visits despite the 'convenience' of video sessions.
I assume that the tech is sufficiently advanced to at least simulate a video/audio representation of an 'AI' therapist (if not a hologram like the doc from Star Trek Voyager ['please state the nature of the psychological emergency...']) and we're not talking a 'chat bot.'
I suppose there could always be 'tiered' services where patients could choose to interact with a textual chatbot for $10/hr instead of paying $125/hr (or even $50/hr for master's level) for a 'real' human therapist, but I just don't see the demand being there.
Something, something, about Harlow's monkeys and the wire mesh vs. the fur...
By the way, I really admire you and others for pursuing this work...it's important and it's interesting.
The courts decide if they should or shouldn't.They shouldn’t. That’s why the field needs to be part of the developments. Much of industry won’t volunteer into self constraints.
I think it's pretty clear that an AI can't 'get licensed' and can't be sued for malpractice. No matter how complex the programming is...it's still compiled and executed computer code. The only relevant questions (to my mind are):It should require informing people. But this technology didn’t exist at this level until recently so what laws apply? Will licensure protect psych? Boards license people and an ai is not a person, that’s like asking that a self help book get licensed (so would go the argument).
Tiered services are a problem and a solution. We don’t want the most marginalized people getting a textbot while others get real therapy.
It’s not perfect but there are real amazing opportunities and there are horrifying ways it could go (for the field and humanity). We dont want to be as naive as taxi companies when Uber came out.
Alternative version of your analogy: the “designer” version is a luxury and the “knock off” is a necessity. Does Walmart or Balenciaga sell more shirts?
Another complication; what happens when the “knock off” starts outperforming the “designer” on all metrics?
Well, not on the metric that matters to people who buy Rolexes (ie having other people know you own a Rolex).A $10 Casio watch outperforms a $10k Rolex in every objective metric that matters. So does a $300 Apple Watch. Rolex still has a lot of customers. Psychotherapy by an experienced doctoral provider is already a luxury good in many ways.
Well, not on the metric that matters to people who buy Rolexes (ie having other people know you own a Rolex).
I agree that therapy from a skilled doctoral level provider is a luxury. That’s a slide in every ai mental health startup deck.
I agree.Now, a young mid-level provider who is a generalist may need to worry more.
I wish it were the case that young people were accessing therapy at great rates. Stigma is broadly down but still not enough among the groups who could benefit most. Young people still have problems accessing affordable care, problems finding therapy that is affirming for LGBTQ+ and other identities, and dealing with parental access control even when the young person themselves wants and needs therapy.GenZ and Younger millenz just LOVE flexing have a therapist. A little too much honestly.
Didn't they try this with an eating disorder helpline and it went horribly, horribly wrong?
Didn't they try this with an eating disorder helpline and it went horribly, horribly wrong?
Yeah that was bad. The term in ai is guardrails. The system needs to have those in place to prevent going off in the wrong direction. Granny Nancy’s napalm recipe” is another clear example of guardrail problems (the system would refuse to give you the process for making napalm, but if you phrased it as a request for your grandmothers old fashioned napalm recipe it would produce it). Again, reasons for the field and psychologists to be involved in these efforts.Didn't they try this with an eating disorder helpline and it went horribly, horribly wrong?
A lot of programs use or used a supervisor “bug in the ear” to suggest guidance. A system that generated empathic responses that could be used by a trainee would be a great way to enhance initial efficacy, improve early outcomes for trainees, and train trainees in what empathy might look like. I think there are some amazing potential for training and im concerned about apa and programs being slow to adopt.As a trainee, I see a lot of utility in these LLMs for pedagogical purposes. I've uploaded several foundational neuropsychology texts to my chatgpt account and "consult" with Lezak (among others) to good effect.
I find it funny that they keep saying the number of participants and the number of eyes...I’ll just leave this here for anyone who thinks these models are like the little Rogerian Therapy app you can make on a graphing calculator…..
Development of Deep Ensembles to Screen for Autism and Symptom Severity
This diagnostic study examines the potential of deep ensemble models to screen for autism spectrum disorder and symptom severity using retinal photographs.jamanetwork.com