Not that at all; it's just a great example of how inane the counter-argument is.
I have no real skin in this game. I'm a licensed psychologist in Canada at the master's level. No new legislation is going to impact me. The only thing I care about is that the provinces/states retain their right to dictate licensing standards, and that they don't cave to pressure from the APA/CPA. This is for two reasons. 1. People aren't exactly jumping at the opportunity to move to Alabama, Wyoming, or Manitoba to practice. They need flexibility. 2. I've seen no proof that additional training (beyond a certain point) produces significantly better clinicians.
In regard to #2, this is from Science and Pseudoscience in Clinical Psychology (with some excerpts taken out):
Clinical lore suggests that psychologists and mental health professionals learn from experience by working with clients in clinical settings. Experienced clinicians are presumed to make more accurate and valid assessments of personality and psychopathology than less experienced graduate students and mental health providers. Similarly, presumed experts are assumed to be more competent providers of psychological interventions than other clinicians. Psychology training programs adhere to these assumptions, and common supervisory practices emphasize the value of experience in the development of competent clinicians. The inherent message to mental health trainees is that clinical acumen develops over time and with increased exposure to various clients and presenting problems.
Narrative reviews of clinical judgment have concluded that when clinicians are given identical sets of information, experienced clinicians are no more accurate than less experienced clinicians and graduate students, though they may be better at structuring judgment tasks (e.g., generating questions during an interview; Dawes, 1994; Garb, 1989, 1998, 2005; Garb & Schramke, 1996; Goldberg, 1968; Tracey, Wampold, Lichtenberg, & Goodyear, 2014; Wiggins, 1973; see also Meehl, 1997).
Similarly, a recent meta-analysis (Spengler et al., 2009) found only a small positive effect for training and experience. The authors synthesized results from 75 clinical judgment studies. A finding they emphasized is that the combined effect of training and experience was small but positive (d = 0.12; this is equivalent to a correlation of about r = .06). Also, Spengler et al. concluded that having specific training and experience with a judgment task was unrelated to validity.
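The d-to-r equivalence quoted above can be checked with the standard conversion r = d / sqrt(d² + 4), which assumes two equal-sized groups; this is a quick sketch (the function name is my own, not from the book):

```python
import math

def cohens_d_to_r(d: float) -> float:
    """Convert Cohen's d to a point-biserial r, assuming equal group sizes."""
    return d / math.sqrt(d ** 2 + 4)

# The meta-analytic effect reported by Spengler et al. (2009)
r = cohens_d_to_r(0.12)
print(round(r, 2))  # → 0.06
```

So an effect of d = 0.12 does indeed correspond to roughly r = .06, i.e., training and experience share well under 1% of variance with judgment accuracy.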
Experienced versus Less Experienced Clinicians
In conclusion, when clinicians are given identical sets of information, experienced clinicians are generally no more accurate than less experienced clinicians. When practitioners are required to search for information or decide what judgments should be made, experience may be related to validity for some judgment tasks.
Clinicians versus Trainees
Results have been no more promising when clinicians have been compared to trainees. In one study (Hannan et al., 2005; also see Whipple & Lambert, 2011, for additional details), 20 trainees and 20 licensed professionals at a university outpatient clinic were instructed to predict outcomes for clients they were seeing in counseling. In particular, they were instructed to predict which of their clients would be worse off at the end of treatment. Forty of 550 patients deteriorated by the end of treatment (as measured by the Outcome Questionnaire–45 [OQ-45]; Lambert, 2004). Only 3 of the 550 clients had been predicted by their therapist to leave treatment worse off than when they began (one of the three predictions was correct). The experienced therapists did not identify a single client who had deteriorated.
Clinicians versus Graduate Students
Studies have revealed no differences in accuracy between experienced clinicians and graduate students when judgments are made on the basis of interview data (Anthony, 1968; Schinka & Sines, 1974), biographical and history information (Oskamp, 1965; Witteman & van den Bercken, 2007), behavioral observation data (Garner & Smith, 1976; Walker & Lewine, 1990), data from therapy sessions (Brenner & Howard, 1976), MMPI protocols (Chandler, 1970; Danet, 1965; Goldberg, 1965, 1968; Graham, 1967, 1971; Oskamp, 1962; Walters et al., 1988; Whitehead, 1985), projective-drawing protocols (Levenberg, 1975; Schaeffer, 1964; Stricker, 1967), Rorschach protocols (Gadol, 1969; Turner, 1966; Whitehead, 1985; see also Hunsley, Lee, Wood, & Taylor, Chapter 3, this volume), screening instruments for detecting neurological impairment (Goldberg, 1959; Leli & Filskov, 1981, 1984; Robiner, 1978), and all of the data that clinical and counseling psychologists usually have available in clinical practice (Johnston & McNeal, 1967).
Clinicians and Graduate Students versus Lay Judges
When given psychometric data, whether clinicians and graduate students were more accurate than lay judges (e.g., undergraduates, secretaries) depended on the type of test data. Psychologists were not more accurate than lay judges when they were given results from projective tests, including results from the Rorschach Inkblot Method and Human Figure Drawings (Cressen, 1975; Gadol, 1969; Hiler & Nesvig, 1965; Levenberg, 1975; Schaeffer, 1964; Walker & Linden, 1967). Nor were clinical psychologists more accurate than lay judges when the task was to use screening instruments (e.g., the Bender–Gestalt test) to detect neurological impairment (Goldberg, 1959; Leli & Filskov, 1981, 1984; Nadler, Fink, Shontz, & Brink, 1959; Robiner, 1978). For example, in one of these studies (Goldberg, 1959), clinical psychologists were not more accurate than their own secretaries. Finally, when given MMPI protocols, psychologists and graduate students were more accurate than lay judges (Aronson & Akamatsu, 1981; Goldberg, 1968; Oskamp, 1962). For example, Aronson and Akamatsu (1981) compared the ability of graduate and undergraduate students to perform Q-sorts describing the personality characteristics of psychiatric patients on the basis of MMPI protocols. The students' level of training differed in that graduate students had taken coursework on the MMPI and had some experience administering and/or interpreting the instrument, whereas undergraduates had only attended two lectures on it. Criterion ratings were based on family and patient interviews. Correlations between judges' ratings and criterion ratings were .44 for graduate students and .24 for undergraduates; graduate student ratings were significantly more accurate.
Scott O. Lilienfeld, Steven Jay Lynn, Jeffrey M. Lohr. Science and Pseudoscience in Clinical Psychology, Second Edition (p. 1). Guilford Publications. Kindle Edition.