I don't understand why more people don't challenge this supposed research that shows NPs providing "care equal to or better than a physician" when the individual research projects support no such claim.
A typical study randomly assigns patients who have already been diagnosed with a specific disease (I think the study Mundinger authored and is always throwing around used specifically hypertension, type II diabetes and asthma) to either physicians or NPs, and usually includes fewer than a dozen NPs and a few hundred patients. They track patients for a relatively short period (6 or 12 months, maybe), call it long-term follow-up, and then measure patient satisfaction surveys and resource utilization (number of specialist/ER visits, etc.) or basic lab numbers. They get comparable results and then somehow declare that their study shows NPs being "equal" to physicians. It's absurd. And if I hear or read one more person saying "studies show NPs to deliver care equal to or better than physicians," I'm going to scream.
Frankly, these studies could probably include a third group of RNs or medical assistants, plop patients with a pre-diagnosed condition on their lap and direct them to follow cookbook treatment algorithms and end up with another group providing "comparable care" to physicians. Teach them to smile and be on time and make good small talk and their "patient satisfaction" scores would be just as high.
There are so many holes in these research designs that I think people embarrass themselves without realizing it when they reference them. And yet the faulty conclusions people extrapolate from them seem to have found a permanent hold in the DNP advocacy/propaganda machine, and they go to their state legislatures and repeat this drivel unchallenged.