On Psychotherapy Research
(SEPI Forum, March-April 2004)
(Editors' Note) A
lively debate was kindled on the SEPI listserv by Paul
Wachtel's comment on the New York Times article
"Defying Psychiatric Wisdom, These Skeptics Say 'Prove
It'", by Erica Goode, March 9, 2004. The article
underscored "a widening divide in the field between
researchers, who rely on controlled trials and other
statistical methods of determining whether a
therapeutic technique works, and practitioners, who
are often guided by clinical experience and intuition
rather than scientific evidence." It cited the
researchers' view that "psychology
should have clinical practice guidelines, and
psychotherapists should favor treatments that are
backed by evidence from controlled clinical trials
over treatment whose effectiveness is supported by
anecdotes and case histories only." On the other hand,
"Some clinicians say that their work with troubled
patients can never be captured by experimental trials
and that traditional science has little relevance in
the consulting room, where psychotherapists often deal
with problems far more complex than those addressed by
'cookbook' psychotherapies." The article finally
quoted Ronald Levant, president-elect of the American
Psychological Association, according to whom
"Lilienfeld and others had gone overboard in their
enthusiasm for scientific vetting of therapeutic
techniques." More on Levant's statements below, in our
March 20 contribution. We want to thank all 13
participants in this lively debate, who are the
following (listed in the order in which they
intervened): Paul
Wachtel, Gerald
Davison,
George Stricker,
Tullio Carere-Comes, Hilde Rapp,
Paolo Migone,
Mardi Horowitz, Tyler
Carpenter,
Franz Caspar, Alan Nathan,
Zoltan Gross, Stephan Tobin,
and Stanley Messer.
Paul
Wachtel, 10 March 2004
Some of you
may have seen the article in the New York Times
on Tuesday about the nuevo fanaticism when
it comes to so-called "empirical validation" of
psychological treatments. Being empirically
responsible is a good thing, but when it gets
confused with a tendentious definition of what
empirical validation really is, it becomes something
else entirely. For those of you to whom strictly
randomized trials of DSM-defined disorders, with
manuals as the only acceptable form of adherence
check, sound like ideology disguised as science, you
might enjoy the following:
- Parachute use to prevent death and
major trauma related to gravitational challenge:
systematic review of randomized controlled
trials. Smith G.C. & Pell J.P. (BMJ, 2003,
327: 1459-1461), Department of Obstetrics and
Gynaecology, Cambridge University, Cambridge CB2 2QQ.
E-mail: <gcss2@cam.ac.uk>
- OBJECTIVES: To determine whether
parachutes are effective in preventing major
trauma related to gravitational challenge.
- DESIGN: Systematic review of
randomized controlled trials.
- DATA SOURCES: Medline, Web of
Science, Embase, and the Cochrane Library
databases; appropriate internet sites and citation
lists.
- STUDY SELECTION: Studies showing the
effects of using a parachute during free fall.
- MAIN OUTCOME MEASURE: Death or major
trauma, defined as an injury severity score >
15.
- RESULTS: We were unable to identify
any randomized controlled trials of parachute
intervention.
- CONCLUSIONS: As with many
interventions intended to prevent ill health, the
effectiveness of parachutes has not been subjected
to rigorous evaluation by using randomized
controlled trials. Advocates of evidence based
medicine have criticized the adoption of
interventions evaluated by using only
observational data. We think that everyone might
benefit if the most radical protagonists of
evidence based medicine organized and participated
in a double blind, randomized, placebo controlled,
crossover trial of the parachute.
Gerald Davison, 10 March 2004
Paul, with
respect and affection, the analogy to parachutes is
specious. As you know, I (and also with Marv
Goldfried) have written on the limitations of
randomized trials, so I don't swallow it whole. But
observational data of deaths from parachutes not
deploying is hardly like the "observational" reports
that clinicians make of the outcomes (and processes)
of their interventions. To quote some wag, the
plural of "anecdote" is not data.
George Stricker, 10 March 2004
As luck would
have it, the Times article earlier this week
stimulated a conversation among my students about
the topic, and I just sent the following note to
them:
There is a
wide set of conundrums associated with this whole
controversy. All I can offer is my opinion, and
others are entitled to theirs. First, if someone
could give me a cookbook that said, if you do X, Y
will happen, I would grab it in a minute. Many
people would feel that such a cookbook would take
all the glamour and excitement out of treatment, but
we aren't (or shouldn't be) doing it for our own
stimulation. Having said that, I also don't believe
such a cookbook exists or is likely to, although
there are some highly specific conditions that have
some fairly well developed effective procedures (and
not to use them because they aren't interesting is a
gross disservice to the patient). Same with testing
- if there was a true litmus test for any pathology
or dynamics, I'd want it - I just don't know where
to find it. Therefore, what we are thrown back on is
to do the best we can, using whatever information we
have. The twin dangers of this position are thinking
we have more or less information than we do. One
group, described in the article, suggests that
anything that doesn't meet strict scientific
scrutiny is terrible, overlooking that, held to that
standard, we would do virtually nothing (as would
physicians for most of what they do). The other
group, recognizing the paucity of evidence, throws
out any evidence that does exist, denying their
patients the best care available because it isn't
consistent with their prejudice, and dismissing all
knowledge because knowledge is imperfect. That
brings us back to Aristotle and the Golden Mean.
Gerald
Davison, 10 March 2004
George, neither
extreme is necessary or desirable. Even those who
embrace treatment manuals are well aware of their
limitations and of the need for idiographic analyses
of specific cases. What I personally object to is
President-Elect Levant espousing a public position
against "too much science," or however he put it. I
believe also that those who see no value in RCT's put
forward an epistemology and specific operational
suggestions on how to decide what we know or have some
idea of what we may personally favor.
Maybe it has to
do with where one starts
in deciding how to proceed with a patient. Where does
one place his or her bets? What kind of data does one
find persuasive, helpful, etc.?
Tullio
Carere-Comes, 10 March 2004
Hi Paul, thank
you for calling our attention to the New York
Times Tuesday article. I have enjoyed the parachute
analogy, and don't understand why Gerald Davison
calls it specious. It simply reminds us that science
is not always and not necessarily experimental. How
could historiography, for instance, ever be
experimental? Scientific methods should obviously be
adapted to their objects, though this might not be
obvious to the Randomized Clinical Trials'
ideologists (to whom, as they often maintain, "there
is only one science" - the experimental one, of
course).
I agree with
George's Golden Mean between clinical experience and
intuition on one side, and scientific evidence on
the other. Provided that we don't mistake RCT
evidence for scientific evidence: it seems to me
that there is not much science in RCT ideology. RCT
ideology imagines that psychotherapy should be like
any other medical treatment, i.e. specific disorders
should be treated by means of (empirically
supported) specific procedures. In this fantasy
psychotherapy should be a bunch of short-term
manualized treatments (ideally, one for every
DSM-defined disorder). They should be short-term,
because in genuine, open-ended psychotherapy the
therapist is bound to change his/her approach at
every step in every session, to meet his/her
patient's needs. RCTs are about imaginary objects,
not real therapy. Genuine therapy cannot be
manualized, because it works with real, not
standardized, people and conditions. Ergo, RCT
evidence is of little use for real psychotherapy,
while it brings with itself a significant risk of
theoretical abuse. I too would object to Levant's
position against "too much science": to me there is
not too much science, there is only too much bad
science.
Gerald
Davison, 11 March 2004
We will have
to agree to disagree on the aptness of Paul's
parachute analogy, though one member of this
listserv emailed me privately to suggest that Paul
did not mean it seriously.
That aside, I
would agree that RCT's are not the only method for
concluding that we know something. An RCT, for
example, gives only limited information on what to do
at a specific point in time and space with a
particular patient. To be sure, there are those
favoring RCT's who might rightly be called ideologues,
but I would respectfully suggest that this is a straw
person and not worth our time. And calling people who
see some value in RCT's ideologues is unproductive ad
hominem and will not get us anywhere.
Also, treatment
manuals vary according to how "ballistic" they are as
compared with how responsive they are to the give-and-take of therapy
interactions. I am familiar with many manuals and few
if any of them distort reality in the way you are
satirizing them.
So let me take
the following tack: What epistemological criteria do
you accept as defining something we assert we know
in psychotherapy? Are the reports of clinicians of
their experiences with patients enough for us? If
so, how do we decide which reports have validity
and/or heuristic value? Basically, what are the
alternatives?
Paul
Wachtel, 11 March 2004
Jerry, George,
Tullio, and paratroopers, Jerry, to begin with, yes,
the person who emailed you privately was correct, at
least in large measure. I mainly forwarded the piece
because I thought it was funny. But you are correct
nonetheless to take it seriously, since I did also
think it did a nice job of skewering something that
needed skewering. So the question is: what is it
that needs skewering and satire? It's not the need
for science. I am in complete agreement that we need
more science, more solid and carefully evaluated
evidence, not less. If Ron Levant said that the
problem is "too much science," then shame on him (I
haven't read what he actually said). And I found
just as delightful as the parachute study your quip
about anecdote and data, which is similarly
funny precisely because it
is so on the mark. There is much too much "here's
what I remember about what I felt about what my
patient remembered about what he felt and besides,
Kernberg said it so that's a cite to a data source."
But there is
also a difference between science and ritual and
between science and self-serving propaganda. My
objection is in no way at all to the call for
careful and systematic study in the most rigorous
way that is suited to the phenomenon being studied.
My problem with the narrow and tendentious way that
empirical validation is often defined is that
manuals are just one way of evaluating
whether a treatment approach was followed and that
manuals are only appropriate for evaluating whether
a manualized treatment was being employed
properly. By the requirement of manualization - and
the misleading implication that that is the only way
to check on treatment adherence - the very
definition rules out even the possibility of
empirically evaluating any treatment that is not
organized around a manual. (And by the way, the very
use of manuals by some psychodynamic
therapy researchers seems to me mainly a concession
to the political and economic
pressures to do so. Quite seriously, I only
discovered years after they were published that
several of the most prominent "treatment manuals"
used in psychodynamic therapy research were
manuals. I had read them, liked them, but thought
they were "books." Treatment manuals that are
"responsive to the give and take of therapy
interactions" or the individuality of the patient
are so largely because they are NOT really "manuals"
except in the political sense that they "pass" and
get the writer grants.)
I also find
tendentious the requirement, cited in many places,
that treatment evaluations be limited to the
treatment of a specific disorder. First of all, much
of what we treat is only in a very limited sense
specific disorders (there are some exceptions, to be
sure, and for them I do think that the treatments
that have been designed and evaluated for those
disorders - often treatments that do lend
themselves appropriately to manuals - should be the
ones used). But for most of the DSM, I think the
categories will look to psychologists looking back
100 years from now like the equivalent of earth,
air, fire, and water, only with many more categories
to satisfy the various constituencies.
So how should
we proceed? In MANY ways. There should be a wide
diversity of methods used to evaluate, each
appropriate to the question and the phenomenon being
pursued. This is NOT really difficult. Even
randomized trials, for example, can be pursued by
randomly assigning half the patients who come to the
clinic to one treatment and half to another -
WITHOUT categorizing them by whether they are earth,
air, fire, or water - and see which does better with
the general mix that comes to the clinic.
Should we STOP
there? Absolutely not. The more specificity we can
achieve the better. I'm in full agreement with that.
If we can specify (the old saw) that particular
treatments are better for particular patients, etc.,
obviously all the better. But if the only research
that "COUNTS" - and the only research that is FUNDED
- breaks the patients up into very narrow and
specific DSM categories (and, in many instances, in
addition, eliminates all the patients who have the
"complications" and "confounding" features that just
happen to be what plague most people who go to
therapists), then that is ideology and politics
masquerading as science.
Similarly,
concern with adherence (with whether the treatment
administered is the one the researcher claims it
is) does not require manuals. Indeed,
even where manuals are appropriate, they are only as
good as the ratings of whether the treatment has
adhered to the manual. In exactly the same fashion,
raters could evaluate, based on session samples,
whether the treatment was truly a classical
analysis, an object relations approach, a Gestalt
approach, whatever.
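To make the rating idea concrete, here is a minimal sketch, in Python, of how agreement between two such adherence raters might be quantified with Cohen's kappa. The raters, labels, and data are hypothetical, invented purely for illustration; nothing here comes from an actual adherence study.

```python
# Illustrative sketch only: two hypothetical raters classify session samples
# by therapeutic approach; Cohen's kappa gives their chance-corrected agreement.
from sklearn.metrics import cohen_kappa_score

rater_1 = ["analytic", "gestalt", "analytic", "object_relations", "gestalt", "analytic"]
rater_2 = ["analytic", "gestalt", "object_relations", "object_relations", "gestalt", "analytic"]

kappa = cohen_kappa_score(rater_1, rater_2)
print(f"Cohen's kappa = {kappa:.2f}")  # 1.0 = perfect agreement, 0 = chance level
```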
OK, enough,
enough. Life calls! Clearly Jerry, you stimulated
me, as usual. Gotta get you to more SEPI meetings so
we can talk about these things in person. There's
only so long one can go on in email without carpal
tunnel syndrome (for which I WOULD probably want a
manualized treatment that has been thru rigorous
RCTs).
Gerald
Davison, 11 March 2004
Paul, as
usual, well put, especially vis-à-vis the stranglehold that
the DSM has on clinical research and grant-getting.
Hilde
Rapp, 11 March 2004
Tullio, it may
be worth noting that some of the fiercest critics of
RCTs come from the pharmaceutical and medical
constituency itself. One type of criticism relates
to the incomparability of the demographic
characteristics of the experimental sample to those
of the target population that is meant to benefit
from the intervention under test.
For example,
basing judgments about the safety, appropriateness,
and dosage of drugs on tests with white middle-class
college students, when the drugs are intended to
benefit the elderly, children, or Asians with
liver-enzyme deficiencies, can actually endanger
lives. Using RCT data from the nineteen-seventies,
collected from 5,000 or so middle-class residents of
beautiful Framingham in New England, to standardize
the Framingham risk score for heart disease has been
shown to lead to wildly inaccurate predictions of
expectable deaths in a British sample, where these
have been evaluated against follow-up data from
longitudinal studies (Wellcome Trust).
It may be
salutary to remember that David Sackett, the
'father' of EBM defined evidence based medicine in
1996 as follows: 'the conscientious, explicit and
judicious use of current best evidence in making
decisions about the care of individual patients'
(Sackett D, Rosenberg W, Gray J, Haynes R,
Richardson W. Evidence based medicine: what it is
and what it isn't. Br. Med. J., 1996;
312: 71-72). I'd sign that, wouldn't you?
Some
criticisms relate to the observation that findings
from some beautifully designed and meticulously
carried out trials are less than useful due to poor
construct validity issuing in asking the 'wrong
questions', for instance when trials designed to
support health policy decisions for improving the
health of populations are misused for
decision-making in the care of individual patients -
or vice versa... Horses for courses!
The US Office
of Technology Assessment (established in 1972) has been
instrumental in the development of criteria for
health technology assessment in the US, Canada, the
UK, etc., which make very useful reading...
Tullio
Carere-Comes, 11 March 2004
- On
11-03-2004, "Gerald Davison" wrote:
- So let me take the following tack:
What epistemological criteria do you accept as
defining something we assert we know in
psychotherapy? Are the reports of clinicians of
their experiences with patients enough for us? If
so, how do we decide which reports have validity
and/or heuristic value? Basically, what are the
alternatives?
First premise:
When I give a medicine, I get an effect that is only
partially due to the specific action of the
substance. A significant part of the effect, often
the greatest part of it, is due to a non-specific
action of the therapeutic intervention (placebo).
Second premise: When I administer a
psychotherapeutic procedure, I get an effect that is
uncertainly related to the hypothesized specific
action of the procedure, as the greater impact is
due to the context in which the procedure is
employed. To put it simply: patients do not respond
to what I think I am doing, they respond to what
THEY think is happening in the therapeutic
interaction. The gap between what I think and what
they think is much greater than in pharmacological
treatments, to the point that in a
meta-meta-analysis that compares different therapies
the effect size appears to be almost irrelevant
(Luborsky L., Rosenthal R., Diguer L., Andrusyna
T.P., Berman J.S., Levitt J.T., Seligman D.A. &
Krause E.D. (2002). The Dodo bird verdict is alive
and well – mostly. Clinical
Psychology: Science and Practice, 9, 1: 2-12.
Commentaries [pp. 13-34]: D.L. Chambless; B.J.
Rounsaville & K.M. Carroll; S.B. Messer & B.E.
Wampold; K.J. Schneider; D.F. Klein; L.E. Beutler).
A huge amount of research data supporting the second
premise can be found in Wampold (The
Great Psychotherapy Debate: Models, Methods and
Findings.
Mahwah, NJ: Lawrence Erlbaum, 2001).
In other words, psychotherapy IS placebo, as someone
has poignantly put it.
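As a concrete reference point, here is a minimal sketch, in Python, of the effect-size statistic (Cohen's d) that such comparative meta-analyses aggregate. The outcome scores below are invented for illustration; a d near zero is what the Dodo bird verdict reports for therapy-versus-therapy comparisons.

```python
# Illustrative sketch with made-up outcome scores for two therapies.
import numpy as np

therapy_a = np.array([3.1, 2.8, 3.4, 3.0, 2.9])  # hypothetical outcome ratings
therapy_b = np.array([3.0, 3.2, 2.7, 3.1, 3.3])

# Cohen's d: mean difference divided by the pooled standard deviation
pooled_sd = np.sqrt((therapy_a.var(ddof=1) + therapy_b.var(ddof=1)) / 2)
d = (therapy_a.mean() - therapy_b.mean()) / pooled_sd
print(f"Cohen's d = {d:+.2f}")  # near zero: neither therapy "wins"
```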
These premises
do not make me jump to the conclusion that empirical
research is useless. On the contrary it is useful,
to the extent that it has corroborated what many of
us suspected, i.e. that specific factors play a
minor role in psychotherapy, compared to the common
factors' major role. It can be even more useful, if
it can help us look inside psychotherapy qua
"placebo". Psychotherapy research should move in the
very opposite direction from medical research:
whereas the latter is only concerned with the
objective activity of a therapeutic ingredient, and
considers the subjective side (the placebo) as
deserving little if any attention, the opposite is
or should be true in the former.
How should it
work? Let us consider for instance a basic
constituent of "placebo", the common factor "secure
base" or "secure attachment" (I maintain it is a
common factor to the extent that no therapist could
work without it, whatever his or her theoretical
allegiance). It can be studied through empirical
research of the correlational (not experimental)
type. For instance, at the end of a session of a
real (not experimental or manualized) therapy both
patient and therapist fill in a questionnaire in which
they rate on a 5-point scale the strength of the
patient's need for secure base, and on another
5-point scale the quality of the therapist's
response to that need. The concordance between
patient's and therapist's ratings can be correlated
with the session outcome, itself rated on a 5-point
scale. Many other correlations like this can be
studied. As a result of such correlational research,
my heuristic intuition of the existence of a common
factor called "secure base" or something like that,
could be empirically corroborated (or falsified).
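To make the proposed design concrete, here is a minimal sketch, in Python, of the correlational analysis described above. Everything in it is a hypothetical placeholder (sample size, ratings, variable names); it shows only the shape of the computation, not an actual study.

```python
# Illustrative sketch: per-session 1-5 ratings by patient and therapist of the
# "secure base" need and response; concordance (the negative absolute gap
# between the two ratings) is correlated with a 1-5 session-outcome rating.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_sessions = 40

patient_need = rng.integers(1, 6, n_sessions)        # patient's rating
therapist_response = rng.integers(1, 6, n_sessions)  # therapist's rating
outcome = rng.integers(1, 6, n_sessions)             # session outcome rating

concordance = -np.abs(patient_need - therapist_response)  # higher = closer

rho, p = spearmanr(concordance, outcome)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```

With real session data in place of the random placeholders, a reliably positive correlation would corroborate (and a null one would tell against) the hypothesized "secure base" common factor.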
Unfortunately
such research (on which I am working, by the way)
is unlikely to be funded or supported by public
agencies, because it does not meet the criteria of
experimental research that currently rule our field.
Have I answered your question, Jerry?
Paolo
Migone, 11 March 2004
Regarding the
problem of the importance of science, RCT,
Empirically Supported Treatments (EST) etc., I would
like to say that in a future issue of Psychological
Bulletin, coming out in May 2004, there will
be a very important article by Drew Westen, Kate
Morrison, and Heather Thompson-Brenner, titled "The
empirical status of empirically supported
psychotherapies: assumptions, findings, and
reporting in controlled clinical trials", which is a
detailed critique of the EST ideology. In my opinion
this paper will be a point of reference for all
interested in this issue (Drew Westen sent me this
paper, which he began about 5 years ago, and I am
trying to obtain the rights for the Italian
translation). What I find interesting in this
article is that Westen et al. are not at all
against EST, or against science, or against the need
and importance of rigorous outcome research. On the
contrary, they strongly believe in science, and it
is just for this reason that they criticize some
shortcomings of one kind of psychotherapy research,
in order to "improve" the field. And,
most interestingly, they use only
empirical data coming from the same
sources of evidence as ESTs, in order to contradict
some aspects of EST ideology and background. For
example, the various problems mentioned by Paul
Wachtel (such as the paradox of psychotherapy
research, i.e., the more "reliable" and well done
a study is, the more it loses "validity", etc.) are
examined in detail by Westen et al.
In other
words, I do not think at all that there is a
dichotomy between science and something else
(intuition, art, or whatever); the dichotomy
is only between different kinds of methodologies of
scientific research, which can be more or less
sophisticated.
Mardi
Horowitz, 12 March 2004
Sorry, I did
not read all the empirical e-mails so someone may
have made this point. I like the direction of the
discussion; it bifurcates well into a scientific
dialogue and a political one. As to the former, I
think we can draw the line at clinical reports of
only one case. While I do that all the time, I would
not judge the results empirical, UNLESS something
else happens: other clinicians have to nod in
agreement, as in "now that you formulate it thus and
so, I concur from my own experience." We have very few
papers where anyone tries to see what percent agree,
how much, on what basis. I think that is the fuzzy
edge, maybe and maybe not yet "empirical".
Certainly, however, there is excessive emphasis on
the controlled clinical trial. In my opinion that is
the last thing to do, after a series of much less
expensive correlational, stepwise multiple-regression,
and descriptive studies (plus studies of the
reliability of the observations made): only
then can the multiple interactive variables be
examined. A controlled trial is the last
confirmatory step, yet such trials are too often the
only thing funded, and too constrained by an already
outmoded diagnostic system, whose categorical
disorders are not homogeneous in causation or
course.
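As one illustration of the inexpensive steps that could precede a trial, here is a rough Python sketch of a forward stepwise selection over process variables predicting outcome. The variables, data, and stopping threshold are all invented for the example; it is not a reconstruction of any study mentioned here.

```python
# Illustrative sketch: forward stepwise selection keeps adding the process
# variable that most improves R^2 until no candidate gives a meaningful gain.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 120
process = {                      # hypothetical per-case process ratings
    "alliance": rng.normal(size=n),
    "adherence": rng.normal(size=n),
    "emotional_depth": rng.normal(size=n),
}
outcome = (0.6 * process["alliance"] + 0.3 * process["emotional_depth"]
           + rng.normal(scale=0.8, size=n))

selected, remaining, best_r2 = [], list(process), 0.0
while remaining:
    scores = {}
    for var in remaining:
        X = np.column_stack([process[v] for v in selected + [var]])
        scores[var] = LinearRegression().fit(X, outcome).score(X, outcome)
    var, r2 = max(scores.items(), key=lambda kv: kv[1])
    if r2 - best_r2 < 0.01:      # stop when the gain is negligible
        break
    selected.append(var)
    remaining.remove(var)
    best_r2 = r2

print("selected predictors:", selected, f"R^2 = {best_r2:.2f}")
```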
Gerald
Davison, 12 March 2004
Well said!
Tullio
Carere-Comes, 12 March 2004
Thank you Hilde
for your data. I would add that it has been shown (N
Engl J Med, 2000, 342) that if observational studies
are well designed, their results are not very
different from those of well designed RCTs on the same
matter, but they also produce data that RCTs cannot record.
I am not against
RCTs in principle; I am against the dominant position
they have received in both medical and psychological
research, one that by far exceeds their usefulness. We
should get rid of the deleterious Popperian
philosophy, according to which observation is only
good for generating hypotheses to put to empirical
test. Well disciplined observation is a source of
valuable data in its own right. In some cases it may
be worth testing these data through RCTs, but in
psychotherapy research I believe that it is much less
often the case than funding agencies seem to believe.
Tyler
Carpenter, 12 March 2004
Working
effectively in a prison setting has made me even
more of a pragmatist than I was before I started
there. From my point of view, empirically supported
treatments (or whatever they're called - "evidence-based
practice" is the buzzword in correctional
work, I think) provide manuals and protocols that go
beyond what a small group of more highly and
expensively trained personnel could provide, and give
a framework for understanding a lot more about the
patient's symptoms and potential trajectories than I
might have otherwise, given the brief time available to
work. I then see therapy as working at the
interstices or nexus of the systems from inside to
outside the inmate/patient. This involves leveraging
what can be brought to bear in the usual therapeutic
task and may involve consulting to corrections
personnel, getting a med consult to tweak a symptom
dimension, working on the dynamic aspects evoked by
the use of whatever treatment philosophy is brought
to bear.
Hilde
Rapp, 12 March 2004
Dear Tullio, I
entirely agree with you, and I have just written
something where I make a sustained plea for
reinstating observation at the center of our work so
that re-'search' becomes a more systematized aspect
of our general 'search' for truth...
Franz
Caspar, 12 March 2004
Dear
Colleagues, I sure do not want to reiterate well
formulated points in the recent exchange. Just a
general comment. We need more
awareness of what can be said and what cannot be
said based on a particular study or type of study.
The logic of RCTs requires the precise definition of
what the treatment was (which is usually done by
manuals, which would have to be rather narrow to
do their job – I have had experiences similar to
Paul's, having a hard time seeing
"books" as "manuals". In principle, the definition
of what the treatment was could also be achieved by
prescribing heuristic rules and then precisely
describing the actual procedure after termination!).
Logically, RCTs have the highest value in terms of
supporting causal conclusions. Emphasizing the
limits of RCTs cannot do away with this! We will
need additional RCTs in the future to answer some
relevant questions, while it is obvious that other
questions (and some are the most relevant for
practitioners!) need other types of research.
Different types of research can't replace each other
but are rather in a complementary relationship. We
need effectiveness research, research beyond narrow
DSM categories, research on effective principles,
implementation research, service research, research
on therapists and how they can or can't use
empirical evidence, etc. Looking into the
discussions, e.g. in APA divisions, or NIMH, I find
a lot of awareness of this, although the
consequences have not been drawn sufficiently.
Research and the instruction of practitioners cost
money; doing all kinds of complementary types of
research costs even more. So we also need some
realism and patience. In the meantime we need to
acknowledge the gaps in the empirical basis for
deriving our therapeutic action from high-quality
empirical evidence, while striving to keep these
gaps as small as possible.
I'm aware that all this is not new, but sometimes it
seems to me that a realistic, balanced view gets lost. I
nevertheless very much enjoyed the parachute text as a
nice half-serious illustration of a particular point.
Gerald
Davison, 12 March 2004
- On
11-03-2004 14:04, "Hilde Rapp" wrote:
- it may be worth noting that some of
the fiercest critics of RCTs come from the
pharmaceutical and medical constituency itself.
One type of criticism relates to the
incomparability of the demographic characteristics
of the experimental sample to those of the target
population that is meant to benefit from the
intervention under test.
This is a
practical sampling issue. Doesn't one run into the
same problem with observational studies? Issues of
external validity are present in all knowledge-gathering enterprises. It may be salutary to remember
that David Sackett, the 'father' of EBM, defined
evidence based medicine in 1996 as follows: 'the
conscientious, explicit and judicious use of current
best evidence in making decisions about the care of
individual patients'. The issue is how
one defines "evidence." Aye, there's the rub.
Alan
Nathan, 17 March 2004
I have
thoroughly enjoyed the discussion on EST's and have
been meaning to respond. I teach in the clinical
psychology program at Argosy University. I would like
to share this discussion with some of our students if
there are no objections to my downloading comments and
their sources.
I agree most
with the notion that we need to integrate objective
and subjective and experimental and observational data
if we are to arrive at conclusions that have both
heuristic and clinical value. A question that I think
needs to be addressed is what makes an effective
psychotherapist (I am disappointed that I won't be
able to attend the conference in Amsterdam). I've
appreciated the ideas on this matter, as this issue is
especially important to training psychologists. I
think we can operationally define internal processes
such as self-awareness, self-understanding,
dialectical thinking, tolerance for ambiguity, and
other abilities or characteristics that are likely to
be relevant to effective practice across orientations.
I have found that students in our current environment
tend to buy into the idea that there is one right
orientation for each disorder and in doing so leave
themselves out of the equation on their path to
developing a theoretical orientation (not to mention
all of the other factors that are being left out and
have been already discussed). I believe this is a
serious issue to the extent that there seems to be
consensus about the importance of relationship within
psychotherapy practice. Without an understanding and
good enough mastery of one's subjective processes it
seems to me that it would be quite difficult to be
flexibly responsive within the therapeutic
relationship. This is my hypothesis anyway, and I do
think it needs to be put to the test.
Another thought
is that I have found the body of observational data
that has accumulated on infant interpersonal
development to be particularly useful and helpful
toward putting some meat on rather abstract concepts
that we utilize in psychodynamic work in an attempt to
explain the therapeutic process. I am referring to the
work of Stern, Trevarthen, Meltzoff and others. Not to
say that this research "proves" the existence of an
intersubjectivity that can be directly applied to the
therapeutic relationship, but that there is something
important going on within a mutually created process
between mother and infant that might be applicable to
identifying the what and how of studying the
therapeutic interpersonal process.
A final quick
note. I also agree that it is helpful to expose case
studies to dialogue and I would like to suggest
Psychoanalytic Dialogues: A Journal of Relational
Perspectives as a journal that does just that.
Hilde
Rapp, 17 March 2004
I am glad you
brought up how vital it is to resource students to
have a broad understanding of how research contributes
to our work as practitioners. In the context of some
UK research I was involved in a few years back, I
found that it is very useful to match what is
taught in course curricula with what 'employers'
(i.e., services that offer some form of
psychological treatment) think practitioners need to
know, and with what students actually find they need
to learn in order to become competent practitioners.
Alas, all too often we found an alarming
discrepancy between these three different knowledge
and skills bases.
I also agree
very much with your observation that we have much to
learn from academic psychology, especially
developmental psychology and psychopathology.
I wonder whether you
might have some relevant writings to 'throw' into the
integrative pot? I am happy for you to use anything I
may have contributed to the discussion and, by way of
seed grain for references, here is the reference for
the Sackett quote: Sackett D,
Rosenberg W, Gray J, Haynes R, Richardson W. Evidence
based medicine: what it is and what it isn't. Br.
Med. J., 1996; 312: 71-72.
Tullio
Carere-Comes, 20 March 2004
The research
debate calmed down too soon, for my taste. Let me try
to stir the millpond by quoting what Levant said in
the New York Times 03/9 article, for those who
have not read it:
- Dr.
Ronald Levant, president-elect of the American
Psychological Association, said Dr. Lilienfeld and
others had gone overboard in their enthusiasm for
scientific vetting of therapeutic techniques.
- "Their
fervor about science borders on the irrational," Dr.
Levant, a professor of psychology at Nova
Southeastern University in Florida, said. "The
problem in clinical psychology is that we don't have
science to cover everything we do, and that's true
for medicine, as well." He added that psychologists
"recognize that we need to find a way to show we are
being accountable," but that many practitioners
"question the very narrow standards that are being
raised."
- In
fact, at an annual meeting of the psychological
association, a Canadian psychologist reportedly
began a session by asking, "How can I escape from
the clutches of the psychotherapy police?"
Levant makes two
points. First, "we don't have science to cover
everything we do". I don't think he is saying that we
don't YET have science. He is saying that we'll NEVER
have science to cover everything we do. To me he
reminds us that our caduceus has two serpents: science
and art. Neither of them should devour the other.
Probably in the past there was too much art and too
little science, but this is no good reason for science
now to bully art.
Second, many
practitioners "question the very narrow standards that
are being raised". These standards might be not just
narrow, but outdated and simply wrong, as Westen et al
have convincingly demonstrated (see their paper in a
next issue of the Psychological Bulletin,
recommended by Paolo Migone as a "a detailed critique
to the EST ideology"). In this paper (I could read a
former version of the manuscript, thanks to Paolo)
Westen et al suggest that we break with the Popperian
philosophy of science that guides most psychological
research, according to which the essence of science
lies in hypothesis testing (how we come up with our
hypotheses is our own business). As an alternative
way, they propose to use clinical practice as a
natural laboratory. Well designed observation of what
happens in a natural context should come first, and
experimental research should come in only later, to
work on observational data.
In Westen and
colleagues' opinion, the balance between observation
and experimentation should be redefined. The current
balance lies very near the experimental end of the
line. Westen et al. seem to favor a middle point. To
me the
final balance should shift towards the observational
end (like Hilde, "I make a sustained plea for
reinstating observation at the center of our work so
that re-'search' becomes a more systematized aspect of
our general 'search' for truth..."). But for the time
being, I would endorse Westen's suggestion. Let us
begin with observational studies, and let experimental
studies follow. If the latter are able to
significantly improve on the data of the former, very
good (so far they have produced the Dodo bird verdict
and little more, but who knows). If they are not, we
shall be able to free up more resources for the
observational research that has guided psychotherapy
practice since the beginning.
George
Stricker, 20 March 2004
In general, I
agree with much of what Tullio says, although I do
think science has produced more than the Dodo bird
effect. As examples, the central value of the
relationship has been demonstrated, and there are many
process relationships that we now know. Larry Beutler
has been very helpful in putting together
contingencies that can help in treatment planning, and
if Larry is still on the list, he may have something
to add. Also, if Drew is on the list, I wish he would
post a link to his paper, as it sounds like something
we all should read. Finally, as a philosophical
framework for Tullio's position, I can suggest my own
work on the local clinical scientist (see http://home.adelphi.edu/~stricker/LCS.html)
as a place to start.
Gerald
Davison, 20 March 2004
Dear
Tullio, you make some very good points. In contrast,
Levant's comments are shocking to me. I look forward
to reading the Westen paper.
The importance
of clinical observation in clinical research was
spelled out by Lazarus and myself in the first Bergin
& Garfield Handbook in
1971 and later updated and expanded in two more recent
publications. Your comment "our caduceus has two
serpents: science and art" reminds me that the
serpents are intertwined. Science and practice can fit
that metaphor nicely, as Arnold and I have argued.
- References:
- --
Lazarus A.A. & Davison G C. (1971). Clinical
innovation in research and practice. In: A.E. Bergin
& S.L. Garfield (Eds.), Handbook of
Psychotherapy and Behavior Change: An Empirical
Analysis. New York: Wiley, pp.
196-213.
- --
Davison G.C. & Lazarus A.A. (1994). Clinical
innovation and evaluation: Integrating practice with
inquiry. Clinical Psychology: Science and
Practice, 1: 157-168.
- --
Davison G.C. & Lazarus A.A. (1995). The
dialectics of science and practice. In: S.C. Hayes,
V.M. Follette, T. Risley, R.D. Dawes & K. Grady
(Eds.), Scientific Standards of Psychological
Practice: Issues and Recommendations. Reno,
NV: Context Press, pp. 95-120.
I would be glad
to snail-mail the third item above to anyone
interested. It's the latest and most clearly spelled
out iteration.
Paolo
Migone, 20 March 2004
Since Tullio
quotes the Psychological
Bulletin paper by Westen et al. that I
mentioned, I would like to clarify a possible
misunderstanding. I am not saying that Tullio implies
this, but from his words one could gather that
Westen et al. prefer observational (or
correlational) research over "Popperian philosophy of
science" and hypothesis testing by experiments. For
what I understand from Westen et al. paper,
the authors emphasize a dialectic or synergic
relationship between the two. In other words,
classical testing with experimental research at one
point of our research process could be a fundamental
step.
Paolo
Migone, 21 March 2004
- On 20/03/2004, Tullio Carere wrote:
- That's
right, the authors emphasize a dialectic or synergic
relationship between observation and experiment.
That is why they explicitly state that their
proposal represents "a break" with Popperian
philosophy of science, which is an utterly
non-dialectical philosophy. Indeed, Popper hated
dialectics maybe more than any other thing.
Dear Tullio, I
think that the concept of dialectics that Popper
"hated" has nothing
to do with the meaning of dialectics that we use here:
here we are not talking of the philosophical meaning of
dialectics, but simply of a kind of scientific research
in which we perform not only "bottom-up" experimental
studies but also a sort of "top-down" research, in
order to arrive more quickly at meaningful discoveries.
Rather than the term "dialectic", a better term here
would be "synergic".
Zoltan
Gross, 21 March 2004
Another way of
looking at the problem of empirical research in
psychotherapy is to attend to the paradigmatic
differences that exist between research-oriented and
clinically oriented researchers. I do not believe the research
problems encountered in the study of psychotherapy
will be solved solely by an integration of observation
and experiment. Behavioral researchers "see"
psychological phenomena differently than clinicians do
(for a more detailed discussion of this issue see my
article on "Two Languages, One Vocabulary" in the
Journal of Psychotherapy Integration). A recent
presidential column by Roddy Roediger in the Observer,
a journal of the American Psychological Society, quotes
Endel Tulving as saying: "It is quite clear in 2004
that the term 'psychology' now designates at least two
rather different sciences, one of behavior and the
other of the mind… No one will ever put the two
psychologies together again, because their subject
matter is different… they do not talk to each other
(any more), and the members do not interbreed. This is
exactly as it should be." I do not agree with
Tulving's conclusion. However, in order to solve the
problem of research in psychotherapy, I believe it is
necessary to make a paradigmatic shift that enables
scientists and clinicians to conceptually bridge the
mind/body chasm. I believe that this can occur when we
are able to think about psychological processes as
phenomena emerging from brain processes. Current
behavioral science is limited in its ability to
conceptualize the nonlinear, nonsensory movement of
the autoregulatory operations of the brain that give
rise to behavior and experience.
Tyler
Carpenter, 21 March 2004
For a delightful
philosophical interlude about Popper and Wittgenstein,
try "Wittgenstein's Poker." The authors' names escape
me, but it is quite informative about these giants'
passions, likely cheap at amazon.com, and fun to boot.
Tullio
Carere-Comes, 21 March 2004
Edmonds and
Eidinow's Wittgenstein's Poker is delightful
reading indeed. More than that, I was impressed by the
way a contemporary event mirrored the
Wittgenstein-Popper debate: namely, the on-line
discussion preceding and following the first SEPI-Italy
Conference. (For those who read Italian: http://www.psychomedia.it/pm-lists/debates/sepi.htm.) A paper in which I
parallel the two events was published in Italian. An
English version of it, entitled "Wittgenstein and
Popper: The opposite of dialogue?", is still
unpublished. I would be glad to e-mail it to anyone
interested.
Tullio
Carere-Comes, 21 March 2004
Dear
Paolo, I would hardly draw a sharp line between
"philosophical meaning" and "scientific research". For
that matter, I would hardly draw a sharp line between
"psychoanalysis" and "psychoanalytic psychotherapy",
as between "psychoanalytic psychotherapy" and
"psychotherapy tout court", as between "psychotherapy"
and "counseling", as between "psychological" and
"philosophical counseling", as between different sorts
of "philosophical counseling".
To support my
soft-line plea, here are a couple of quotes from
today's New York Times article "The Socratic
Shrink":
- "Americans
are tired of psychologists dwelling on our every
painful feeling, we're sick of psychiatrists
prescribing a new drug every time we feel confused
and many of our most pressing problems aren't even
emotional or chemical to begin with - they're
philosophical." (Marinoff's crusade to make
philosophical counseling a mainstream profession).
- "As
in the early days of psychoanalysis, and the famous
rift between Freud and Jung, philosophical
counselors disagree on everything from the best name
-- philosophical practice? public philosophy? -- to
whether they should be trying to cure people,
empower them or guide them to self-understanding."
Stephan
Tobin, 21 March 2004
I browsed this
article on "The Socratic Shrink," which was sent to
the Div. 32 (APA Humanistic Psychology) listserv. It's
amazing how Div 32 is currently debating some of the
same issues as are being discussed here. Anyway, it
seems that these philosophical therapists are dealing
with some of the same issues on which the
existentialists and existential psychologists focus.
They're very
concerned on Div 32 with the state of psychology
today, the emphasis on Newtonian (I guess Popperian,
rather than qualitative) research, the emphasis on
short-term, manualized treatments, so-called drug
therapy, and the sorry state of the teaching of
psychotherapy in American graduate schools. I moved
last year from Los Angeles to Portland, Oregon, and am
having to study for the oral Oregon psychology
licensing exam and am amazed at the changes that have
taken place in the field since I was licensed in
California many years ago. And I see that new
graduates have to learn so much about diagnosis and
medications, and about the laws and ethical codes they
must know in order to keep from being sued, that they are
rather insidiously steered away from any kind of
humanistic or existential ways of viewing human
beings. It seems as if we've regressed back to the bad
old days of the 50's when the behaviorists held sway.
Even worse: back then we didn't have managed care or drug
companies spending billions to promote their instant
cures for depression, anxiety, shyness, etc.
Mardi
Horowitz, 22 March 2004
I agree that the
debate is worth continuing. One place to start is
where we agree, or where a bunch of us do. I suspect
we agree with this kind of statement: choosing to
eliminate a technique set on the basis of the absence
of controlled clinical trials of its efficacy, as
contrasted with another technique set or with a
wait-list control, is at this point unjustified
unless there is no other kind of data on outcome
(descriptive, case series, clinician agreement).
There are probably about three things a bunch of us
agree on; then what is debatable can be set out as
topics. I think two topics can proceed in parallel:
the first is a definition of the objective pursuit of
truth in our field, and the second is the inferred
political agenda of different sides on what is
empirical and what HAS TO BE EMPIRICAL.
Hilde
Rapp, 22 March 2004
Dear all on this
thread, thank you, Paolo, for clarifying the potential
confusion between different uses of the word
"dialectic". Also it is good to be reminded that the
debates about a proper logic of enquiry for the social
sciences (of which psychotherapy is one) go back a
very long way. After Wittgenstein there was a further
heated interchange in the sixties between Karl Popper
and Theodor Adorno (Frankfurt School, see also
Topitsch, Die
Logik der Sozialwissenschaften [1972] [the logic
of the social sciences]). Then the debate was
reinvigorated by Ken Gergen and Rom Harre in the
seventies... After that, cognitive science and
advances in scientific methodology and multivariate
analysis turned the rota fortunae one
hundred and eighty degrees to bring experimentation to
the top of the wheel again, relegating interpretive
approaches to the bottom... until the next cycle when
all is reversed, until, as Paolo suggests, the time
has at last come for synergy between top down
enquiries (trickle down) and bottom up (bubble up)
investigations (syzygy and conjunctio
to the Jungians) and integration (for most of us on
this list)...
Surely the dodo
bird is extinct precisely because it did not have the
resources to adapt to our modern challenges? Perhaps
it is time for the phoenix verdict, where out of the
ashes comes the next possible integration we can
manage at this time- until that too goes up in flames,
ready for the emergence of a better one, consistent
with the intertwining of the snake of knowledge
(empirical) with the serpent of wisdom (intuitive)
that Jerry speaks about? (I believe the phoenix, like
the secretary bird eats elderly or infirm snakes on a
regular basis).
As George says,
already we have learnt much from the new breed of
process outcome studies which have often been born of
a sustained cycle of case analysis, task analysis (à
la Rice and Greenberg), hypothesis generation,
followed by hypothesis testing at a micro level
leading to meta analyses which can then inform larger
scale more systematic trials which may issue in
practically useful empirically supported clinical
guidelines...
Do we not
already have a good ground swell of scientist-
practitioners who do actively combine intelligent
observation and the judicious interpretation of any
findings (a constructivist and hermeneutically guided
activity) with the equally indispensable data analysis
and counting of numbers (an empirically guided
activity) to advance our learning about the art and
science of bringing about intentional change?
I agree that we
do need to dialogue energetically with those
prestigious colleagues whose accounts do not in an
evenhanded way weigh up the relative contributions to
clinically relevant judgments of fact finding on the
one hand and theory driven observation and
interpretive evaluation on the other. From that
perspective it would seem that neither Dr Levant nor
Dr Lilienfeld tells an evenhanded story.
We might however
consider that perhaps they do not set out to do so,
but rather that both engage us in the original
enterprise of 'dialectical dialogue', i.e. the
polemical Platonic tradition of pitting one thesis
against another so that we may come to a new and
measured synthesis by including and transcending both
viewpoints?
Our energies
could perhaps now go towards advocating for the novel
synthesis which issues in the kind of integrative and
synergistic research activity which proceeds by way of
the double helix of measurement AND interpretation so
that we may judge wisely what is likely to work for
whom?
Without doubt,
critics will come to the fore who will alert us to any
shortcuts and premature conclusions we may have been
tempted to advance in our integrative enthusiasm, and
we should thank them for their vigilance and
scrutiny... May this
dialectical dialogue continue with the same vigor and
rigor!
Stanley
Messer, 24 March 2004
I am taking up
Hilde's suggestion that we refer SEPI listserv
members to our own work. I have recently completed a
paper entitled "Evidence-based Practice: Beyond ESTs"
that is quite relevant to the recent listserv
thread. I don't have a web site so I am including an
abstract of the paper below. If anyone is interested
in receiving the paper as an attachment please email
me and I will be happy to send it. It was submitted
last month to Professional Psychology: Research
and Practice at Ron Levant's invitation. He is
editing a special section of the journal on
evidence-based practice, and this paper is currently
under review.
Abstract:
Must
the clinician choose between a practice that is
strictly objective and data-based and one that is
purely experience-based? After analyzing research
findings on "empirically supported treatments" (ESTs),
this article argues that there has been too much
emphasis placed on ESTs at the expense of traditional
forms of therapy, and on randomized controlled trials
to the neglect of other kinds of research evidence.
Ultimately, what needs to be brought to bear on
reflective practice is a model of evidence-based
practice that combines ESTs, empirically supported
therapy relationships, clinicians' accumulated
practical experience, and clinical judgment about the
case at hand. Two models are described that best
capture the clinician's role: Disciplined Inquiry and
Local Clinical Scientist. A new and valuable form of
evidence for practice is presented that entails the
accumulation of systematic case studies presented
within prescribed frameworks and available online.
Your comments
are also welcome as I hope I will have the chance to
revise the paper. If you have problems printing the
figures, let me know and I'll send them separately.
Gerald
Davison, 24 March 2004
Stan,
I am curious as to whether there will be articles in
the series that take a strong pro-EST stance. As
balanced as I know yours will be, I would hope that a
series like this in an APA journal will include as
many points of view as possible, including extreme
ones.
Tullio
Carere-Comes, 12 April 2004
Thanks to Jerry
Davison and Stan Messer for sending me their articles.
Both authors agree (and I agree with both) on a
dialectical approach to the science and art of
psychotherapy, epitomized in Stan's words: "We cannot
manage without nomothetic and idiographic data,
quantitative and qualitative method, and a mixture of
scientific and humanistic outlooks, which are
psychology’s dual heritage." Yet it seems to me that
the meaning of science, inside this dialectic, is not
the same as it is outside (as in the hard sciences).
As Davison and
Lazarus rightly point out, "Perls' empty chair is not
Lazarus' empty chair". I would add that Lazarus' empty
chair with patient A is not Lazarus' empty chair with
patient B. Furthermore, Lazarus' empty chair with
patient A in session # 10 is not Lazarus' empty chair
with patient A in session # 20. We need
procedures--empty chair, interpretation, whatever--for
many reasons, but we cannot count on a relatively fixed
and stable action from them as we can, for instance,
with antibiotics. Doctor X's penicillin is very much the
same as Doctor Y's. Patient N can be allergic or
non-responsive to penicillin, but on average
penicillin's action is known and reliable. Placebo
effects are involved in antibiotic therapy, but not to
the point of blurring the drug's specific action. This is
not the case in psychotherapy, where "common factors
and therapist variability far outweigh specific
ingredients in accounting for the benefits of
psychotherapy." (Messer & Wampold, 2002).
As D&L point
out repeatedly, "Techniques may... prove effective for
reasons that do not remotely relate to the theoretical
ideas that gave birth to them." This may be an
argument for technical eclecticism, a position
espoused by Lazarus. Eclecticism, on the other hand,
can be "equivalent to chaos, in which choices are made
on whim" (D&L), unless the therapist has a theory
of his/her own (like Lazarus' "social and cognitive
learning theory") that enables him/her to choose what
technique to apply in which case. This position is
surely consistent with the standard scientific question:
"What specific treatment is most effective for this
individual with that particular problem working with
this therapist of this orientation, and under which
set of circumstances?" (D&L). Yet this question
still underscores the "specificity of
treatment"--although tempered by the reference to the
individual situation--when mega-analyses show that
common factors account for much more of the
benefits of psychotherapy. If it is true--and both
Jerry and Stan seem to believe that it is true--that
the mode of action of any technique is largely
dependent on the meaning it is given by both the
therapist and the patient (above all the patient, I
would say) in a given context, why should we still
insist on specific ingredients or techniques, instead
of shifting the emphasis to the study of contextual (i.e.
common) factors?
Such shifting
may be hard to realize in actuality, however, given
the medicalistic orientation of mainstream research in
psychotherapy (i.e., the hunt for specific ingredients
for specific disorders), so much so that "one cannot
obtain federal funding without the use of a manual"
(D&L). These authors point out that "while DSM
diagnoses and the use of treatment manuals have a
definite place, they perform a disservice when taking
us away from the necessary search for controlling
variables in an idiographic assessment and tailored
treatment of the individual patient." In my view,
however, DSM diagnoses and treatment manuals will
inevitably keep their hegemonic position in our field,
as long as the field is medicalistically obsessed with
the hunt for specific psychotherapeutic procedures.
Tullio
Mardi
Horowitz, 12 April 2004
On
the art and science of psychotherapy issue addressed
by Tullio, I find it useful in teaching to say about
the art part that it is always based on science but
expands well beyond that base: that is, we intend to
revise our artistic beliefs when sufficient objective
evidence CONTRADICTS them.
The
second issue, research funding, is the crucial one.
Two stories, briefly I hope, from my past:
1.
In the seventies I applied and was funded by NIMH for
a Center for the Study of Neuroses, which at its core
was psychotherapy, brief and long. The brief therapy (12
sessions) part was fully funded with instructions to
me not to use any of the funds as center director for
the long-term therapy designs, with the statement
that this judgment of theirs was "not based on
scientific considerations" since the designs were
equally robust. Such funding decisions lead to such
things as "no empirical support for...." long term
outcomes.
2.
Negative findings can be misleading. If, for example,
a disposition of some sort is not included, and if that
dispositional variable means a technique is useful at
one end of a polarity and perhaps even harmful at
another end of the polarity, then the process of that
technique in relation to outcome will look null as the
relationships wash out to a mean level. We found that
in the study of the brief therapies for the
dispositional variable of organizational level of self
and other schematization in relation to more
supportive and more expressive type specific therapist
actions. Correlational and stepwise regressions work
better than contrast-group designs for finding out
about such essential complexities, and they tend not
to be funded because of state-of-the-art
considerations such as manuals and DSM diagnoses
(which are practically never Axis II in funded research).
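A toy simulation can make the washout effect concrete. The Python sketch below uses synthetic data and hypothetical variable names; it shows how a technique whose effect reverses sign across a dispositional polarity correlates near zero with outcome unless the disposition is modeled.

```python
# Illustrative simulation: the technique helps at one end of the dispositional
# polarity and harms at the other, so its marginal correlation with outcome
# washes out to a mean level while the interaction term stays strong.
import numpy as np

rng = np.random.default_rng(2)
n = 500
disposition = rng.uniform(-1, 1, n)   # e.g., level of self/other schematization
technique = rng.uniform(0, 1, n)      # amount of a specific therapist action

outcome = disposition * technique + rng.normal(scale=0.3, size=n)

r_marginal = np.corrcoef(technique, outcome)[0, 1]
r_moderated = np.corrcoef(disposition * technique, outcome)[0, 1]
print(f"technique vs outcome:               r = {r_marginal:+.2f}")  # near zero
print(f"disposition x technique vs outcome: r = {r_moderated:+.2f}") # strong
```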
Paolo
Migone, 13 April 2004
Dear Tullio,
regarding your well known emphasis on the importance
of what you call "dialectics" in psychotherapy, I came
across this article: Monroe Pray,
"The classical-relational schism and psychic
conflict". Journal
of the American Psychoanalytic Association,
2002, 50, 1: 249-280.
Although,
as you know, I do not agree on the way you at times
state the problem, I suggest you read it; I am sure
you'll love it. It makes interesting points and says
things in a way that gives fuel to your way of
thinking (the author uses the concept of "conflict"
instead of "dialectics", stresses the importance of
"complementarity" in scientific theories, e.g., in
physics, etc.).
Hilde
Rapp, 13 April 2004
Dear
Mardi, I much appreciate your points about the
influence exerted on the generation of knowledge by
essentially political decisions. John McLeod has
argued this case also in various ways over the last
decade.
In
addition, your point about the likelihood that
dispositional variables play a role seems to be
suggested also by research which found that within the
same DSM diagnosis (depression) therapist orientation
and style interact differentially, in ways which
affect outcome, with client variables such as a
preponderance of false beliefs and disordered thoughts
versus a bias towards maladaptive relational schemata.
For reasons you point out, most currently favored
research designs would lose such information as
scores balance out...
What
seems to emerge increasingly from our discussions is
that we need to look at research in the context of
different stakeholders arguing their case, making
judicious use of the art of asking certain questions
as well as the science of finding means for answering
them...
As
Habermas observed, there would seem to be no knowledge
that does not reflect the interests of the enquirer or
those who commission or fund their line of questioning
for some practical or political purpose.
My
contention is that this is inevitable, but that the
onus is on everyone to declare their cards, spell out
their assumptions, be frank about the inevitable
limitations and focus of any enquiry, pinpoint the
purpose of it and be vigilant about the use to which
any data will subsequently be put...
I
would also like to find out more about your thinking
regarding change, and I have two chapters in a book in
progress which specifically deal with issues
concerning research and change, and I would happily
send you both or either if you think they might be
useful for your writing project.
Tyler
Carpenter, 14 April 2004
Tullio,
I'd like to suggest that the use of both the terms
medicalization and chaos has rhetorical
implications that are neither integrative nor
indicative of either real science or whatever it is
some call what we do (personally, I'm always a
practitioner-scientist, whatever someone chooses to
make of it). It seems to me I remember a marvelous
discussion with Bernie Beitman at one of our
conferences when he mentioned to me the link between
the
patient's psychology and the therapeutic action of the
medication. It seems to me that whether we
are talking about mind-body immune function and
antibiotic action on disease processes, or about
SSRIs, mood stabilizers, or atypical tranquilizers
in the treatment of violent criminals in a number of
contexts simultaneously,
it's all change and all integrative when done
attentively. The question of science
then, at least in terms of shared overarching
frameworks
of comprehensive and valid constructs, becomes a
matter of education and creative discussion rather
than science vs. art.