- Newsgroups: bit.listserv.qualrs-l
- Path: sparky!uunet!uvaarpa!darwin.sura.net!spool.mu.edu!torn!nott!cunews!tgee
- From: tgee@alfred.carleton.ca (Travis Gee)
- Subject: Re: Coding in qualitative analysis
- Message-ID: <tgee.728144542@cunews>
- Sender: news@cunews.carleton.ca (News Administrator)
- Organization: Carleton University
- References: <9301240333.AA11087@titan.ucs.umass.edu> <1993Jan26.113241.1@vaxa.strath.ac.uk>
- Date: Wed, 27 Jan 1993 14:22:22 GMT
- Lines: 242
-
- First, let me warn you that this is a somewhat long posting. The
- discussion which the coding thread has engendered is a rich one,
- covering several topics. I have collated the ones which concern my
- previous postings in an effort to deal with what I see as key issues
- in one fell swoop. References are provided at the end.
-
- Monika: thank you for the example of objective hermeneutics (OH).
- However, it still seems to me rather like psychoanalysis without
- benefit of a psychoanalysand (at least, after the initial phase). I
- would have a good deal of cognitive dissonance applying it "as is" to
- data. The problem of generating as many stories as possible
- that fit the data may easily result in more than one theory that
- accounts for *all* of the data. To whom does the research team refer
- for the tie-breaking vote?
- Re: analysis, one *should* focus on data that seem to invalidate
- the theory under consideration. I agree wholeheartedly. However, the
- data are limited to those obtained by our method, and we too easily
- select a method which will suit the theory being examined. Should we
- not consider a "null method" which is as theory-free as possible, to
- maximize the potential for conflicting data? I suggest that
- "negotiating an understanding" in Harre's sense leaves at least part
- of the control out of the hands of the researcher, minimizing the
- potential for bias.
-
- Nora L. Ishibashi then joins in thus:
-
- >On the other hand, if we are doing a study that looks for
- >patterns in what the interviewee has stated, that compares aggregate kinds
- >of themes or patterns from several interviewees or that analyzes the
- >interview for the process rather than factual content, the interviewee may
- >not have more to add.
-
- This brings up the question of level-of-analysis. In qualitative
- research, are we willing to ignore the individual differences to come
- up with some kind of normative statement about "the population"? It
- is true that if we are abstracting away content in favour of process
- or syntactic/morphological items, then we are doing what statisticians
- do by using numbers. Indeed, the final report will probably contain a
- table or two. But much qualitative research does not ignore the richness
- of the single case. What it does seem to ignore is the fact that it is
- hard to capture all of the possible riches in one sitting.
- I'm not saying that abstracting away is an evil. Indeed, it is
- the essence of our ability to cope with the world. Rather, I'm saying
- that abstraction without proper validation is easily construed as
- fiction. The mode of validation is a key issue, and is what I really
- want to deal with here. How can we figure out what people are on
- about with as few errors as possible? (A discussion of the types of
- errors appears below.)
- In addition, do our data have any use once we've counted up the number
- of (variable of choice)? Can we analyze *both* quantitatively across
- people and qualitatively within people? If we do both, do we get the
- same answers? If we don't get the same answers from both, what's wrong
- with our theory (or at least, what are its limitations)?
-
- I previously politicianed my way out of a question thus:
-
- > Secondly, let's turn that last question around, and ask "how
- >could the interviewee benefit from the researcher's evaluation of the
- >interviewee's perspective."
-
- On this topic, Nora continues:
- > If we introduce the question of whether it would be
- >helpful to the interviewee to hear the results of the study, that seems to
- >me to be a different issue altogether from a consideration of methods of
- >arriving closer to an understanding of answers to a research question.
-
- To which flames Nicholas Sturm adds fuel:
-
- >> That may be a morally useful thing to do, but it's not quite the same
- >>as the original "research goal".
- >Sharing some of the
- >findings could be of great benefit in such cases. A fair payback for
- >participation, don't you think?
- >> If the interviewee is being repaid, perhaps he should just be made
- >>co-author of the report. Seems rather like what one is desiring by
- >>requesting the 'validation.'
- >> But pardon my butting in. I should just remain a watcher here.
-
- Point taken! The rhetorical turn of phrase is too often the
- irresistible turn of phrase. Rather than deflecting the question of
- what the second phase of my proposed validation method adds to the
- original research goal (and dragging up the whole messy business of
- *why* we do research), I should have tackled it head on. Here is my
- belated attempt to do so.
- The immediate research goal is to make a statement about what is
- going on. My main objection to the standard method is an
- epistemological one: "Can we make such a statement without reference
- to any constructions which the person to whom the statement refers may
- have with respect to said statement?" If that's only clear to the
- lawyers out there, it might be more simply put as "Who's the real
- expert?" In OH, there appears to be a contradiction in that every
- member of the culture is *in principle* able to come up with a
- coherent model based on the same data. However, the method excludes
- *in principle* the possibility of alternative constructions by not
- including the views of members of the culture who are not objective
- hermeneuticists.
- Brian Little (1972) discusses Us-Models and Them-Models.
- Us-models "describe the behaviors of those persons who share the
- perspective of the theorist proposing the model." Them-models
- "describe the behaviors of anyone who does not share the particular
- perspective of the theorist proposing the model <such as> subjects,
- patients, wives, the dog next door, and rival theorists."
- Psychoanalysis (and, so far as I understand it, OH) are particularly
- duplicitous in that their assumptions about humankind are such that
- Us-models apply to the analysts, while Them-models apply to everyone
- else. (L.W. Brandt carries this point through in detail in his book
- _Psychologists Caught_.) The reasons given for this unparsimonious
- subdivision of humanity are at best vague, and at worst, nonexistent.
- The key point to be made is that a theory about humans which
- reflexively accounts for both the theorist and the theorized-about is
- conceptually preferable to a non-reflexive theory which does not
- account for this. OH, as described by Monika, assumes a continuity
- between theorists and the population, then proceeds as if there were a
- disjunction. Thus it cannot provide such a comprehensive theory.
-
- On a related topic, Al Furtrell writes (quoting >me):
-
- > My use of "Type I" and "Type II" just referred to the *kind* of
- >errors that were being made, with no implication about statistical
- >measures. (Some were used in my study, but I used a pretty minimalist
- >approach to stats, at least by today's number-crunching standards.)
- >The key point is that qualitative methods can be prone to such errors,
- >because we make assertions based on data. In the spirit of this group
- >I am loath to require quantification of such errors and am rather more
- >interested in the form of the errors we researchers can make.
-
- >>I don't want to let Travis off the hook so easily. Because a Type I or
- >>alpha error is committed when one rejects a null hypothesis when one should
- >>not have (in "regular" talk that means that we reject a true null
- >>hypothesis) and a Type II or beta error is committed when one fails to
- >>reject a false null hypothesis, statistics and quantification is inherent
- >>to any discussion of these types of errors. More important, the introduction
- >>of the possibility of these errors suggests a mind set at odds with the
- >>notion of a "qualitative" study. One cannot make a Type I or Type II error
- >>unless one has tested an hypothesis -- and many folks on this list have
- >>no problem with hypotheses as such except that the types of research questions
- >>that interest them do not lend themselves to hypothesizing or to quantification.
-
- >>Basically, I am surprised that the idea of Type I and Type II errors would
- >>be an issue in the study he describes. That is why I found it interesting
- >>in the first place. I thought he had developed an innovative way of
- >>merging the qual/quant dichotomy. His response suggests otherwise.
-
- No it doesn't! With all due respect, Al, your post is evocative
- of the extent to which the hypothesis-testing model has permeated the
- minds of psychosocial researchers. One *cannot* infer, from the simple
- fact that a Type I or Type II error has been made, that a statistic
- has been perpetrated, as becomes clear when we look at the nature of
- these errors.
- To return to a point made above (by me), the immediate goal of
- the research is to make error-free statements about the state of the
- world. If we say "there is x" when there is not, we make an error of
- the first kind. If we say "there is not x" or fail to say "there is
- x" when there *is* an x out there, we commit an error of the second
- kind. Because the hypothesis-testing model offers clear decision
- rules for making such statements, Type I and Type II errors have come
- to be associated with statistical inference. However, this is an
- accident of historical development, not a necessity. While in qualitative
- research, we may not be able to quantify the number of such errors as
- easily as we can in pure quantitative research, we can still make
- them! We don't even have to have a hypothesis to test, because it is
- the final *statement* which is or is not correct, irrespective of how
- it was derived.
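- The point that the two kinds of errors exist independently of any
- statistical machinery can still be illustrated with a small
- simulation (a sketch in modern terms, not part of the original
- discussion; the z-test, sample sizes, and effect size are all my own
- assumptions, chosen only to make the error rates countable):

```python
# Minimal sketch: Type I and Type II error rates via simulation.
# The test, sample sizes, and effect size here are illustrative choices.
import random
import statistics

random.seed(1)

def rejects_null(sample, mu0=0.0, z_crit=1.96):
    """Two-sided z-test at roughly alpha = .05."""
    n = len(sample)
    se = statistics.pstdev(sample) / n ** 0.5
    z = (statistics.mean(sample) - mu0) / se
    return abs(z) > z_crit

TRIALS, N = 2000, 50

# Error of the first kind: the null is true (mean really is 0),
# yet we sometimes say "there is x".
type1 = sum(
    rejects_null([random.gauss(0.0, 1.0) for _ in range(N)])
    for _ in range(TRIALS)
) / TRIALS

# Error of the second kind: the null is false (mean is 0.3),
# yet we sometimes fail to say "there is x".
type2 = sum(
    not rejects_null([random.gauss(0.3, 1.0) for _ in range(N)])
    for _ in range(TRIALS)
) / TRIALS

print(f"Type I rate  ~ {type1:.3f}  (hovers near the chosen alpha)")
print(f"Type II rate ~ {type2:.3f}  (depends on effect size and n)")
```

- Nothing in the sketch requires the *statements* being checked to be
- quantitative; the simulation merely makes the two error rates
- countable, which is exactly the historical association at issue.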
-
- Finally, Dr. Stephen K. Tagg observes:
-
- >>I believe any research is an overlap of a series of samples (purposive sample
- >>not statistical) of various "universes of content"
- >>Including:
- >> Informants
- >> Things researchers might ask informants to do (including answer quest)
- >> Ways that researchers get informants to inform
- >> meanings for informants
- >> meanings for researchers
- >> meanings for the market for research publication
-
- Exactly! And when the researcher intrudes into that universe of
- content, he or she brings along residue from a different universe.
- However small the difference may be, the only way to check for
- contamination is to validate against another sample. Indeed, in
- research we step lightly from one universe to the next, carrying along
- conceptual moondust from place to place on our clumsy feet. This in
- fact was my argument against extensive quantitative analysis of a
- series of interviews (in my M.A. thesis). Because I got better as an
- interviewer, and learned more about my population as I went along, the
- last interview would be incommensurable with the first. Therefore,
- the set of interview questions could be continually expanded as the
- study progressed (reducing those nasty Type II errors).
-
- This fact led me to my position of the non-privileged researcher.
- *I* was the student, and the interviewees were experts teaching me
- about what it's like and what it takes to become a top jazz musician
- (the topic of my thesis for those who just tuned in). My statements
- had to withstand *their* scrutiny as well as that of my committee.
-
- Finally, (Nicholas!) if validating those samples means more
- interviewing, so be it. We allow for that from the start and budget
- ourselves accordingly. And if we say that something is there and 5 of
- our experts say it's not, we have some explaining to do! In a
- subsequent posting, Dr. Tagg notes that:
-
- >>I don't think it is possible for the sampling of each universe to be perfect
- >>(ie there's reliability/validity issues): effectively there are cost/benefit
- >>optimizations to make.
-
- And we all make them. I'd have loved to have had *all* of my
- interviewees go over the report. But logistics are always a hassle,
- and at some point we have to draw a line. Sort of like selecting
- alpha=.05 over alpha=.01.
-
- >> Unfortunately a lot of research design consists of
- >><<lets re-do famous-article's research except tweaking this attribute>>
- >>and so the whole body of research has no chance of being representative of the
- >>universes of interest because previous researcher's purposive sampling
- >>compromises are perpetuated.
-
- Unfortunately accurate! Well, back to slugging away at my stats
- courses! Thanks for reading this far, now go get a coffee :-)
-
-
- References:
-
- Brandt, L.W. (1982). Psychologists caught. Toronto: University of
- Toronto Press.
-
- Little, B.R. (1972). Psychological man as scientist, humanist and
- specialist. Journal of Experimental Research in Personality, 6,
- 95-118.
-
-
-
- ((((((((((((((((((((((((((((((((((((((((((((((((((((((((((
- Travis Gee () tgee@ccs.carleton.ca ()
- () tgee@acadvm1.uottawa.ca () ()()()()
- () () ()
- () ()()()()()()()()()
- Recent government figures indicate that 43% of all statistics
- are utterly worthless.
-