- Newsgroups: bit.listserv.qualrs-l
- Path: sparky!uunet!usc!howland.reston.ans.net!spool.mu.edu!torn!nott!cunews!tgee
- From: tgee@alfred.carleton.ca (Travis Gee)
- Subject: Re: Coding in qualitative analysis
- Message-ID: <tgee.727717727@cunews>
- Sender: news@cunews.carleton.ca (News Administrator)
- Organization: Carleton University
- References: <QUALRS-L%93011920442300@UGA.CC.UGA.EDU>
- Date: Fri, 22 Jan 1993 15:48:47 GMT
- Lines: 74
-
- In <QUALRS-L%93011920442300@UGA.CC.UGA.EDU> Monika Ardelt <UMONIA@UNC.BITNET> writes:
- >Travis:
- >yes, I think your interpretation of my explanation is in principle
- >correct.
-
- >> "Even though every person in the society is in principle able to
- >> conduct the analysis, we need other researchers from the same
- >> sociocultural background to validate our analysis, because we
- >> cannot assume that the interviewee has done the analysis of latent
- >> structures himself or herself."
- >>
-
- That's a relief...this stuff is pretty abstract, and easy to
- misinterpret!
-
- >However, not only can we not assume that the interviewee has done the
- >analysis him- or herself; in general, it is very difficult to use the
- >method of structural hermeneutics to analyze one's own behavior.
- >The key is that one starts the analysis by trying to find as many
- >different meanings for a particular statement as possible.
-
- The point of my previous postings, on reflection, was to corrode the
- privileged position of the researcher with respect to what is
- happening in someone else's mind. This is a key presumption of any
- approach that purports to be "objective," and has traditionally
- resulted in the distinct possibility that the researcher's pet
- hypotheses find their way into the interpretation of the data.
- Grounding the data with respect to the people who generate it is an
- approach which I feel lessens the static discharge that commonly
- accompanies the publication of a qualitative study.
-
- My previous posts were not meant to say that interjudge reliability
- should be banished, because we *do* want other researchers to agree
- with us. The problem lies in the fact that we are looking for
- intersubjective agreement. We want other researchers to come to the
- same conclusions as us. But what is our standard for validity? You
- observe that
-
- >the interviewee, more often than not, is very likely to have already
- >some idea what his/her particular statement meant and, hence, is not
- >open enough to alternative interpretations.
-
- We each draw on our unique experiences to derive all possible
- meanings *for us*. Researchers share many experiences, so the domain
- of meanings which are generated will by and large be shared a priori.
- This means that it will be *constrained*. People who do not share the
- training of a researcher are *more* likely to arrive at different
- meanings than people who do share that training. That is why in my
- study, I took my conclusions not only to some of the people that I
- interviewed, but some people who had *not* been interviewed. It
- controlled for the possibility of both Type I and Type II errors.
- Both types of error become more likely when we do not refer back
- to the source of the data because a) our unique experiences as
- researchers can result in latent "meanings" that aren't really there,
- and b) the unique experiences of the interviewees cannot be brought to
- bear on our conclusions.
-
- > I agree that an example
- >would probably help to clarify matters. And even though I was a
- >little bit reluctant to give one since the method is very elaborate,
- >I'm currently working on one. It will just take me some time to
- >complete it.
-
- I look forward to the post!
-
- ((((((((((((((((((((((((((((((((((((((((((((((((((((((((((
- Travis Gee () tgee@ccs.carleton.ca ()
- () tgee@acadvm1.uottawa.ca () ()()()()
- () () ()
- () ()()()()()()()()()
- Recent government figures indicate that 43% of all statistics
- are utterly useless.
-