Path: sparky!uunet!know!mips2!news.bbn.com!noc.near.net!news.Brown.EDU!qt.cs.utexas.edu!yale.edu!spool.mu.edu!darwin.sura.net!zaphod.mps.ohio-state.edu!uwm.edu!rutgers!uwvax!meteor!stvjas
From: stvjas@meteor.wisc.edu (Stephen Jascourt)
Newsgroups: sci.geo.meteorology
Subject: Re: STOP Re: mesoscale forecast model
Message-ID: <1992Nov17.010750.26822@meteor.wisc.edu>
Date: 17 Nov 92 01:07:50 GMT
References: <1992Nov16.165209.9290@news.arc.nasa.gov>
Distribution: na
Organization: University of Wisconsin, Meteorology and Space Science
Lines: 79

In article <1992Nov16.165209.9290@news.arc.nasa.gov> westphal@sundog.arc.nasa.gov (Doug Westphal) writes:
>Oh boy, here we go again: uninformed people making broad generalizations
>about topics or models which they aren't sufficiently familiar with,

Uninformed? I am using one of the models I talked about and have colleagues
who have used the others; I have seen model output and have been to conferences
and colloquia at which model results and problems were discussed. Also, I made
specific comments about the models rather than just a vague assertion.
If I recall correctly, Harold has also done some modeling, and I'm sure that at
OU he has had many discussions with more experienced modelers.

>causing the completely uninformed public to become confused. Great.
>Below are two examples of this kind of subjective commentary.
>None of their arguments are substantiated, nor can they be in this forum.
>
>>>
>>>From hbrooks@uiatma.atmos.uiuc.edu:
>>>
>>>The best mesoscale model around now is the The Penn State/NCAR model,
>>>                                       ^^^^^^^^^^^^^^^^^^^^
>>>MM4. It beats the life out of the second most common model, CSU-RAMS.
>>>
>>>Harold Brooks hbrooks@uiatma.atmos.uiuc.edu
>>>National Severe Storms Laboratory/CIMMS (Norman, OK)
>
>
>>>From: stvjas@meteor.wisc.edu (Stephen Jascourt)
>>>
>>>I have heard various rave reviews about the MM4, but my experience seeing the
>>>output of people using it (I haven't used it myself) is that it is filled with
>>>                           ^^^^^^^^^^^^^^^^^^^^^^^^
>>>problems, worst being that it smooths the heck out of *everything* in the
>>>horizontal, so you are left only with features that are strongly forced,
>>>   < blah, blah, blah >
>>>Stephen Jascourt stvjas@meteor.wisc.edu
>
>The original posting and responses were reasonable and I would hope that
>the poster will now thoroughly research the different models before
>choosing which is best for him or her; but don't do it over Usenet.
>
>Dave Blanchard then says:
>
>>>Harold,
>>>
>>>That's a pretty strong statement. Could you elaborate on why you believe
>>>PSU/NCAR (MM4?) outperforms RAMS?
>
>Noooooooo!!! The question of which is the 'best' model cannot be
>decided with a Gallup opinion poll or on Usenet. Let's not continue
>this, okay? The whole thing should be decided in an entirely different way:
>IN THE PEER-REVIEWED LITERATURE.

Dave asked a perfectly reasonable question-- he wanted Harold to explain
what the MM4 does so well.

There is no "best" model. Each does certain things well and has certain
things that give it problems. There may be a "best" model for a particular
application, but more likely the differences between models may be no larger
than their errors. And, worst of all, model intercomparisons are next to
impossible for a multitude of reasons. A particular model can perform very
differently when you change just a few parameters, and the parameter values
that work best in a given situation for one model may differ from the values
that work best for another. For generic, idealized tests like dropping a cold
bubble in a box, reasonable comparisons can be made, but the test itself is
rather artificial. For real-data cases or realistic idealized experiments,
the initial numerically balanced fields will differ from one model to
another, so you're not even starting from the same state! And, of course,
most of these sophisticated models offer many options for various
parameterizations such as soil models, radiation, and sub-grid-scale
turbulence; by choosing different options for these you really have many
different models in one program.
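
To put a rough number on that last point, here is a minimal sketch in Python
(purely illustrative-- the option names are generic scheme families chosen for
the example, not the actual switch lists of MM4 or RAMS) of how a handful of
parameterization choices multiplies into dozens of effectively different
models:

    from itertools import product

    # Hypothetical menu of parameterization options; the scheme names are
    # common families, not any particular model's actual namelist switches.
    options = {
        "soil model":         ["none", "force-restore", "multi-layer"],
        "radiation":          ["simple cooling", "two-stream"],
        "subgrid turbulence": ["Smagorinsky", "1.5-order TKE"],
        "cumulus convection": ["none", "Kuo", "Kain-Fritsch"],
    }

    # Every combination of choices behaves, in effect, like a different model.
    combos = list(product(*options.values()))
    print(f"one code base, {len(combos)} distinct model configurations")

    # Show a few of the combinations a user could select.
    for combo in combos[:3]:
        print(dict(zip(options.keys(), combo)))

Even this toy menu gives 3 x 2 x 2 x 3 = 36 combinations, before anyone
touches grid spacing, time steps, or diffusion coefficients.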

As for whether these things should be "decided" on Usenet, I think the
strengths and weaknesses of different models, the machines they can run on,
the computational resources they require, and so on are good topics for
discussion-- they are part of the exchange of scientific information that,
after all, is supposed to be the function of Usenet.

Stephen Jascourt stvjas@meteor.wisc.edu
