Xref: sparky comp.ai:4381 rec.games.programmer:4802
Newsgroups: comp.ai,comp.ai.genetic,rec.games.programmer
Path: sparky!uunet!elroy.jpl.nasa.gov!swrinde!zaphod.mps.ohio-state.edu!cis.ohio-state.edu!neuron.cis.ohio-state.edu!pja
From: pja@neuron.cis.ohio-state.edu (Peter J Angeline)
Subject: Re: Games and genetic algorithms
In-Reply-To: PAULSON3@Applelink.apple.com's message of Sat, 21 Nov 1992 00:51:52 GMT
Message-ID: <PJA.92Nov23154818@neuron.cis.ohio-state.edu>
Followup-To: comp.ai,comp.ai.genetic,rec.games.programmer
Originator: pja@neuron.cis.ohio-state.edu
Sender: news@cis.ohio-state.edu (NETnews)
Reply-To: pja@cis.ohio-state.edu
Organization: Ohio State Computer Science
References: <1992Nov11.001553.12600@samba.oit.unc.edu>
	<1992Nov11.131739.19137@athena.mit.edu>
	<1992Nov16.175215.29411@versyss.com>
	<1992Nov18.210045.19530@Princeton.EDU>
	<PAULSON3-201192153909@kip2-11.apple.com>
Date: Mon, 23 Nov 1992 20:48:18 GMT
Lines: 56

Finding good (let alone optimal) gaming strategies with GAs is pretty darn
hard. The problem is how to test a proposed strategy without biasing its
play. You can't really test every possible situation, so you need to set up
a situation where reasonable generalization can happen. Good generalization
is hard for most problems, including any non-trivial game, as anyone who has
read the PAC learning results will attest. It's basically you playing "Guess
which strategy I'm thinking of" with the computer. So most of the time you
have to be happy finding a reasonable strategy in an interesting way. I have
looked at these problems in the context of learning overall, and especially
GAs, in my paper "Learning in a Competitive Population", submitted to
IJCAI-93. Instructions for getting this paper are below.

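To make the sampling point concrete, here's a minimal sketch in Python. Everything in it is made up for illustration (the "game" is just guessing a hidden number, and `play_game` and `sampled_fitness` are hypothetical names, not anything from my system); the point is only that fitness gets estimated from a small random sample of games rather than exhaustive testing:

```python
import random

# Toy sketch of fitness-by-sampling. A "strategy" is just a number in
# [0, 1]; it wins a game when it lands closer to a hidden random target
# than its opponent does. A real game engine would replace play_game.

def play_game(strategy, opponent, rng):
    """Stand-in for a real game: closer to the hidden target wins."""
    target = rng.random()
    return abs(strategy - target) < abs(opponent - target)

def sampled_fitness(strategy, opponents, n_games, rng):
    """Estimate fitness from a small random sample of games, since
    playing out every possible game is infeasible."""
    wins = 0
    for _ in range(n_games):
        if play_game(strategy, rng.choice(opponents), rng):
            wins += 1
    return wins / n_games

rng = random.Random(0)
opponents = [rng.random() for _ in range(20)]
fitness = sampled_fitness(0.5, opponents, n_games=200, rng=rng)
print(fitness)
```

The bias problem shows up right here: whichever games you happen to sample are the only ones the strategy is ever rewarded for handling.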
One interesting approach to finding gaming strategies with GAs, other than
using classifier systems, is to pattern after Hillis' two-population GA,
where one population evolves strategies and the other evolves counter
strategies. (Actually, Hillis was learning sorting networks, but the idea's
the same.) Each member of one population plays everyone in the other, so the
strategies increase their abilities only through self-competition. My paper
looks at why this won't always work and describes a system which evolves
modular computer programs to play Tic Tac Toe using a variant method which
is better for theoretical reasons. It turns out that learning with a GA
against a predesigned "expert" isn't as good as having the population train
itself, at least on the task I'm doing. This of course assumes each member
of the population is judged on a small sample of the possible games against
the "expert" or each other. See the paper for details.

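A rough sketch of the Hillis-style two-population setup, again in Python with a toy numeric "game" standing in for anything real (this is not my Tic Tac Toe system, and `beats` and `evolve` are invented for the example):

```python
import random

rng = random.Random(1)

# Toy stand-in for a real game: a strategy "beats" a counter-strategy
# when its value is larger, so both populations face an arms race.
def beats(strategy, counter):
    return strategy > counter

def evolve(population, scores, mutation=0.05):
    """Keep the better half, refill with mutated copies of the survivors."""
    ranked = [ind for _, ind in sorted(zip(scores, population), reverse=True)]
    parents = ranked[: len(ranked) // 2]
    children = [min(1.0, max(0.0, p + rng.gauss(0, mutation))) for p in parents]
    return parents + children

strategies = [rng.random() for _ in range(20)]
counters = [rng.random() for _ in range(20)]

for _ in range(30):
    # every strategy plays every member of the other population
    s_scores = [sum(beats(s, c) for c in counters) for s in strategies]
    c_scores = [sum(not beats(s, c) for s in strategies) for c in counters]
    strategies = evolve(strategies, s_scores)
    counters = evolve(counters, c_scores)

print(sum(strategies) / len(strategies))
```

Notice there is no fixed "expert" anywhere: each population's fitness is defined entirely by the current members of the other, which is exactly the self-competition idea, and also why it can go wrong when the two populations chase each other instead of getting objectively better.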
"Learning with a Competitive Population" is available via anonymous ftp by
doing the following:

unix> ftp nervous.cis.ohio-state.edu
Name: anonymous
Password: <your user id>
ftp> cd pub/papers
ftp> get 92-pa-compete.ps.Z
ftp> quit
unix> uncompress 92-pa-compete.ps.Z

This creates a PostScript file called 92-pa-compete.ps which can be printed
on any PostScript printer. If you have a problem, mail me and I'll try to
help.

-pete angeline

--
-------------------------------------------------------------------------------
Peter J. Angeline ! Laboratory for AI Research (LAIR)
Graduate Research Assistant ! THE Ohio State University, Columbus, Ohio 43210
ARPA: pja@cis.ohio-state.edu ! "Nature is more ingenious than we are."