- Newsgroups: comp.ai.edu
- Path: sparky!uunet!elroy.jpl.nasa.gov!swrinde!news.dell.com!milano!milano.mcc.com!wshen
- From: wshen@nanook.mcc.com (Wei-Min Shen)
- Subject: CFP: AAAI93 Workshop on Learning Action Models
- Message-ID: <WSHEN.93Jan4094905@nanook.mcc.com>
- Sender: news@mcc.com
- Organization: MCC, Austin, TX 78759, U.S.A.
- Distribution: comp.ai
- Date: Mon, 4 Jan 1993 15:49:05 GMT
- Lines: 177
-
-
-
-                The AAAI-93 Workshop on Learning Action Models
- held at the
- Eleventh National Conference on Artificial Intelligence
-
-
- DESCRIPTION OF WORKSHOP:
-
- The goal of this workshop is to develop and communicate technologies that
- enable active learning systems to abstract a model of their environment from
- their own percepts and actions and to incorporate that model into their
- choice of actions, thereby improving the long-term performance of the system.
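-
- For illustration only (a sketch of ours, not part of the call), the Python
- fragment below shows one minimal version of this learn-and-use loop; the
- env.reset()/env.step() interface and the reward signal are assumptions. The
- agent records observed transitions in a tabular model and prefers actions
- whose predicted outcomes have been rewarding.

import random
from collections import defaultdict

class ModelLearningAgent:
    """Tabular action-model learner: record transitions, plan one step ahead."""

    def __init__(self, actions, epsilon=0.1, step_size=0.1):
        self.actions = actions
        self.epsilon = epsilon        # probability of exploring at random
        self.step_size = step_size
        # counts[(s, a)][s_next] = how often action a taken in state s led to s_next
        self.counts = defaultdict(lambda: defaultdict(int))
        self.value = defaultdict(float)   # rough running estimate of state desirability

    def update(self, s, a, s_next, reward):
        """Incorporate one observed transition into the model."""
        self.counts[(s, a)][s_next] += 1
        self.value[s_next] += self.step_size * (reward - self.value[s_next])

    def predict(self, s, a):
        """Most frequently observed successor of (s, a), or None if never tried."""
        outcomes = self.counts[(s, a)]
        return max(outcomes, key=outcomes.get) if outcomes else None

    def act(self, s):
        """Explore occasionally; otherwise pick the action whose predicted outcome looks best."""
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        scored = [(self.value.get(self.predict(s, a), 0.0), a) for a in self.actions]
        return max(scored, key=lambda pair: pair[0])[1]

# Typical interaction loop (env is an assumed environment interface):
#   s = env.reset()
#   while True:
#       a = agent.act(s)
#       s_next, reward = env.step(a)
#       agent.update(s, a, s_next, reward)
#       s = s_next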
-
- Learning action models has long been a fundamental problem in fields such as
- adaptive control and system identification. Recent progress in reinforcement
- learning and robot learning shows clearly that learning such models is an
- important topic within the AI learning community as well: it offers an
- opportunity for high-level representations used to reason about the
- environment to interact usefully with low-level action models. The time has
- come for researchers from different fields to work together to bridge the
- gap between high-level cognitive models and robotic hardware.
-
- The workshop is intended to bring several otherwise separate research groups
- together to share recent developments. In particular, we encourage
- contributed papers on reinforcement learning, adaptive control, robot
- learning, learning to predict, and control-oriented learning in neural
- networks. Specific topics are listed below. Survey papers on selected fields
- are also welcome.
-
-
- TOPICS:
-
- The topics of the workshop include, but are not limited to, the following:
-
- Model Representation:
-
- The representation of the model can be critical to the success of learning
- and to effective use of the model. Examples of representations include state
- machines with Q-values, neural networks, linear and nonlinear functions, and
- logical and qualitative prediction rules. Questions related to model
- representation include: What are the pros and cons of each representation
- with regard to learning, generalization, abstraction, approximation, and
- prediction? How do the models scale? Can continuous actions be modeled?
- How can the tradeoff between detail and generality be balanced? And is
- representation even one of the critical issues in designing such learning
- systems? (Most would say so, but anti-representationalism is growing within
- AI.)
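-
- For illustration only (a sketch of ours, not part of the call), the fragment
- below contrasts two of the representations named above: a tabular
- state-action table of Q-values, which stores exact detail but does not
- generalize, and a linear function over features, which generalizes only as
- well as its (here entirely made-up) features allow.

from collections import defaultdict

# (1) Tabular representation: one Q-value per (state, action) pair.
#     Exact and easy to update, but nothing learned about one state transfers
#     to another, and the table grows with the size of the state space.
q_table = defaultdict(float)

def q_tabular(state, action):
    return q_table[(state, action)]

# (2) Linear function approximation: one weight per feature.
#     Compact and generalizes across states, but its accuracy is limited by the
#     hand-chosen feature encoding (this one is hypothetical and assumes
#     numeric states and actions).
weights = [0.0, 0.0, 0.0]

def features(state, action):
    return [1.0, float(state), float(action)]

def q_linear(state, action):
    return sum(w * x for w, x in zip(weights, features(state, action)))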
-
- The Utility of Models:
-
- When is modeling useful? For many (low-level) control tasks, it would be
- impossible or very expensive to learn a complete and accurate model, and it
- might be easier to learn to control without learning a model in the first
- place. On the other hand, for many (mostly high-level) tasks, a model is
- essential. A related question is how to measure the usefulness of a model.
- One measure is task-specific performance; another is how readily the model
- can be applied to new tasks.
-
- Balancing Exploration and Planning:
-
- This problem is better known as the explore/exploit tradeoff, and it is
- modeled in the statistics and GA communities by k-armed bandit problems.
- Action models enable the agent to "mentally" plan its actions toward its
- goals. However, since the environment and the goals may change, models
- cannot always be perfect and must be revised through exploration. How to
- balance these two activities is a challenging problem.
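-
- As an illustration of the k-armed bandit framing (a sketch of ours, not part
- of the call; the Bernoulli payoff probabilities are made-up values), the
- epsilon-greedy strategy below explores a random arm with probability epsilon
- and otherwise exploits the arm with the best running mean payoff.

import random

def epsilon_greedy_bandit(payoff_probs, steps=1000, epsilon=0.1):
    """Play a k-armed Bernoulli bandit; return total reward and per-arm estimates."""
    k = len(payoff_probs)
    pulls = [0] * k
    means = [0.0] * k
    total = 0.0
    for _ in range(steps):
        if random.random() < epsilon:
            arm = random.randrange(k)                     # explore
        else:
            arm = max(range(k), key=lambda a: means[a])   # exploit current belief
        reward = 1.0 if random.random() < payoff_probs[arm] else 0.0
        pulls[arm] += 1
        means[arm] += (reward - means[arm]) / pulls[arm]  # incremental mean update
        total += reward
    return total, means

# e.g. epsilon_greedy_bandit([0.2, 0.5, 0.8]) should mostly settle on the last arm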
-
- Discovering Hidden States:
-
- Environments may have states that cannot be perceived directly by the
- learner. To learn an accurate and useful action model, these hidden states
- may need to be discovered and utilized in the learned model. Still, there
- is the likelihood that the action model will be incomplete. To what extent
- can an incomplete model be useful in achieving the agent's goals?
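-
- A toy illustration of the difficulty (our own example, not part of the call):
- two underlying states that the learner perceives identically, so a model
- defined only over observations cannot reliably predict the outcome of the
- same action until a hidden state variable is introduced.

# True dynamics over hidden states the learner cannot perceive directly.
true_dynamics = {
    ("door_locked", "push"): "still_closed",
    ("door_unlocked", "push"): "door_opens",
}

# What the learner actually observes: both hidden states look identical.
observation = {
    "door_locked": "closed_door",
    "door_unlocked": "closed_door",
}

# A model over observations sees ("closed_door", "push") lead sometimes to
# "still_closed" and sometimes to "door_opens"; that unpredictability is the
# cue that a hidden state (locked or not) needs to be posited in the model.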
-
- Experiment Design and Learning from Experiments:
-
- Besides exploring the environment more or less randomly, how does the agent
- design experiments and learn from them? The design of experiments may be
- based on the status of the current model, on deficiencies found while using
- the model, on changes in the overall system goals, etc. The problem is
- closely related to action selection or active learning. How do the negative
- theoretical results---e.g., a theorem stating that neither membership
- queries nor equivalence queries alone are sufficient to learn the model
- effectively---impact this practical problem?
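-
- One simple reading of designing experiments from the status of the current
- model is sketched below (our own illustration, not a method proposed here):
- among the available actions, try the one whose outcome the learned model is
- least certain about, measured by the entropy of the observed outcome
- frequencies.

import math
from collections import defaultdict

# outcome_counts[(s, a)][s_next] = how often action a in state s led to s_next
outcome_counts = defaultdict(lambda: defaultdict(int))

def outcome_entropy(s, a):
    """Entropy of the empirical outcome distribution for (s, a)."""
    counts = outcome_counts[(s, a)]
    total = sum(counts.values())
    if total == 0:
        return float("inf")   # never tried: maximally informative to try
    probs = [c / total for c in counts.values()]
    return -sum(p * math.log(p) for p in probs if p > 0)

def choose_experiment(s, actions):
    """Pick the action in state s whose predicted outcome is most uncertain."""
    return max(actions, key=lambda a: outcome_entropy(s, a))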
-
- Reasoning about the Models:
-
- Techniques that can effectively apply the models to the goals of the system
- while keeping the models responsive to sudden changes in the environment.
-
- Comparison of Learning Methods:
-
- There are quite a few existing methods for learning action models.
- Comparison of them may yield inspiration for new methods. Questions related
- to this topic include: In what type of environment can a learning method
- function? How fast does it converge? Can it handle noise or incomplete
- state information? Does it support model abstraction? etc.
-
-
-
- FORMAT OF WORKSHOP:
-
- The research papers will be organized by topic and presented sequentially.
- Panels and discussion sessions for each topic will be organized after
- papers have been accepted.
-
-
- ATTENDANCE:
-
- The workshop will be attended by authors of accepted papers as well as
- researchers who are willing to contribute to or participate in the
- discussions. All submitted papers will be included in the proceedings
- distributed to the participants, and about 8 to 10 of them will be selected
- for presentation. The committee is working actively to publish the papers as
- citable "AAAI Press Technical Reports." The workshop lasts one day, and the
- number of attendees will be no more than forty (40). Those who are
- interested should submit a summary of their research and publications and
- will be invited to attend on that basis.
-
-
- SUBMISSION REQUIREMENT:
-
- Please send four (4) copies of a short paper or an extended abstract of the
- research. Neither abstracts nor papers may exceed five (5) pages in length.
- Standard LaTeX or plain ASCII text may be sent by email. Hard-copy and email
- submissions must both arrive by the submission deadline.
-
-
- SUBMISSION DEADLINE: March 12, 1993.
-
- NOTIFICATION DATE: April 2, 1993
-
- FINAL DATE FOR PAPERS:
-
- Camera-ready full papers are due April 30, 1993. Papers received after this
- date may not be included in the working notes.
-
-
- SUBMIT TO:
-
- Wei-Min Shen
- Information System Division
- Microelectronics and Computer Technology Corporation
- 3500 West Balcones Center Drive
- Austin, TX 78759
- TEL 512-338-3295
- FAX 512-338-3890
- wshen@mcc.com
-
-
- WORKSHOP COMMITTEE:
-
- Phil Laird
- NASA Ames Research Center
- Moffett Field, CA 94035
- laird@ptolemy.arc.nasa.gov
-
- Sridhar Mahadevan
- IBM T.J. Watson Research Center, Box 704
- Yorktown Heights, NY 10598
- sridhar@watson.ibm.com
-
- Wei-Min Shen (Chair)
- Microelectronics and Computer Technology Corporation
- 3500 West Balcones Center Drive
- Austin, TX 78759
- wshen@mcc.com
-
- Richard Sutton
- GTE Laboratories Incorporated
- 40 Sylvan Rd.
- Waltham, MA 02254
- sutton@gte.com
-
-