- Path: sparky!uunet!zaphod.mps.ohio-state.edu!cis.ohio-state.edu!ucbvax!ADS.COM!Vision-List-Request
- From: Vision-List-Request@ADS.COM (Vision-List moderator Phil Kahn)
- Newsgroups: comp.ai.vision
- Subject: VISION-LIST digest 11.43
- Message-ID: <9212152214.AA20306@deimos.ads.com>
- Date: 15 Dec 92 21:55:20 GMT
- Sender: daemon@ucbvax.BERKELEY.EDU
- Reply-To: Vision-List@ads.com
- Distribution: inet
- Organization: The Internet
- Lines: 1579
- Approved: vision-list@ads.com
-
- VISION-LIST Digest Tue Dec 15 13:55:20 PDT 92 Volume 11 : Issue 43
-
- - ***** The List is moving sites soon: you will be notified ****
- - Send submissions to Vision-List@ADS.COM
- - Vision List Digest available via COMP.AI.VISION newsgroup
- - If you don't have access to COMP.AI.VISION, request list
- membership to Vision-List-Request@ADS.COM
- - Access Vision List Archives via anonymous ftp to FTP.ADS.COM
-
- Today's Topics:
-
- Re: Request for Road Images
- request for Kanji characters
- Non-Rigid Recognition
- Signal to Noise ratio Question
- Plausible pattern recognition (Info needed)
- Help! Energy Reduction Algorithm for SP
- Optical Music Recognition - Bibliography available
- LaboImage 4.0: new X11 version
- Studentships
- Doctoral Program in Philosophy-Psychology-Neuroscience
- Special issue of the machine vision and applications
- AAAI '93 : Call for Papers
- CFP: Geometric Methods in Computer Vision
- Sun Pix (long)
- References on automatic face recognition (long)
- Snakes: summary of responses (long)
-
- ----------------------------------------------------------------------
-
- Date: Fri, 04 Dec 92 09:20:27 +1100
- Subject: Re: Request for Road Images
- From: stevea@vast.unsw.edu.au
-
- From Vision-List digest 11.42:
-
- I am interested in getting a sequence of road images (24-bit color) taken
- from a moving car. Road signs of any type should be visible along the
- road.
-
- There are several places you can get this sort of imagery.
- Probably the best site though is cicero.cs.umass.edu, which is the home
- site of Allen R. Hanson and Edward M. Riseman (of image segmentation
- fame). They have a great liking for roadside images in their
- experimental results, and there are many samples there. Please note
- though, that not all of them have road signs in them. The directories
- you probably want most are area_road_scenes and amherst_roads.
- Another site that has a set of images that may be helpful is
- isy.liu.se, which has a video sequence taken from inside a car. This
- sequence contains one prominent road sign, and possibly one or two more
- further away from the camera involving road works.
- I hope this info is useful for you.
-
- cheers
- -steve
-
- Steve Avery | VLSI and Systems Technology Laboratory,
- (PhD type student person) | School of Computer Science & Engineering,
- | University of New South Wales,
- stevea@vast.unsw.edu.au | P.O. Box 1, Kensington, 2033, Australia.
-
- "Why have you two eyes and just one mouth?" -Ryuchi Sakamoto
-
- Here is the content of the file README.image_files that is located
- in the home directory at this site:
-
-
- This directory contains the image library of the VISIONS group at
- UMass/Amherst. Each sub-directory contains a specific sort of image:
- Typically the name of the directory suggests the sort of images in the
- directory.
-
- There is also an interactive DB of image descriptions available. Telneting
- in as 'visimgdb' to cicero.cs.umass.edu will place you in an interactive
- program which will allow you to search a data base of image descriptions.
- The data base and the program to access it are also available for FTP'ing
- in the directory imdb (the data base is vis-imagedb.ilb* and the program
- source is in image_db.tar.Z).
-
- Most of the image data is in what we call "Universal Plane File
- Format" - this is a variation of our LLVS plane file format. The
- directory 'universal_plane_file_format' contains documentation
- describing this format as well as code to read these files on several
- different platforms. These plane files always have an extension of
- '.plane'. Plane files should always be transferred in binary mode.
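-
- For example, a binary-mode fetch of the image data base program mentioned
- above could look like this (a minimal Python sketch; the host and path are
- the ones given in this README, everything else is illustrative):
-
-     from ftplib import FTP
-
-     ftp = FTP('cicero.cs.umass.edu')    # home of the VISIONS image library
-     ftp.login()                         # anonymous login
-     with open('image_db.tar.Z', 'wb') as out:
-         # retrbinary() transfers in binary (image) mode; never fetch
-         # .plane or .Z files in ASCII mode.
-         ftp.retrbinary('RETR imdb/image_db.tar.Z', out.write)
-     ftp.quit()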
-
- If you have questions or can't find a particular sort of image, let us
- know by sending E-Mail to me Robert Heller (systems programmer)
- <heller@cs.umass.edu> or Val Conti (Lab Manager) <conti@cs.umass.edu>.
-
- Jan-15-1991 - by popular request, some of the motion data files have been
- collected into tar files and compressed. Also, all of the files in
- 'universal_plane_file_format' have also been collected into a tar file
- and compressed. This has been done to make FTP'ing easier. RPH.
-
- ------------------------------
-
- Date: Mon, 14 Dec 92 09:31 JST
- From: SHARONE%HADASSAH@VMS.HUJI.AC.IL
- Subject: request for Kanji characters
-
- A friend of mine needs, for his research in Pattern Recognition,
- a set of contours of KANJI characters.
- Email or pointers to an FTP site are welcome.
-
- Thanks in Advance.
- Haim Karger
- Dept of Nuclear Medicine
-
- ------------------------------
-
- From: brian@ai.mit.edu (Brian Subirana)
- Date: Mon, 14 Dec 92 09:23:35 EST
- Subject: Non-Rigid Recognition
-
- I am interested in collecting references on non-rigid objects for a
- survey that I am writing. Particularly, but not exclusively, recent
- ones on the recognition of non-rigid objects.
-
- Pointers are most welcome,
-
- Brian Subirana
- MIT AI Lab
- Email: brian@ai.mit.edu
-
- ------------------------------
-
- Date: Sat, 12 Dec 1992 19:19:51 -0500
- From: kwon lim <lim@silver.ucs.indiana.edu>
- Subject: Signal to Noise ratio Question
-
- Could someone tell me if the following phrase makes sense?
-
- "A signal to noise ratio of a SAR image is 1 or 2".
-
- My question is: is it possible to state the signal-to-noise ratio
- of a two-dimensional signal (say, a grey-scale image) without considering
- some noise model? It seems like there is some way of obtaining the
- signal-to-noise ratio as a measure of how good or bad the signal is.
- In other words, my question boils down to the one of how to separate
- signal and noise component. Any ideas or references will be
- appreciated.
- Thanks in advance.
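-
- One common working convention, in the absence of an explicit noise model, is
- to pick a visually uniform patch of the image and quote the SNR as its mean
- over its standard deviation -- which simply hides the noise model in the
- assumption that the patch is truly constant. A minimal sketch of that
- convention:
-
-     import numpy as np
-
-     def snr_from_uniform_patch(image, r0, r1, c0, c1):
-         # Crude SNR estimate: mean over standard deviation of a patch that
-         # is assumed to have a constant underlying value.
-         patch = np.asarray(image, dtype=float)[r0:r1, c0:c1]
-         return patch.mean() / patch.std()
-
- For SAR in particular, a ratio of 1 or 2 is often a statement about speckle
- statistics rather than about additive sensor noise.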
-
- ------------------------------
-
- Date: Fri, 4 Dec 1992 02:44:31 GMT
- From: yang@dante.cs.uiuc.edu (Der-Shung Yang)
- Organization: University of Illinois, Dept. of Comp. Sci., Urbana, IL
- Subject: Plausible pattern recognition (Info needed)
- Keywords: plausible pattern recognition
-
- Hi, I'm doing some research on pattern recognition. The domain I'm working
- on has a special feature that seems interesting (at least to me:-) in general.
- I'm looking for any special technique that handles this situation or any
- opinions on whether this feature exists in any other domains.
-
- I'm trying to develop a system to recognize what a user is drawing on a CAD
- (Computer-Aided Design) screen. This looks like a very general pattern
- recognition problem. However, in this domain, there's no need for "perfect
- recognition." That is, my system only needs to suggest a fixed number of
- objects that look similar to what the user is drawing, possibly ranked by
- the "similarity" between the suggested object and the user's input, with the
- most similar one output first. So, a good system should output the target
- object as soon as possible, but not necessarily get it right on the first try.
- I call this type of problem "plausible pattern recognition," meaning that the
- recognition only needs to make plausible suggestions, not the perfect one.
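-
- In other words, this amounts to ranked retrieval: score every candidate
- object against the user's drawing and return the k best matches, most
- similar first. A minimal sketch, assuming some similarity measure already
- exists (the measure itself is the hard part, and is purely hypothetical here):
-
-     def suggest(drawing, catalogue, similarity, k=5):
-         # Return the k catalogue objects most similar to the drawing, best
-         # match first; 'similarity' returns a higher score for a closer match.
-         ranked = sorted(catalogue, key=lambda obj: similarity(drawing, obj),
-                         reverse=True)
-         return ranked[:k]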
-
- To me, this type of problem seems to reduce the difficulty of recognition
- and is more doable than trying to find a "perfect" system. If you came
- across something similar to this type of problem or have some opinions on
- whether "plausible PR" is important theoretically or practically, please let
- me know. Both emails and followups are welcome. Comments, references, and/or
- critiques are all greatly appreciated.
-
- DerShung Yang
- yang@cs.uiuc.edu
- Beckman Institute
- Univ. of Illinois at Urbana-Champaign
-
- ------------------------------
-
- Date: Fri, 11 Dec 92 19:30:52 EST
- From: ge@acsu.buffalo.edu (Wang Ge)
- Subject: Help! Energy Reduction Algorithm for SP
-
- Dear Colleagues,
-
- I once heard of the energy reduction algorithm
- for recovering a signal on an interval from
- its known values on other intervals.
- Would you please give me some pointers to
- recent papers and public software (C is preferred).
- Thank you very much!
-
- Ge
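-
- This sounds like band-limited extrapolation in the Gerchberg-Papoulis style:
- alternately re-impose the known samples and project onto the set of
- band-limited signals, so the out-of-band energy shrinks at each step. A
- minimal sketch of that alternating-projection idea (not necessarily the
- exact algorithm being asked about), assuming the signal is band-limited to
- the lowest bw FFT bins:
-
-     import numpy as np
-
-     def extrapolate(samples, known, bw, iters=200):
-         # samples: length-N array, valid only where the boolean mask 'known'
-         # is True; bw: assumed one-sided bandwidth in FFT bins (bw >= 1).
-         x = np.where(known, samples, 0.0).astype(float)
-         for _ in range(iters):
-             X = np.fft.fft(x)
-             X[bw:-bw] = 0.0                  # project onto band-limited signals
-             x = np.fft.ifft(X).real
-             x[known] = samples[known]        # re-impose the known values
-         return x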
-
- ------------------------------
-
- Date: Mon, 14 Dec 1992 13:05:12 GMT
- From: roth@sitter.ips.id.ethz.ch (Martin Roth)
- Organization: ETH Zurich (Swiss Federal Institute of Technology)
- Subject: Optical Music Recognition - Bibliography available
-
- I have compiled a bibliography for Optical Music Recognition (automatic
- reading of a scanned-in page of music; pattern recognition).
-
- PostScript and BibTeX files are available via ftp from maggia.ethz.ch
- (129.132.17.1), login ftp, directory /pub/roth/omrbib.
-
- If you can't use ftp or can't uncompress *.Z files, drop me a mail.
-
- Comments and additions about new publications are welcome!
-
- _ Martin Roth Martin Roth ETHZ, ips, RZ F16
- |\ /|_) Mail: roth@ips.id.ethz.ch Sandacker 14 ETHZ, ti, IFW B45.2
- | \/ | \ Dipl. Eng. Comp. Sci. ETH CH-8154 Oberglatt g 01/256 55 68
- (F-)emails welcome! Switzerland p 01/850 32 75
-
- ------------------------------
-
- Date: Wed, 9 Dec 1992 14:39:01 +0100
- From: Alain Jacot-Descombes <jacot@cui.unige.ch>
- Subject: LaboImage 4.0: new X11 version
-
- [ A new version of LaboImage was just installed in the Vision List Archive
- SHAREWARE subdirectory. Thanks to the unige group!
- phil... ]
-
- LABOIMAGE
-
- Original notice
-
- March 8th, 1989: LaboImage 2.0 (SunView)
- August 24th, 1990: LaboImage 3.0 (SunView)
- March 19th, 1991: LaboImage 3.1 (SunView)
- December 1st, 1992: LaboImage 4.0 (X11 / OSF Motif)
- Computer Science Center, University of Geneva, Switzerland
-
- Thank you for your interest in LaboImage!
-
-
- GENERAL DESCRIPTION
- Labo Image is a window-based software package for image processing and analysis. It
- contains a comprehensive set of operators as well as general utilities. It
- is designed to be open-ended; new modules can easily be added. The software,
- written in C, is now based on X11 / OSF Motif. The current version has been
- developed and tested on a Sun SPARC station, with X11r4 and Motif 1.1.
- LaboImage has been extensively used by students as well as researchers from
- various domains: computer science (image analysis), medicine, biology, physics.
- It is distributed free of charge (source code).
-
- STATUS
-
- Version 4.0, 1st December 1992:
-
- - hosts: Sun SPARC station;
- - OS: 4.1.1;
- - window system: X11r4 / Motif 1.1;
- - language: C;
- - approx. code size: source 2.5MB (80'000 lines), executable 2.5MB;
- - documentation: interactive help (english)
-
- MEANS OF DISTRIBUTION
-
- LaboImage source code is available by anonymous ftp at:
- 1) ftp.ads.com, login name anonymous, in pub/VISION-LIST-ARCHIVE/SHAREWARE.
- 2) peipa.essex.ac.uk, login name anonymous, in ipa/proc-src.
- If you have no access to ftp, please contact the author.
- If you wish to be kept current with updates, error reports, ..., please send
- us a mail with your full name, regular and electronic addresses.
-
- DISTRIBUTION POLICY
-
- In essence:
- - this is a non-profit software, but it is our property and the copyright
- notice must appear;
- - the reference to cite in case of published results obtained with Labo
- Image is: "A. Jacot-Descombes, M. Rupp, T. Pun: `LaboImage: A portable
- window-based environment for research in image processing and analysis',
- SPIE Symposium on Electronic Imaging Science and Technology, Image
- Processing: Implementation and Systems, San Jose, California, USA,
- Feb. 9-14, 1992";
- - no responsibility is assumed;
- - not to be used for profit making purposes;
- - bugs will usually be corrected since we use the software intensively ;
- - modifications should be communicated to us, with (normally) allowance
- for redistribution.
-
- PAYMENT
-
- Although LaboImage has undergone many upgrades and suffered in the hands of
- many users, the current version is certainly not bug free. For the time being,
- we require NO prepayment, return postage or anything.
- We may however change this policy in the future, and ask for nominal fees to
- cover material expenditures. HOWEVER, if you are satisfied with the product,
- why not send us some "souvenir" (post card, drink, etc.) from your country...??!!
-
- CAPABILITIES
-
- Labo Image is an interactive software package whose interface is menu-, mouse- and
- window-based. Its main features are:
- - input-output: LaboImage format file, SUN raster file; postscript;
- - display: mono, RGB, dithering, color table editor;
- - preprocessing: filters (median, high pass, low pass: hamming, gauss, etc),
- background subtraction, histogram equalization;
- - processing: thresholding, Fourier transforms, edge extractions: various
- operators, ridge-riding, zero-crossing; segmentation into regions,
- binary and gray tone mathematical morphology;
- - measures: histograms, image statistics, power, region outlining,
- object counting;
- - auxiliary: conversions, arithmetic & logical operations, noise addition,
- image generation, magnification, convolution/correlation with
- masks or images; padding;
- - elementary interactive operations: region outlining, statistics and
- histogram computation, etc.;
- - special tools:
- - modify image at pixel level interactively,
- - one-dimensional gel analysis,
- - expert system for image segmentation (not implemented in LaboImage 4.0);
- - macros definitions, save and replay (not implemented in LaboImage 4.0);
- - on line documentation.
-
- IMAGE FORMATS
-
- Own format: descriptor file + data file (binary, byte, int, float, complex;
- mono or RGB). Supports also Sun raster format. Conversions to various other
- formats.
- Data structures:
- - iconic (pixel-based), with each image having its own parameter list;
- - vectors (histograms, look-up tables);
-
- MISCELLANEOUS REMARKS (answers to commonly asked questions)
-
-
- - FILE FORMAT: we decided to go for: 1) a machine-independent format; 2) a
- simple, data (i.e. signal) oriented format. At the beginning of the
- development (summer 1987), we were not aware of any image format used by
- the whole community. There seems now to be some progress on the matter
- (TIFF, etc.), but they are still not that widely used in the community.
- Also, due to development priorities, we consider conversion routines a
- secondary issue as long as our format is simple.
- In addition, the menu "ACQUISITION=>free byte format" is fairly versatile.
- Also, SUN raster images can now be read into LaboImage and, likewise,
- images in the system may be stored in SUN raster format.
- However.. we would welcome any software contribution!
-
- - 3D IMAGE PROCESSING: nothing special for such images.
-
- - ON LINE HELP: available.
-
- ACKNOWLEDGEMENTS
-
- More than 10 people have so far participated in this project, and their
- contribution is gratefully acknowledged.
- Staff: Pierre-Yves Burgi, Claudia Coiteux-Rosu, Ziping Hu, Alain Jacot-
- Descombes, Rene Lutz, Christian Pellegrini, Thierry Pun, Marianne Logean-
- Rupp, Krassimir Todorov.
- Students: Anne Bobillier, Alain Brunner, Markus Buchi, Christian Girard,
- Rene Perrier, Vrinda Shukla.
- Amongst them, A. Jacot-Descombes is responsible for general design issues,
- and is the keystone for implementation; R. Lutz is responsible for display
- manipulations (Color Table Editor,etc.); T. Pun is responsible for the
- original layout and general design issues;
- V. Shukla is responsible for the upgrade from LaboImage 2.0 to LaboImage 3.0;
- Marianne Logean-Rupp is responsible for the portability of LaboImage to X11
- (LaboImage 4.0).
- We are particularly grateful to Drs. D. F. Hochstrasser and O. Ratib, Digital
- Imaging Unit, Computer Center, University Hospital of Geneva, for their
- extended support. LaboImage 4.0 could not have existed without their help.
-
- CONTACTS
-
- Particular problems will be redirected to relevant persons, but we prefer
- that all communications be made to the same address:
- e-mail: "pun@uni2a.unige.ch" or pun@cgeuge51.bitnet (if this fails,
- "pun@cui.unige.ch").
- tel.: +(4122) 705 76 27 (T. Pun), 705 76 30 (A. Jacot-Descombes).
- fax: +(4122) 320 29 27.
- postal address: Thierry Pun
- LaboImage
- Computing Science Center, University of Geneva
- 24, rue du General-Dufour
- CH - 1211 Geneva 4
- SWITZERLAND
-
- ------------------------------
-
- Date: Fri, 4 Dec 92 14:22:34 GMT
- From: M.Petrou@ee.surrey.ac.uk
- Subject: studentships
-
- UNIVERSITY OF SURREY
-
- Research Studentships - Image Processing/Computer Vision
-
- Three research studentships are available from 15 January 1993 to carry out
- PhD research in the Vision, Speech and Signal Processing Research Group of
- the Department of Electronic and Electrical Engineering, which has extensive
- computing resources including SUN SPARCstations as well as specialised image
- processing facilities.
-
- The studentships are in the following areas:
- - 3D object recognition with emphasis on active vision
- - Image processing for remote sensing to study contextual multispectral
- image classification.
- - Automatic inspection of colour texture surfaces.
-
- Successful applicants will have a very good degree in Computer
- Science, Physics, Electrical Engineering or Mathematics with
- dedication to and aptitude for research. The value of the
- studentships will be set at or above the SERC rates depending on the
- circumstances.
-
- Further information and application forms may be obtained from Professor
- J Kittler on +44 483 509294, email: J.Kittler@ee.surrey.ac.uk, or at
- the Department of Electronic and Electrical Engineering,
- University of Surrey, Guildford, Surrey
- GU2 5XH United Kingdom.
-
- ------------------------------
-
- Date: Tue, 15 Dec 92 16:43:25 GMT
- From: Andy Clark <andycl@syma.sussex.ac.uk>
- Subject: Doctoral Program in Philosophy-Psychology-Neuroscience
-
- First Announcement of a New Doctoral Programme in
-
- PHILOSOPHY-NEUROSCIENCE-PSYCHOLOGY
- at
- Washington University in St. Louis
-
- The Philosophy-Neuroscience-Psychology (PNP) program
- offers a unique opportunity to combine advanced
- philosophical studies with in-depth work in Neuroscience
- or Psychology. In addition to meeting the usual requirements for
- a Doctorate in Philosophy, students will spend one year working in
- Neuroscience or Psychology. The Neuroscience option will draw
- on the resources of the Washington University
- School of Medicine which is an internationally acknowledged
- center of excellence in neuroscientific research. The
- initiative will also employ several new PNP related Philosophy faculty
- and post-doctoral fellows.
-
-
- Students admitted to the PNP program will embark
- upon a five-year course of study designed to fulfill all the
- requirements for the Ph.D. in philosophy, including an
- academic year studying neuroscience at Washington
- University's School of Medicine or psychology in the
- Department of Psychology. Finally, each PNP student will
- write a dissertation jointly directed by a philosopher and a
- faculty member from either the medical school or the
- psychology department.
-
- THE FACULTY
-
- Roger F. Gibson, Ph.D., Missouri, Professor and Chair:
- Philosophy of Language, Epistemology, Quine
-
- Robert B. Barrett, Ph.D., Johns Hopkins, Professor:
- Pragmatism, Renaissance Science, Philosophy of Social
- Science, Analytic Philosophy.
-
- Andy Clark, Ph.D., Stirling, Visiting Professor (1993-6) and
- Acting Director of PNP:
- Philosophy of Cognitive Science, Philosophy of Mind,
- Philosophy of Language, Connectionism.
-
- J. Claude Evans, Ph.D., SUNY-Stony Brook, Associate Pro-
- fessor: Modern Philosophy, Contemporary Continental
- Philosophy, Phenomenology, Analytic Philosophy, Social and
- Political Theory.
-
- Marilyn A. Friedman, Ph.D., Western Ontario, Associate
- Professor: Ethics, Social Philosophy, Feminist Theory.
-
- William H. Gass, Ph.D., Cornell, Distinguished University
- Professor of the Humanities: Philosophy of Literature,
- Photography, Architecture.
-
- Lucian W. Krukowski, Ph.D., Washington University, Pro-
- fessor: 20th Century Aesthetics, Philosophy of Art,
- 18th and 19th Century Philosophy, Kant, Hegel,
- Schopenhauer.
-
- Josefa Toribio Mateas, Ph.D., Complutense University,
- Assistant Professor: Philosophy of Language, Philosophy
- of Mind.
-
- Larry May, Ph.D., New School for Social Research, Pro-
- fessor: Social and Political Philosophy, Philosophy of
- Law, Moral and Legal Responsibility.
-
- Stanley L. Paulson, Ph.D., Wisconsin, J.D., Harvard, Pro-
- fessor: Philosophy of Law.
-
- Mark Rollins, Ph.D., Columbia, Assistant Professor:
- Philosophy of Mind, Epistemology, Philosophy of Science,
- Neuroscience.
-
- Jerome P. Schiller, Ph.D., Harvard, Professor: Ancient
- Philosophy, Plato, Aristotle.
-
- Joyce Trebilcot, Ph.D., California at Santa Barbara, Associ-
- ate Professor: Feminist Philosophy.
-
- Joseph S. Ullian, Ph.D., Harvard, Professor: Logic, Philos-
- ophy of Mathematics, Philosophy of Language.
-
- Richard A. Watson, Ph.D., Iowa, Professor: Modern Philoso-
- phy, Descartes, Historical Sciences.
-
- Carl P. Wellman, Ph.D., Harvard, Hortense and Tobias Lewin
- Professor in the Humanities: Ethics, Philosophy of Law,
- Legal and Moral Rights.
-
- EMERITI
-
- Richard H. Popkin, Ph.D., Columbia: History of Ideas,
- Jewish Intellectual History.
-
- Alfred J. Stenner, Ph.D., Michigan State: Philosophy of
- Science, Epistemology, Philosophy of Language.
-
- FINANCIAL SUPPORT
-
- Students admitted to the Philosophy-Neuroscience-Psychology
- (PNP) program are eligible for five years of full financial
- support at competitive rates, subject to satisfactory
- academic progress.
-
- APPLICATIONS
-
- Application for admission to the Graduate School should be
- made to:
- Chair, Graduate Admissions
- Department of Philosophy
- Washington University
- Campus Box 1073
- One Brookings Drive
- St. Louis, MO 63130-4899
-
- Washington University encourages and gives full
- consideration to all applicants for admission and financial
- aid without regard to race, color, national origin,
- handicap, sex, or religious creed. Services for students
- with hearing, visual, orthopedic, learning, or other
- disabilities are coordinated through the office of the
- Assistant Dean for Special Services.
-
- ------------------------------
-
- Date: Tue, 15 Dec 1992 17:59:45 GMT
- From: sethi@usha.cs.wayne.edu (Ishwar K. Sethi)
- Organization: Wayne State University, Detroit
- Subject: special issue of the machine vision and applications
- Keywords: machine vision, neural networks
-
- CALL FOR PAPERS
-
- Special Issue of Machine Vision and Applications on Neural
- Networks for Machine Vision
-
- Developments in artificial neural network technology in
- recent years have provided machine vision researchers and developers
- with new tools and techniques to build machine vision algorithms and
- systems that exhibit human-like visual perception capabilities.
- The goal of the special issue is to capture these developments in
- neural network theory and its applications to machine vision and to
- provide the readers with an overview of the state-of-the-art. To meet
- this goal, papers are solicited for the special issue which is scheduled
- to appear in early 1995. Possible topics for the special issue, but not
- limited to, include the followings:
-
- * Learning and Self-Organization for Segmentation, Feature Extraction,
- and Recognition.
- * Motion Detection, Tracking, and Characterization using Neural Networks.
- * Hardware Implementations including Smart Vision Chips.
- * Neural Networks for Multisensory Processing.
- * Automated Visual Monitoring and Inspection using Neural Networks.
- * Reverse Engineering for Machine Vision using Neural Networks.
-
- Papers emphasizing the technical details and theoretical background of machine
- vision systems that have strong neural network components and are currently in
- use are especially welcome.
-
- The papers should be prepared following the Machine Vision and Applications
- guidelines. All papers will be reviewed according to the guidelines of
- Machine Vision and Applications. Please submit four copies of your manuscript
- to:
-
- Professor Ishwar K. Sethi
- Department of Computer Science
- Wayne State University
- Detroit, MI 48202
- U.S.A.
-
- The deadline for submission is July 31, 1993. For enquiries, send E-mail to
- sethi@cs.wayne.edu or fax to 313-577-6868.
-
- ------------------------------
-
- Date: Mon, 7 Dec 92 14:37:42 -0500
- From: carlson@titanic.cs.umass.edu (Adam Carlson)
- Subject: AAAI '93 : Call for Papers
-
- Call for Papers
- AAAI-93
-
- AAAI-93 is the Eleventh National Conference on Artificial Intelligence. The purpose
- of the conference is to promote research in artificial
- intelligence (AI) and scientific interchange among AI
- researchers and practitioners.
- Papers may represent significant contributions to all
- aspects of AI:
- a) the principles underlying cognition, perception, and
- action in humans and machines;
- b) the design, application, and evaluation of AI
- algorithms and intelligent systems; and
- c) the analysis of tasks and domains in which
- intelligent systems perform.
- In recognition of the wide range of methodologies and
- research activities legitimately associated with AI, we
- invite authors to submit papers describing both
- experimental and theoretical results from all stages of
- AI research. In particular, we encourage submission of
- papers that present promising research directions by
- describing innovative concepts, techniques, perspectives,
- or observations that are not yet supported by mature
- results. To be accepted, such submissions must include
- substantial analysis of the ideas, the technology needed
- to realize them, and their potential impact. In addition,
- because of the essential interdisciplinary nature of AI
- and the need to maintain effective communication across
- sub-specialties, we encourage authors to position and
- motivate their work in the larger context of the general
- AI community. While papers concerned with applications
- of AI are invited, those that describe working
- commercial systems should be submitted to the IAAI
- conference.
-
-
- Requirements for Submission
-
- Authors must submit six (6) complete printed copies of
- their papers to the AAAI office by January 13, 1993.
- Papers received after that date will be returned
- unopened. Notification of receipt will be mailed to the
- first author (or designated author) soon after receipt.
- All inquiries regarding lost papers must be made by
- January 27, 1993. Authors are also requested to send
- their paper's title page in an electronic mail message to
- abstract@aaai.org by January 13, 1993. Notification of
- acceptance or rejection of submitted papers will be
- mailed to the first author (or designated author) by
- March 3, 1993. Camera-ready copy of accepted papers
- will be due about one month later.
-
- Paper Format for Review
- All six (6) copies of a submitted paper must be clearly
- legible. Neither computer files nor fax submissions are
- acceptable. Submissions must be printed on 8 1/2" x 11"
- or A4 paper using 12 point type (10 characters per inch
- for typewriters). Each page must have a maximum of 38
- lines and an average of 75 characters per line
- (corresponding to the LaTeX article-style, 12 point).
- Double-sided printing is strongly encouraged.
-
- Length
- The body of submitted papers must be at most 11 pages,
- including figures, tables, and diagrams, but excluding the
- title page and bibliography. Papers exceeding the
- specified length and formatting requirements are subject
- to rejection without review.
-
- Title page
- Each copy of the paper must have a title page (separate
- from the body of the paper) containing the title of the
- paper, the names and addresses of all authors, a short
- (less than 200 word) abstract, and a descriptive content
- area or areas. The title page sent via electronic mail to
- the AAAI office must be in plain ASCII text with each
- section of the title page preceded by the name of that
- section as follows:
- title: <title>
- author: <name of first author>
- address: <address of first author>
- author: <name of last author>
- address: <address of last author>
- abstract: <abstract>
- content areas: <first area>, ...,
- <last area>
- To facilitate the reviewing process, authors are
- requested to select appropriate content areas from the
- list below. Authors are invited to add additional content
- area descriptors to their title page as needed.
- Artificial Life, Automated Reasoning, Behavior-Based
- Control, Belief Revision, Case-Based Reasoning,
- Cognitive Modeling, Common Sense Reasoning,
- Communication and Cooperation, Constraint-Based
- Reasoning, Computer-Aided Education, Connectionist
- Models, Corpus-Based Language Analysis, Deduction,
- Diagnosis, Discourse Analysis, Distributed Problem
- Solving, Expert Systems, Geometrical Reasoning,
- Information Extraction, Knowledge Acquisition,
- Knowledge Representation, Knowledge Sharing
- Technology, Large Scale Knowledge Engineering,
- Learning/Adaptation, Machine Learning, Machine
- Translation, Mathematical Foundations, Multi-Agent
- Planning, Natural Language Processing, Neural Networks,
- Nonmonotonic Reasoning, Perception, Planning,
- Probabilistic Reasoning, Qualitative Reasoning,
- Reasoning about Action, Reasoning about Physical
- Systems, Reactivity, Robot Navigation, Robotics, Rule-
- Based Reasoning, Scheduling, Search, Sensor
- Interpretation, Sensory Fusion/Fission, Simulation,
- Situated Cognition, Spatial Reasoning, Speech
- Recognition, System Architectures, Temporal Reasoning,
- Terminological Reasoning, Theorem Proving, Truth
- Maintenance, User Interfaces, Virtual Reality, Vision, 3-
- D Model Acquisition.
-
-
- Submissions to Multiple Conferences
-
- Papers that are being submitted to other conferences,
- whether verbatim or in essence, must state this fact on
- the title page. If a paper appears at another conference
- (with the exception of specialized workshops), it must
- be withdrawn from AAAI-93. Papers that violate these
- requirements are subject to rejection without review.
-
-
- Review Criteria
-
- Each paper will be carefully reviewed by experts
- specializing in the content areas on the paper's title
- page. Questions that will appear on the review form have
- been reproduced below. Authors are advised to bear
- these questions in mind while writing their papers:
- Significance
- How important is the work reported? Does it attack an
- important/difficult problem or a peripheral/simple one?
- Does the approach offered advance the state of the art?
-
- Originality
- Has this or similar work been previously reported? Are
- the problems and approaches completely new? Is this a
- novel combination of familiar techniques? Does the
- paper point out differences from related research? Is it
- re-inventing the wheel using new terminology?
-
- Quality
- Is the paper technically sound? Does it carefully
- evaluate the strengths and limitations of its
- contribution? How are its claims backed up?
-
- Clarity
- Is the paper clearly written? Does it motivate the
- research? Does it describe the inputs, outputs and basic
- algorithms employed? Does the paper describe previous
- work? Are the results described and evaluated? Is the
- paper organized in a logical fashion?
-
- Publication
- Accepted papers will be allocated six (6) pages in the
- conference proceedings. Up to two (2) additional pages
- may be used at a cost to the authors of $250 per page.
- Papers exceeding eight (8) pages and those violating the
- instructions to authors will not be included in the
- proceedings.
-
- Copyright
- Authors will be required to transfer copyright of their
- paper to AAAI.
-
-
- Please send papers and conference registration inquiries
- to:
-
- AAAI-93
- American Association
- for Artificial Intelligence
- 445 Burgess Drive
- Menlo Park, CA 94025-3496
-
- Registration and call clarification inquiries (ONLY) may
- be sent to the CSNET address: NCAI@aaai.org. Please
- send program suggestions and inquiries to:
-
- Richard Fikes
- Knowledge Systems Laboratory
- Stanford University
- 701 Welch Road, Building C
- Palo Alto, CA 94304
- fikes@ksl.stanford.edu
-
- Wendy Lehnert
- Department of Computer Science
- University of Massachusetts
- Amherst, MA 01003
- lehnert@cs.umass.edu
-
- ------------------------------
-
- Date: Thu, 10 Dec 92 16:51:41 -0500
- From: "Baba Vemuri" <vemuri@scuba.cis.ufl.edu>
- Subject: CFP: Geometric Methods in Computer Vision
-
- CALL FOR PAPERS
-
- Geometric Methods in Computer Vision
- (Part of SPIE's Annual International Symposium on Optoelectronic
- Applied Science and Engineering; 12-13th July 1993;
- San Diego, California,
- San Diego Convention Center, Marriott Hotel and Marina)
-
- Conference Chair: Baba C. Vemuri
- Dept. of Computer & Information Sciences, CSE326
- University of Florida, Gainesville, FL. 32611
-
- Co-chairs:
- Ruud M. Bolle, IBM T. J. Watson Research Center, Yorktown Heights, NY.
- Demetri Terzopoulos, Department of Computer Science, Univ. of Toronto, Canada.
- Richard Szeliski, Cambridge Research Labs, DEC, Cambridge, MA.
- Gabriel Taubin, IBM T. J. Watson Research Center, Yorktown Heights, NY.
- Alan Yuille, Division of Applied Sciences, Harvard University, MA.
- Ramesh C. Jain, Dept. of EECS, Univ. of Michigan, Ann Arbor, MI.
-
- Key Note Address:
-
- Professor Dr. Jan Koenderink
- Physics Lab, Department of Medical and Physiological Physics
- University of Utrecht, Netherlands.
-
-
-
- The theme of this conference is the application of geometric methods to
- low-level vision tasks, specifically for shape and motion estimation.
- Over the past few years, there has been increased interest in the use
- of differential geometry, computational physics and probability theory
- for various vision tasks. Papers describing novel contributions in all
- aspects of geometric and probabilistic methods in vision are
- solicited, with particular emphasis on:
-
- Differential Geometric Methods for Shape Representation.
-
- Energy-based Methods for Shape Estimation.
-
- Probabilistic Techniques for Shape Estimation and Representation.
-
- Geometry and Shape Recognition.
-
-
- New Deadlines
-
- Abstract Due Date: DECEMBER 28, 1992
-
- Manuscript Due Date: April 19, 1993
- (Proceedings will be made available at the conference)
-
- Please FAX or airmail FOUR copies, or email ONE copy of your abstract
- by 14 DECEMBER 1992 to:
-
- SPIE, San Diego '93
- P.O. Box 10, Bellingham, WA 98227-0010
- Shipping Address: 1000 20th Street, Bellingham, WA 98225
- Telephone: 206/676-3290
- FAX: 206/647-1445
- email: abstracts@mom.spie.org (ASCII Files only)
- CompuServe 71630,2177
-
- Your submission should include the title of your abstract, the authors' names,
- affiliations, mailing addresses, phone/FAX numbers, and email addresses, as well
- as the abstract text of approximately 500 words. Please be sure to indicate that
- your abstract is intended for the conference on Geometric Methods in Computer
- Vision II (Vemuri).
-
- Applicants will be notified of acceptance by March 1993. A manuscript due
- date of 19 April 1993 must be strictly observed since the Proceedings of this
- conference will be published before the meeting and available on site.
-
- Note: Late abstract submissions may be considered, subject to program
- time availability and the chairs' approval.
-
- ------------------------------
-
- From: Mark Evans <mre1@it-research-institute.brighton.ac.uk>
- Date: Tue, 8 Dec 92 18:08:00 GMT
- Subject: Sun Pix
-
- I have summarised some responses I got about Sun's VideoPix frame grabber. It
- might be of interest to others.
-
- Regards,
-
- Mark
-
- # Mark Evans mre1@itri.bton.ac.uk #
- # itri!mre1 #
- # ITRI, #
- # University of Brighton, #
- # Lewes Road, #
- # BRIGHTON, #
- # E. Sussex, #
- # BN2 4AT. #
- # Tel: +44 273 642904/642900 #
- # Fax: +44 273 606653 #
-
- >Does anyone have any experience of using the VideoPix card
- >by Sun ? It allows you to capture images and display them
- >on a sparc workstation. Does anyone have a recommendation
- >for a frame grabber for a sparc costing 500-700 pounds ?
-
- Thanks to everyone who supplied me with information. Special thanks to
- MS and EM of Sun Microsystems (both worked on the VideoPix card project) for
- all their help.
-
- *** Users Comments ***
-
- 1. I have used the videopix here in the USA. It seems to work OK for me. I
- don't know how high a video quality that you need but the videopix only has
- 7-bit analogue to digital converters. I don't know of anything else in the
- same price range but I haven't looked for about 1.5 years.
-
- 2. We have three or four VideoPix boards and are very happy with them.
- They're not full-motion though, so if this is a requirement for you then the
- VideoPix may not be what you want.
-
- The boards can capture and display about 15fps of greyscale video in a small
- window. As you increase the size of the window (and hence the amount of data
- which needs to be blitted through the window system) and add color, the
- capture and display rate drops and eventually bottoms out at about 5fps for a
- fairly large color window.
-
- The board comes with a nice software library and a GUI interface, and seems
- to be a well thought-out product. We've had no complaints at all.
-
- 3. I've used the Sun VideoPix card. When it comes to grabbing single frames
- (motionless) it's OK. I use the PAL-format, which gives about 720x575
- pixels. NTSC gives, I think, 640x480. Also, it stores 7 bits of luminosity
- per pixel, and 2*7 bits of chrominance per every four pixels. So, the
- color-information appears a bit slower than the intensity. Maybe this is only
- natural. The eye might be better at intensity differences than those of
- color.
-
- To grab a movie in realtime is harder. I've managed to grab about 6 frames/s,
- on a Sparc 2, each frame having a resolution of 320x240 full color. Full
- frame-rate would be 25-30 frames/s.
-
- 4. I have messed around with the videopix card quite a bit, and it seems to
- be a viable tool for conferencing, image-oriented mail and multimedia, etc.,
- but does not produce the quality required for high-end image processing
- applications.
-
- 5. We have one of these boards here at ANU, but we just got the thing so we
- don't have much experience with it so far. It can display about a frame a
- second from live video or TV in color, but the quality isn't as good as I
- expected. That may be a result of the poor quality signal from the TV though.
- It can do a few frames a second in black and white. The frame grabs work
- pretty well, but it is hard to get the exact frame that you want.
-
- *** Technical Specification ***
-
- 1. Colour (8 and 24 bit)
-
- Yes.
-
- 2. Greyscale (256 levels)
-
- Yes.
-
- 3. Monochrome
-
- Yes. 1bit per pixel file saves are available. All functions work on 1bit
- frame buffers, also.
-
- 4. A resolution of 512x512 or better
-
- Square pixel PAL resolution is what you get from the VFCtool software, but we
- also have an API that lets you get the full non-square data. So you can
- customize your software to control VideoPix directly. The API isn't a
- specific tool per se, but a list of defined driver calls.
-
- 5. Video input
-
- 2 Composite (NTSC or PAL), 1 S-Video (NTSC or PAL) all software selectable.
-
- 6. Continuous Frame Grabbing - (what is the max number of frames grabbed per sec?)
-
- Ah. This depends on which mode (B&W or Colour) and what display resolution
- you selected from the software. Basically it's:
-
- mode resolution (displayed) FPS
-
- Colour Full (640x480 NTSC) 1
- Colour Half (320x240 NTSC) ~3.5
- Colour Quarter (160x120 NTSC) ~5
-
- B&W Full (640x480 NTSC) ~3.5
- B&W Half (320x240 NTSC) ~6.5
- B&W Quarter (160x120 NTSC) ~8.5
-
- The fastest library routine that is supplied might be able to grab 15 fields
- per second, NTSC, in raw YUV format.
-
- *** Technical Explanation ***
-
- The ADCs are a full 8 bits, but the SAA9051 Multi-Standard Decoder operates
- only on 7 bits. The output of the decoder is 4:1:1 digital YUV data that is
- stored in the board's FRAM. There is a full frame of FRAM on the board for
- either NTSC or PAL.
-
- Basically, the board looks like this:
-
- input conn -> SWITCH-AGC-FILTER-ADC -> SAA9051 decoder-> FRAM ->SBUS
- input conn -|                                 |                   |
- input conn -|                                 ------I2C UART-------
-
-
- What happens along this decode path is this:
-
- 1. The video source is supplied via the input connectors. Which input is
- used is selected via the software: commands are sent through the I2C bus to
- tell the decoder which input has been requested.
-
- 2. Now that the video source has been selected, the AGC stage gain-adjusts
- the video levels and the low-pass filter traps out unwanted noise above
- 6.5 MHz. The video is then buffered and sent into the ADC. The ADC digitizes
- the CVBS signal with 8-bit resolution. The data is then sent to the decoder.
-
- 3. Now the data is taken into the SAA9051 decoder. Although the part has an
- 8-bit hardware interface to the input ADCs, the internal resolution that the
- chip operates at is only 7 bits in the luminance (Y) path. The color
- information (C) is decoded and sub-sampled and then output with the (Y)
- information to produce a 4:1:1 digital YUV output data stream.
-
- 4. The information is now stored in the on-board 1 Meg FRAM (Field RAM)
- buffer at video speed. FRAM is a memory specially designed for use in video
- frame store applications. It was first used in televisions that had
- Picture-In-Picture (PIP) processors. It can now be found in most high-end
- VCRs, used for time base correction, or in digitizers like
- VideoPix. FRAM looks like a big FIFO, and is in fact read like a FIFO, but
- its storage cells can hold an entire field of PAL (or NTSC) video. Now that
- this information is held in the FRAM, the application can access it from
- the SBUS. This is where the speed bottleneck is.
-
- 5. SBUS. Once the data is requested, it is sent in its raw YUV form
- into the host memory via VideoPix's slave interface. The data transfer speeds
- are fast, but not fast enough for real-time transfers (30 FPS). That amount of
- data is of the order of ~20 MBytes/sec, which requires an expensive (at the
- time) DMA interface. It would also swamp the SBUS, preventing transactions
- from other SBUS cards. Since the SBUS is considered a general-purpose I/O
- bus and not a high-speed video bus, it would be wrong for any SBUS card to
- take more than 50% of the bandwidth at any time.
-
- The net result is that there is an upper limit on how fast we can transfer
- the data from the card to the host RAM.
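-
- As a rough sanity check on that figure (back-of-the-envelope arithmetic,
- assuming about 1.5 bytes per pixel for 4:1:1 YUV): full-rate NTSC at
- 640x480 and 30 frames/sec is roughly 640 x 480 x 1.5 x 30 ~= 14 MBytes/sec,
- and 720x575 PAL at 25 frames/sec is roughly 15.5 MBytes/sec, so the
- ~20 MBytes/sec order of magnitude quoted above is consistent once bus and
- copying overheads are added.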
-
- 6. Once the data is in the host, the software does several things to it. The
- data arrives as 4:1:1-coded YUV non-square (4:3 aspect) pixels. In order to
- display this on the console, the data is transcoded to RGB and dithered if
- the framebuffer is only 8 bits. It is then sub-sampled to render a square-pixel
- image. This basically means dropping every fifth pixel.
-
- 7. The data is now bit-blitted to the display for you to enjoy!
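-
- As a minimal illustration of what step 6 amounts to for a single pixel, here
- is a short Python sketch. The weights are the usual approximate YUV-to-RGB
- coefficients, not necessarily the exact ones the VideoPix software uses, and
- the dithering step is left out:
-
-     def yuv_to_rgb(y, u, v):
-         # Approximate YUV -> RGB for one pixel: Y in 0..255, U and V centred
-         # on zero, using the common analogue-video weights.
-         clamp = lambda c: max(0, min(255, int(round(c))))
-         r = y + 1.140 * v
-         g = y - 0.395 * u - 0.581 * v
-         b = y + 2.032 * u
-         return clamp(r), clamp(g), clamp(b)
-
-     def drop_every_fifth(row):
-         # Step 6's square-pixel correction: keep 4 out of every 5 pixels.
-         return [p for i, p in enumerate(row) if (i + 1) % 5 != 0]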
-
- *** My thoughts ***
-
- I will be buying one. If your main frame-grabber requirement is
- real-time frame grabbing, you will have to purchase a different, more
- expensive frame grabber. For the price it seems good value for money.
-
- I have two sample jpeg images (90%) grabbed from TV using the videopix card
- if anyone is interested.
-
- Thanks again to everyone who sent info to me.
-
- Regards,
- Mark
-
- ------------------------------
-
- Date: Wed, 9 Dec 92 15:28:07 EST
- From: bedard@robocop.NYU.EDU (Patricia Bedard)
- Subject: References on automatic face recognition
-
- Following my previous request for references on automatic face recognition,
- many people expressed an interest in the compilation, so I thought it might
- be useful to post it to the vision list.
-
- Here is the list of references that I have compiled. If you notice that
- some references that you know about are missing, please let me know.
- Cheers,
-
- Patricia
-
- Patricia J. Bedard (bedard@robocop.nyu.edu)
- Courant Institute of Mathematical Sciences
- New York University
- 251 Mercer St.
- New York, NY, 10012 U.S.A.
-
- ***************
-
- survey (with extensive bibliography):
- ====================================
- Ashok Samal and Prasana A. Iyengar: "Automatic Recognition and
- Analysis of Human Faces and facial Expressions: A Survey"
- Pattern Recognition _25(1)_, 65--77, 1992.
-
-
- references:
- ==========
- Andrew C. Aitchison and Ian Craw. "Synthetic Images Of Faces - An Approach
- to Model-Based face recognition" in "BMVC91, Proceedings of the British
- Machine Vision Conference", 1991, (Peter Mowforth, editor) Springer-Verlag.
-
- Shigeru Akamatsu and Tsutomu Sasaki and Hideo Fukamachi and Nobuhiko Masui
- and Yasuhito Suenaga. "An Accurate and Robust Face Identification Scheme"
- in ICPR'92, 1992.
-
- Shigeru Akamatsu and Tsutomu Sasaki and Hideo Fukamachi and Yasuhito Suenaga.
- "A robust face identification scheme --- KL expansion of an invariant feature
- space" in Intelligent Robots and Computer Vision X: Algorithms and Techniques",
- SPIE #1607, 1991. pp. 71-84.
-
- Robert Baron, Mechanisms of human facial recognition
- International Journal of Man-Machine Studies (1981) 15, 137-178
-
- Alan Bennett and Ian Craw. "Finding Image Features Using Deformable Templates
- And Detailed Prior Statistical Knowledge" in "BMVC91, Proceedings of the
- British Machine Vision Conference", 1991, (Peter Mowforth, editor),
- Springer-Verlag.
-
- Brunelli, R. and Poggio, T. "Face Recognition through Geometrical Features"
- in "Proc. 2nd European Conf. on Computer Vision, Lecture Notes in Computer
- Science #588" (G. Sandini, editor), Springer Verlag, 1992. pp. 792-800.
-
- R. Brunelli and T. Poggio, ``HyperBF networks for gender classification,''
- Proc. DARPA IU Workshop, 1992, 311--314.
-
- Buhmann, Joachim and Lades, Martin and von der Malsburg, Christoph. "Size
- and Distortion Invariant Object Recognition by Hierarchical Graph Matching",
- IJCNN, 1990, V.2. pp. 411-416.
-
- Burt, Peter J. "Smart Sensing within a Pyramid Vision Machine" in
- "Proceedings of the {IEEE}", 1988, vol 76, no 8, pp. 1006-1015.
-
- Craw, Ian and Cameron, Peter. "Face Recognition by Computer" in
- "Proc. British Machine Vision Conference", 1992, (David Hogg and Roger
- Boyle, editors), Springer Verlag.
-
- Ian Craw and Peter Cameron. "Parameterising Images for Recognition and
- Reconstruction" in "BMVC91, Proceedings of the British Machine Vision
- Conference", 1991,(Peter Mowforth, editor), Springer-Verlag.
-
- I. Craw, H. Ellis and J.R. Lishman, Automatic extraction of face-features
- Pattern Recognition Letters, 5, 1987, 183-187
-
- Ian Craw and David Tock and Alan Bennett. "Finding Face Features" in ECCV'92,
- Lecture Notes in Computer Science #588, Springer Verlag. pp.
-
- S. Edelman and D. Reisfeld and Y. Yeshurun. "Learning to recognize faces
- from examples" in "Proc. 2nd European Conf. on Computer Vision, Lecture Notes
- in Computer Science #588", (G. Sandini, editor), Springer Verlag, 1992.
- pp. 787-791.
-
- Fleming, Michael K. and Cottrell, Garrison W. "Categorization of Faces
- Using Unsupervised Feature Extraction" in IJCNN, 1990, Vol 2, pp. 65-70.
-
- R Gallery and T I P Trew. "An Architecture for Face Classification" in
- "Colloquium: Machine Storage and Recognition of Faces. IEE Digest 017, 1992.
-
- Goldstein, A. Jay and Harmon, Leon D. and Lesk, Ann B. "Identification of
- Human Faces" in "Proceedings of the {IEEE}", 1971, Vol 59, No 5, pp. 748-760.
-
- G.G. Gordon, ``Face recognition based on depth and curvature features,''
- Proc. IEEE CVPR, 1992, 808--810.
-
- Govindaraju, Venu and Srihari, Sargur. N. and Sher, David B. "A Computational
- Model for Face Location" in ICCV'90, pp. 718-721.
-
- Harmon, L. D. and Khan, M. K. and Lasch, Richard and Ramig, P. F.
- "Machine Identification of Human Faces" in "Pattern Recognition", 1981,
- Vol 13, No. 2, pp. 97-110.
-
- Hong, Zi-Quan. "Algebraic feature extraction of image for recognition" in
- "Pattern Recognition", Vol 24, March 1991, pp. 211-219.
-
- Jia, Xiaoguang and Nixon, Mark S. "On developing an extended feature set
- for automatic face recognition" in "Colloquium: Machine Storage and
- Recognition of Faces, IEE Digest 017", 1992.
-
- Kanade, Takeo. "Computer Recognition of Human Faces" in volume 47 of
- "Interdisciplinary Systems Research", Birkhauser, Basel, Stuttgart, 1977.
-
- Kaya, Y. and Kobayashi, K. "A Basic Study on Human Face Recognition" in
- "Frontiers of Pattern Recognition", 1972, pp. 265-289.
-
- Kirby, M. and Sirovich, L. "Application of the Karhunen-Lo\`{e}ve
- Procedure for the Characterization of Human Faces" in PAMI-12, 1990,
- V.12, no 1, pp. 103-108.
-
- J.C. Lee and E. Milios, ``Matching range images of human faces,''
- Proc. ICCV, 1990, 722--726.
-
- B.S. Manjunath, R. Chellappa, and C. von der Malsburg, ``A feature based
- approach to face recognition,'' Proc. IEEE CVPR, 1992, 373--378.
-
- Nakamura, Osamu and Mathur, Shailendra and Minami, Toshi. "Identification
- of Human Faces Based on Isodensity Maps" in "Pattern Recognition", 1991,
- Vol 24, no 3, pp. 263-272.
-
- A. Pentland and S. Sclaroff, ``Closed-form solutions for physically
- based shape modeling and recognition,'' IEEE PAMI, Vol.\ 13,
- 1991, 715--729.
-
- C.S. Ramsey and K. Sutherland and D. Renshaw and P.B. Denyer. "A Comparison
- of Vector Quantisation Codebook Generation Algorithms Applied to Automatic
- Face Recognition" in "Proceedings of BMVC-92", (David Hogg, editor),
- Springer-Verlag, 1992.
-
- Anne-Caroline Schreiber et al., Facenet: A Connectionist Model of
- Face Identification in Context
- European Journal of Cognitive Psychology, 1991, 3 (1), 177-198
-
- Ken Sutherland, D. Renshaw, and P.B. Denyer. "A novel automatic face
- recognition algorithm employing vector quantization" in "Colloquium:
- Machine Storage and Recognition of Faces, IEE Digest 017", 1992.
-
- D. Terzopoulos and K. Waters, ``Analysis of facial images using
- physical and anatomical models,'' Proc. ICCV,
- 1990, 727--732.
-
- David Tock and Ian Craw and Roly Lishman. "A Knowledge Based System for
- Measuring Faces" in "BMVC90, Proceedings of the British Machine Vision
- Conference", 1990. pp. 401-407.
-
- M. Turk, "Interactive-Time Vision: Face Recognition as a Visual
- Behavior", Ph.D. Thesis, MIT Media Lab, August 1991.
-
- M. Turk and A. Pentland, ``Eigenfaces for recognition,''
- {\sl Journal of Cognitive Neuroscience}, Vol. 3, No. 1, pp. 71-86, 1991.
-
- M. Turk and A. Pentland, ``Face Recognition Using
- Eigenfaces,'' {\sl Proc. CVPR}, Maui, Hawaii, pp. 586-591, 1991.
-
- M. Turk and A. Pentland, ``Recognition in face space,''
- {\sl Intelligent Robots and Computer Vision IX}, SPIE Vol. 1381,
- Boston, MA, 1990. (Reprinted in
- H. Nasr (ed.), {\sl Selected Papers on Automatic Object Recognition},
- SPIE Optical Engineering Press, Washington, 1991.)
-
- Wong, K. H. and Law, Hudson H. M. and Tsang, P. W. M. "A system for
- recognising human faces" in "Proceedings of the International Conference
- on Acoustics, Speech and Signal Processing", 1989, pp.1638-1642.
-
- Y. Yacoob and L. Davis, Qualitative Labeling of Human Face Features from
- Range Data, Technical Report CS-TR-2971, Center for Automation Research,
- University of Maryland, College Park, MD, Oct. 1992.
-
- A.L. Yuille, D.S. Cohen, and P.W. Hallinan, ``Feature extraction
- from faces using deformable templates,'' Proc. IEEE CVPR, 1989,
- 104--109.
-
- Alan Yuille, Deformable Templates for face recognition
- Journal of Cognitive Neuroscience, 1991, Vol 3, No. 1, p59-70
-
-
- papers soon to be released:
- ==========================
- work by Martin Lades, C. von der Malsburg (et al.?) to appear in IEEE Trans.
- on Computers.
-
- report on face recognition prepared by Alex Pentland, Terry Sejnowski and
- others soon to be publicly available.
-
-
- ------------------------------
-
- From: nde@scs.leeds.ac.uk
- Date: Fri, 11 Dec 92 12:57:10 GMT
- Subject: Snakes: summary of responses (long)
-
- The response to my request for references on snakes and active contour
- models was terrific: my thanks go to all who contributed information. I've
- collated the various contributions into the list below.
-
- Nick Efford
- School of Computer Studies
- Leeds University, Leeds, UK
-
- @inproceedings{amini88:iccv,
- author = "Amini, A. and Tehrani, S. and Weymouth, T",
- year = 1988,
- title = "Using dynamic programming for minimizing the energy of
- active contours in the presence of hard constraints",
- booktitle = "Proceedings of the Second International Conference on
- Computer Vision, Tampa, Florida",
- pages = "95--99"}
-
- @inproceedings{berger90:icpr,
- author = "Berger, M. O. and Mohr, R.",
- year = 1990,
- title = "Towards autonomy in active contour models",
- booktitle = "Proceedings of the Tenth International Conference on
- Pattern Recognition"}
-
- @article{brzak91:cvgip,
- author = "Brzakovic, D. and Liakopoulos, A. and Hong, L.",
- year = 1991,
- title = "Spline models for boundary detection/description:
- formulation and performance evaluation",
- journal = "CVGIP: Graphical Models and Image Processing",
- volume = 53,
- number = 4,
- pages = "392--401"}
-
- @incollection{calbom91,
- author = "Calbom, I. and Terzopoulos, D. and Harris, K. M.",
- year = 1991,
- title = "Reconstructing and visualizing models of neuronal
- dendrites",
- booktitle = "Scientific Visualization of Physical Phenomena",
- publisher = "Springer-Verlag",
- pages = "623--638"}
-
- @article{cohen91:cvgip,
- author = "Cohen, L. D.",
- year = 1991,
- title = "Note on active contour models and balloons",
- journal = "CVGIP: Image Understanding",
- volume = 53,
- number = 2,
- pages = "211-218"}
-
- @inproceedings{cohen90:iccv,
- author = "Cohen, L. and Cohen, I.",
- year = 1990,
- title = "A finite element method applied to new active contour
- models and {3D} reconstructions",
- booktitle = "Proceedings of the Third International Conference on
- Computer Vision, Osaka, Japan, December 1990",
- pages = "587--591"}
-
- @article{cohen92:cvgip,
- author = "Cohen, I. and Cohen, L. D. and Ayache, N.",
- year = 1992,
- title = "Using deformable templates to segment {3D} imags and
- infer differential structures",
- journal = "CVGIP: Image Understanding",
- volume = 56,
- number = 2,
- pages = "242--263"}
-
- @inproceedings{cohen92:cvpr,
- author = "Cohen, L. D. and Cohen, I.",
- year = 1992,
- title = "Deformable models for {3D} medical images using
- finite elements and balloons",
- booktitle = "IEEE Computer Society Conference on Computer Vision
- and Pattern Recognition, Champaign, Illinois, June
- 1992"}
-
- @techreport{davatz92,
- author = "Davatzikos, C. A. and Prince, J. L.",
- year = 1992,
- title = "An active contour algorithm for thick curves",
- institution = "Johns Hopkins University",
- number = "JHU/ECE 92-07"}
-
- @inproceedings{davatz92:icassp,
- author = "Davatzikos, C. A. and Prince, J. L.",
- year = 1992,
- title = "Segmentation and mapping of highly-convoluted
- contours with applications to medical images",
- booktitle = "Proceedings of ICASSP '92, IEEE Conference on
- Acoustics, Speech and Signal Processing"}
-
- @unpublished{davatz93:cvpr,
- author = "Davatzikos, C. A. and Prince, J. L.",
- year = 1993,
- title = "Adaptive active contour algorithms for extracting and
- mapping thick curves",
- note = "Submitted to CVPR '93, IEEE Conference on Computer
- Vision and Pattern Recognition"}
-
- @article{kass88:ijcv,
- author = "Kass, M. and Witkin, A. and Terzopoulos, D.",
- year = 1988,
- title = "Snakes: active contour models",
- journal = "International Journal of Computer Vision",
- volume = 1,
- number = 4,
- pages = "321--331"}
-
- @article{leymarie92:pami,
- author = "Leymarie, F. and Levine, M. D.",
- year = 1992,
- title = "Simulating the grassfire transform using an active
- contour model",
- journal = "IEEE Transations on Pattern Analysis and Machine
- Intelligence",
- volume = 14,
- number = 1,
- pages = "56--75"}
-
- @unpublished{leymarie93:pami,
- author = "Leymarie, F. and Levine, M. D.",
- year = 1993,
- title = "Tracking deformable objects in the plane using an
- active contour model",
- note = "To appear in IEEE Transactions on Pattern Analysis
- and Machine Intelligence"}
-
- @inproceedings{menet90:darpa,
- author = "Menet, S. and Saint-Marc, P. and Medioni, G.",
- year = 1990,
- title = "B-snakes: implementation and application to stereo",
- booktitle = "Proceedings of the DARPA Image Understanding
- Workshop, Pittsburgh, Pennsylvania, September 1990"}
-
- @article{poggio85,
- author = "Poggio, T. and Torre, V. and Koch, C.",
- year = 1985,
- title = "Computational vision and regularization theory",
- journal = "Nature",
- volume = 317,
- pages = "314--319"}
-
- @inproceedings{richens92:ipa,
- author = "Richens, D. and Rougan, N. and Bloch, I. and
- Mousseaux, E.",
- year = 1992,
- title = "Segmentation by deformable contours of {MRI} sequence
- of left ventricle for quantitative myocardial
- analysis",
- booktitle = "IEE Proceedings of the Fourth International
- Conference on Image Processing and its Applications,
- Maastricht, April 1992",
- pages = "393--396"}
-
- @article{rougan91:spie,
- author = "Rougan, N. and Preteux, F.",
- year = 1991,
- title = "Deformable markers: mathematical morphology for
- active contour model control",
- journal = "Proceedings of the Society of Photo-Optical
- Instrumentation Engineers",
- volume = 1568,
- pages = "78--89"}
-
- @inproceedings{rougan92:embs,
- author = "Rougan, N. and Preteux, F.",
- year = 1992,
- title = "Oriented smoothness constraints for adaptive active
- contour models",
- booktitle = "Proceedings of the Fourteenth Conference of the IEEE
- Engineering in Medicine and Biology Society",
- pages = "1916--1917"}
-
- @article{samad91:spie,
- author = "Samadani, R.",
- year = 1991,
- title = "Adaptive snakes: control of damping and material
- parameters",
- journal = "Proceedings of the Society of Photo-Optical
- Instrumentation Engineers",
- volume = 1568}
-
- @article{sinha92:pami,
- author = "Sinha, S. S. and Schunk, B. G.",
- year = 1992,
- title = "A two-stage algorithm for discontinuity-preserving
- surface reconstruction",
- journal = "IEEE Transactions on Pattern Analysis and Machine
- Intelligence",
- volume = "PAMI-14",
- number = 1,
- pages = "36--55"}
-
- @article{snyder91:pami,
- author = "Snyder, M. A.",
- year = 1991,
- title = "On the mathematical foundations of smoothness
- constraints for the determination of optical flow and
- for surface reconstruction",
- journal = "IEEE Transactions on Pattern Analysis and Machine
- Intelligence",
- volume = "PAMI-13",
- number = 11,
- pages = "1105--1114"}
-
- @article{staib92:pami,
- author = "Staib, L. H. and Duncan, J. S.",
- year = 1992,
- title = "Boundary finding with parametrically deformable
- models",
- journal = "IEEE Transactions on Pattern Analysis and Machine
- Intelligence",
- volume = "PAMI-14",
- number = 11,
- pages = "1061--1075"}
-
- @article{terzop86:pami,
- author = "Terzopoulos, D.",
- year = 1986,
- title = "Regularization of inverse visual problems involving
- discontinuities",
- journal = "IEEE Transactions on Pattern Analysis and Machine
- Intelligence",
- volume = "PAMI-8",
- number = 4,
- pages = "413--423"}
-
- @article{terzop87:cg,
- author = "Terzopoulos, D. and Platt, J. and Barr, A. and
- Fleischer, K.",
- year = 1987,
- title = "Elastically deformable models",
- journal = "Computer Graphics",
- volume = 21,
- number = 4,
- pages = "205--214"}
-
- @article{terzop88:cga,
- author = "Terzopoulos, D. and Witkin, A.",
- year = 1988,
- title = "Physically-based models with rigid and deformable
- components",
- journal = "IEEE Computer Graphics and Applications",
- month = "November",
- pages = "41--51"}
-
- @article{terzop88:ai,
- author = "Terzopoulos, D. and Witkin, A.",
- year = 1988,
- title = "Constraints on deformable models: recovering shape
- and non-rigid motion",
- journal = "Artificial Intelligence",
- volume = 36,
- pages = "91--123"}
-
- @article{waite90:bttj,
- author = "Waite, J. B. and Welsh, W. J.",
- year = 1990,
- title = "Head boundary location using snakes",
- journal = "British Telecom Technology Journal",
- volume = 8,
- number = 3,
- pages = "127--136"}
-
- @article{will92:cvgip,
- author = "Williams, D. J. and Shah, M.",
- year = 1992,
- title = "A fast algorithm for active contours and curvature
- information",
- journal = "CVGIP: Image Understanding",
- volume = 55,
- number = 1,
- pages = "14--26"}
-
- @article{yuille92:ijcv,
- author = "Yuille, A. L. and Hallinan, P. W. and Cohen, D. S.",
- year = 1992,
- title = "Feature extraction from faces using deformable
- templates",
- journal = "International Journal of Computer Vision",
- volume = 8,
- number = 2,
- pages = "99--111"}
-
- ------------------------------
-
- End of VISION-LIST digest 11.43
- ************************
-