The following SPIE conference call, and those of related conferences being
held within the same overall symposium (see information* at end of article),
may interest readers -
_____________________
Final Call for Papers
_____________________
SPIE SENSOR FUSION VI
September 7-10, 1993
Boston, Massachusetts
Hynes Convention Center
Chair: Paul S. Schenker, Jet Propulsion Lab.
PROGRAM COMMITTEE:
Terrance E. Boult, Columbia Univ.
Theodore J. Broida, Hughes Aircraft Co.
Su-Shing Chen, National Science Foundation
Gregory D. Hager, Yale Univ.
Martin Herman, Nat. Inst. of Standards and Technology
Terrance L. Huntsberger, Univ. of South Carolina
Ren C. Luo, North Carolina State Univ.
Suresh B. Marapane, Univ. of Tennessee/Knoxville
Gerard T. McKee, Univ. of Reading (UK)
N. Nandhakumar, Univ. of Virginia
Bobby S. Y. Rao, UC Berkeley
Michael Seibert, MIT/Lincoln Lab.
Faina Shtern, M.D., NIH/National Cancer Institute
Michael J. Swain, Univ. of Chicago
Charles V. Stewart, Rensselaer Polytechnic Institute
Stelios C. A. Thomopoulos, The Pennsylvania State Univ.
The Sensor Fusion conference presents new techniques for robustly integrating
and interpreting data from multiple sources. The main topic area is automation
and robotic systems; such systems often include multiple and moving cameras,
range and proximity detectors, force and touch feedback, etc. A typical system
requirement is to use the sensors, plus prior knowledge, to efficiently locate,
identify, and track objects; more advanced applications may require a detailed
inspection and recognition of the environment, and/or global determination of
robot position and state of task completion. Research challenges include
multi-sensor registration and calibration, combining sensor information over
space and time, 3-D shape modeling and shape recovery, and 3-D object
recognition and localization, among others. An exciting open problem is how to
intelligently
control sensors to achieve a task-specific sensing objective, in the system
operational context of maximizing information and minimizing computation.
For example, "active vision" addresses how to purposefully direct camera gaze
and focus activity, analogous to human viewing. "Exploratory sensing" expands
this paradigm to cooperative fusion of vision, range, touch, and other sensory
modes, and may include the use of multiple distributed robot agents, e.g., to
develop environmental maps and perform cooperative work. Collectively, these
problems have foundations in both machine and biological behavior, and both
perspectives are welcome. Another area of fundamental interest is distributed
detection and decision techniques, as applied to data fusion in spatially
dispersed sensor arrays, decision-making in human organizations, and command-
control-communication within distributed information networks. In general,
the Sensor Fusion conference is characterized by disciplinary breadth. Speakers
in past years have included researchers from applied mathematics, artificial
intelligence, computer science, engineering, psychology, neuroscience, and
theoretical biology. We continue to foster this diversity, encouraging papers
that contrast and compare multi-disciplinary approaches to sensor fusion,
and/or synthesize fresh theoretical viewpoints across disciplines.
In summary, we invite papers on multi-sensory fusion and its applications;
topics of interest include, but are not limited to:
o modeling and calibration of multiple sensors
o 3-D object modeling and recognition from multiple sensor views
o recovery of scene structure from time-sequence sensor data
o fusion of passive-active sources: vision-range, IR-microwave, etc.
o remote sensing, automated inspection, and target recognition
o distributed detection & decision networks and their applications
o robotic sensor fusion: visual, range, force, tactile, & kinematic data
o robot control based on multi-sensor inputs
o active vision, task-driven sensing, and sensor planning
o multiple robot agents and cooperative sensing strategies
o medical imaging, 3-D stereotaxy & visualization, and surgical aids
o man-machine systems and fused multisensory operator interfaces
o novel computing architectures and programming environments
***** Abstract Due Date: 8 Feb 1993 *****
Manuscript Due Date: 14 June 1993
Format for abstract submission:
- paper title
- authors' full names and affiliations
- complete addresses for all authors
- phone, FAX, and e-mail for all authors
- 100-200 word abstract text
- 50-100 word principal author biography
Submit abstract by email or FAX to:
EMAIL: schenker@telerobotics.jpl.nasa.gov
FAX: 818-393-5007
Dr. PAUL S. SCHENKER
(attn: SPIE/Sensor Fusion VI)
Jet Propulsion Laboratory
4800 Oak Grove Drive/ MS 198-219
Pasadena, CA 91109
* NOTE:
Sensor Fusion VI is part of SPIE's International Symposium on Optical Tools
for Manufacturing and Advanced Automation. Within this large symposium, over
15 conferences are devoted to the areas of robotics, factory automation, and
machine perception; examples include: Intelligent Robots and Computer Vision,
Mobile Robots, Telemanipulator Technology, Sensors and Controls for Automated
Manufacturing, Model-Based Vision, Machine Vision Applications, Architectures
and Systems, Vision Geometry, Applications of Fuzzy Logic, and Sensor Fusion
itself. Other related activities include a joint SPIE/IEEE one-day workshop on
"Intelligent Robotic Systems - Design and Applications," and the yearly meeting
of the SPIE Technical International Working Group on Robotics and Machine
Perception. For further information (related paper calls, advance programs,
registration, accommodations, etc.), please contact: