- Newsgroups: comp.ai.neural-nets
- Path: sparky!uunet!world!srctran
- From: srctran@world.std.com (Gregory Aharonian)
- Subject: Neural network patent abstracts posting
- Message-ID: <C0rEou.H94@world.std.com>
- Organization: The World Public Access UNIX, Brookline, MA
- Date: Tue, 12 Jan 1993 21:07:41 GMT
- Lines: 391
-
-
- The following is my periodic posting of the abstracts and stat data
- for new neural network patents. The full text and diagrams of any of these
- patents can be ordered from the patent office for $3 in printed form. Also,
- for $20, I will provide a machine readable version of the abstract, text,
- claims and references for any neural network patent.
-
- Greg Aharonian
- Source Translation & Optimization
- srctran@world.std.com
- 617-489-3727
- =============================================================================
- For a very good article on the legal issues involving neural network
- patents, see the article "Intellectual Property Protection for Neural
- Networks" by Donald Wenskay, Neural Networks, 3, pp. 229-236, 1990.
- =============================================================================
- For those interested in applying for a neural network patent, I freely
- provide a LaTeX template for a patent application. Please email a request,
- and I'll send it to you. It contains most of the boilerplate you need.
- =============================================================================
- To date, I have yet to see any articles in the trade journals on either
- companies signing cross-licensing agreements to share their neural network
- patents, or patent infringement lawsuits dealing with neural network patents.
- To me, this means that no one is making much money with neural networks (other
- than selling software), and that the financial return on acquiring a patent
- is less than the cost of applying for and maintaining a patent.
- If anyone hears of either occurrence, please let me know.
- =============================================================================
-
- 5,168,352 [IMAGE AVAILABLE] Dec. 1, 1992
-
- Coloring device for performing adaptive coloring of a monochromatic image
-
- INVENTOR: Motohiko Naka, Kawasaki, Japan
- Mie Saitoh, Kawasaki, Japan
- Takehisa Tanaka, Tokyo, Japan
- Kunio Yoshida, Kawasaki, Japan
- ASSIGNEE: Matsushita Electric Industrial Co., Ltd., Osaka, Japan
- APPL-NO: 07/480,456
- DATE FILED: Feb. 15, 1990
- FRN-PRIOR: Japan 1-36779 Feb. 16, 1989
- INT-CL: [5] H04N 9/02
- US-CL-ISSUED: 358/81, 82, 75; 382/15
- US-CL-CURRENT: 358/81, 75, 82; 382/15
- SEARCH-FLD: 358/81, 82, 75; 364/513; 382/14, 15
- REF-CITED:
- U.S. PATENT DOCUMENTS
- 4,760,604 7/1988 Cooper 364/715.01
- 4,926,250 5/1990 Konishi 358/81
-
- OTHER PUBLICATIONS
- "A Wafer Scale Integration Neural Network Utilizing Completely Digital
- Circuits" by Moritoshi Yasunaga et al., IEEE International Joint Conference
- on Neural Networks, Washington, D.C., Jun. 1989.
- Richard P. Lippmann, "An Introduction to Computing with Neural Nets," IEEE
- ASSP Magazine, Apr. 1987.
-
- ART-UNIT: 262
- PRIM-EXMR: James J. Groody
- ASST-EXMR: Sherrie Hsia
- LEGAL-REP: Lowe, Price, LeBlanc & Becker
- ABSTRACT:
- A coloring device includes an image sampling device for sampling an input
- signal block representing a group of n.times.m pixels of a monochromatic
- image and for outputting first signals representing the sampled pixels of the
- input signal block of the monochromatic image; an artificial neural network;
- a connection for providing to the artificial neural network, substantially
- simultaneously, pattern information on patterns to be contained in the
- monochromatic image and color information on first data indicating colors
- given to the patterns indicated by the pattern information prior to
- generation of a color image signal, the artificial neural network having
- internal state parameters which are adaptively optimized by using a learning
- algorithm prior to the generation of a color image, the artificial neural
- network operating for receiving data representing the first signal, for
- determining which of colors preliminarily and respectively assigned to
- patterns to be contained in the group of pixels of the monochromatic image
- represented by the input signal block is given to a pattern actually
- contained in the group of pixels represented by the input signal block and
- for outputting second signals representing second data on three primary
- colors which are used to represent the determined colors given to the
- patterns actually contained in the group of pixels represented by the input
- signal block; and a color image storing device for receiving the second
- signals outputted from the artificial neural network, for storing the
- received second signals in locations thereof corresponding to the positions
- of the pixels represented by the input signal block and for outputting third
- signals representing the three primary color component images of the pixels
- represented by the input signal block; wherein the image sampling device
- further functions for scanning the whole of the monochromatic image by
- generating successive input signal blocks representing successive groups of
- n.times.m pixels to be sampled, thereby outputting third signals for all
- pixels of the monochromatic image.
- 10 Claims, 4 Drawing Figures
- EXMPL-CLAIM: 1
- NO-PP-DRAWING: 3
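To make the scheme concrete, here is a minimal sketch of the idea in the
abstract (my own illustration in Python, not code from the patent): a small
network maps each n.times.m grayscale block to three primary-color values, and
the image is scanned block by block. The block size, hidden-layer size, and
random weights here are hypothetical stand-ins for the patent's adaptively
optimized internal state parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 4   # block size (hypothetical)
hidden = 8

# Random weights stand in for the adaptively optimized parameters.
W1 = rng.normal(size=(hidden, n * m))
W2 = rng.normal(size=(3, hidden))

def color_block(block):
    """Map one n x m grayscale block to three primary-color values."""
    h = np.tanh(W1 @ block.reshape(-1))
    return 1.0 / (1.0 + np.exp(-(W2 @ h)))  # sigmoid: values in (0, 1)

image = rng.random((8, 8))   # toy monochromatic image
# Scan the whole image by generating successive input signal blocks.
rgb = [color_block(image[i:i + n, j:j + m])
       for i in range(0, 8, n) for j in range(0, 8, m)]
print(len(rgb), rgb[0].shape)
```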
- ==============================================================================
- 5,168,550 [IMAGE AVAILABLE] Dec. 1, 1992
-
- Neural network with plural weight calculation methods and variation of
- plural learning parameters
-
- INVENTOR: Shigeo Sakaue, Takarazuka, Japan
- Toshiyuki Kohda, Takatsuki, Japan
- Yasuharu Shimeki, Suita, Japan
- Hideyuki Takagi, Kyoto, Japan
- Hayato Togawa, Tokyo, Japan
- ASSIGNEE: Matsushita Electric Industrial Co., Ltd., Osaka, Japan
- APPL-NO: 07/481,330
- DATE FILED: Feb. 20, 1990
- FRN-PRIOR: Japan 1-43730 Feb. 23, 1989
- Japan 1-47610 Feb. 28, 1989
- INT-CL: [5] G06F 15/18
- US-CL-ISSUED: 395/23
- US-CL-CURRENT: 395/23
- SEARCH-FLD: 364/513; 395/23
- REF-CITED:
-
- OTHER PUBLICATIONS
- Improving the Learning Rate of Back-Propagation with the Gradient Reuse
- Algorithm; Hush et al; IEEE Inter. Conf. on Neural Networks; Jul. 24-27,
- 1988; pp. I-441 to I-446.
- Learning Internal Representations by Error Propagation; Rumelhart et al.;
- Parallel Distributed Processing, vol. 1, Foundations; MIT Press; 1986; pp.
- 318-362.
- "Learning representations by back-propagating errors", David Rumelhart et
- al.; Nature, vol. 323, Oct. 1986, pp. 533-536.
- DUTTA, "Bond Ratings: A non-conservative application of neural networks",
- IEEE International Conference on Neural Networks, vol. 2, pp. 443-450,
- Jul. 24, 1988.
- Watrous, "Learning algorithms for connectionist networks: applied gradient
- methods of nonlinear optimization", IEEE First International Conference on
- Neural Networks, vol. 2, pp. 619-628, Jun. 21, 1987.
- JACOBS, "Increased rates of convergence through learning rate adaptation",
- Neural Networks, vol. 1, No. 4, pp. 295-307, 1988.
-
- ART-UNIT: 238
- PRIM-EXMR: Allen R. MacDonald
- LEGAL-REP: Stevens, Davis, Miller & Mosher
-
- ABSTRACT:
- An iterative learning machine uses, as a direction of changing iterative
- weight, a conjugate gradient direction in place of the conventional steepest
- descent direction, thereby saving time. Learning rates are set dynamically.
- Error calculations for plural learning rates, with respect to a certain weight
- changing direction, are accomplished by storing a product-sum of the input
- signals and weights in a hidden layer and a product-sum of the input signals
- and the weight changing direction in the hidden layer. When the learning
- falls into a non-effective state where further iteration does not effectively
- reduce an error, the weights are adjusted in order to restart the learning.
- 31 Claims, 18 Drawing Figures
- EXMPL-CLAIM: 1
- NO-PP-DRAWING: 17
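The two ideas in the abstract, a conjugate-gradient direction in place of
steepest descent and error evaluation for plural learning rates along a fixed
direction, can be sketched on a toy quadratic error surface. This is my own
illustration, not the patented procedure (which also covers the restart logic
for non-effective learning states):

```python
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 2.0]])

def grad(w):
    """Gradient of the toy quadratic error E(w) = 0.5 * w.T A w."""
    return A @ w

def cg_step(w, g, d, rates=(0.01, 0.1, 0.3, 0.5)):
    # Evaluate the error for plural learning rates along the fixed
    # direction d and keep the best one.
    best = min(rates, key=lambda r: np.linalg.norm(grad(w + r * d)))
    w = w + best * d
    g_new = grad(w)
    beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves coefficient
    d = -g_new + beta * d              # conjugate direction, not plain -g_new
    return w, g_new, d

w = np.array([1.0, 1.0])
g = grad(w)
d = -g                                 # first step: steepest descent
for _ in range(10):
    w, g, d = cg_step(w, g, d)
print(np.linalg.norm(grad(w)))         # far below the initial norm of 5.0
```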
- ==============================================================================
- 5,168,551 [IMAGE AVAILABLE] Dec. 1, 1992
-
- MOS decoder circuit implemented using a neural network architecture
-
- INVENTOR: Ho-sun Jeong, Taegu, Republic of Korea
- ASSIGNEE: Samsung Electronics Co., Ltd., Kyunggi-do, Republic of Korea
- APPL-NO: 07/573,408
- DATE FILED: Aug. 28, 1990
- FRN-PRIOR: Republic of Korea 90-4172 Mar. 28, 1990
- INT-CL: [5] G06F 15/18
- US-CL-ISSUED: 395/27, 24; 364/602, 807
- US-CL-CURRENT: 395/27; 364/602, 807; 395/24
- SEARCH-FLD: 307/201, 494, 498, 529; 395/24, 27; 364/602, 807
- REF-CITED:
- U.S. PATENT DOCUMENTS
- 4,876,534 10/1989 Mead et al. 340/825.95
- 4,904,881 2/1990 Castro 307/201
- 4,956,564 9/1990 Holler et al. 307/201
- 4,962,342 10/1990 Mead et al. 307/201
- 4,978,873 12/1990 Shoemaker 307/498
- 4,988,891 1/1991 Mashiko 307/201
-
- OTHER PUBLICATIONS
- McClelland et al., Explorations in Parallel Distributed Processing: A
- Handbook of Models, Programs, and Exercises, The MIT Press, 1988, pp.
- 83-99.
- Walker et al., "A CMOS Neural Network for Pattern Association", IEEE Micro,
- Oct. 1989, pp. 68-74.
- Salam et al., "A Feedforward Neural Network for CMOS VLSI Implementation",
- Midwest Sympos. on Cir. Syst., 1990, pp. 489-492.
- Graf et al., "VLSI Implementation of a Neural Network Model", Computer, Mar.
- 1988, pp. 41-49.
- Tanenbaum, A. S., Structured Computer Organization, Prentice-Hall, Inc.,
- 1984, pp. 121-122.
- ART-UNIT: 238
- PRIM-EXMR: Michael R. Fleming
- ASST-EXMR: Robert Downs
- LEGAL-REP: Cushman, Darby & Cushman
-
- ABSTRACT:
- A decoder circuit based on the concept of a neural network architecture has a
- unique configuration using a connection structure having CMOS inverters, and
- PMOS and NMOS bias and synapse transistors. The decoder circuit consists of an
- M-parallel inverter input circuit corresponding to an M-bit digital signal and
- forming an input neuron group, a 2.sup.M parallel inverter output circuit
- corresponding to 2.sup.M decoded outputs and forming an output neuron group,
- and a synapse group connected between the input neuron group and the output
- neuron group responsive to a bias group and the M-bit digital signal for
- providing a decoded output signal to one of the 2.sup.M outputs of the output
- neuron group when a match is detected. Hence, only one of the 2.sup.M outputs
- will be active at any one time.
- 6 Claims, 6 Drawing Figures
- EXMPL-CLAIM: 1
- NO-PP-DRAWING: 2
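The decoder's logic, though not its CMOS circuitry, can be modeled as a
single-layer threshold network: each of the 2.sup.M output neurons fires only
when all M input bits match its synapse pattern, so exactly one output is
active at a time. A sketch of that model (mine, not the patent's):

```python
from itertools import product

M = 3  # number of input bits (hypothetical)

def decode(bits):
    """Return a one-hot list over 2**M outputs; exactly one is active."""
    outputs = []
    for pattern in product((0, 1), repeat=M):
        # An output neuron fires only when every input bit (or, in the
        # circuit, its inverted complement) matches its synapse pattern.
        excitation = sum(b == p for b, p in zip(bits, pattern))
        outputs.append(1 if excitation >= M else 0)  # threshold = M
    return outputs

print(decode([1, 0, 1]))  # only the output for pattern (1, 0, 1) is 1
```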
- ==============================================================================
- 5,170,071 [IMAGE AVAILABLE] Dec. 8, 1992
-
- Stochastic artificial neuron with multilayer training capability
-
- INVENTOR: Gregory A. Shreve, Redondo Beach, CA
- ASSIGNEE: TRW Inc., Redondo Beach, CA (U.S. corp.)
- APPL-NO: 07/716,717
- DATE FILED: Jun. 17, 1991
- INT-CL: [5] G06F 15/18
- US-CL-ISSUED: 307/201; 395/27
- US-CL-CURRENT: 307/201; 395/27
- SEARCH-FLD: 395/27; 307/201
- REF-CITED:
- U.S. PATENT DOCUMENTS
- 3,341,823 9/1967 Connelly 307/201 X
- 3,691,400 9/1972 Askew 307/201
- 3,950,733 4/1976 Cooper et al. 340/172.5
- 4,518,866 5/1985 Clymer 307/201
- 4,591,980 5/1986 Huberman et al. 364/200
- 4,773,024 9/1988 Faggin et al. 364/513
- 4,807,168 2/1989 Moopenn et al. 364/602
- 4,809,193 2/1989 Jourjine 364/513
- 4,893,255 1/1990 Tomlinson 364/513
- 4,918,618 4/1990 Tomlinson 364/513
- 4,989,256 1/1991 Buckley 395/27 X
-
- OTHER PUBLICATIONS
- Tomlinson, Jr., Walker and Sivilotti, "A Digital Neural Network Architecture
- for VLSI", IJCNN 1990, Jun. 1990, San Diego, Calif.
- Walker and Tomlinson, Jr., "DNNA: A Digital Neural Network Architecture",
- INNC, Jul. 9-13, 1990.
- P. C. Patton, "The Neural Semiconductor NU32/SU3232 Chip Set," the
- Superperformance Computing Service Brief No. 36, Feb. 1990.
- R. Colin Johnson, "Digital Neurons Mimic Analog," Electronic Engineering
- Times, Feb. 12, 1990.
- Nguyen, Dziem and Holt, Fred, "Stochastic Processing in a Neural Network
- Application," IEEE First International Conference on Neural Networks, San
- Diego Calif., Jun. 21-24, 1987, pp. 281-291.
- Gaines, Brian R., "Uncertainty as a Foundation of Computational Power in
- Neural Networks," IEEE First International Conference on Neural Networks,
- San Diego Calif., Jun. 21-24, 1987, pp. 51-57.
- Van den Bout, David E. and Miller, T. K., "A Stochastic Architecture for
- Neural Nets," pp. 481-488.
- Rumelhart, David E. et al., "Learning Representations by Back-Propagating
- Errors," Nature, vol. 323, Oct. 9, 1986, pp. 533-536.
- Rumelhart, David E. et al., "Learning Internal Representations by Error
- Propagation," Institute for Cognitive Science (ICS) Report 8506, Sep. 1985.
-
- ART-UNIT: 259
- PRIM-EXMR: David Hudspeth
- LEGAL-REP: James M. Steinberger, G. Gregory Schivley, Ronald L. Taylor
-
- ABSTRACT:
- A probabilistic or stochastic artificial neuron in which the inputs and
- synaptic weights are represented as probabilistic or stochastic functions of
- time, thus providing efficient implementations of the synapses. Stochastic
- processing removes both the time criticality and the discrete symbol nature
- of traditional digital processing, while retaining the basic digital
- processing technology. This provides large gains in relaxed timing design
- constraints and fault tolerance, while the simplicity of stochastic
- arithmetic allows for the fabrication of very high densities of neurons. The
- synaptic weights are individually controlled by a backward error propagation
- which provides the capability to train multiple layers of neurons in a neural
- network.
- 8 Claims, 3 Drawing Figures
- EXMPL-CLAIM: 1
- NO-PP-DRAWING: 3
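The stochastic representation the abstract relies on is simple to illustrate:
a value in [0, 1] is coded as the probability that a bit in a stream is 1, and
multiplication, the expensive part of a synapse, reduces to a bitwise AND of
two independent streams. A sketch (my illustration, not TRW's implementation):

```python
import random

random.seed(1)
N = 100_000  # stream length; precision improves as N grows

def stream(p, length=N):
    """Encode probability p as a random bit stream."""
    return [1 if random.random() < p else 0 for _ in range(length)]

a, b = stream(0.6), stream(0.5)
# A single AND gate per synapse multiplies the two encoded values.
product_stream = [x & y for x, y in zip(a, b)]
estimate = sum(product_stream) / N
print(round(estimate, 2))  # close to 0.6 * 0.5 = 0.3
```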
- ==============================================================================
- 5,170,463 [IMAGE AVAILABLE] Dec. 8, 1992
-
- Neuro-computer
-
- INVENTOR: Yoshiji Fujimoto, Nara, Japan
- Naoyuki Fukuda, Nara, Japan
- Toshio Akabane, Tenri, Japan
- ASSIGNEE: Sharp Kabushiki Kaisha, Osaka, Japan
- APPL-NO: 07/885,239
- DATE FILED: May 20, 1992
- REL-US-DATA: Continuation of Ser. No. 456,649, Dec. 27, 1989, abandoned.
- FRN-PRIOR: Japan 63-330971 Dec. 29, 1988
- Japan 1-24307 Feb. 1, 1989
- Japan 1-127274 May 19, 1989
- INT-CL: [5] G06F 15/16
- US-CL-ISSUED: 395/11, 24, 27; 364/DIG.2, DIG.1
- US-CL-CURRENT: 395/11; 364/DIG.1, DIG.2; 395/24, 27
- SEARCH-FLD: 395/11, 24, 27; 364/200, 900
- REF-CITED:
- U.S. PATENT DOCUMENTS
- 4,514,807 4/1985 Nogi 364/200
- 4,633,472 12/1986 Krol 364/200
- 4,644,496 2/1987 Andrews 364/513
- 4,660,166 4/1987 Hopfield 364/807
- 4,709,327 11/1987 Hillis et al. 364/513
- 4,739,476 4/1988 Fiduccia 364/513
- 4,766,534 8/1988 De Benedictis 364/513
- 4,796,199 1/1989 Hammerstrom et al. 364/513
- 4,809,193 2/1989 Jourjine 364/513
- 4,811,210 3/1989 McAulay 364/513
- 4,858,147 8/1989 Conwell 364/200
- 4,891,782 1/1990 Johnson 364/786
- 4,908,751 3/1990 Smith 364/513
- 4,912,647 3/1990 Wood 364/513
- 4,918,617 4/1990 Hammerstrom et al. 364/513
- 4,918,618 4/1990 Tomlinson, Jr. 364/513
- 4,920,487 4/1990 Baffes 364/200
- 4,942,517 7/1990 Cok 364/200
- 4,951,239 8/1990 Andes et al. 364/807
-
- OTHER PUBLICATIONS
- The Computer Journal, vol. 30, No. 5, Oct. 1987, pp. 413-419; B. M. Forrest
- et al: "Implementing neural network models on parallel computers".
- Proceedings of the 1983 International Conference on Parallel Processing,
- Columbus, Ohio, Aug. 1983, pp. 95-105. IEEE, New York US; T. Hoshino et al:
- "Highly parallel processor array PAX for wide scientific applications".
- 1988 Cern School of Computing, Oxford, Aug. 1988, pp. 104-126; P. C.
- Treleaven: "Parallel architectures for neurocomputers".
- IEEE Communications Magazine, vol. 26, No. 6, Jun. 1988, pp. 45-50; T. G.
- Robertazzi: "Toroidal networks".
- Proceedings of the IEEE 1988 National Aerospace and Electronics Conference
- NAECON '88, Dayton, May 1988, vol. 4, pp. 1574-1580; B. C. Deer et al:
- "Parallel processor for the simulation of adaptive networks".
- D. A. Pomerleau et al, "Neural Network Simulation at Warp Speed: How We Got
- 17 Million Connections Per Second," Proceedings of the IEEE ICNN, San
- Diego, Calif., Jul. 1988, vol. II, pp. 143-150.
- S. Y. Kung and J. N. Hwang, "Parallel Architectures for Artificial Neural
- Networks," Proceedings of the IEEE ICNN, San Diego, Calif., Jul. 1988, vol.
- II, pp. 165-172.
- S. Y. Kung and J. N. Hwang, "Ring Systolic Designs for Artificial Neural
- Nets," Abstracts of the First Annual INNS Meeting, Boston, Mass., Sep.
- 1988, p. 390.
- G. Blelloch and C. R. Rosenberg, "Network Learning on the Connection
- Machine," Proceedings of the IJCAI, Milano, Italy, Aug. 1987, pp. 323-326.
- A. Johannet et al, "A Transputer-Based Neurocomputer," Parallel Programming
- of Transputer Based Machines: Proceedings of the 7th OCCAM User Group
- Technical Meeting Grenoble, France, Sep. 1987, pp. 120-127.
- D. Suter and X. Deng, "Neural Net Simulation on Transputers," Australian
- Transputer and OCCAM User Group Conference Proceedings, Royal Melbourne
- Institute of Technology, Jun. 1988, pp. 43-47.
- Jim Bailey and Dan Hammerstrom, "Why VLSI Implementations of Associative VLCNs
- Require Connection Multiplexing," Proceedings of the IEEE ICNN, San Diego,
- Calif., Jul. 1988, vol. II, pp. 173-180.
- M. Rudnick and D. Hammerstrom, "An Interconnect Structure for Wafer Scale
- Neurocomputers," Abstracts of the First Annual INNS Meeting, Boston, Mass.,
- Sep. 1988, p. 405.
- T. Beynon and N. Dodd, "The Implementation of Multi-Layer Perceptron on
- Transputer Networks," Parallel Programming of Transputer Based Machines:
- Proceedings of the 7th OCCAM User Group Technical Meeting Grenoble, France,
- Sep. 1987, pp. 108-119.
- ART-UNIT: 238
- PRIM-EXMR: Allen R. MacDonald
- ASST-EXMR: George Davis
- LEGAL-REP: Nixon & Vanderhye
-
- ABSTRACT:
- A neurocomputer connected to a host computer, the neurocomputer having a
- plurality of processor elements, each of the processor elements being placed
- at a node of a lattice, the neurocomputer includes a
- plurality of first processor elements, each of the first processor elements
- being placed at a node of the lattice, capable of transmitting data from and
- to the host computer and capable of transmitting the data to one of adjacent
- processor elements, a plurality of second processor elements, each of the
- second processor elements being placed at a node of the lattice, capable of
- receiving the data from one of adjacent processor elements, and capable of
- sending the data to another adjacent processor element from which the data
- is not outputted. The neurocomputer also includes a plurality of rectangular
- regions, each of the rectangular regions including a plurality of the
- processor elements, a plurality of physical processors, each of the
- processors being placed in each of the rectangular regions and connected with
- adjacent processors, each of the processors being capable of
- inputting and outputting to and from the host computer and having all
- functions of the processor elements included in the rectangular region, and a
- device for distributing the physical processors to one or a plurality of
- divided sections formed in the rectangular regions in such a manner that each
- of the sections is substantially equally assigned to each of the physical
- processors by permutation.
- 33 Claims, 55 Drawing Figures
- EXMPL-CLAIM: 1
- NO-PP-DRAWING: 45
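The mapping idea, virtual processor elements on a lattice divided into
rectangular regions with one physical processor per region, can be sketched as
follows. The lattice and processor-grid sizes are hypothetical, and this
simple block assignment stands in for the patent's permutation scheme:

```python
def assign(lattice_rows, lattice_cols, proc_rows, proc_cols):
    """Map each lattice node (i, j) to a physical processor index."""
    mapping = {}
    for i in range(lattice_rows):
        for j in range(lattice_cols):
            # Integer arithmetic carves the lattice into rectangular
            # regions of (sub)equal size, one region per processor.
            pr = i * proc_rows // lattice_rows
            pc = j * proc_cols // lattice_cols
            mapping[(i, j)] = pr * proc_cols + pc
    return mapping

m = assign(6, 6, 2, 3)  # 36 virtual elements onto a 2 x 3 processor grid
loads = [list(m.values()).count(p) for p in range(6)]
print(loads)  # each physical processor is assigned 6 lattice nodes
```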
- ==============================================================================
- --
- **************************************************************************
- Greg Aharonian
- Source Translation & Optimization
- P.O. Box 404, Belmont, MA 02178
-