Newsgroups: comp.ai.neural-nets
Path: sparky!uunet!destroyer!wsu-cs!wsu-eng.eng.wayne.edu!uds
From: uds@wsu-eng.eng.wayne.edu (Seetamraju Udaybhaskar)
Subject: Re: Learning what COULD be learned
Message-ID: <1992Jul22.005147.2391@cs.wayne.edu>
Sender: usenet@cs.wayne.edu (Usenet News)
Reply-To: uds@wsu-eng.eng.wayne.edu (Seetamraju Udaybhaskar)
Organization: Wayne State University, Detroit
References: <1992Jul7.074650.27125@aber.ac.uk> <13uievINN1mp@iraul1.ira.uka.de> <arms.711663417@spedden> <1992Jul21.082035.8898@aber.ac.uk> <arms.711759321@spedden>
Date: Wed, 22 Jul 1992 00:51:47 GMT
Lines: 20

In article <arms.711759321@spedden> arms@cs.UAlberta.CA (Bill Armstrong) writes:
>
>If a tree isn't learning the required task, as shown by a lack of
>further improvement, then you can double the size of the tree for that
>particular output and try again. Does this make it clear how the
>technique of independent trees would be used?
>
>--
>***************************************************
>Prof. William W. Armstrong, Computing Science Dept.
>University of Alberta; Edmonton, Alberta, Canada T6G 2H1
>arms@cs.ualberta.ca Tel(403)492 2374 FAX 492 1071


Could you clarify how, with the number of inputs held constant, the
size of the tree can be varied?

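To make the question concrete: my current guess, which may well be
wrong, is something like the C sketch below. It assumes an ALN-style
tree of adaptable AND/OR nodes whose leaves are wired to input
variables (possibly complemented) drawn at random WITH repetition, so
adding a level doubles the number of leaves while the input count n
never changes. All names and structure here are my own invention, not
taken from your implementation.

/* Sketch only: my guess at how a logic tree can grow while the
 * number of inputs n stays fixed.  Leaves draw their input index
 * with repetition, so leaves can outnumber inputs. */
#include <stdio.h>
#include <stdlib.h>

struct node {
    struct node *left, *right;  /* both NULL for a leaf */
    int is_and;                 /* internal node: 1 = AND, 0 = OR */
    int input;                  /* leaf: index into the input vector */
    int complement;             /* leaf: 1 means use !x[input] */
};

/* Build a complete tree of the given depth over n inputs. */
static struct node *build(int depth, int n)
{
    struct node *t = malloc(sizeof *t);
    if (depth == 0) {               /* leaf: pick an input WITH repetition */
        t->left = t->right = NULL;
        t->input = rand() % n;      /* n never changes, however big the tree */
        t->complement = rand() % 2;
    } else {                        /* internal node: adaptable gate */
        t->is_and = rand() % 2;
        t->left  = build(depth - 1, n);
        t->right = build(depth - 1, n);
    }
    return t;
}

static int eval(const struct node *t, const int *x)
{
    if (t->left == NULL)
        return t->complement ? !x[t->input] : x[t->input];
    return t->is_and ? (eval(t->left, x) & eval(t->right, x))
                     : (eval(t->left, x) | eval(t->right, x));
}

int main(void)
{
    int x[3] = {1, 0, 1};              /* n = 3 inputs, fixed */
    struct node *small = build(2, 3);  /* 2^2 = 4 leaves */
    struct node *big   = build(3, 3);  /* 2^3 = 8 leaves: the "doubled" tree */
    printf("small tree: %d   doubled tree: %d\n", eval(small, x), eval(big, x));
    return 0;
}

Is that roughly the mechanism, or does enlarging the tree change how
the leaves connect to the inputs in some other way?
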
Seetamraju Udaya Bhaskar Sarma
(email: seetam@ece7.eng.wayne.edu)