Text File | 1997-04-09 | 8KB | 184 lines
The YEPu REXX Scripts -- for working with Yep URL Capture Logs
==============================================================
All of the following scripts are REXX programs. This means they are plain
text command files which you can edit or modify. All of the scripts (except
YEPuFOLD) allow you to specify a "default" Url Log in the script. This saves
you from typing the path and filename of the Url Log you want to process on
the command line each time.
The way to change the "default" Url Log is to load the REXX script into
a text editor. Near the beginning of the file you will see a line which
looks like this:
UrlLog = 'c:\osu\yarn\url.log'
You may modify the above line to specify any default Url Log you want that
script to use. Some scripts have other values (always immediately beneath
the "UrlLog" value) which can be edited. At the beginning of each script
there are some comments which give you tips about how to use that script.
Some specific instructions and descriptions of the scripts follow:
If you write or improve any scripts for working with Yep Captured URLs,
please let me know!
Note: the yepU scripts may not work as expected under Object REXX.
YEPuSTAT
========
This program will run through the specified Yep URL Capture Log and report
the number of records (unique and duplicate) in it. It will also report the
message date of the first and last record. With this script you can quickly
get an overview of your URL Capture Log.
You can specify a default Capture Log filename near the beginning of
the rexx script, or specify one on the command line.
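For illustration only, the kind of tally YEPuSTAT produces could be sketched
like this in Python (the actual script is REXX; the `stats` helper is a
made-up name, and treating the whole record as the key is an assumption):

```python
# Illustrative sketch only -- YEPuSTAT itself is a REXX script.
# Records in a Yep URL Capture Log are separated by blank lines.

def stats(log_text):
    """Count total, unique, and duplicate records in a blank-line-
    separated log. Using the whole record as the key is an
    assumption made for this sketch."""
    records = [r.strip() for r in log_text.split("\n\n") if r.strip()]
    unique = len(set(records))
    return {"records": len(records),
            "unique": unique,
            "duplicates": len(records) - unique}
```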
YEPuDUPE
========
This program runs through the specified Yep URL Capture Log and extracts
all the unique URLs, thereby eliminating duplicates.
You can specify a "default" Capture Log filename near the beginning of the
rexx script, or specify one on the command line. You will also want to
edit the path and filename of the backup file on the line that looks like
this:
UrlLogBackUp = 'c:\osu\yarn\url.bak'
The original Url Log will be backed up and then the Url Log updated
(overwritten) automatically. Example:
YEPuDUPE urlcap.Log
This is a good script to run in your Yarn batch file (if you have one)
after you exit Yarn each time. It's pretty quick, and will keep dupes
trimmed. If you are capturing a lot of URLs, though, YEPuMAX might suit
you better.
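The core of what YEPuDUPE does can be sketched in Python (an illustration of
the idea only, not the actual REXX source; the helper names are made up, and
the whole record is assumed to serve as the key):

```python
# Sketch of the YEPuDUPE idea: keep the first copy of each record.
# Not the actual REXX source; the log is assumed to hold records
# separated by blank lines.

def read_records(log_text):
    """Split a Yep URL Capture Log into records on blank lines."""
    return [r.strip() for r in log_text.split("\n\n") if r.strip()]

def dedupe(records):
    """Return the unique records, preserving their original order."""
    seen = set()
    unique = []
    for rec in records:
        if rec not in seen:
            seen.add(rec)
            unique.append(rec)
    return unique
```

In the real script the original log is first copied to the backup file, and
the deduplicated records are then written back over the log.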
YEPuMAX
=======
This script will output only the latest specified number of entries in the
Yep URL Capture Log. This could be useful to keep the log trimmed down
to a certain size, deleting the oldest captured URLs.
You may change the "default" values for the Url Log filename and the
number of URLs to keep near the beginning of the script. To edit the
default number of URLs to keep, and the path and filename of the backup
file, look for these lines:
UrlLogBackup = 'c:\osu\yarn\url.bak'
Keep = 100
Otherwise you can use the command line, first specifying the URL Log
filename, and then the number of entries to keep.
YEPuMAX url.log 50
YEPuMAX also filters out duplicate URLs. So if you run this script you
should not need to run YEPuDUPE as well.
This works well run from a batch file each time Yarn exits (if you are
set up that way).
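The trim-to-N idea behind YEPuMAX could be sketched like so (Python, for
illustration only, not the shipped REXX; `keep_latest` is a made-up name,
and "newest record is last in the log" is an assumption based on the
description above):

```python
def keep_latest(records, keep=100):
    """Drop duplicates (keeping the newest copy of each record),
    then keep only the last `keep` entries, discarding the oldest."""
    seen = set()
    newest_first = []
    for rec in reversed(records):   # assume the newest record is last
        if rec not in seen:
            seen.add(rec)
            newest_first.append(rec)
    latest = newest_first[:keep]    # the `keep` most recent uniques
    latest.reverse()                # restore oldest-to-newest order
    return latest
```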
YEPuHTML
========
This script will take the data from the specified Yep URL Capture Log and
convert it into a formatted HTML document. The HTML document will list
all the captured URL information in a numbered (ordered) list, and have
links to each URL. This HTML file can be loaded into any WWW Browser and
used to explore your captured URLs.
You can specify a default Capture Log filename near the beginning of
the rexx script, or specify one on the command line.
Use Standard Output Redirection to create a new file with only unique
URLs (duplicates are filtered out of the HTML file).
YEPuHTML > NewUrls.html
If you are using Netscape/2, or another browser with bookmarks, you could
make a bookmark to your "NewUrls.html" and have quick access to it
all the time. If you are using WebExplorer you can load the NewUrls.html
into the program and then drag and drop the page (URL object) into
a folder for quick access.
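The conversion amounts to wrapping each captured URL in a numbered-list
entry. A Python sketch of that idea (not the actual REXX source; it assumes
the first line of each record is the URL, which may not match Yep's real
record layout):

```python
import html

def to_html(records, title="Captured URLs"):
    """Build an HTML page listing each record as an entry in an
    ordered list, with the first line of the record used as the
    link target (an assumption for this sketch)."""
    items = []
    for rec in records:
        url = rec.splitlines()[0]
        safe = html.escape(url, quote=True)
        items.append('<li><a href="%s">%s</a></li>' % (safe, safe))
    return ("<html><head><title>%s</title></head><body>\n"
            "<ol>\n%s\n</ol>\n</body></html>"
            % (html.escape(title), "\n".join(items)))
```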
YEP2HTML
========
This is a script written by Phil Crown (http://web2.airmail.net/pcrown)
to convert YEP Url logs into HTML in a slightly different way. You may
want to try it out!
YEPuFOLD
========
This is by far the most complex script. It will create a folder (if it
doesn't already exist) on your OS/2 desktop, and populate it with Web
Explorer URL Objects. These objects can then be opened with Web Explorer,
by dragging and dropping them on a running Web Explorer (to load the URL's
Web page into it), or on a Web Explorer icon (to open a new instance of Web
Explorer with the URL of the URL object).
You can specify a default Capture Log filename near the beginning of
the rexx script, or specify one on the command line.
All the URL information from the Yep URL Log is moved to the COMMENT area
of the Web Explorer URL, for reference. The COMMENT area is on "page 3" of
the object's Settings|File notebook. The data is a bit of a mess due to the
small text area being word-wrapped, but it is readable, and can be copied
to the clipboard and pasted elsewhere with its correct format, if need be.
Duplicate URLs are automatically filtered from the URL Capture Log during
the Object creation process.
This is all not extremely elegant. It is somewhat of a make-shift hack job.
But here are some tips that may help you deal with Yep URL Folder Objects.
(If anyone wants to rewrite the script to make it more flexible and better,
by all means, don't let me stop you!)
Tips for URL Objects:
~~~~~~~~~~~~~~~~~~~~~
Each object is created using the URL address for its name. This can be
messy and confusing. However, if you rename an URL in the folder, the
next time YEPuFOLD is run it will create another instance of the object.
One way to deal with this is to leave the objects in the Yep Urls folder
alone. Make copies of the URLs you want to keep in another folder and
rename them to something more easily identifiable there; or, after
accessing an URL from the Yep Urls folder which you want to keep, drag a
copy of the URL from Web Explorer to another folder.
Remember you can sort your Yep Urls folder by "creation date" and thereby
view them in the order they were captured to the Yep URL Capture Log. The
newest captured URL will be last on the list.
"Details View" of the Yep Urls folder may be preferable to some people as
well, as the Object titles tend to be long and can appear confusing when
scattered all over the place.
You can create a desktop/folder shadow object for your actual Yep URL
Capture Log and open it with E, or another editor, alongside the Yep URL
folder. In this way you can edit the actual log file and delete URLs you
have checked out, or write comments to go with them, for future reference.
(There is no special format for the Yep URL Capture Log file: entries can
be any length and in any order, just as long as at least one blank line
separates each entry, and there is at least one blank line at the bottom of
the file.) Of course, now that you have the text file open, you can just
cut and paste the URLs into Web Explorer's "open URL" dialog box. <-:
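As an illustration of the free format described above, a log might look
something like this (the URLs and notes shown here are invented; Yep
decides what it actually captures):

```
http://www.example.com/page.html
A note about this page

http://www.example.com/other.html

```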
Good luck with these toys. <-;