- Path: sparky!uunet!pipex!unipalm!uknet!doc.ic.ac.uk!doc.ic.ac.uk!not-for-mail
- From: dds@doc.ic.ac.uk (Diomidis D Spinellis)
- Newsgroups: comp.lang.perl
- Subject: Re: Using perl for databases - experiences requested
- Keywords: database implementation experience
- Message-ID: <18b01tINNiji@swan.doc.ic.ac.uk>
- Date: 5 Sep 92 18:59:09 GMT
- References: <1992Sep5.040819.2039@nugget.rmNUG.ORG>
- Organization: Department of Computing, Imperial College, University of London, UK.
- Lines: 83
- NNTP-Posting-Host: swan.doc.ic.ac.uk
-
- In article <1992Sep5.040819.2039@nugget.rmNUG.ORG> scott@corybant.rmnug.org (Scott Meyer) writes:
- >What sort of experience have people had using perl to implement databases?
-
- I have used Perl to implement two databases.
-
- The first database is a small system I use for my thesis to keep
- track of various language characteristics. I save the data using the
- same approach as the one you describe in your article. Using
- single-character field identification strings allows me to parse each
- record into an associative array of strings using the following loop:
-
-     @rec = split(/\n\%/);       # split the record into fields
-     $rec[0] =~ s/^\%//;         # strip the % of the first field
-     foreach $r (@rec) {
-         ($key, $val) = ($r =~ m/(.) (.*)$/);
-         $rec{$key} = $val;      # the field letter keys its contents
-     }
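-
- In full, the loop runs once per record inside a paragraph-mode read
- loop, along these lines (a sketch):
-
-     $/ = '';                    # paragraph mode: blank lines end records
-     $* = 1;                     # multi-line pattern matching
-     while (<>) {                # $_ holds one whole record
-         %rec = ();              # start each record afresh
-         @rec = split(/\n\%/);
-         $rec[0] =~ s/^\%//;
-         foreach $r (@rec) {
-             ($key, $val) = ($r =~ m/(.) (.*)$/);
-             $rec{$key} = $val;
-         }
-         # ... use %rec here ...
-     }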
-
- So for a record like:
-
- %N Perl
- %A Larry Wall
- %I Interpreted
-
- $rec{N} will be 'Perl', $rec{A} will be 'Larry Wall', and $rec{I}
- will be 'Interpreted'. Records are separated by single blank lines,
- so I can simply use $/=''; $*=1 to parse the input. You mention in
- your article that you use filters to process complex queries. I
- tried to write the query processing in the database script itself,
- but found that I prefer the filter approach, which I am now using. I
- feel uneasy about using pipelines of Perl scripts (some complex
- queries run the process id counter up by as much as 400 on an
- otherwise idle machine), but it has worked fine so far. Most query
- scripts use the dbgrep Perl script, which greps for patterns in the
- database. It is a very small script whose engine consists of the
- following lines:
-
-     $/ = ''; $* = 1;            # paragraph records, multi-line matches
-     $pattern = shift;           # the query: a Perl expression argument
-     eval "
-         while (<>) {
-             print if ($pattern);
-         }
-     ";
-
- The initial Perl query processing code was based on queries written
- in Perl syntax, with terms of the form Field-letter = 'value'. These
- terms were changed into Perl match expressions by the command:
-
- s?([$REC])\s*=\s*'([^']+)'?(((\$i) = m/%$1([^%]*)/) && \$i =~ m/$2/)?g;
-
- and the selected records were then collected into an array with the
- command:
-
- $x = $_; eval "\@selected = grep($x, \@db);";
-
- For example the query:
-
- select N='C' && C='Functional'
-
- will generate the following expression to be passed through grep:
-
- ((($i) = m/%N([^%]*)/) && $i =~ m/C/) &&
- ((($i) = m/%C([^%]*)/) && $i =~ m/Functional/)
-
- ($i is first assigned the contents of the field in question, and is
- then matched against the query value.) The approach worked without a
- problem, but I found separate filter scripts much more convenient. I
- wish Perl had the capability to pipe through control structures the
- way the Bourne shell can.
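-
- Putting the pieces together, the whole select processor fits in a
- dozen lines. In outline (a sketch; the field-letter set in $REC and
- the database file name are examples):
-
-     $REC = 'NACI';              # the field letters in use
-     $/ = ''; $* = 1;            # paragraph records, multi-line matches
-     open(DB, 'lang.db') || die "lang.db: $!\n";
-     @db = <DB>;                 # one record per array element
-     close(DB);
-     $_ = shift;                 # e.g. "select N='C' && C='Functional'"
-     s/^\s*select\s+//;          # drop the leading keyword
-     s?([$REC])\s*=\s*'([^']+)'?(((\$i) = m/%$1([^%]*)/) && \$i =~ m/$2/)?g;
-     $x = $_;
-     eval "\@selected = grep($x, \@db);";
-     print @selected;            # emit the matching records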
-
- The second database is a medical information system used in a
- hospital to track patient data. Its user interface is implemented in
- C. The data is stored in gdbm files. Perl scripts handle query
- processing and report generation. Queries are parsed from the
- `natural language' syntax (do not ask me for it, you don't want it:
- it parses commands expressed in Greek) into a Perl expression, within
- a loop, which is then eval'ed. The records are variable-sized, with
- a key generated from the patient's name together with a unique
- number. Last time I checked there were more than 800 records in the
- database, and Perl was handling it without a problem.
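-
- Accessing the gdbm files from Perl takes only a few lines; roughly (a
- sketch, assuming a Perl built against gdbm; the file name, key, and
- record layout below are made up):
-
-     dbmopen(%patient, '/usr/local/lib/meddb/patients', 0644) ||
-         die "cannot open patient database: $!\n";
-     $key = 'Papadopoulos:00042';    # patient name plus unique number
-     if (defined($patient{$key})) {
-         @fields = split(/\n/, $patient{$key});
-         print join("\n", @fields), "\n";
-     }
-     dbmclose(%patient);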
-
- Diomidis
- --
- Diomidis Spinellis Internet: <dds@doc.ic.ac.uk> UUCP: ...!uknet!icdoc!dds
- Department of Computing, Imperial College, London SW7 #include "/dev/tty"
-