When migrating from WardsOriginalWiki to some of the WikiWikiClones, this perl script may be helpful. Please don't bother if you think it's too trivial or too long. Just edit. -- DierkKoenig
#!/usr/bin/perl
if (not defined $ARGV[0])
{
print <<HERE ;
SYNOPSIS perl extractDB.pl <db> [filter] [newDir]
<db> is the location of the db, as absolute or relative path, without the .db extension
[filter] optional regex to limit the extracted titles, default is "."
[newDir] is the dir where the extracted files get stored, absolute or relative,
[newDir] is optional, default is <db>/, i.e. the db name serves as the new subdir.
The dir is assumed to exist; the script dies otherwise.
WHAT IT DOES
Extracts a Berkeley DB created by the "original wiki" into flat files, e.g. for later use
with file-based wikis.
Files are named after the wiki title, with the extension ".wiki". You may want to rename them later.
Meta info is not preserved (rev, host, date, agent).
Example
perl extractDB.pl db/wiki Java
extracts all pages whose title contains "Java" from db/wiki.db to db/wiki/<title>.wiki
HERE
exit;
}
$dbName = $ARGV[0];
$Filter = $ARGV[1] || '.';
$newPath = $ARGV[2] || $dbName.'/';
$Ext = ".wiki";
print "Settings:\ndb\t$dbName.db\nFilter\t$Filter\nnewDir\t$newPath\nExt\t$Ext\n";
dbmopen (%db, $dbName, 0666) || die ("can't open $dbName");
while (($key, $value) = each %db)
{
if ($key =~ /$Filter/)
{
%hash = split(/\263/,$value); # split the record into fields on the wiki's field separator (octal 263)
print STDOUT "$key\t";
$targetFileName = "$newPath/$key$Ext";
open F, ">$targetFileName" or die "cannot open $targetFileName: $!";
print F $hash{text};
close F;
}
}
dbmclose (%db);
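The script does not create the target directory itself. If you would rather have it created automatically, a minimal addition right after $newPath is assigned (a sketch, assuming the standard File::Path module that ships with Perl) could be:

 use File::Path;
 # create the output directory if it does not exist yet,
 # instead of dying on the first open() below
 mkpath($newPath) unless -d $newPath;

mkpath dies on its own if the directory cannot be created, so the later open() calls can stay unchanged.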