An interesting solution, and one that I think should be carefully
considered.
In light of the (hopefully!) ongoing discussions about cgi-bin etc. on
mole/moose, and the possibility of a solution to that issue, there may
well be a very strong relationship to this as well.
I can think of - and "see" - some problems. A few are minor, some are
serious.
Serious problem: no backup for the server end. Should it go down for any
reason, we're toast.
not-as-serious-but-still-pretty-serious problem: Nirvana:common:staff.
This volume fills VERY fast. Most, if not all of us, can recall a time
in the very recent past when it filled right up. I'll freely admit my
ignorance of the software mechanics behind the index/search engine, but
I'd bet that temporary files play a role somewhere. It is likely, very
likely, that we will either hit the wall and the faq-serve will break,
or, far worse, we'll hit the wall and the entire server will break! This
would have potentially dire consequences for those in the lab, something
that I don't care to repeat!
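For what it's worth, the full-volume worry could at least be fenced off with a simple guard: refuse to run the indexer (or any other temp-file-hungry job) unless the volume still has some headroom. A rough sketch in Python; the path and the 50 MB threshold are my own placeholders, not anything the current setup actually checks:

```python
# Minimal sketch of a headroom guard: before kicking off a job that
# spews temporary files, make sure the volume has some free space
# left. The threshold (50 MB) is an arbitrary placeholder.
import shutil

def enough_headroom(path, min_free_mb=50):
    # shutil.disk_usage reports total/used/free bytes for the
    # filesystem containing `path`.
    usage = shutil.disk_usage(path)
    return usage.free >= min_free_mb * 1024 * 1024

if not enough_headroom("/"):
    print("Volume nearly full - refusing to run the indexer.")
```

The indexer itself stays untouched; this just gives it a chance to bail out politely instead of taking the whole server down with it.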
semi-serious problem: Yes, all staff have write access to the above
named volume/directory. This can be bad. Deletions can occur. This is
not a good thing.
minor (but very visibly annoying) problem: The formatting of the faqs
pulled from the Helpline repository is pretty well hosed. Not a big deal,
but, let's face it, when you are a beginner trying to read a technical
document, run-ons like:
Macintosh to a UVM host computer. Start MacKermit --------------- 1.
Start the Macintosh. Make sure you have a copy of MacKermit on your
startup disk or your hard drive. 2. Start
i.e. everything runs on into one huge mushy block of text. Not a lot of
beginners will even make it through the first sentence.
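One possible band-aid for the mush: escape the text and wrap it in a pre element, so the original line breaks and numbered steps survive the trip through a browser. A minimal sketch; the function name and the title handling are my own inventions, not the repository's actual conversion step:

```python
# Minimal sketch: wrap a plain-text FAQ in <pre> so its line
# structure (headings, numbered steps) is preserved in a browser.
import html

def faq_to_html(text, title="FAQ"):
    # Escape <, >, and & so FAQ text can't be mistaken for markup,
    # then wrap it in <pre>, which browsers render verbatim.
    body = html.escape(text)
    return ("<html><head><title>%s</title></head><body>\n"
            "<pre>\n%s\n</pre>\n</body></html>"
            % (html.escape(title), body))

sample = ("Start MacKermit\n---------------\n"
          "1. Start the Macintosh.\n2. Start MacKermit.")
print(faq_to_html(sample, title="MacKermit"))
```

Crude, but it beats one huge mushy block of text, and it needs no hand-editing of each FAQ.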
minor problem: overwhelming indexes. This is something that I suspect is
endemic to the www/httpd software. An index consisting of 45 references
to Eudora, all but 7 of which are identical, doesn't offer much. In this
case, I'd submit that indexing is almost useless. Far better might be a
simple listing of titles. As I said, this is a function of the way such
software works, but given the need to have a manual index update, I'd vote
not to have it at all, but instead something that simply created a listing
of titles. A beginner seeing the aforementioned 45 entries would
likely be lost.
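The "simple listing of titles" idea could be as little as a script that walks the FAQ directory and emits one link per file. A sketch; the directory layout and the rule of taking the first non-blank line as the title are assumptions on my part, not how the server actually stores things:

```python
# Minimal sketch of a title listing: one <li> per FAQ file, linked by
# filename, labeled by its first non-blank line. The "first non-blank
# line is the title" rule is an assumption for illustration.
import os

def first_line(path):
    with open(path) as f:
        for line in f:
            if line.strip():
                return line.strip()
    return os.path.basename(path)  # fall back to the filename

def title_listing(directory):
    items = []
    for name in sorted(os.listdir(directory)):
        path = os.path.join(directory, name)
        if os.path.isfile(path):
            items.append('<li><a href="%s">%s</a></li>'
                         % (name, first_line(path)))
    return "<ul>\n" + "\n".join(items) + "\n</ul>"
```

One entry per document, no 45-fold duplication, and no manual index update to forget about.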
minor problem: manual process of "fetching" all the faqs. I presume that
there is no "server crawler" that is quietly sneaking about gathering
things that look like FAQ's. The need for another manual process poses
challenges: Who does this? When? Should authors of new FAQ's alert the
"server master" that a new one is born? Should these authors have to
manually submit them in addition to another copy?
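If we did want a quiet crawler, the core of it is small: sweep a staff volume for names that look like FAQ's and copy anything new or updated into the served directory. A sketch under assumed paths and a guessed filename pattern; the who-runs-it-and-when questions above still need human answers:

```python
# Minimal sketch of the missing "server crawler": copy files whose
# names match a FAQ-ish pattern from a staff volume into the served
# directory, but only when they are new or newer than the served
# copy. Paths and the "*faq*" pattern are assumptions.
import fnmatch
import os
import shutil

def gather_faqs(source_dir, dest_dir, pattern="*faq*"):
    copied = []
    os.makedirs(dest_dir, exist_ok=True)
    for name in os.listdir(source_dir):
        if not fnmatch.fnmatch(name.lower(), pattern):
            continue
        src = os.path.join(source_dir, name)
        dst = os.path.join(dest_dir, name)
        # Skip anything that isn't a plain file or is already current.
        if os.path.isfile(src) and (
                not os.path.exists(dst)
                or os.path.getmtime(src) > os.path.getmtime(dst)):
            shutil.copy2(src, dst)  # copy2 preserves timestamps
            copied.append(name)
    return copied
```

Run from cron every night, this would at least spare authors the double-submission chore, though it only finds what follows the naming convention.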
Which leads to one of the major cruxes of the problem under consideration
- single sourcing for masters.
Clearly, creating a rich repository of HTML'able documents is a crude
and frustrating process. While text files are nice -- and
certainly afford the opportunity to avoid some of the pitfalls -- text
loses the rich aspects of the environment. As the software for web
_delivery_ improves, graphics will become far more commonplace than they
are now. It makes no sense to exclude this aspect. But that means HTML,
and that often means a difficult process, made more difficult if multiple
sources are required. I have no answer for this one: it's just a pain
that needs to be thoroughly considered before we jump on any solution as
the be-all end-all.
I do think this approach merits a close and well tested evaluation. I
also think that the problems noted need to be addressed as completely as
possible. I would hope that, in conjunction with the exploration of the
cgi-bin solution here, we could arrive at a reasonable 20.5th century
solution.