I think it was a mistake to take a lock on the data and clear it, even
if it's data the client had set, when we time out waiting for the named
pipe to appear. If the named pipe never appeared, then something is
probably seriously wrong with the server anyway, so we might as well
expect the server to do the cleanup.
If the server never creates the named pipe, this loop originally hung
forever. The number of attempts it now waits is arbitrary and could
probably be smaller, since under normal conditions the current server
implementation creates the pipe within 3 to 6 seconds.
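To make the failure mode concrete, here is a minimal sketch (not the
actual client code) of a bounded wait for the server's pipe; the path,
retry count, and interval are illustrative, chosen to roughly cover the
3 to 6 seconds the server normally needs.

```perl
use strict;
use warnings;

# Wait a bounded number of times for the server's named pipe to appear,
# then give up instead of hanging forever.
sub wait_for_fifo {
    my ($fifo_path, $max_tries, $interval) = @_;
    for (1 .. $max_tries) {
        return 1 if -p $fifo_path;    # -p: path exists and is a named pipe
        sleep $interval;
    }
    return 0;
}

# Give up without touching the shared data; the server owns the cleanup.
wait_for_fifo('/tmp/bean-query-daemon.fifo', 12, 1)
    or die "named pipe never appeared; giving up\n";
```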
Because the client previously defaulted to dying, the server was left in
a strange state: the query question was never cleared, and the
“fake failure” named pipe was never opened and processed, since it
couldn't be.
The server did eventually recover once the client fully died; however,
the client has enough information to give up its query when it knows a
failure has occurred, so it now does so.
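Roughly, the give-up path looks like the sketch below. It assumes,
purely for illustration, that the shared state is a small file guarded
by flock and that the client's pending question is stored as a
`question=` line; the real on-disk format may well differ.

```perl
use strict;
use warnings;
use Fcntl qw(:flock);

# Drop our own pending question from the shared state, under the lock,
# so the server isn't left holding a query nobody will ever collect.
sub give_up_query {
    my ($state_path) = @_;
    open my $fh, '+<', $state_path or die "open $state_path: $!";
    flock $fh, LOCK_EX or die "flock $state_path: $!";

    my @keep = grep { !/^question=/ } <$fh>;

    seek $fh, 0, 0;
    truncate $fh, 0;
    print {$fh} @keep;
    close $fh;    # closing releases the lock
}
```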
The format option on the bean-query command line is now something the
client can pass along. If not given, it defaults to whatever
`bean-query` was going to default to anyway.
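For example, the client might build its command line as in the sketch
below. The `--format` flag is the bean-query option being forwarded
(treat its exact spelling as an assumption here); the function name,
ledger, and query are made up for illustration.

```perl
use strict;
use warnings;

# Build the bean-query command, forwarding the caller's format if given.
sub build_bean_query_cmd {
    my ($ledger, $query, $format) = @_;
    my @cmd = ('bean-query');
    # Omit the flag entirely when no format was requested, so bean-query
    # falls back to its own default.
    push @cmd, '--format', $format if defined $format;
    push @cmd, $ledger, $query;
    return @cmd;
}

my @cmd = build_bean_query_cmd('ledger.beancount',
    'SELECT account, sum(position) GROUP BY account', 'csv');
system(@cmd) == 0 or die "bean-query failed: $?\n";
```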
On the classic “concurrency is hard” front, I am slightly proud of
myself that I never actually hit this race condition but had anticipated
that it might occur later. Giving the server full control over fifoName
makes more sense here, and the client should not submit its query any
time either fifoName or question is set.
As an extra safeguard, the client will die if it gets the lock and the
state isn't right for submitting a query.
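Under the same illustrative assumptions as the earlier sketch (a flocked
state file with `question=` and `fifoName=` lines), the submit-side
guard might look like this; in the real flow the client would simply
wait and retry while either field is set, and the die is the last-ditch
safeguard once it already holds the lock.

```perl
use strict;
use warnings;
use Fcntl qw(:flock);

# Submit a question only when neither fifoName nor question is set;
# if we somehow hold the lock while the state is not clean, bail out.
sub submit_question {
    my ($state_path, $question) = @_;
    open my $fh, '+<', $state_path or die "open $state_path: $!";
    flock $fh, LOCK_EX or die "flock $state_path: $!";

    my %state = map { /^(\w+)=(.*)/ ? ($1 => $2) : () } <$fh>;
    die "stale question/fifoName in shared state; refusing to submit\n"
        if length($state{question} // '') || length($state{fifoName} // '');

    print {$fh} "question=$question\n";   # filehandle is already at EOF
    close $fh;
}
```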
Finally, added some fifoName removal debugging on the server.
This is a very basic daemon to run bean-query so that other Perl
programs can call it. The speed savings aren't really there yet; to get
them, the daemon would need to keep bean-query running, perhaps timing
out and reloading files as needed. That's the long-term plan.
Right now, all this is really useful for is letting another Perl process
submit queries to bean-query and receive the answers.
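Put together, the round trip from another Perl process looks roughly
like this, again assuming the illustrative state-file-plus-named-pipe
protocol sketched above rather than the daemon's actual interface.

```perl
use strict;
use warnings;
use Fcntl qw(:flock);

my $state = '/tmp/bean-query-daemon.state';   # illustrative paths
my $fifo  = '/tmp/bean-query-daemon.fifo';

# Submit the question under the lock (guards omitted; see earlier sketch).
open my $sf, '>>', $state or die "open $state: $!";
flock $sf, LOCK_EX or die "flock $state: $!";
print {$sf} "question=SELECT account, sum(position) GROUP BY account\n";
close $sf;

# Wait, with a bound, for the server to publish the answer pipe.
for (1 .. 12) {
    last if -p $fifo;
    sleep 1;
}
die "server never published the answer pipe\n" unless -p $fifo;

# Read the bean-query output back.
open my $pf, '<', $fifo or die "open $fifo: $!";
print while <$pf>;
close $pf;
```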