PokeyCrawl version 0.1.1a5 is now available in the Python Package Index (PyPI)! See the GitHub page for details on installation and usage, and the detailed post for an explanation of the mechanisms used.
Spiders and webs
Web spiders can be very useful to a website administrator: in addition to indexing your sites, they can load test the server, and an intelligent web crawler can even simulate normal, moderate, and high levels of web traffic, letting you benchmark your website and server performance.
While there are potentially malicious applications for any web technology, the utility of a web crawler makes having one handy a good idea. Here is my implementation, using Python's multiprocessing, urllib, and socket modules.
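To illustrate how those three modules fit together, here is a minimal sketch of a crawler worker, not the actual PokeyCrawl internals: socket sets a global timeout so slow hosts fail fast, urllib fetches and resolves links, and multiprocessing fans the work out across processes. All names here (LinkParser, extract_links, fetch, the seed list) are illustrative.

```python
# Minimal crawler-worker sketch (hypothetical names; the real PokeyCrawl
# internals differ) using multiprocessing, urllib, and socket.
import socket
from html.parser import HTMLParser
from multiprocessing import Pool
from urllib.parse import urljoin
from urllib.request import urlopen

socket.setdefaulttimeout(5)  # fail fast on slow or unresponsive hosts

class LinkParser(HTMLParser):
    """Collect href targets from anchor tags, resolved against a base URL."""
    def __init__(self, base):
        super().__init__()
        self.base = base
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(self.base, value))

def extract_links(base_url, html):
    """Return all absolute link targets found in an HTML document."""
    parser = LinkParser(base_url)
    parser.feed(html)
    return parser.links

def fetch(url):
    """Worker: download one page and return the links found on it."""
    with urlopen(url) as resp:
        return extract_links(url, resp.read().decode("utf-8", "replace"))

if __name__ == "__main__":
    # Parse a couple of already-fetched pages in parallel; swapping in
    # pool.map(fetch, seed_urls) would crawl live URLs instead.
    pages = [
        ("https://example.com/", '<p><a href="/about">About</a></p>'),
        ("https://example.com/x/", '<a href="page2.html">Next</a>'),
    ]
    with Pool(processes=2) as pool:
        for links in pool.starmap(extract_links, pages):
            print(links)
```

One process per worker (rather than threads) sidesteps the GIL, which matters once the crawler is doing CPU work like parsing on top of the network I/O.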
The website backups utility is functioning in a limited capacity; there are still a few bugs to iron out, but runs are succeeding without error and the results are as expected.
While I originally intended to use the MySQLdb module to handle all database activities, I opted to use it only for the connectivity and diagnostic testing portion of the application. When it comes to actually dumping the data, mysqldump is a simpler (and in some cases faster) solution.
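That split can be sketched roughly as follows: check the connection first, then shell out to mysqldump for the dump itself. The function names and the assumption that the password lives in ~/.my.cnf are mine, not the utility's actual code.

```python
# Hypothetical sketch of the MySQLdb-for-checks, mysqldump-for-data split;
# names are illustrative, not the backup utility's real internals.
import subprocess

def build_dump_command(db, user, out_path):
    """Assemble the mysqldump invocation (password assumed to come from
    ~/.my.cnf rather than the command line)."""
    return [
        "mysqldump",
        "--single-transaction",   # consistent InnoDB snapshot without locking
        f"--user={user}",
        f"--result-file={out_path}",
        db,
    ]

def dump_database(db, user, out_path):
    """Run the dump, raising if mysqldump exits non-zero."""
    # A MySQLdb connect/ping here could verify connectivity first; it is
    # omitted so this sketch has no third-party dependency.
    subprocess.run(build_dump_command(db, user, out_path), check=True)
```

Building the argument list separately keeps the subprocess call test-friendly, and --single-transaction avoids locking tables on InnoDB while the dump runs.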