id: planet-infomotions-com-3359
author: planet-infomotions-com-3359
title:
date:
pages:
extension: .xml
mime: application/rdf+xml
words: 389347
sentences: 61656
flesch: 74
summary: [15] Next steps include: calculating an integer denoting the number of pages in an item, implementing a Web-based search interface to a subset's full text as well as its metadata, and putting the source code (written in Python and Bash) on GitHub. After that I need to: identify more robust ways to create subsets from the whole of EEBO, provide links to the raw TEI/XML as well as HTML versions of items, implement a number of cosmetic enhancements, and, most importantly, support a means to compare & contrast items of interest in each subset. The next steps are numerous and listed in no particular order: putting the whole thing on GitHub, outputting the reports in generic formats so other programs can easily read them, improving the terminal-based search interface, implementing a Web-based search interface, writing advanced programs in R that chart and graph the analysis, providing a means for comparing & contrasting two or more items from a corpus, indexing the corpus with a (real) indexer such as Solr, writing a "cookbook" describing how to use the browser to do "kewl" things, making the metadata of corpora available as Linked Data, etc.
cache: ./cache/planet-infomotions-com-3359.xml
txt: ./txt/planet-infomotions-com-3359.txt
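The first step named in the summary, "calculating an integer denoting the number of pages in an item," could be sketched as below. This is a minimal sketch, not the project's actual code; the figure of 250 words per page is an assumed convention, and the function name is hypothetical.

```python
# Estimate an integer page count from a word count.
# WORDS_PER_PAGE = 250 is an assumed average, not a value
# taken from the source; adjust to taste.
import math

WORDS_PER_PAGE = 250


def estimate_pages(word_count: int, words_per_page: int = WORDS_PER_PAGE) -> int:
    """Return a page estimate, rounding up so any non-empty item gets at least 1 page."""
    if word_count <= 0:
        return 0
    return math.ceil(word_count / words_per_page)


# The row above reports 389347 words:
print(estimate_pages(389347))
```

With the assumed 250 words per page, the item above would come out to 1558 pages; storing the result as an integer keeps it sortable alongside the other numeric fields (words, sentences, flesch).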
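The step the summary calls most important, comparing & contrasting two or more items from a corpus, could start from something as simple as cosine similarity over word frequencies. This is a hypothetical sketch, not the project's implementation: the tokenization rule and function names are assumptions, and a real version would likely read the `./txt/` plain-text files and weight terms (e.g. with TF-IDF) rather than use raw counts.

```python
# Compare & contrast two items by cosine similarity over their
# bag-of-words frequencies -- a hand-rolled sketch; tokenize()
# and cosine() are hypothetical names, not the project's API.
import math
import re
from collections import Counter


def tokenize(text: str) -> Counter:
    """Lowercase the text, split on runs of letters, and count the words."""
    return Counter(re.findall(r"[a-z]+", text.lower()))


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bags of words, from 0.0 (disjoint) to 1.0 (identical)."""
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


# Two toy "items"; in practice these would be full texts from ./txt/.
print(cosine(tokenize("the quick brown fox"), tokenize("the quick red fox")))
```

A pairwise matrix of these scores across a subset would give a first, rough way to surface which items resemble one another before reaching for a real indexer such as Solr.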